Updates from: 09/16/2022 01:09:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Content-type: application/json
| -- | -- | -- | -- |
| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
-| \<builtInUserAttribute> | \<attribute-type> | No | They can returned in the token if selected as an **Application claim**. |
+| \<builtInUserAttribute> | \<attribute-type> | No | They can be returned in the token if selected as an **Application claim**. |
| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim doesn't need to contain `_<extensions-app-id>_`; it's *optional*. It can be returned in the token if selected as an **Application claim**. | ::: zone-end
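For illustration, here's a minimal sketch of the continuation response body the table above describes. The claim values, the `postalCode` attribute, and the custom attribute name are assumptions for the example, not values from the article.

```python
import json

# Hypothetical continuation response for an Azure AD B2C API connector (token enrichment).
# "version" and "action" are required; the other claims are optional and are returned in the
# token only if they're selected as Application claims.
continuation_response = {
    "version": "1.0.0",                          # the version of your API (example value)
    "action": "Continue",                        # must be "Continue"
    "postalCode": "12349",                       # example built-in user attribute (assumption)
    "extension_CustomAttribute": "sample value", # custom attribute; the _<extensions-app-id>_ prefix is optional
}

print(json.dumps(continuation_response, indent=2))
```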
active-directory Scim Validator Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-validator-tutorial.md
+
+ Title: Tutorial - Test your SCIM endpoint for compatibility with the Azure Active Directory (Azure AD) provisioning service.
+description: This tutorial describes how to use the Azure AD SCIM Validator to validate that your provisioning server is compatible with the Azure SCIM client.
+ Last updated : 09/13/2022
+# Tutorial: Validate a SCIM endpoint
+
+This tutorial describes how to use the Azure AD SCIM Validator to validate that your provisioning server is compatible with the Azure SCIM client. The tutorial is intended for developers who want to build a SCIM-compatible server to manage their identities with the Azure AD provisioning service.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Select a testing method
+> * Configure the testing method
+> * Validate your SCIM endpoint
+
+## Prerequisites
+
+- An Azure Active Directory account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A SCIM endpoint that conforms to the SCIM 2.0 standard and meets the provision service requirements. To learn more, see [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
++
+## Select a testing method
+The first step is to select a testing method to validate your SCIM endpoint.
+
+1. Open your web browser and navigate to the SCIM Validator: [https://scimvalidator.microsoft.com/](https://scimvalidator.microsoft.com/).
+1. Select one of the three test options. You can use default attributes, automatically discover the schema, or upload a schema.
+
+ :::image type="content" source="./media/scim-validator-tutorial/scim-validator.png" alt-text="Screenshot of SCIM Validator main page." lightbox="./media/scim-validator-tutorial/scim-validator.png":::
+
+**Use default attributes** - The system provides the default attributes, and you modify them to meet your needs.
+
+**Discover schema** - If your endpoint supports /Schema, this option allows the tool to discover the supported attributes. We recommend this option because it reduces the overhead of updating your app as you build it out.
+
+**Upload Azure AD Schema** - Upload the schema you've downloaded from your sample app on Azure AD.
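Before choosing the **Discover schema** option, you can quickly confirm that your endpoint exposes its schema. Here's a minimal sketch, assuming your endpoint accepts a bearer token and implements the standard SCIM 2.0 `/Schemas` discovery endpoint (the URL and token are placeholders; adjust the path if your implementation differs):

```python
import requests

SCIM_BASE = "https://example.com/scim"   # placeholder - your SCIM endpoint URL
TOKEN = "<your-bearer-token>"            # placeholder - your endpoint's token

# Ask the endpoint to describe the attributes it supports (SCIM 2.0 schema discovery).
resp = requests.get(
    f"{SCIM_BASE}/Schemas",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/scim+json"},
    timeout=30,
)
resp.raise_for_status()

for schema in resp.json().get("Resources", []):
    print(schema.get("id"), "-", len(schema.get("attributes", [])), "attributes")
```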
++
+## Configure the testing method
+Now that you've selected a testing method, the next step is to configure it.
++
+1. If you're using the default attributes option, then fill in all of the indicated fields.
+2. If you're using the discover schema option, then enter the SCIM endpoint URL and token.
+3. If you're uploading a schema, then select your .json file to upload. The option accepts a .json file exported from your sample app on the Azure portal. To learn how to export a schema, see [How-to: Export provisioning configuration and roll back to a known good state](export-import-provisioning-configuration.md#export-your-provisioning-configuration).
+> [!NOTE]
+> To test *group attributes*, make sure to select **Enable Group Tests**.
+
+4. Edit the list of attributes as desired for both the user and group types by using the 'Add Attribute' option at the end of the attribute list and the minus (-) sign on the right side of the page.
+5. Select the joining property from both the user and group attributes list.
+> [!NOTE]
+> The joining property, also known as the matching attribute, is an attribute on which user and group resources can be uniquely queried at the source and matched in the target system.
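To make the joining property concrete, here's a minimal sketch of the kind of query a provisioning client issues against the target system: it filters on the matching attribute (here `userName`, an assumption) to decide whether to create or update a user. The endpoint URL, token, and sample value are placeholders.

```python
import requests

SCIM_BASE = "https://example.com/scim"   # placeholder - your SCIM endpoint URL
TOKEN = "<your-bearer-token>"            # placeholder
headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/scim+json"}

# Query the target system by the joining (matching) attribute using a SCIM filter.
resp = requests.get(
    f"{SCIM_BASE}/Users",
    params={"filter": 'userName eq "kim@contoso.com"'},  # sample value (assumption)
    headers=headers,
    timeout=30,
)
resp.raise_for_status()

if resp.json().get("totalResults", 0) > 0:
    print("Match found - the user would be updated in the target system.")
else:
    print("No match - the user would be created in the target system.")
```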
++
+## Validate your SCIM endpoint
+Finally, you need to test and validate your endpoint.
+
+1. Select **Test Schema** to begin the test.
+1. Review the results with a summary of passed and failed tests.
+1. Select the **show details** tab to review and fix any issues.
+1. Continue to test your schema until all tests pass.
+
+ :::image type="content" source="./media/scim-validator-tutorial/scim-validator-results.png" alt-text="Screenshot of SCIM Validator results page." lightbox="./media/scim-validator-tutorial/scim-validator-results.png":::
+
+### Use Postman to test endpoints (optional)
+
+In addition to using the SCIM Validator tool, you can also use Postman to validate an endpoint. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
+
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
+
+> [!NOTE]
+> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
+
+1. Download [Postman](https://www.getpostman.com/downloads/) and start the application.
+1. Copy and paste this link into Postman to import the test collection: `https://aka.ms/ProvisioningPostman`.
+
+ ![Screenshot that shows importing the test collection in Postman.](media/scim-validator-tutorial/postman-collection.png)
+
+1. Create a test environment that has these variables:
+
+ |Environment|Variable|Value|
+ |-|-|-|
+ |Run the project locally by using IIS Express|||
+ ||**Server**|`localhost`|
+ ||**Port**|`:44359` *(don't forget the **`:`**)*|
+ ||**Api**|`scim`|
+ |Run the project locally by using Kestrel|||
+ ||**Server**|`localhost`|
+ ||**Port**|`:5001` *(don't forget the **`:`**)*|
+ ||**Api**|`scim`|
+ |Host the endpoint in Azure|||
+ ||**Server**|*(input your SCIM URL)*|
+ ||**Port**|*(leave blank)*|
+ ||**Api**|`scim`|
+
+1. Use **Get Key** from the Postman collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
+
+ ![Screenshot that shows the Postman Get Key folder.](media/scim-validator-tutorial/postman-get-key.png)
+
+ > [!NOTE]
+ > To make a SCIM endpoint secure, you need a security token before you connect. The tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
+
+That's it! You can now run the **Postman** collection to test the SCIM endpoint functionality.
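If you'd rather script the same smoke test outside Postman, here's a minimal sketch. It composes the base URL from the **Server**, **Port**, and **Api** values in the table above, and assumes the reference app's `{host}/scim/token` endpoint returns JSON that contains the token (the field name below is an assumption; inspect the actual response from your build):

```python
import requests

# Values matching the "Run the project locally by using Kestrel" row above.
SERVER = "localhost"
PORT = ":5001"
API = "scim"
base_url = f"https://{SERVER}{PORT}/{API}"

# Step 1: get a self-signed token from the reference app's token endpoint.
# verify=False is only for local runs with a self-signed development certificate.
token_resp = requests.get(f"{base_url}/token", verify=False, timeout=30)
token_resp.raise_for_status()
token = token_resp.json().get("token")  # field name is an assumption - check the real response

# Step 2: call a SCIM endpoint with the token, for example list users.
users_resp = requests.get(
    f"{base_url}/Users",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/scim+json"},
    timeout=30,
)
users_resp.raise_for_status()
print("Users returned:", users_resp.json().get("totalResults", 0))
```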
+
+## Clean up resources
+
+If you created any Azure resources in your testing that are no longer needed, don't forget to delete them.
+
+## Known issues with Azure AD SCIM Validator
+
+- Soft deletes (disables) aren't yet supported.
+- The time zone format is randomly generated and will fail for systems that try to validate it.
+- The preferred language format is randomly generated and will fail for systems that try to validate it.
+- The patch user remove-attribute tests may attempt to remove mandatory or required attributes on certain systems. Such failures should be ignored.
++
+## Next steps
+- [Learn how to add an app that is not in the Azure AD app gallery](../manage-apps/overview-application-gallery.md)
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The default token validation code is configured to use an Azure AD token and req
} ```
-### Use Postman to test endpoints
-
-After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-
-The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
-
-> [!NOTE]
-> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
-
-1. Download [Postman](https://www.getpostman.com/downloads/) and start the application.
-1. Copy and paste this link into Postman to import the test collection: `https://aka.ms/ProvisioningPostman`.
-
- ![Screenshot that shows importing the test collection in Postman.](media/use-scim-to-build-users-and-groups-endpoints/postman-collection.png)
-
-1. Create a test environment that has these variables:
-
- |Environment|Variable|Value|
- |-|-|-|
- |Run the project locally by using IIS Express|||
- ||**Server**|`localhost`|
- ||**Port**|`:44359` *(don't forget the **`:`**)*|
- ||**Api**|`scim`|
- |Run the project locally by using Kestrel|||
- ||**Server**|`localhost`|
- ||**Port**|`:5001` *(don't forget the **`:`**)*|
- ||**Api**|`scim`|
- |Host the endpoint in Azure|||
- ||**Server**|*(input your SCIM URL)*|
- ||**Port**|*(leave blank)*|
- ||**Api**|`scim`|
-
-1. Use **Get Key** from the Postman collection to send a **GET** request to the token endpoint and retrieve a security token to be stored in the **token** variable for subsequent requests.
-
- ![Screenshot that shows the Postman Get Key folder.](media/use-scim-to-build-users-and-groups-endpoints/postman-get-key.png)
-
- > [!NOTE]
- > To make a SCIM endpoint secure, you need a security token before you connect. The tutorial uses the `{host}/scim/token` endpoint to generate a self-signed token.
-
-That's it! You can now run the **Postman** collection to test the SCIM endpoint functionality.
- ## Next steps To develop a SCIM-compliant user and group endpoint with interoperability for a client, see [SCIM client implementation](http://www.simplecloud.info/#Implementations2).
-> [!div class="nextstepaction"]
-> [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
+- [Tutorial: Validate a SCIM endpoint](scim-validator-tutorial.md)
+- [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
+- [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 09/13/2022 Last updated : 09/15/2022
Here are some factors for you to consider when choosing Microsoft passwordless t
||**Windows Hello for Business**|**Passwordless sign-in with the Authenticator app**|**FIDO2 security keys**|
|:-|:-|:-|:-|
-|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices running Android 8.0 or above.)|Windows 10, version 1903 or later<br>Azure Active Directory|
+|**Pre-requisite**| Windows 10, version 1809 or later<br>Azure Active Directory| Authenticator app<br>Phone (iOS and Android devices)|Windows 10, version 1903 or later<br>Azure Active Directory|
|**Mode**|Platform|Software|Hardware|
|**Systems and devices**|PC with a built-in Trusted Platform Module (TPM)<br>PIN and biometrics recognition |PIN and biometrics recognition on phone|FIDO2 security devices that are Microsoft compatible|
|**User experience**|Sign in using a PIN or biometric recognition (facial, iris, or fingerprint) with Windows devices.<br>Windows Hello authentication is tied to the device; the user needs both the device and a sign-in component such as a PIN or biometric factor to access corporate resources.|Sign in using a mobile phone with fingerprint scan, facial or iris recognition, or PIN.<br>Users sign in to work or personal account from their PC or mobile phone.|Sign in using FIDO2 security device (biometrics, PIN, and NFC)<br>User can access device based on organization controls and authenticate based on PIN, biometrics using devices such as USB security keys and NFC-enabled smartcards, keys, or wearables.|
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 09/13/2022 Last updated : 09/15/2022
The Azure AD accounts can be in the same tenant or different tenants. Guest acco
To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met: - Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity. -- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 8.0 or greater.
+- Latest version of Microsoft Authenticator installed on devices running iOS or Android.
- For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android. - For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in: - balas@contoso.com
active-directory Tutorial Enable Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
In this tutorial you learn how to:
To complete this tutorial, you need the following resources and privileges:
-* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled.
+* A working Azure AD tenant with Azure AD Premium P1 or trial licenses enabled.
* If you need to, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An account with *Conditional Access Administrator*, *Security Administrator*, or *Global Administrator* privileges. Some MFA settings can also be managed by an *Authentication Policy Administrator*. For more information, see [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
Title: View integration information about an authorization system in Permissions Management
description: View integration information about an authorization system in Permissions Management.
Last updated: 02/23/2022
# View integration information about an authorization system
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
To view or copy BitLocker keys, you need to be the owner of the device or have o
- Security Reader ## Block users from viewing their BitLocker keys (preview)
-In this preivew, admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices.
+In this preview, admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices.
To disable/enable self-service BitLocker recovery:
You must be assigned one of the following roles to view or manage device setting
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security.-- **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected and **NONE** is unavailable.-- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
+- **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected, and **NONE** is unavailable.
+- **Require Multi-Factor Authentication to register or join devices with Azure AD**:
+ - We recommend organizations use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication. You must configure this toggle to **No** if you use a Conditional Access policy to require multifactor authentication.
+ - This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
> [!NOTE] > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
- > [!IMPORTANT]
- > - We recommend that you use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication for joining or registering a device.
- > - You must configure this setting to **No** if you're using Conditional Access policy to require multifactor authentication.
- - **Maximum number of devices**: This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Azure AD will set it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits. > [!NOTE]
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
# Azure Active Directory security operations guide for Applications
-Applications provide an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Since applications often run without human intervention, the attacks may be harder to detect.
+Applications have an attack surface for security breaches and must be monitored. While not targeted as often as user accounts, breaches can occur. Because applications often run without human intervention, the attacks may be harder to detect.
-This article provides guidance to monitor and alert on application events and helps enable you to:
+This article provides guidance to monitor and alert on application events. It's regularly updated to help ensure you:
-* Prevent malicious applications from getting unwarranted access to data.
+* Prevent malicious applications from getting unwarranted access to data
-* Prevent existing applications from being compromised by bad actors.
+* Prevent applications from being compromised by bad actors
-* Gather insights that enable you to build and configure new applications more securely.
+* Gather insights that enable you to build and configure new applications more securely
If you're unfamiliar with how applications work in Azure Active Directory (Azure AD), see [Apps and service principals in Azure AD](../develop/app-objects-and-service-principals.md).
If you're unfamiliar with how applications work in Azure Active Directory (Azure
## What to look for
-As you monitor your application logs for security incidents, review the following to help differentiate normal activity from malicious activity. The following events may indicate security concerns and each are covered in the rest of the article.
+As you monitor your application logs for security incidents, review the following list to help differentiate normal activity from malicious activity. The following events might indicate security concerns. Each is covered in the article.
-* Any changes occurring outside of normal business processes and schedules.
+* Any changes occurring outside normal business processes and schedules
* Application credentials changes * Application permissions
- * Service principal assigned to an Azure AD or Azure RBAC role.
+ * Service principal assigned to an Azure AD or an Azure role-based access control (RBAC) role
- * Applications that are granted highly privileged permissions.
+ * Applications granted highly privileged permissions
- * Azure Key Vault changes.
+ * Azure Key Vault changes
- * End user granting applications consent.
+ * End user granting applications consent
- * Stopped end user consent based on level of risk.
+ * Stopped end-user consent based on level of risk
* Application configuration changes
- * Universal resource identifier (URI) changed or non-standard.
+ * Uniform resource identifier (URI) changed or non-standard
- * Changes to application owners.
+ * Changes to application owners
- * Logout URLs modified.
+ * Log-out URLs modified
## Where to look
The log files you use for investigation and monitoring are:
* [Azure Key Vault logs](../../key-vault/general/logging.md)
-From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools, which allow more automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** - enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities.
-* **[Azure Monitor](../../azure-monitor/overview.md)** ΓÇô enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Monitor](../../azure-monitor/overview.md)** - automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud appsΓÇÖ compliance.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
-Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - detects risk on workload identities across sign-in behavior and offline indicators of compromise.
- The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+Much of what you monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. Use the workbook to view a summary, and identify the effects over a time period. You can use the workbook to investigate the sign-ins of a specific user.
-## Application credentials
+The remainder of this article is what we recommend you monitor and alert on. It's organized by the type of threat. Where there are pre-built solutions, we link to them or provide samples after the table. Otherwise, you can build alerts using the preceding tools.
-Many applications use credentials to authenticate in Azure AD. Any additional credentials added outside of expected processes could be a malicious actor using those credentials. We strongly recommend using X509 certificates issued by trusted authorities or Managed Identities instead of using client secrets. However, if you need to use client secrets, follow good hygiene practices to keep applications safe. Note, application and service principal updates are logged as two entries in the audit log.
+## Application credentials
-* Monitor applications to identify those with long credential expiration times.
+Many applications use credentials to authenticate in Azure AD. Any other credentials added outside expected processes could be a malicious actor using those credentials. We recommend using X509 certificates issued by trusted authorities or Managed Identities instead of using client secrets. However, if you need to use client secrets, follow good hygiene practices to keep applications safe. Note, application and service principal updates are logged as two entries in the audit log.
-* Replace long-lived credentials with credentials that have a short life span. Take steps to ensure that credentials don't get committed in code repositories and are stored securely.
+* Monitor applications to identify long credential expiration times.
+* Replace long-lived credentials with a short life span. Ensure credentials don't get committed in code repositories, and are stored securely.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | -|-|-|-|-|
-| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are:<li> added outside of normal business hours or workflows.<li> of types not used in your environment.<li> added to a non-SAML flow supporting service principal. |
-| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| State and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate those with a longer than allowed lifetime. See PowerShell script following this table. |
+| Added credentials to existing applications| High| Azure AD Audit logs| Service-Core Directory, Category-ApplicationManagement <br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application| Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Credentials with a lifetime longer than your policies allow.| Medium| Microsoft Graph| Start and end date of Application Key credentials<br>-and-<br>Application password credentials| You can use MS Graph API to find the start and end date of credentials, and evaluate longer-than-allowed lifetimes. See PowerShell script following this table. |
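The PowerShell script the table refers to isn't included in this excerpt. As an illustration only (not that script), here's a hedged sketch of the same check against Microsoft Graph in Python, assuming you've already acquired an access token with permission to read applications; it doesn't handle paging for large tenants.

```python
import datetime
import requests

GRAPH_TOKEN = "<access-token-with-Application.Read.All>"  # assumed to be acquired separately
MAX_LIFETIME_DAYS = 365                                    # your policy's allowed credential lifetime

resp = requests.get(
    "https://graph.microsoft.com/v1.0/applications"
    "?$select=displayName,passwordCredentials,keyCredentials",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Flag any application credential whose lifetime exceeds the allowed policy.
for app in resp.json().get("value", []):
    for cred in app.get("passwordCredentials", []) + app.get("keyCredentials", []):
        start = datetime.datetime.fromisoformat(cred["startDateTime"].replace("Z", "+00:00"))
        end = datetime.datetime.fromisoformat(cred["endDateTime"].replace("Z", "+00:00"))
        lifetime = (end - start).days
        if lifetime > MAX_LIFETIME_DAYS:
            print(f"{app['displayName']}: credential {cred.get('keyId')} lives {lifetime} days")
```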
- The following pre-built monitoring and alerts are available.
+ The following pre-built monitoring and alerts are available:
-* Microsoft Sentinel ΓÇô [Alert when new app or service principle credentials added](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)
+* Microsoft Sentinel - [Alert when new app or service principal credentials added](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/NewAppOrServicePrincipalCredential.yaml)
* Azure Monitor - [Azure AD workbook to help you assess Solorigate risk - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)
Many applications use credentials to authenticate in Azure AD. Any additional cr
## Application permissions
-Like an administrator account, applications can be assigned privileged roles. Apps can be assigned Azure AD roles, such as global administrator, or Azure RBAC roles such as subscription owner. Because they can run without a user present and as a background service, closely monitor anytime an application is granted a highly privileged role or permission.
+Like an administrator account, applications can be assigned privileged roles. Apps can be assigned Azure AD roles, such as Global Administrator, or Azure RBAC roles such as Subscription Owner. Because they can run without a user, and as a background service, closely monitor when an application is granted a highly privileged role or permission.
### Service principal assigned to a role - | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: ΓÇ£Add member to roleΓÇ¥ or ΓÇ£Add eligible member to roleΓÇ¥<br>-or-<br>ΓÇ£Add scoped member to role.ΓÇ¥| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures. |
+| App assigned to Azure RBAC role, or Azure AD Role| High to Medium| Azure AD Audit logs| Type: service principal<br>Activity: "Add member to role" or "Add eligible member to role"<br>-or-<br>"Add scoped member to role."| For highly privileged roles such as Global Administrator, risk is high. For lower privileged roles risk is medium. Alert anytime an application is assigned to an Azure role or Azure AD role outside of normal change management or configuration procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedPrivilegedRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Application granted highly privileged permissions
-Applications should also follow the principal of least privilege. Investigate application permissions to ensure they're truly needed. You can create an [app consent grant report](https://aka.ms/getazureadpermissions) to help identify existing applications and highlight privileged permissions.
+Applications should follow the principle of least privilege. Investigate application permissions to ensure they're needed. You can create an [app consent grant report](https://aka.ms/getazureadpermissions) to help identify applications and highlight privileged permissions.
| What to monitor|Risk Level|Where| Filter/sub-filter| Notes| |-|-|-|-|-|
-| App granted highly privileged permissions, such as permissions with ΓÇ£*.AllΓÇ¥ (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| ΓÇ£Add app role assignment to service principalΓÇ¥, <br>- where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as ΓÇ£*.AllΓÇ¥ (Directory.ReadWrite.All) or wide ranging permissions (Mail.*) |
-| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| ΓÇ£Add app role assignment to service principalΓÇ¥, <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>ΓÇ£Add delegated permission grantΓÇ¥, <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures. |
-| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| ΓÇ£Add delegated permission grantΓÇ¥ <br>-or-<br>ΓÇ£Add app role assignment to service principalΓÇ¥, <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row. |
-| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| ΓÇ£Add app role assignment to service principalΓÇ¥, <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row. |
-| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| ΓÇ£Add delegated permission grantΓÇ¥, where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is ΓÇ£AllPrincipalsΓÇ¥.| Alert as in the preceding row. |
+| App granted highly privileged permissions, such as permissions with "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)| High |Azure AD Audit logs| "Add app role assignment to service principal", <br>- where-<br> Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>AppRole.Value identifies a highly privileged application permission (app role).| Apps granted broad permissions such as "*.All" (Directory.ReadWrite.All) or wide ranging permissions (Mail.*)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrator granting either application permissions (app roles) or highly privileged delegated permissions |High| Microsoft 365 portal| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph) <br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions.| Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/MailPermissionsAddedToApplication.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. |High| Azure AD Audit logs| "Add delegated permission grant" <br>-or-<br>"Add app role assignment to service principal", <br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on)| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Application permissions (app roles) for other APIs are granted |Medium| Azure AD Audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row.<br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Highly privileged delegated permissions are granted on behalf of all users |High| Azure AD Audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/cloud-app-security/investigate-risky-oauth). ### Azure Key Vault
-Azure Key Vault can be used to store your tenantΓÇÖs secrets. We recommend you pay particular attention to any changes to Key Vault configuration and activities.
+Use Azure Key Vault to store your tenant's secrets. We recommend you pay attention to any changes to Key Vault configuration and activities.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for <li> any access to Key Vault outside of regular processes and hours. <li> any changes to Key Vault ACL. |
+| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-After setting up Azure Key Vault, be sure to [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli), which shows [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone call, text message, or [event grid](../../key-vault/general/event-grid-overview.md) notification if health is impacted. Additionally, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights will give you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under ΓÇ£MonitoringΓÇ¥ selecting ΓÇ£LogsΓÇ¥.
+After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli). See [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](../../key-vault/general/event-grid-overview.md) notification, if health is affected. In addition, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under "Monitoring" selecting "Logs".
### End-user consent | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: <li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
+| End-user consent to application| Low| Azure AD Audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high profile or highly privileged accounts, app requests high-risk permissions, apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-
-The act of consenting to an application is not in itself malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
+The act of consenting to an application isn't malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
For more information on consent operations, see the following resources:
For more information on consent operations, see the following resources:
* [Incident response playbook - App consent grant investigation](/security/compass/incident-response-playbook-app-consent)
-### End user stopped due to risk-based consent
+### End user stopped due to risk-based consent
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for:<li>high profile or highly privileged accounts.<li> app requests high-risk permissions<li>apps with suspicious names, for example generic, misspelled, etc. |
+| End-user consent stopped due to risk-based consent| Medium| Azure AD Audit logs| Core Directory / ApplicationManagement / Consent to application<br> Failure status reason = Microsoft.online.Security.userConsent<br>BlockedForRiskyAppsExceptions| Monitor and analyze any time consent is stopped due to risk. Look for: high profile or highly privileged accounts, app requests high-risk permissions, or apps with suspicious names, for example generic, misspelled, etc.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/End-userconsentstoppedduetorisk-basedconsent.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-## Application Authentication Flows
-There are several flows defined in the OAuth 2.0 protocol. The recommended flow for an application depends on the type of application that is being built. In some cases, there is a choice of flows available to the application, and in this case, some authentication flows are recommended over others. Specifically, resource owner password credentials (ROPC) should be avoided if at all possible as this requires the user to expose their current password credentials to the application directly. The application then uses those credentials to authenticate the user against the identity provider. Most applications should use the auth code flow, or auth code flow with Proof Key for Code Exchange (PKCE), as this flow is highly recommended.
+## Application authentication flows
+There are several flows in the OAuth 2.0 protocol. The recommended flow for an application depends on the type of application being built. In some cases, an application has a choice of flows; when it does, some authentication flows are recommended over others. Specifically, avoid resource owner password credentials (ROPC) because it requires the user to expose their current password credentials to the application. The application then uses the credentials to authenticate the user against the identity provider. Most applications should use the auth code flow, or the auth code flow with Proof Key for Code Exchange (PKCE), because it doesn't expose the user's credentials to the application.
-The only scenario where ROPC is suggested is for automated testing of applications. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details.
+The only scenario where ROPC is suggested is for automated application testing. See [Run automated integration tests](../develop/test-automate-integration-testing.md) for details.
-
-Device code flow is another OAuth 2.0 protocol flow specifically for input constrained devices and is not used in all environments. If this type of flow is seen in the environment and not being used in an input constrained device scenario further investigation is warranted. This can be a misconfigured application or potentially something malicious.
+Device code flow is another OAuth 2.0 protocol flow for input-constrained devices and isn't used in all environments. When device code flow appears in the environment but isn't being used in an input-constrained device scenario, more investigation is warranted: it may indicate a misconfigured application or potentially something malicious.
Monitor application authentication using the following information: | What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow.This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)|
-|Applications that are using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input constrained devices which may not be present in all environments. If successful device code flows are seen without an environment need for them they should be further investigated for validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)|
+| Applications that are using the ROPC authentication flow|Medium | Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-ROPC| High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow. This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Applications using the Device code flow |Low to medium|Azure AD Sign-ins log|Status=Success<br><br>Authentication Protocol-Device Code|Device code flows are used for input constrained devices, which may not be in all environments. If successful device code flows appear, without a need for them, investigate for validity. For more information, see [Microsoft identity platform and the OAuth 2.0 device authorization grant flow](../develop/v2-oauth2-device-code.md)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+ ## Application configuration changes
-Monitor changes to any applicationΓÇÖs configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and logout URL.
+Monitor changes to application configuration. Specifically, configuration changes to the uniform resource identifier (URI), ownership, and log-out URL.
-### Dangling URI and Redirect URI changes
+### Dangling URI and Redirect URI changes
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success ΓÇô Property Name AppAddress| Look for dangling URIs, for example, that point to a domain name that no longer exists or one that you donΓÇÖt explicitly own. |
-| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success ΓÇô Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you do not control. |
+| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-Alert anytime these changes are detected.
+Alert when these changes are detected.
### AppID URI added, modified, or removed - | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update<br>Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI. |
+| Changes to AppID URI| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update<br>Application<br>Activity: Update Service principal| Look for any AppID URI modifications, such as adding, modifying, or removing the URI.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationIDURIChanged.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+Alert when these changes are detected outside approved change management procedures.
-Alert any time these changes are detected outside of approved change management procedures.
-
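For ad-hoc review of AppID URI changes, a hedged sketch such as the following lists modified properties on application and service principal updates so you can spot identifier URI edits. The `IdentifierUri` substring filter is an assumption; inspect the raw `modifiedProperties` payload for a known change to confirm how the property is named in your logs.

```kusto
// Sketch: application/service principal updates whose modified property names
// look like identifier URI changes. The property-name filter is an assumption.
AuditLogs
| where OperationName in~ ("Update application", "Update service principal")
| mv-expand ModifiedProperty = TargetResources[0].modifiedProperties
| extend PropertyName = tostring(ModifiedProperty.displayName)
| where PropertyName contains "IdentifierUri"
| project TimeGenerated, OperationName, Actor = InitiatedBy, PropertyName, ModifiedProperty
```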
-### New Owner
-
+### New owner
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities. |
+| Changes to application ownership| Medium| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Add owner to application| Look for any instance of a user being added as an application owner outside of normal change management activities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationOwnership.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
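The Sentinel template above covers ownership changes; a minimal Log Analytics sketch for the same signal is shown below. The excluded account is a hypothetical placeholder for whatever accounts your change management process allows to add owners.

```kusto
// Sketch: owners added to applications, excluding expected change-management
// accounts. The exclusion list is a hypothetical placeholder.
AuditLogs
| where OperationName =~ "Add owner to application" and Result == "success"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in ("appadmin@contoso.com")
| project TimeGenerated, Actor, TargetResources
```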
-### Logout URL modified or removed
+### Log-out URL modified or removed
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Changes to logout URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principle| Look for any modifications to a sign out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. |
-
-## Additional Resources
+| Changes to log-out URL| Low| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal| Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoApplicationLogoutURL.yaml) |
-The following are links to useful resources:
+## Resources
* GitHub Azure AD toolkit - [https://github.com/microsoft/AzureADToolkit](https://github.com/microsoft/AzureADToolkit)
* OAuth attack detection guidance - [Unusual addition of credentials to an OAuth app](/cloud-app-security/investigate-anomaly-alerts)
-Azure AD monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../..//azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
+* Azure AD monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../..//azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
- ## Next steps
-
-See these security operations guide articles:
+## Next steps
[Azure AD security operations overview](security-operations-introduction.md) [Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+ [Security operations for privileged accounts](security-operations-privileged-accounts.md) [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-[Security operations for applications](security-operations-applications.md)
- [Security operations for devices](security-operations-devices.md)
-
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Consumer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-consumer-accounts.md
+
+ Title: Azure Active Directory security operations for consumer accounts
+description: Guidance to establish baselines and how to monitor and alert on potential security issues with consumer accounts.
+++++++ Last updated : 07/15/2021+++++
+# Azure Active Directory security operations for consumer accounts
+
+Activities associated with consumer identities are another critical area for your organization to protect and monitor. This article is for Azure AD B2C tenants and provides guidance for monitoring consumer account activities. The activities are organized by:
+
+* Consumer account activities
+* Privileged account activities
+* Application activities
+* Infrastructure activities
+
+If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding.
+
+## Define a baseline
+
+To discover anomalous behavior, you first must define normal and expected behavior. Defining expected behavior for your organization helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
+
+Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
+
+Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization.
+
+* **Consumer account creation** – evaluate the following:
+
+  * Strategy and principles for tools and processes used for creating and managing consumer accounts. For example, are there standard attributes and formats that are applied to consumer account attributes?
+
+ * Approved sources for account creation. For example, onboarding custom policies, customer provisioning or migration tool.
+
+ * Alert strategy for accounts created outside of approved sources. Is there a controlled list of organizations your organization collaborates with?
+
+ * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved consumer account administrator.
+
+ * Monitoring and alert strategy for consumer accounts missing standard attributes, such as customer number or not following organizational naming conventions.
+
+ * Strategy, principles, and process for account deletion and retention.
+
+## Where to look
+
+The log files you use for investigation and monitoring are:
+
+* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
+
+* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+
+* [Risky Users log](../identity-protection/howto-identity-protection-investigate-risk.md)
+
+* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md)
+
+From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+
+* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM** – [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+
+* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+
+ The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+
+## Consumer accounts
+
+| What to monitor | Risk Level | Where | Filter / subfilter | Notes |
+| - | - | - | - | - |
+| Large number of account creations or deletions | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) = CPIM Service<br>-and-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) = CPIM Service | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Accounts created and deleted by non-approved users or processes. | Medium | Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>Initiated by (actor) != CPIM Service<br>and-or<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) != CPIM Service | If the actors are non-approved users, configure to send an alert. |
+| Accounts assigned to a privileged role. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) == CPIM Service<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation. |
+| Failed sign-in attempts. | Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Smart lock-out events. | Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Failed authentications from countries you don't operate out of. | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to the city names you provide. |
+| Increased failed authentications of any type. | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a set threshold, monitor and alert if failures increase by 10% or greater. |
+| Account disabled/blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This could indicate someone is trying to gain access to an account after they have left an organization. Although the account is blocked, it's important to log and alert on this activity. |
+| Measurable increase of successful sign-ins. | Low | Azure AD Sign-ins log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater. |
+
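To establish the baseline suggested in the first row, a daily count of creations and deletions initiated by the CPIM Service is a reasonable starting point. The sketch below assumes the actor is recorded under `InitiatedBy.app.displayName`; check how your audit entries record the CPIM Service before relying on it.

```kusto
// Sketch: daily consumer account creations and deletions initiated by the
// CPIM Service, for baselining and threshold alerts. The InitiatedBy path is
// an assumption - confirm it against a sample AuditLogs record.
AuditLogs
| where OperationName in~ ("Add user", "Delete user") and Result == "success"
| where tostring(InitiatedBy.app.displayName) =~ "CPIM Service"
| summarize Creations = countif(OperationName =~ "Add user"),
            Deletions = countif(OperationName =~ "Delete user")
            by bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```

Once you have a few weeks of data, set the alert threshold a comfortable margin above the observed daily peak.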
+## Privileged accounts
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Failure because of Conditional Access requirement | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account. |
+| Interrupt | High, medium | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker has the password for the account but can't pass the MFA challenge. |
+| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
+| Account disabled or blocked for sign-ins | low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity. |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details<br> Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the MFA prompt, which could indicate an attacker has the password for the account. |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on fraud report tenant-level settings) | Privileged user indicated no instigation of the MFA prompt. This can indicate an attacker has the account password. |
+| Privileged account sign-ins outside of expected controls | High | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account> <br> Location = \<unapproved location> <br> IP address = \<unapproved IP><br>Device info = \<unapproved Browser, Operating System> | Monitor and alert on any entries that you've defined as unapproved. |
+| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats. |
+| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts. |
+| Changes to authentication methods | High | Azure AD Audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker adding an auth method to the account so they can have continued access. |
+| Identity Provider updated by non-approved actors | High | Azure AD Audit logs | Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker adding an auth method to the account so they can have continued access. |
+| Identity Provider deleted by non-approved actors | High | Azure AD Audit logs | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | This change could be an indication of an attacker adding an auth method to the account so they can have continued access. |
+
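For the failed sign-in and lockout rows above, a simple hourly aggregation makes a workable first alert. This is a sketch only: the error codes and application names come from the table, and the threshold of 10 is an arbitrary example to replace with your own baseline.

```kusto
// Sketch: repeated bad-password (50126) and smart lockout (50053) failures
// against the B2C applications named in the table. Threshold is an example.
SigninLogs
| where ResultType in ("50126", "50053")
| where AppDisplayName in ("CPIM PowerShell Client", "ProxyIdentityExperienceFramework")
| summarize Failures = count() by UserPrincipalName, ResultType, bin(TimeGenerated, 1h)
| where Failures > 10
```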
+## Applications
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| Added credentials to existing applications | High | Azure AD Audit logs | Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application | Alert when credentials are: added outside of normal business hours or workflows, of types not used in your environment, or added to a non-SAML flow supporting service principal. |
+| App assigned to an Azure role-based access control (RBAC) role, or Azure AD Role | High to medium | Azure AD Audit logs | Type: service principal<br>Activity: "Add member to role"<br>-or-<br>"Add eligible member to role"<br>-or-<br>"Add scoped member to role." | |
+| App granted highly privileged permissions, such as permissions with ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.) | High | Azure AD Audit logs | N/A | Apps granted broad permissions such as ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.) |
+| Administrator granting either application permissions (app roles) or highly privileged delegated permissions | High | Microsoft 365 portal | "Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-or-<br>"Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions. | Alert when a global administrator, application administrator, or cloud application administrator consents to an application. Especially look for consent outside of normal activity and change procedures. |
+| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD. | High | Azure AD Audit logs | "Add delegated permission grant"<br>-or-<br>"Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph, Exchange Online, and so on) | Alert as in the preceding row. |
+| Highly privileged delegated permissions are granted on behalf of all users | High | Azure AD Audit logs | "Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>DelegatedPermissionGrant.Scope includes high-privilege permissions<br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals". | Alert as in the preceding row. |
+| Applications that are using the ROPC authentication flow | Medium | Azure AD Sign-ins log | Status=Success<br>Authentication Protocol-ROPC | High level of trust is being placed in this application as the credentials can be cached or stored. Move if possible to a more secure authentication flow. This should only be used in automated testing of applications, if at all. For more information, see [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md). |
+| Dangling URI | High | Azure AD Logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | For example, look for dangling URIs that point to a domain name that no longer exists or one you don't own. |
+| Redirect URI configuration changes | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are **not** unique to the application, URIs that point to a domain you don't control. |
+| Changes to AppID URI | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for any AppID URI modifications, such as adding, modifying, or removing the URI. |
+| Changes to application ownership | Medium | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for any instance of a user being added as an application owner outside of normal change management activities. |
+| Changes to log-out URL | Low | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal | Look for any modifications to a sign-out URL. Blank entries or entries to non-existent locations would stop a user from terminating a session. |
+
+## Infrastructure
+
+| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
+| - | - | - | - | - |
+| New Conditional Access Policy created by non-approved actors | High | Azure AD Audit logs | Activity: Add conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access? |
+| Conditional Access Policy removed by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access? |
+| Conditional Access Policy updated by non-approved actors | High | Azure AD Audit logs | Activity: Update conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>Review Modified Properties and compare "old" vs "new" value |
+| B2C Custom policy created by non-approved actors | High | Azure AD Audit logs| Activity: Create custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| B2C Custom policy updated by non-approved actors | High | Azure AD Audit logs| Activity: Get custom policies<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| B2C Custom policy deleted by non-approved actors | Medium |Azure AD Audit logs | Activity: Delete custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Custom Policies changes. Is Initiated by (actor): approved to make changes to Custom Policies? |
+| User Flow created by non-approved actors | High |Azure AD Audit logs | Activity: Create user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| User Flow updated by non-approved actors | High | Azure AD Audit logs| Activity: Update user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| User Flow deleted by non-approved actors | Medium | Azure AD Audit logs| Activity: Delete user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on User Flow changes. Is Initiated by (actor): approved to make changes to User Flows? |
+| API Connectors created by non-approved actors | Medium | Azure AD Audit log| Activity: Create API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| API Connectors updated by non-approved actors | Medium | Azure AD Audit logs| Activity: Update API connector<br>Category: ResourceManagement<br>Target: User Principal Name: ResourceManagement | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| API Connectors deleted by non-approved actors | Medium | Azure AD Audit log|Activity: Delete API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API Connector changes. Is Initiated by (actor): approved to make changes to API Connectors? |
+| Identity Provider created by non-approved actors | High |Azure AD Audit log | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
+| Identity Provider updated by non-approved actors | High | Azure AD Audit log| Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
+| Identity Provider deleted by non-approved actors | Medium | Azure AD Audit log | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on Identity Provider changes. Is Initiated by (actor): approved to make changes to Identity Provider configuration? |
++
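Most rows in this table follow the same pattern: a policy-related activity initiated by an actor who isn't on your approved list. A single hedged sketch can therefore cover several of them; the activity names below are taken from the table, and the allow list is a hypothetical placeholder for your approved policy administrators.

```kusto
// Sketch: Conditional Access and B2C custom policy changes made by actors
// outside an approved list. The allow list is a hypothetical placeholder.
AuditLogs
| where OperationName in~ ("Add conditional access policy",
                           "Update conditional access policy",
                           "Delete conditional access policy",
                           "Create custom policy",
                           "Delete custom policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in ("policyadmin@contoso.onmicrosoft.com")
| project TimeGenerated, OperationName, Actor, TargetResources
```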
+## Next steps
+
+See these security operations guide articles:
+
+[Azure AD security operations overview](security-operations-introduction.md)
+
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
# Azure Active Directory security operations for devices
-Devices aren't commonly targeted in identity-based attacks, but *can* be used to satisfy and trick security controls, or to impersonate users. Devices can have one of four relationships with Azure AD:
+Devices aren't commonly targeted in identity-based attacks, but *can* be used to satisfy and trick security controls, or to impersonate users. Devices can have one of four relationships with Azure AD:
* Unregistered
Devices aren't commonly targeted in identity-based attacks, but *can* be used to
* [Azure AD joined](../devices/concept-azure-ad-join.md)
-* [Hybrid Azure AD joined](../devices/concept-azure-ad-join-hybrid.md)
-ΓÇÄ
+* [Hybrid Azure AD joined](../devices/concept-azure-ad-join-hybrid.md)
Registered and joined devices are issued a [Primary Refresh Token (PRT),](../devices/concept-primary-refresh-token.md) which can be used as a primary authentication artifact, and in some cases as a multifactor authentication artifact. Attackers may try to register their own devices, use PRTs on legitimate devices to access business data, steal PRT-based tokens from legitimate user devices, or find misconfigurations in device-based controls in Azure Active Directory. With Hybrid Azure AD joined devices, the join process is initiated and controlled by administrators, reducing the available attack methods.
To reduce the risk of bad actors attacking your infrastructure through devices,
## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../..//key-vault/general/logging.md?tabs=Vault) From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../..//azure-monitor/overview.md)** ΓÇô enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+Much of what you'll monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
- The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+ The rest of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
- ## Device registrations and joins outside policy
+## Device registrations and joins outside policy
-Azure AD registered and Azure AD joined devices possess primary refresh tokens (PRTs), which are the equivalent of a single authentication factor. These devices can at times contain strong authentication claims. For more information on when PRTs contain strong authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md)? To keep bad actors from registering or joining devices, require multifactor authentication (MFA) to register or join devices. Then monitor for any devices registered or joined without MFA. You'll also need to watch for changes to MFA settings and policies, and device compliance policies.
+Azure AD registered and Azure AD joined devices possess primary refresh tokens (PRTs), which are the equivalent of a single authentication factor. These devices can at times contain strong authentication claims. For more information on when PRTs contain strong authentication claims, see [When does a PRT get an MFA claim](../devices/concept-primary-refresh-token.md)? To keep bad actors from registering or joining devices, require multi-factor authentication (MFA) to register or join devices. Then monitor for any devices registered or joined without MFA. You'll also need to watch for changes to MFA settings and policies, and device compliance policies.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: <br>Any device registered or joined without MFA<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: <br>The toggle being set to off. There isn't audit log entry. Schedule periodic checks. |
-| Changes to Conditional Access policies requiring domain joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: <br><li> Change to any policy requiring domain joined or compliant.<li>Changes to trusted locations.<li> Accounts or devices added to MFA policy exceptions. |
-
+| Device registration or join completed without MFA| Medium| Sign-in logs| Activity: successful authentication to Device Registration Service. <br>And<br>No MFA required| Alert when: Any device registered or joined without MFA<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to the Device Registration MFA toggle in Azure AD| High| Audit log| Activity: Set device registration policies| Look for: the toggle being set to off. There isn't an audit log entry for this change, so schedule periodic checks.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to Conditional Access policies requiring domain joined or compliant device.| High| Audit log| Changes to CA policies<br>| Alert when: Change to any policy requiring domain joined or compliant, changes to trusted locations, or accounts or devices added to MFA policy exceptions. |
You can create an alert that notifies appropriate administrators when a device is registered or joined without MFA by using Microsoft Sentinel.
+~~~kusto
+SigninLogs
+| where ResourceDisplayName == "Device Registration Service"
+| where ConditionalAccessStatus == "success"
+| where AuthenticationRequirement != "multiFactorAuthentication"
+~~~
You can also use [Microsoft Intune to set and monitor device compliance policies](/mem/intune/protect/device-compliance-get-started).
-## Non-compliant device sign in
+## Non-compliant device sign-in
-It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices.
+It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices.
-[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Azure Active Directory can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a deviceΓÇÖs compliance status.
+[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Azure Active Directory can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a deviceΓÇÖs compliance status.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Sign-ins by unknown devices| Low| Sign-in logs| <li>DeviceDetail is empty<li>Single factor authentication<li>From a non-trusted location| Look for: <br><li>any access from out of compliance devices.<li>any access without MFA or trusted location |
-
+| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when: any sign in by non-compliant devices, or any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuccessfulSigninFromNon-CompliantDevice.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Sign-ins by unknown devices| Low| Sign-in logs| DeviceDetail is empty, single factor authentication, or from a non-trusted location| Look for: any access from out of compliance devices, any access without MFA or trusted location<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AnomolousSingleFactorSignin.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
### Use LogAnalytics to query
SigninLogs
| where ConditionalAccessStatus == "success"
```
-
**Sign-ins by unknown devices**
SigninLogs
| where NetworkLocationDetails == "[]"
```
-
+ ## Stale devices
-Stale devices include devices that haven't signed in for a specified time period. Devices can become stale when a user gets a new device or loses a device, or when an Azure AD joined device is wiped or reprovisioned. Devices may also remain registered or joined when the user is no longer associated with the tenant. Stale devices should be removed so that their primary refresh tokens (PRTs) cannot be used.
+Stale devices include devices that haven't signed in for a specified time period. Devices can become stale when a user gets a new device or loses a device, or when an Azure AD joined device is wiped or reprovisioned. Devices might also remain registered or joined when the user is no longer associated with the tenant. Stale devices should be removed so the primary refresh tokens (PRTs) cannot be used.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
Attackers who have compromised a user's device may retrieve the [BitLocker](/w
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for <br><li>key retrieval`<li> other anomalous behavior by users retrieving keys. |
-
+| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for: key retrieval, other anomalous behavior by users retrieving keys.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/BitLockerKeyRetrieval.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
In LogAnalytics, create a query such as the following.
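A minimal sketch of such a query is shown here, using the operation name from the table above. The per-day grouping and the threshold are placeholders to tune; actors who retrieve keys with no clear operational need are the ones to investigate.

```kusto
// Sketch: users reading BitLocker keys, grouped per day. Threshold is an example.
AuditLogs
| where OperationName =~ "Read BitLocker key"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| summarize Retrievals = count() by Actor, bin(TimeGenerated, 1d)
| where Retrievals > 1
```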
Global administrators and cloud Device Administrators automatically get local ad
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for:<li> new users added to these Azure AD roles.<li> Subsequent anomalous behavior by machines or users. |
-
+| Users added to global or device admin roles| High| Audit logs| Activity type = Add member to role.| Look for: new users added to these Azure AD roles, subsequent anomalous behavior by machines or users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
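A hedged sketch for the row above is shown below. It assumes the role name appears in the `Role.DisplayName` modified property of the audit entry; inspect a known role assignment event to confirm how your tenant records it before alerting.

```kusto
// Sketch: additions to the Global Administrator or Cloud Device Administrator
// roles. The Role.DisplayName property path is an assumption.
AuditLogs
| where OperationName =~ "Add member to role" and Result == "success"
| mv-expand ModifiedProperty = TargetResources[0].modifiedProperties
| where tostring(ModifiedProperty.displayName) == "Role.DisplayName"
| extend RoleName = trim('"', tostring(ModifiedProperty.newValue))
| where RoleName in ("Global Administrator", "Cloud Device Administrator")
| project TimeGenerated, RoleName, Actor = InitiatedBy, TargetResources
```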
## Non-Azure AD sign-ins to virtual machines
Azure AD sign-in for LINUX allows organizations to sign in to their Azure LINUX
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Non-Azure AD account signing in, especially over SSH| High| Local authentication logs| Ubuntu: <br>ΓÇÄmonitor /var/log/auth.log for SSH use<br>RedHat: <br>monitor /var/log/sssd/ for SSH use| Look for:<li> entries [where non-Azure AD accounts are successfully connecting to VMs.](../devices/howto-vm-sign-in-azure-ad-linux.md) <li>See following example. |
-
+| Non-Azure AD account signing in, especially over SSH| High| Local authentication logs| Ubuntu: <br>monitor /var/log/auth.log for SSH use<br>RedHat: <br>monitor /var/log/sssd/ for SSH use| Look for: entries [where non-Azure AD accounts are successfully connecting to VMs](../devices/howto-vm-sign-in-azure-ad-linux.md). See following example. |
Ubuntu example:
May 9 23:49:43 ubuntu1804 sshd[3909]: pam_unix(sshd:session): session opened for user localusertest01 by (uid=0).
-You can set policy for LINUX VM sign-ins, and detect and flag Linux VMs that have non-approved local accounts added. To learn more, see using [Azure Policy to ensure standards and assess compliance](../devices/howto-vm-sign-in-azure-ad-linux.md).
+You can set policy for LINUX VM sign-ins, and detect and flag Linux VMs that have non-approved local accounts added. To learn more, see using [Azure Policy to ensure standards and assess compliance](../devices/howto-vm-sign-in-azure-ad-linux.md).
### Azure AD sign-ins for Windows Server
Azure AD sign-in for Windows allows your organization to sign in to your Azure W
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Non-Azure AD account signing in, especially over RDP| High| Windows Server event logs| Interactive Login to Windows VM| Event 528, logon type 10 (RemoteInteractive).<br>Shows when a user signs in over Terminal Services or Remote Desktop. |
-
+| Non-Azure AD account sign-in, especially over RDP| High| Windows Server event logs| Interactive Login to Windows VM| Event 528, log-on type 10 (RemoteInteractive).<br>Shows when a user signs in over Terminal Services or Remote Desktop. |
-## Next Steps
-
-See these additional security operations guide articles:
+## Next steps
[Azure AD security operations overview](security-operations-introduction.md) [Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+ [Security operations for privileged accounts](security-operations-privileged-accounts.md) [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md) [Security operations for applications](security-operations-applications.md)
-[Security operations for devices](security-operations-devices.md)
-
-
[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-infrastructure.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
Infrastructure has many components where vulnerabilities can occur if not proper
* Hybrid Authentication components incl. Federation Servers
-* Policies
+* Policies
* Subscriptions
-Monitoring and alerting the components of your authentication infrastructure is critical. Any compromise can lead to a full compromise of the whole environment. Many enterprises that use Azure AD operate in a hybrid authentication environment. This means both cloud and on-premises components should be included in your monitoring and alerting strategy. Having a hybrid authentication environment also introduces another attack vector to your environment.
+Monitoring and alerting the components of your authentication infrastructure is critical. Any compromise can lead to a full compromise of the whole environment. Many enterprises that use Azure AD operate in a hybrid authentication environment. Cloud and on-premises components should be included in your monitoring and alerting strategy. Having a hybrid authentication environment also introduces another attack vector to your environment.
-We recommend all the components be considered Control Plane / Tier 0 assets, as well as the accounts used to manage them. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant.
+We recommend that all of these components, and the accounts used to manage them, be considered Control Plane / Tier 0 assets. Refer to [Securing privileged assets](/security/compass/overview) (SPA) for guidance on designing and implementing your environment. This guidance includes recommendations for each of the hybrid authentication components that could potentially be used for an Azure AD tenant.
A first step in being able to detect unexpected events and potential attacks is to establish a baseline. For all on-premises components listed in this article, see [Privileged access deployment](/security/compass/privileged-access-deployment), which is part of the Securing privileged assets (SPA) guide. ## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
-From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* [Microsoft Sentinel](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* [Azure Monitor](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* [Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Monitor](../../azure-monitor/overview.md)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
+
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-The remainder of this article describes what you should monitor and alert on and is organized by the type of threat. Where there are specific pre-built solutions, you will find links to them following the table. Otherwise, you can build alerts using the preceding tools.
+The remainder of this article describes what to monitor and alert on. It is organized by the type of threat. Where there are pre-built solutions, you'll find links to them after the table. Otherwise, you can build alerts using the preceding tools.
## Authentication infrastructure
In hybrid environments that contain both on-premises and cloud-based resources a
* [Securing privileged access overview](/security/compass/overview) – This article provides an overview of current techniques that use Zero Trust principles to create and maintain secure privileged access.
-* [Microsoft Defender for Identity monitored domain activities](/defender-for-identity/monitored-activities) - This article provides a comprehensive list of activities to monitor and set alerts for.
+* [Microsoft Defender for Identity monitored domain activities](/defender-for-identity/monitored-activities) - This article provides a comprehensive list of activities to monitor and set alerts for.
* [Microsoft Defender for Identity security alert tutorial](/defender-for-identity/understanding-security-alerts) - This article provides guidance on creating and implementing a security alert strategy. The following are links to specific articles that focus on monitoring and alerting your authentication infrastructure:
-* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - This article describes detection techniques you can use to help identify when non-sensitive accounts are used to gain access to sensitive accounts throughout your network.
+* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - Detection techniques to help identify when non-sensitive accounts are used to gain access to sensitive network accounts.
-* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts once they are logged.
+* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts after they're logged.
The following are specific things to look for: | What to monitor| Risk level| Where| Notes | | - | - | - | - |
-| Extranet lockout trends| High| Azure AD Connect Health| Use information at [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) for tools and techniques to help detect extranet lockout trends. |
+| Extranet lockout trends| High| Azure AD Connect Health| See [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md) for tools and techniques to help detect extranet lockout trends. |
| Failed sign-ins|High | Connect Health Portal| Export or download the Risky IP report and follow the guidance at [Risky IP report (public preview)](../hybrid/how-to-connect-health-adfs-risky-ip.md) for next steps. |
-| In privacy compliant| Low| Azure AD Connect Health| Configure Azure AD Connect Health to be disable data collections and monitoring using the [User privacy and Azure AD Connect Health](../hybrid/reference-connect-health-user-privacy.md) article. |
+| In privacy compliant| Low| Azure AD Connect Health| Configure Azure AD Connect Health to disable data collections and monitoring using the [User privacy and Azure AD Connect Health](../hybrid/reference-connect-health-user-privacy.md) article. |
| Potential brute force attack on LDAP| Medium| Microsoft Defender for Identity| Use sensor to help detect potential brute force attacks against LDAP. | | Account enumeration reconnaissance| Medium| Microsoft Defender for Identity| Use sensor to help perform account enumeration reconnaissance. | | General correlation between Azure AD and Azure AD FS|Medium | Microsoft Defender for Identity| Use capabilities to correlate activities between your Azure AD and Azure AD FS environments. |
+### Pass-through authentication monitoring
-
-
-### Pass-through authentication monitoring
-
-Azure Active Directory (Azure AD) Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory.
+Azure Active Directory (Azure AD) Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory.
The following are specific things to look for:

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80001 ΓÇô Unable to connect to Active Directory| Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they can connect to Active Directory. |
-| Azure AD pass-through authentication errors| Medium| Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS8002 - A timeout occurred connecting to Active Directory| Check to ensure that Active Directory is available and is responding to requests from the agents. |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80004 - The username passed to the agent was not valid| Ensure the user is attempting to sign in with the right username. |
-| Azure AD pass-through authentication errors|Medium | Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80005 - Validation encountered unpredictable WebException| A transient error. Retry the request. If it continues to fail, contact Microsoft support. |
-| Azure AD pass-through authentication errors| Medium| Application and ΓÇÄService Logs\Microsoft\AΓÇÄzureAdConnecΓÇÄt\AuthenticatioΓÇÄnAgent\Admin| AADSTS80007 - An error occurred communicating with Active Directory| Check the agent logs for more information and verify that Active Directory is operating as expected. |
-| Azure AD pass-through authentication errors|High | Win32 LogonUserA function API| Logon events 4624(s): An account was successfully logged on<br>- correlate with ΓÇô<br>4625(F): An account failed to log on| Use with the suspected usernames on the domain controller that is authenticating requests. Guidance at [LogonUserA function (winbase.h)](/windows/win32/api/winbase/nf-winbase-logonusera) |
-| Azure AD pass-through authentication errors| Medium| PowerShell script of domain controller| see query following table. | Use the information at [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md)for additional guidance. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80001 – Unable to connect to Active Directory| Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they can connect to Active Directory. |
+| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS8002 - A timeout occurred connecting to Active Directory| Check to ensure that Active Directory is available and is responding to requests from the agents. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80004 - The username passed to the agent was not valid| Ensure the user is attempting to sign in with the right username. |
+| Azure AD pass-through authentication errors|Medium | Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80005 - Validation encountered unpredictable WebException| A transient error. Retry the request. If it continues to fail, contact Microsoft support. |
+| Azure AD pass-through authentication errors| Medium| Application and Service Logs\Microsoft\AzureAdConnect\AuthenticationAgent\Admin| AADSTS80007 - An error occurred communicating with Active Directory| Check the agent logs for more information and verify that Active Directory is operating as expected. |
+| Azure AD pass-through authentication errors|High | Win32 LogonUserA function API| Logon events 4624(S): An account was successfully logged on<br>- correlate with –<br>4625(F): An account failed to log on| Use with the suspected usernames on the domain controller that is authenticating requests. See [LogonUserA function (winbase.h)](/windows/win32/api/winbase/nf-winbase-logonusera) for guidance. |
+| Azure AD pass-through authentication errors| Medium| PowerShell script of domain controller| See the query after the table. | Use the information at [Azure AD Connect: Troubleshoot Pass-through Authentication](../hybrid/tshoot-connect-pass-through-authentication.md) for guidance. |
```Kusto
The following are specific things to look for:
</QueryList> ```
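
If you forward the authentication agent's Admin event log to a Log Analytics workspace, a sketch along these lines can flag the AADSTS error codes from the preceding table. The `EventLog` filter is illustrative; match it to the channel name your data collection rule actually ingests.

```Kusto
// Sketch: pass-through authentication agent errors collected into the Event table.
// The EventLog filter below is an assumption; align it with your collection setup.
Event
| where EventLog has "AzureAdConnect" and EventLog has "AuthenticationAgent"
| where RenderedDescription has_any ("AADSTS80001", "AADSTS8002", "AADSTS80004", "AADSTS80005", "AADSTS80007")
| project TimeGenerated, Computer, EventID, RenderedDescription
```
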
+## Monitoring for creation of new Azure AD tenants
+
+Organizations might need to monitor for and alert on the creation of new Azure AD tenants when the action is initiated by identities from their organizational tenant. Monitoring for this scenario provides visibility into how many tenants are being created and could be accessed by end users.
+
+| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
+| - | - | - | - | - |
+| Creation of a new Azure AD tenant, using an identity from your tenant. | Medium | Azure AD Audit logs | Category: Directory Management<br><br>Activity: Create Company | Target(s) shows the created TenantID |
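
As a starting point, a Log Analytics query sketch like the following can list the audit events described in the preceding table; confirm the category and activity strings against the entries you see in your own audit log.

```Kusto
// Sketch: new Azure AD tenant creation initiated by identities in your tenant.
// The Category and OperationName strings are assumptions based on the audit log schema.
AuditLogs
| where Category == "DirectoryManagement"
| where OperationName == "Create Company"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, Actor, OperationName, TargetResources
```
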
+### AppProxy Connector
-Azure AD and Azure AD Application Proxy give remote users a single sign-on (SSO) experience. Users securely connect to on-premises apps without a virtual private network (VPN) or dual-homed servers and firewall rules. If your Azure AD Application Proxy connector server is compromised, attackers could alter the SSO experience or change access to published applications.
+Azure AD and Azure AD Application Proxy give remote users a single sign-on (SSO) experience. Users securely connect to on-premises apps without a virtual private network (VPN) or dual-homed servers and firewall rules. If your Azure AD Application Proxy connector server is compromised, attackers could alter the SSO experience or change access to published applications.
-To configuring monitoring for Application Proxy, see [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). The data file that logs information can be found in Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin. For a complete reference guide to audit activity, see [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md). Specific things to monitor:
+To configure monitoring for Application Proxy, see [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). The data file that logs information can be found in Applications and Services Logs\Microsoft\AadApplicationProxy\Connector\Admin. For a complete reference guide to audit activity, see [Azure AD audit activity reference](../reports-monitoring/reference-audit-activities.md). Specific things to monitor:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
| Kerberos errors| Medium | Various tools| Medium | Kerberos authentication error guidance under Kerberos errors on [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). |
| DC security issues| High| DC Security Audit logs| Event ID 4742(S): A computer account was changed<br>-and-<br>Flag – Trusted for Delegation<br>-or-<br>Flag – Trusted to Authenticate for Delegation| Investigate any flag change. |
-| Pass-the-ticket like attacks| High| | | Follow guidance in:<li>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<li>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<li> [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<li> [Understanding entity profiles](/defender-for-identity/entity-profiles) |
-
+| Pass-the-ticket like attacks| High| | | Follow guidance in:<br>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<br>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<br>[Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<br>[Understanding entity profiles](/defender-for-identity/entity-profiles) |
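
Where domain controller security events flow into a Log Analytics workspace (for example, through the Security Events connector), a sketch like the following can flag computer account changes so you can review them for delegation flag changes.

```Kusto
// Sketch: computer account changes (event 4742) on domain controllers.
// Review the event details for changes to the "Trusted for Delegation" or
// "Trusted to Authenticate for Delegation" flags.
SecurityEvent
| where EventID == 4742
| project TimeGenerated, Computer, SubjectAccount, TargetAccount, Activity
```
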
### Legacy authentication settings
-For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI canΓÇÖt enforce MFA. This makes these protocols preferred entry points for attackers of your organization. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302).
+For multifactor authentication (MFA) to be effective, you also need to block legacy authentication. You then need to monitor your environment and alert on any use of legacy authentication. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA. This makes these protocols the preferred entry points for attackers. For more information on tools that you can use to block legacy authentication, see [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302).
Legacy authentication is captured in the Azure AD Sign-ins log as part of the detail of the event. You can use the Azure Monitor workbook to help identify legacy authentication usage. For more information, see [Sign-ins using legacy authentication](../reports-monitoring/howto-use-azure-monitor-workbooks.md), which is part of [How to use Azure Monitor Workbooks for Azure Active Directory reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md). You can also use the Insecure protocols workbook for Microsoft Sentinel. For more information, see [Microsoft Sentinel Insecure Protocols Workbook Implementation Guide](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-insecure-protocols-workbook-implementation-guide/ba-p/1197564). Specific activities to monitor include:

| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| Legacy authentications|High | Azure AD Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications are not recorded so will not appear in the log. |
-
+| Legacy authentications|High | Azure AD Sign-ins log| ClientApp : POP<br>ClientApp : IMAP<br>ClientApp : MAPI<br>ClientApp: SMTP<br>ClientApp : ActiveSync go to EXO<br>Other Clients = SharePoint and EWS| In federated domain environments, failed authentications aren't recorded and don't appear in the log. |
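
In addition to the workbooks, you can review legacy authentication usage by querying the sign-in logs directly. The following sketch excludes modern clients rather than enumerating every legacy protocol string; adjust the filter to the client app values you see in your tenant.

```Kusto
// Sketch: summarize sign-ins that used legacy authentication clients.
SigninLogs
| where isnotempty(ClientAppUsed)
| where ClientAppUsed !in ("Browser", "Mobile Apps and Desktop clients")
| summarize Attempts = count() by UserPrincipalName, ClientAppUsed, AppDisplayName
| order by Attempts desc
```
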
## Azure AD Connect
Azure AD Connect provides a centralized location that enables account and attrib
* [Password hash synchronization](../hybrid/whatis-phs.md) - A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
-* [Synchronization](../hybrid/how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects. As well as, making sure identity information for your on-premises users and groups is matching the cloud. This synchronization also includes password hashes.
+* [Synchronization](../hybrid/how-to-connect-sync-whatis.md) - Responsible for creating users, groups, and other objects, and for making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.
* [Health Monitoring](../hybrid/whatis-azure-ad-connect.md) - Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.
-Synchronizing identity between your on-premises environment and you cloud environment introduces a new attack surface for your on-premises and cloud-based environment. We recommend:
+Synchronizing identity between your on-premises environment and your cloud environment introduces a new attack surface for your on-premises and cloud-based environment. We recommend:
-* You treat your Azure AD Connect primary and staging servers as Tier 0 Systems in your control plane.
+* You treat your Azure AD Connect primary and staging servers as Tier 0 Systems in your control plane.
* You follow a standard set of policies that govern each type of account and its usage in your environment.
-* You install Azure AD Connect and Connect Health. These primarily provide operational data for the environment.
+* You install Azure AD Connect and Connect Health. These primarily provide operational data for the environment.
Logging of Azure AD Connect operations occurs in different ways:
Azure AD uses Microsoft SQL Server Data Engine or SQL to store Azure AD Connect
| mms_server_configuration| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) |
| mms_synchronization_rule| SQL service audit records| See [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records) |

For information on what and how to monitor configuration information, refer to:

* For SQL server, see [SQL Server Audit Records](/sql/relational-databases/security/auditing/sql-server-audit-records).
-* For Microsoft Sentinel, see [Connect to Windows servers to collect security events](/sql/relational-databases/security/auditing/sql-server-audit-records).
+* For Microsoft Sentinel, see [Connect to Windows servers to collect security events](/sql/relational-databases/security/auditing/sql-server-audit-records).
* For information on configuring and using Azure AD Connect, see [What is Azure AD Connect?](../hybrid/whatis-azure-ad-connect.md)

### Monitoring and troubleshooting synchronization
- One function of Azure AD Connect is to synchronize hash synchronization between a userΓÇÖs on-premises password and Azure AD. If passwords are not synchronizing as expected, the synchronization might affect a subset of users or all users. Use the following to help verify proper operation or troubleshoot issues:
+ One function of Azure AD Connect is to synchronize the hash of a user's on-premises password to Azure AD. If passwords aren't synchronizing as expected, the issue might affect a subset of users or all users. Use the following to help verify proper operation or troubleshoot issues:
-* Information for checking and troubleshooting hash synchronization, see [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md).
+* For information on checking and troubleshooting hash synchronization, see [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md).
-* Modifications to the connector spaces, see [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes).
+* For modifications to the connector spaces, see [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes).
**Important resources on monitoring**
For information on what and how to monitor configuration information refer to:
| - | - |
| Hash synchronization validation|See [Troubleshoot password hash synchronization with Azure AD Connect sync](../hybrid/tshoot-connect-password-hash-synchronization.md) |
| Modifications to the connector spaces|See [Troubleshoot Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes) |
-| Modifications to the rules you configured| Specifically, monitor filtering changes, domain and OU changes, attribute changes, and group-based changes |
+| Modifications to rules you configured| Monitor changes to: filtering, domain and OU, attribute, and group-based changes |
| SQL and MSDE changes | Changes to logging parameters and addition of custom functions |
-**Monitor the following**:
+**Monitor the following**:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
| Scheduler changes|High | PowerShell| Set-ADSyncScheduler| Look for modifications to schedule |
| Changes to scheduled tasks| High | Azure AD Audit logs| Activity = 4699(S): A scheduled task was deleted<br>-or-<br>Activity = 4701(S): A scheduled task was disabled<br>-or-<br>Activity = 4702(S): A scheduled task was updated| Monitor all |
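
If security events from your Azure AD Connect servers are collected into a Log Analytics workspace, a sketch along these lines can watch for scheduled task changes on those servers. The server name is a placeholder.

```Kusto
// Sketch: scheduled task changes on an Azure AD Connect server.
// "AADCONNECT01" is a placeholder; replace it with your server names.
SecurityEvent
| where Computer startswith "AADCONNECT01"
| where EventID in (4699, 4701, 4702) // task deleted, disabled, updated
| project TimeGenerated, Computer, EventID, Activity, SubjectAccount
```
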
-* For more information on logging PowerShell script operations, refer to [Enabling Script Block Logging](/powershell/module/microsoft.powershell.core/about/about_logging_windows), which is part of the PowerShell reference documentation.
+* For more information on logging PowerShell script operations, see [Enabling Script Block Logging](/powershell/module/microsoft.powershell.core/about/about_logging_windows), which is part of the PowerShell reference documentation.
* For more information on configuring PowerShell logging for analysis by Splunk, refer to [Get Data into Splunk User Behavior Analytics](https://docs.splunk.com/Documentation/UBA/5.0.4.1/GetDataIn/AddPowerShell).

### Monitoring seamless single sign-on
-Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they are on their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing any additional on-premises components. SSO uses the pass-through authentication and password hash synchronization capabilities provided by Azure AD Connect.
+Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they're on corporate desktops connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing other on-premises components. SSO uses the pass-through authentication and password hash synchronization capabilities provided by Azure AD Connect.
Monitoring single sign-on and Kerberos activity can help you detect general credential theft attack patterns. Monitor using the following information:
Monitoring single sign-on and Kerberos activity can help you detect general cred
</QueryList> ```
+
## Password protection policies
-If you deploy Azure AD Password Protection, monitoring and reporting are essential tasks. The following links provide details to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection.
+If you deploy Azure AD Password Protection, monitoring and reporting are essential tasks. The following links provide details to help you understand various monitoring techniques, including where each service logs information and how to report on the use of Azure AD Password Protection.
-The domain controller (DC) agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software does not install a PowerShell module.
+The domain controller (DC) agent and proxy services both log event log messages. All PowerShell cmdlets described below are only available on the proxy server (see the AzureADPasswordProtection PowerShell module). The DC agent software doesn't install a PowerShell module.
Detailed information for planning and implementing on-premises password protection is available at [Plan and deploy on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md). For monitoring details, see [Monitor on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-monitor.md). On each domain controller, the DC agent service software writes the results of each individual password validation operation (and other status) to the following local event log:
The DC agent Admin log is the primary source of information for how the software
* Azure AD Audit Log, Category Application Proxy
-Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md).
+Complete reference for Azure AD audit activities is available at [Azure Active Directory (Azure AD) audit activity reference](../reports-monitoring/reference-audit-activities.md).
## Conditional Access
-In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure that your Conditional Access policies work as expected to ensure that your resources are properly protected. Monitoring and alerting on changes to the Conditional Access service is critical to ensure that polices defined by your organization for access to data are enforced correctly. Azure AD logs when changes are made to Conditional Access and also provides workbooks to ensure your policies are providing the expected coverage.
+In Azure AD, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure that your Conditional Access policies work as expected so that your resources are properly protected. Monitoring and alerting on changes to the Conditional Access service helps ensure that the policies your organization defines for access to data are enforced. Azure AD logs changes to Conditional Access and provides workbooks to help confirm that your policies provide the expected coverage.
**Workbook Links**
Monitor changes to Conditional Access policies using the following information:
| What to monitor| Risk level| Where| Filter/sub-filter| Notes |
| - | - | - | - | - |
-| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
-|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?|
-|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare ΓÇ£oldΓÇ¥ vs ΓÇ£newΓÇ¥ value|
-|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Montior and Alert for groups used to scope critical Conditional Access Policies.<br><br>"Target" is the user that has been removed.|
-|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Montior and Alert for groups used to scope critical Conditional Access Policies.<br><br>"Target" is the user that has been added.|
+| New Conditional Access Policy created by non-approved actors|Medium | Azure AD Audit logs|Activity: Add conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Conditional Access Policy removed by non-approved actors|Medium|Azure AD Audit logs|Activity: Delete conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Conditional Access Policy updated by non-approved actors|Medium|Azure AD Audit logs|Activity: Update conditional access policy<br><br>Category: Policy<br><br>Initiated by (actor): User Principal Name|Monitor and alert on Conditional Access changes. Is Initiated by (actor): approved to make changes to Conditional Access?<br><br>Review Modified Properties and compare "old" vs "new" value<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ConditionalAccessPolicyModifiedbyNewUser.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Removal of a user from a group used to scope critical Conditional Access policies|Medium|Azure AD Audit logs|Activity: Remove member from group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been removed.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+|Addition of a user to a group used to scope critical Conditional Access policies|Low|Azure AD Audit logs|Activity: Add member to group<br><br>Category: GroupManagement<br><br>Target: User Principal Name|Monitor and alert for groups used to scope critical Conditional Access policies.<br><br>"Target" is the user that has been added.<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
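
The linked Microsoft Sentinel and Sigma templates cover these cases. As a lightweight alternative, a query sketch like the following lists Conditional Access policy changes and the initiating actor so you can compare the actor against your approved list.

```Kusto
// Sketch: Conditional Access policy changes and the actor who made them.
AuditLogs
| where Category == "Policy"
| where OperationName in ("Add conditional access policy",
                          "Update conditional access policy",
                          "Delete conditional access policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Actor, TargetResources
```
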
## Next steps
-
-See these additional security operations guide articles:
- [Azure AD security operations overview](security-operations-introduction.md)

[Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+ [Security operations for privileged accounts](security-operations-privileged-accounts.md)

[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
See these additional security operations guide articles:
[Security operations for applications](security-operations-applications.md)

[Security operations for devices](security-operations-devices.md)
-
-[Security operations for infrastructure](security-operations-infrastructure.md)
--
-
-
-
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Previously updated : 08/24/2022 Last updated : 09/06/2022 - it-pro
As you audit your current security operations or establish security operations f
### Audience
-The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture.
+The Azure AD SecOps Guide is intended for enterprise IT identity and security operations teams and managed service providers that need to counter threats through better identity security configuration and monitoring profiles. This guide is especially relevant for IT administrators and identity architects advising Security Operations Center (SOC) defensive and penetration testing teams to improve and maintain their identity security posture.
### Scope
The log files you use for investigation and monitoring are:
From the Azure portal, you can view the Azure AD Audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
-* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+
+* **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Azure AD logs can be integrated with other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
-* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)**. Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
+* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
Much of what you monitor and alert on is the effect of your Conditional Access policies. You can use the Conditional Access insights and reporting workbook to examine the effects of one or more Conditional Access policies on your sign-ins and the results of policies, including device state. This workbook enables you to view an impact summary and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user. For more information, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
If you don't plan to use Microsoft Defender for Identity, monitor your domain co
As part of an Azure hybrid environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **PTA Agent**. The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
+* **PTA Agent** - The pass-through authentication agent is used to enable pass-through authentication and is installed on-premises. See [Azure AD Pass-through Authentication agent: Version release history](../hybrid/reference-connect-pta-version-history.md) for information on verifying your agent version and next steps.
-* **AD FS/WAP**. Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
+* **AD FS/WAP** - Azure Active Directory Federation Services (Azure AD FS) and Web Application Proxy (WAP) enable secure sharing of digital identity and entitlement rights across your security and enterprise boundaries. For information on security best practices, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs).
-* **Azure AD Connect Health Agent**. The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
+* **Azure AD Connect Health Agent** - The agent used to provide a communications link for Azure AD Connect Health. For information on installing the agent, see [Azure AD Connect Health agent installation](../hybrid/how-to-connect-health-agent-install.md).
-* **Azure AD Connect Sync Engine**. The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
+* **Azure AD Connect Sync Engine** - The on-premises component, also called the sync engine. For information on the feature, see [Azure AD Connect sync service features](../hybrid/how-to-connect-syncservice-features.md).
-* **Password Protection DC agent**. Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Protection DC agent** - Azure password protection DC agent is used to help with monitoring and reporting event log messages. For information, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password Filter DLL**. The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
+* **Password Filter DLL** - The password filter DLL of the DC Agent receives user password-validation requests from the operating system. The filter forwards them to the DC Agent service that's running locally on the DC. For information on using the DLL, see [Enforce on-premises Azure AD Password Protection for Active Directory Domain Services](../authentication/concept-password-ban-bad-on-premises.md).
-* **Password writeback Agent**. Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md).
+* **Password writeback Agent** - Password writeback is a feature enabled with [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) that allows password changes in the cloud to be written back to an existing on-premises directory in real time. For more information on this feature, see [How does self-service password reset writeback work in Azure Active Directory](../authentication/concept-sspr-writeback.md).
-* **Azure AD Application Proxy Connector**. Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure ADF Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy Connector** - Lightweight agents that sit on-premises and facilitate the outbound connection to the Application Proxy service. For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md).
## Components of cloud-based authentication

As part of an Azure cloud-based environment, the following items should be baselined and included in your monitoring and alerting strategy.
-* **Azure AD Application Proxy**. This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
+* **Azure AD Application Proxy** - This cloud service provides secure remote access to on-premises web applications. For more information, see [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy-connectors.md).
-* **Azure AD Connect**. Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect** - Services used for an Azure AD Connect solution. For more information, see [What is Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
-* **Azure AD Connect Health**. Service Health provides you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
+* **Azure AD Connect Health** - Service Health provides you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. For more information, see [Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md).
-* **Azure AD multifactor authentication**. Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md).
+* **Azure AD multifactor authentication** - Multifactor authentication requires a user to provide more than one form of proof for authentication. This approach can provide a proactive first step to securing your environment. For more information, see [Azure AD multi-factor authentication](../authentication/concept-mfa-howitworks.md).
-* **Dynamic groups**. Dynamic configuration of security group membership for Azure AD Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
+* **Dynamic groups** - Dynamic configuration of security group membership for Azure AD Administrators can set rules to populate groups that are created in Azure AD based on user attributes. For more information, see [Dynamic groups and Azure Active Directory B2B collaboration](../external-identities/use-dynamic-groups.md).
-* **Conditional Access**. Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
+* **Conditional Access** - Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane. For more information, see [What is Conditional Access](../conditional-access/overview.md).
-* **Identity Protection**. A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
+* **Identity Protection** - A tool that enables organizations to automate the detection and remediation of identity-based risks, investigate risks using data in the portal, and export risk detection data to your SIEM. For more information, see [What is Identity Protection](../identity-protection/overview-identity-protection.md).
-* **Group-based licensing**. Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
+* **Group-based licensing** - Licenses can be assigned to groups rather than directly to users. Azure AD stores information about license assignment states for users.
-* **Provisioning Service**. Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
+* **Provisioning Service** - Provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. For more information, see [How Application Provisioning works in Azure Active Directory](../app-provisioning/how-provisioning-works.md).
-* **Graph API**. The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
+* **Graph API** - The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-* **Domain Service**. Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join, group policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md).
+* **Domain Service** - Azure Active Directory Domain Services (AD DS) provides managed domain services such as domain join and Group Policy. For more information, see [What is Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md).
-* **Azure Resource Manager**. Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+* **Azure Resource Manager** - Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
-* **Managed identity**. Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
+* **Managed identity** - Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
-* **Privileged Identity Management**. PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
+* **Privileged Identity Management** - PIM is a service in Azure AD that enables you to manage, control, and monitor access to important resources in your organization. For more information, see [What is Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md).
-* **Access reviews**. Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. User's access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md).
+* **Access reviews** - Azure AD access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. Users' access can be reviewed regularly to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews](../governance/access-reviews-overview.md).
-* **Entitlement management**. Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md).
+* **Entitlement management** - Azure AD entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Azure AD entitlement management](../governance/entitlement-management-overview.md).
-* **Activity logs**. The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
+* **Activity logs** - The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
-* **Self-service password reset service**. Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
+* **Self-service password reset service** - Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md).
-* **Device services**. Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md).
+* **Device services** - Device identity management is the foundation for [device-based Conditional Access](../conditional-access/require-managed-devices.md). With device-based Conditional Access policies, you can ensure that access to resources in your environment is only possible with managed devices. For more information, see [What is a device identity](../devices/overview.md).
-* **Self-service group management**. You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
+* **Self-service group management** - You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure AD. The owner of the group can approve or deny membership requests and can delegate control of group membership. Self-service group management features aren't available for mail-enabled security groups or distribution lists. For more information, see [Set up self-service group management in Azure Active Directory](../enterprise-users/groups-self-service-management.md).
-* **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
+* **Risk detections** - Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.
## Next steps

See these security operations guide articles:
-* [Azure AD security operations overview](security-operations-introduction.md)
-* [Security operations for user accounts](security-operations-user-accounts.md)
-* [Security operations for privileged accounts](security-operations-privileged-accounts.md)
-* [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-* [Security operations for applications](security-operations-applications.md)
-* [Security operations for devices](security-operations-devices.md)
-* [Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
Previously updated : 04/29/2022- Last updated : 09/06/2022+
You're entirely responsible for all layers of security for your on-premises IT e
The log files you use for investigation and monitoring are:

* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault insights](../../key-vault/key-vault-insights-overview.md)

From the Azure portal, you can view the Azure AD Audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:

* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.

* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Enables Azure AD logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).

* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.

* **Microsoft Graph**. Enables you to export data and use Microsoft Graph to do more analysis. For more information, see [Microsoft Graph PowerShell SDK and Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-graph-api.md).

* **[Identity Protection](../identity-protection/overview-identity-protection.md)**. Generates three key reports you can use to help with your investigation:

 * **Risky users**. Contains information about which users are at risk, details about detections, history of all risky sign-ins, and risk history.
+
* **Risky sign-ins**. Contains information about a sign-in that might indicate suspicious circumstances. For more information on investigating information from this report, see [Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+
 * **Risk detections**. Contains information about other risks triggered when a risk is detected and other pertinent information such as sign-in location and any details from Microsoft Defender for Cloud Apps.

* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)**. Use to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| What to monitor | Risk level | Where | Filter/subfilter | Notes | | - | - | - | - | - |
-| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml) |
-| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
+| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Failure because of Conditional Access requirement |High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multifactor authentication challenge.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Interrupt | High, medium | Azure AD Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| High | Azure AD directory | [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
-| Discover privileged accounts not registered for multifactor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
-| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml) |
-| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
-| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| MFA fraud alert or block | High | Azure AD Audit log log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPricipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
+| Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
+| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountsLockedOut.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | This event could indicate someone is trying to gain access to an account after they've left the organization. Although the account is blocked, it's still important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins>Authentication details Result details = MFA denied, fraud code entered | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| MFA fraud alert or block | High | Azure AD Audit log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA or fraud reported - No action taken (based on tenant-level settings for fraud report) | Privileged user has indicated they haven't instigated the multi-factor authentication prompt, which could indicate an attacker has the password for the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Privileged account sign-ins outside of expected controls | | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account\><br>Location = \<unapproved location\><br>IP address = \<unapproved IP\><br>Device info = \<unapproved Browser, Operating System\> | Monitor and alert on any entries that you've defined as unapproved.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It's important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Identity protection risk | High | Identity Protection logs | Risk state = At risk<br>-and-<br>Risk level = Low, medium, high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, and so on | This event indicates there's some abnormality detected with the sign-in for the account and should be alerted on. |
-| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml) |
-| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack. |
-| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
-| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected. |
-| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID |
-|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.
+| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query targeted at all privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PrivilegedAccountPasswordChanges.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Change in legacy authentication protocol | High | Azure AD Sign-ins log | Client App = Other client, IMAP, POP3, MAPI, SMTP, and so on<br>-and-<br>Username = UPN<br>-and-<br>Application = Exchange (example) | Many attacks use legacy authentication, so if there's a change in auth protocol for the user, it could be an indication of an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/17ead56ae30b1a8e46bb0f95a458bdeb2d30ba9b/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| New device or location | High | Azure AD Sign-ins log | Device info = Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>-and-<br>Target = User<br>-and-<br>Location | Most admin activity should be from [privileged access devices](/security/compass/privileged-access-devices), from a limited number of locations. For this reason, alert on new devices or locations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Audit alert setting is changed | High | Azure AD Audit logs | Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity = Disable PIM alert<br>-and-<br>Status = Success | Changes to a core alert should be alerted if unexpected.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Administrators authenticating to other Azure AD tenants| Medium| Azure AD Sign-ins log| Status = success<br><br>Resource tenantID != Home Tenant ID| When scoped to Privileged Users, this monitor detects when an administrator has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant. <br><br>Alert if Resource TenantID isn't equal to Home Tenant ID<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/AdministratorsAuthenticatingtoAnotherAzureADTenant.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Admin User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br><br>Category: UserManagement<br><br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member.<br><br> Was this change expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
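To show how a few of the filters in the preceding table can be turned into an automated check, the following minimal Python sketch queries the Microsoft Graph `auditLogs/signIns` endpoint for selected error codes against a watchlist of privileged accounts, and uses the beta `credentialUserRegistrationDetails` report referenced above to find admins not registered for multi-factor authentication. `ACCESS_TOKEN`, `PRIVILEGED_UPNS`, and the availability of the `status/errorCode` filter are assumptions; production alerting would normally run through Microsoft Sentinel or Azure Monitor instead.

```python
import requests

GRAPH = "https://graph.microsoft.com"
ACCESS_TOKEN = "<access-token>"               # placeholder; needs AuditLog.Read.All / Reports.Read.All
PRIVILEGED_UPNS = ["admin1@contoso.com"]      # placeholder watchlist of privileged accounts
ERROR_CODES = [50126, 53003, 50053, 50057]    # error codes taken from the table above

def _get(path, params=None):
    response = requests.get(
        f"{GRAPH}{path}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params=params,
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

def failed_privileged_sign_ins():
    """Sign-in failures for watched accounts, grouped by the error codes above."""
    hits = []
    for upn in PRIVILEGED_UPNS:
        for code in ERROR_CODES:
            flt = f"userPrincipalName eq '{upn}' and status/errorCode eq {code}"
            hits += _get("/v1.0/auditLogs/signIns", {"$filter": flt, "$top": "50"})
    return hits

def admins_without_mfa():
    """Watched accounts not registered for multi-factor authentication (beta report)."""
    rows = _get("/beta/reports/credentialUserRegistrationDetails",
                {"$filter": "isMfaRegistered eq false"})
    return [r for r in rows if r.get("userPrincipalName") in PRIVILEGED_UPNS]

for event in failed_privileged_sign_ins():
    print(event["createdDateTime"], event["userPrincipalName"], event["status"]["errorCode"])
for row in admins_without_mfa():
    print("MFA not registered:", row["userPrincipalName"])
```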
## Changes by privileged accounts
Privileged accounts that have been assigned permissions in Azure AD Domain Servi
* [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md) * [Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)
-| What to monitor | Risk level | Where | Filter/subfilter | Notes |
-||||-|-|
-| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes should be investigated immediately (link out to Infra doc) that would lower the security posture of your tenant. An example is excluding accounts from multifactor authentication or Conditional Access. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md). |
-| **EXAMPLE**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | <li>Date and time <li>Service <li>Category and name of the activity <li>Status = Success or failure <li>Target <li>Initiator or actor (who) |
-| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
+| What to monitor | Risk level | Where | Filter/subfilter | Notes |
+| - | - | - | - | - |
+| Attempted and completed changes | High | Azure AD Audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes that would lower the security posture of your tenant should be investigated immediately. An example is excluding accounts from multifactor authentication or Conditional Access. Alert on any additions or changes to applications. See [Azure Active Directory security operations guide for Applications](security-operations-applications.md). |
+| **Example**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | Date and time, Service, Category and name of the activity, Status = Success or failure, Target, Initiator or actor (who) |
+| Privileged changes in Azure AD Domain Services | High | Azure AD Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Azure Active Directory Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
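As a rough illustration of reviewing attempted and completed changes outside the portal, the following Python sketch pulls recent audit entries for a single category (here `RoleManagement`) from the Microsoft Graph `auditLogs/directoryAudits` endpoint and prints who initiated each change. The token and category are placeholders, and the query is a starting point for triage rather than a complete detection.

```python
import requests

ACCESS_TOKEN = "<access-token>"  # placeholder; requires AuditLog.Read.All

def recent_directory_changes(category="RoleManagement"):
    """Pull recent audit entries for one category so unplanned changes can be reviewed."""
    params = {"$filter": f"category eq '{category}'", "$top": "100"}
    response = requests.get(
        "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params=params,
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

for change in recent_directory_changes():
    actor = (change.get("initiatedBy") or {}).get("user") or {}
    print(change["activityDateTime"], change["activityDisplayName"],
          change["result"], actor.get("userPrincipalName"))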
## Changes to privileged accounts
Investigate changes to privileged accounts' authentication rules and privileges,
| What to monitor| Risk level| Where| Filter/subfilter| Notes | | - | - | - | - | - |
-| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for correlation that's of a short time span between creation and deletion of accounts.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
-| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml) |
-| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities. |
-| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts. |
+| Privileged account creation| Medium| Azure AD Audit logs| Service = Core Directory<br>-and-<br>Category = User management<br>-and-<br>Activity type = Add user<br>-correlate with-<br>Category type = Role management<br>-and-<br>Activity type = Add member to role<br>-and-<br>Modified properties = Role.DisplayName| Monitor creation of any privileged accounts. Look for accounts that are created and then deleted within a short time span.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to authentication methods| High| Azure AD Audit logs| Service = Authentication Method<br>-and-<br>Activity type = User registered security information<br>-and-<br>Category = User management| This change could be an indication of an attacker adding an auth method to the account so they can have continued access.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/AuthenticationMethodsChangedforPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on changes to privileged account permissions| High| Azure AD Audit logs| Category = Role management<br>-and-<br>Activity type = Add eligible member (permanent)<br>-or-<br>Activity type = Add eligible member (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| This alert is especially for accounts being assigned roles that aren't known or are outside of their normal responsibilities.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivilegedAccountPermissionsChanged.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Unused privileged accounts| Medium| Azure AD Access Reviews| | Perform a monthly review for inactive privileged user accounts.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Accounts exempt from Conditional Access| High| Azure Monitor Logs<br>-or-<br>Access Reviews| Conditional Access = Insights and reporting| Any account exempt from Conditional Access is most likely bypassing security controls and is more vulnerable to compromise. Break-glass accounts are exempt. See information on how to monitor break-glass accounts later in this article.|
-| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.
+| Addition of a Temporary Access Pass to a privileged account| High| Azure AD Audit logs| Activity: Admin registered security info<br><br>Status Reason: Admin registered temporary access pass method for user<br><br>Category: UserManagement<br><br>Initiated by (actor): User Principal Name<br><br>Target: User Principal Name|Monitor and alert on a Temporary Access Pass being created for a privileged user.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AdditionofaTemporaryAccessPasstoaPrivilegedAccount.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
For more information on how to monitor for exceptions to Conditional Access policies, see [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
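A minimal sketch of checking the audit log for the activities called out in the table above (for example, an admin registering security info such as a Temporary Access Pass) might look like the following. The activity names are taken from the table's filters; `ACCESS_TOKEN` and `PRIVILEGED_UPNS` are placeholders you'd supply, and events are only flagged when they touch a watched account.

```python
import requests

ACCESS_TOKEN = "<access-token>"            # placeholder; requires AuditLog.Read.All
PRIVILEGED_UPNS = {"admin1@contoso.com"}   # placeholder watchlist of privileged accounts
WATCHED_ACTIVITIES = [                     # activity names drawn from the table above
    "Admin registered security info",
    "Add member to role",
]

def audit_events(activity):
    response = requests.get(
        "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"$filter": f"activityDisplayName eq '{activity}'", "$top": "50"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

for name in WATCHED_ACTIVITIES:
    for event in audit_events(name):
        targets = {t.get("userPrincipalName")
                   for t in event.get("targetResources", [])
                   if t.get("userPrincipalName")}
        if targets & PRIVILEGED_UPNS:      # only flag events that touch a watched account
            print(event["activityDateTime"], name, "->", sorted(targets & PRIVILEGED_UPNS))
```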
You can monitor privileged account changes by using Azure AD Audit logs and Azur
| What to monitor| Risk level| Where| Filter/subfilter| Notes | | - | - | - | - | - |
-| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role managementΓÇï<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failureΓÇï<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
-| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role managementΓÇï<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml) |
-| Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant. All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned. |
-| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml) |
-| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account. |
-| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account. |
+| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant. All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/tree/master/Detections/AuditLogs/AccountElevatedtoNewRole.yaml) |
+| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Update role setting in PIM<br>-and-<br>Status reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/4ad195f4fe6fdbc66fb8469120381e8277ebed81/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log tab <br>Directory Activity tab <br> Operations Name = Assigns the caller to user access admin <br> -and- <br> Event category = Administrative <br> -and-<br>Status = Succeeded, start, fail<br>-and-<br>Event initiated by| This change should be investigated immediately if it isn't planned. This setting could allow an attacker access to Azure subscriptions in your environment. | For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations by using information available in the Azure AD logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation.
For information about configuring alerts for Azure roles, see [Configure securit
See these security operations guide articles:
-* [Azure AD security operations overview](security-operations-introduction.md)
-* [Security operations for user accounts](security-operations-user-accounts.md)
-* [Security operations for privileged accounts](security-operations-privileged-accounts.md)
-* [Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-* [Security operations for applications](security-operations-applications.md)
-* [Security operations for devices](security-operations-devices.md)
-* [Security operations for infrastructure](security-operations-infrastructure.md)
+[Azure AD security operations overview](security-operations-introduction.md)
+
+[Security operations for user accounts](security-operations-user-accounts.md)
+
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
+
+[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+
+[Security operations for applications](security-operations-applications.md)
+
+[Security operations for devices](security-operations-devices.md)
+
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-identity-management.md
Title: Azure Active Directory security operations for Privileged Identity Management
-description: Guidance to establish baselines and use Azure Active Directory Privileged Identity Management (PIM) to monitor and alert on potential issues with accounts that are governed by PIM.
+description: Establish baselines and use Azure AD Privileged Identity Management (PIM) to monitor and alert on issues with accounts governed by PIM.
Previously updated : 08/19/2022 Last updated : 09/06/2022
-# Azure Active Directory security operations for Privileged Identity Management (PIM)
+# Azure Active Directory security operations for Privileged Identity Management
-The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber-attackers use credential theft attacks to target admin accounts and other privileged access accounts to try gaining access to sensitive data.
+The security of business assets depends on the integrity of the privileged accounts that administer your IT systems. Cyber-attackers use credential theft attacks to target admin accounts and other privileged access accounts to try gaining access to sensitive data.
-For cloud services, prevention and response are the joint responsibilities of the cloud service provider and the customer.
+For cloud services, prevention and response are the joint responsibilities of the cloud service provider and the customer.
Traditionally, organizational security has focused on the entry and exit points of a network as the security perimeter. However, SaaS apps and personal devices have made this approach less effective. In Azure Active Directory (Azure AD), we replace the network security perimeter with authentication in your organization's identity layer. As users are assigned to privileged administrative roles, their access must be protected in on-premises, cloud, and hybrid environments.
-You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure cloud services, prevention and response are joint responsibilities of Microsoft as the cloud service provider and you as the customer.
+You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure cloud services, prevention and response are joint responsibilities of Microsoft as the cloud service provider and you as the customer.
* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md). * For more information on securing access for privileged users, see [Securing Privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md).
-* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
+* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, visit [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
Privileged Identity Management (PIM) is an Azure AD service that enables you to manage, control, and monitor access to important resources in your organization. These resources include resources in Azure AD, Azure, and other Microsoft Online Services such as Microsoft 365 or Microsoft Intune. You can use PIM to help mitigate the following risks:
Privileged Identity Management (PIM) is an Azure AD service that enables you to
* Reduce the possibility of an unauthorized user inadvertently impacting sensitive resources.
-This article provides guidance on setting baselines, auditing sign-ins and usage of privileged accounts, and the source of audit logs you can use to help maintain the integrity of your privilege accounts.
+Use this article for guidance on setting baselines and auditing sign-ins and usage of privileged accounts. Use the audit log sources to help maintain privileged account integrity.
## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
-In the Azure portal you can view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+In the Azure portal, view the Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools to automate monitoring and alerting:
-* [**Microsoft Sentinel**](../../sentinel/overview.md) ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* [**Microsoft Sentinel**](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, or managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* [**Azure Monitor**](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* [**Azure Event Hubs**](../../event-hubs/event-hubs-about.md) **integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hub integration.
+* [**Azure Event Hubs**](../../event-hubs/event-hubs-about.md) **integrated with a SIEM** – [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud appsΓÇÖ compliance.
+* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-The rest of this article provides recommendations for setting a baseline to monitor and alert on, organized using a tier model. Links to pre-built solutions are listed following the table. You can also build alerts using the preceding tools. The content is organized into the following topic areas of PIM:
+The rest of this article provides recommendations for setting a baseline to monitor and alert on, organized by using a tier model. Links to pre-built solutions appear after the table. You can also build alerts using the preceding tools. The content is organized into the following areas:
* Baselines
-* Azure AD role assignment
+* Azure AD role assignment
* Azure AD role alert settings
The following are recommended baseline settings:
| What to monitor| Risk level| Recommendation| Roles| Notes | | - |- |- |- |- |
-| Azure AD roles assignment| High| <li>Require justification for activation.<li>Require approval to activate.<li>Set two-level approver process.<li>On activation, require Azure Active Directory Multi-Factor Authentication (MFA).<li>Set maximum elevation duration to 8 hrs.| <li>Privileged Role Administration<li>Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
-| Azure Resource Role Configuration| High| <li>Require justification for activation.<li>Require approval to activate.<li>Set two-level approver process.<li>On activation, require Azure MFA.<li>Set maximum elevation duration to 8 hrs.| <li>Owner<li>Resource Administrator<li>User Access <li>Administrator<li>Global Administrator<li>Security Administrator| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment. |
+| Azure AD roles assignment| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication (MFA). Set maximum elevation duration to 8 hrs.| Privileged Role Administration, Global Administrator| A privileged role administrator can customize PIM in their Azure AD organization, including changing the experience for users activating an eligible role assignment. |
+| Azure Resource Role Configuration| High| Require justification for activation. Require approval to activate. Set two-level approver process. On activation, require Azure AD Multi-Factor Authentication. Set maximum elevation duration to 8 hrs.| Owner, Resource Administrator, User Access Administrator, Global Administrator, Security Administrator| Investigate immediately if not a planned change. This setting might enable attacker access to Azure subscriptions in your environment. |
## Azure AD roles assignment
-A privileged role administrator can customize PIM in their Azure AD organization. This includes changing the experience for a user who is activating an eligible role assignment as follows:
+A privileged role administrator can customize PIM in their Azure AD organization, which includes changing the user experience of activating an eligible role assignment:
-* Prevent bad actor to remove Azure MFA requirements to activate privileged access.
+* Prevent a bad actor from removing Azure AD Multi-Factor Authentication requirements to activate privileged access.
* Prevent malicious users from bypassing justification and approval when activating privileged access. | What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type ΓÇô Add eligible member (permanent) <br>-and-<br>Activity Type ΓÇô Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert for any changes to privileged role administrator and global administrator. <li>This can be an indication an attacker is trying to gain privilege to modify role assignment settings<li> If you donΓÇÖt have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts. |
-| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type ΓÇô Remove eligible member (permanent) <br>-and-<br>Activity Type ΓÇô Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml) |
-| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert for any changes to Privileged Role Administrator and Global Administrator. <li>This can be an indication an attacker already gained access able to modify to modify role assignment settings<li>One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account. |
-| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored. Log all elevations as this could give a clear indication of timeline for an attack. |
-| Alert setting changes to disabled.| High| Azure AD Audit logs| Service =PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success /Failure| Always alert. <li>Helps detect bad actor removing alerts associated with Azure MFA requirements to activate privileged access.<li>Helps detect suspicious or unsafe activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml) |
-
+| Alert on Add changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Add eligible member (permanent) <br>-and-<br>Activity Type – Add eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Monitor and always alert for any changes to privileged role administrator and global administrator. This can be an indication an attacker is trying to gain privilege to modify role assignment settings. If you don't have a defined threshold, alert on 4 in 60 minutes for users and 2 in 60 minutes for privileged accounts.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAddedtoAdminRole.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert on bulk deletion changes to privileged account permissions| High| Azure AD Audit logs| Category = Role Management<br>-and-<br>Activity Type – Remove eligible member (permanent) <br>-and-<br>Activity Type – Remove eligible member (eligible) <br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| Investigate immediately if not a planned change. This setting could enable an attacker access to Azure subscriptions in your environment.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/BulkChangestoPrivilegedAccountPermissions.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Changes to PIM settings| High| Azure AD Audit Log| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| Monitor and always alert for any changes to Privileged Role Administrator and Global Administrator. This can be an indication an attacker has access to modify role assignment settings. One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ChangestoPIMSettings.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Approvals and deny elevation| High| Azure AD Audit Log| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| All elevations should be monitored. Log all elevations to give a clear indication of timeline for an attack.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Alert setting changes to disabled.| High| Azure AD Audit logs| Service =PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Disable PIM Alert<br>-and-<br>Status = Success /Failure| Always alert. Helps detect bad actor removing alerts associated with Azure AD Multi-Factor Authentication requirements to activate privileged access. Helps detect suspicious or unsafe activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityAlert/DetectPIMAlertDisablingActivity.yaml)<br><br>[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md).
+For more information on identifying role setting changes in the Azure AD Audit log, see [View audit history for Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-use-audit-log.md).
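To illustrate one way to surface the PIM role-management events in the preceding table outside the portal, the following Python sketch pulls `RoleManagement` audit entries from Microsoft Graph and keeps only those logged by the PIM service. It's an assumption-laden starting point: the token is a placeholder, the client-side `loggedByService` check is used to avoid relying on server-side filtering of that property, and Microsoft Sentinel or Azure Monitor remain the recommended alerting paths.

```python
import requests

ACCESS_TOKEN = "<access-token>"  # placeholder; requires AuditLog.Read.All

def pim_role_management_events():
    """Role-management audit entries, reduced to those logged by the PIM service."""
    response = requests.get(
        "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"$filter": "category eq 'RoleManagement'", "$top": "200"},
        timeout=30,
    )
    response.raise_for_status()
    events = response.json().get("value", [])
    return [e for e in events if e.get("loggedByService") == "PIM"]

for event in pim_role_management_events():
    print(event["activityDateTime"], event["activityDisplayName"], event["result"])
```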
## Azure resource role assignment
-Monitoring Azure resource role assignments provides visibility into activity and activations for resources roles. These might be misused to create an attack surface to a resource. As you monitor for this type of activity, you are trying to detect:
+Monitoring Azure resource role assignments allows visibility into activity and activations for resource roles. These assignments might be misused to create an attack surface to a resource. As you monitor for this type of activity, you're trying to detect:
* Query role assignments at specific resources
Monitoring Azure resource role assignments provides visibility into activity and
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Audit Alert Resource Audit log for Privileged account activities| High| In PIM, under Azure Resources, Resource Audit| Action : Add eligible member to role in PIM completed (time bound) <br>-and-<br>Primary Target <br>-and-<br>Type User<br>-and-<br>Status = Succeeded<br>| Always alert. Helps detect bad actor adding eligible roles to manage all resources in Azure. |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target : Too many owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps detect bad actor disabling alerts from Alerts pane which can bypass malicious activity being investigated |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target : Too many permanent owners assigned to a resource<br>-and-<br>Status = Succeeded| Prevent bad actor from disable alerts from Alerts pane which can bypass malicious activity being investigated |
-| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action : Disable Alert<br>-and-<br>Primary Target Duplicate role created<br>-and-<br>Status = Succeeded| Prevent bad actor from disable alerts from Alerts pane which can bypass malicious activity being investigated |
-
+| Audit Alert Resource Audit log for Privileged account activities| High| In PIM, under Azure Resources, Resource Audit| Action: Add eligible member to role in PIM completed (time bound) <br>-and-<br>Primary Target <br>-and-<br>Type User<br>-and-<br>Status = Succeeded<br>| Always alert. Helps detect a bad actor adding eligible roles to manage all resources in Azure. |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps detect a bad actor disabling alerts in the Alerts pane, which can allow malicious activity to evade investigation. |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Too many permanent owners assigned to a resource<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can allow malicious activity to evade investigation. |
+| Audit Alert Resource Audit for Disable Alert| Medium| In PIM, under Azure Resources, Resource Audit| Action: Disable Alert<br>-and-<br>Primary Target: Duplicate role created<br>-and-<br>Status = Succeeded| Helps prevent a bad actor from disabling alerts in the Alerts pane, which can allow malicious activity to evade investigation. |
For more information on configuring alerts and auditing Azure resource roles, see:
For more information on configuring alerts and auditing Azure resource roles, se
## Access management for Azure resources and subscriptions
-Users or members of a group assigned to the Owner or User Access Administrator subscriptions roles, and Azure AD Global administrators that enabled subscription management in Azure AD have Resource administrator permissions by default. These administrators can assign roles, configure role settings, and review access using Privileged Identity Management (PIM) for Azure resources.
+Users or group members assigned the Owner or User Access Administrator subscription roles, and Azure AD Global Administrators who enabled subscription management in Azure AD, have Resource Administrator permissions by default. These administrators can assign roles, configure role settings, and review access using Privileged Identity Management (PIM) for Azure resources.
-A user who has Resource administrator permissions can manage PIM for Resources. The risk this introduces that you must monitor for and mitigate, is that the capability can be used to allow bad actors to have privileged access to Azure subscription resources, such as virtual machines or storage accounts.
+A user who has Resource Administrator permissions can manage PIM for Resources. Monitor for and mitigate the risk this introduces: the capability can be used to give bad actors privileged access to Azure subscription resources, such as virtual machines (VMs) or storage accounts.
| What to monitor| Risk level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Elevations| High| Azure AD, under Manage, Properties| Periodically review setting.<br>Access management for Azure resources| Global administrators can elevate by enabling Access management for Azure resources.<br>Verify bad actors have not gained permissions to assign roles in all Azure subscriptions and management groups associated with Active Directory. |
+| Elevations| High| Azure AD, under Manage, Properties| Periodically review setting.<br>Access management for Azure resources| Global administrators can elevate by enabling Access management for Azure resources.<br>Verify bad actors haven't gained permissions to assign roles in all Azure subscriptions and management groups associated with Active Directory. |
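In addition to periodically reviewing the setting, you may be able to alert on the elevation event itself in the Azure Activity log. The following Kusto sketch assumes the Activity log is connected to your workspace; the `elevateAccess` operation name and the `AzureActivity` columns shown are assumptions to confirm in your environment.

```kusto
// Sketch: surface Global Administrator elevation to Azure resource scope.
// Assumes the Azure Activity log is connected; confirm the operation name in your tenant.
AzureActivity
| where OperationNameValue contains "elevateAccess"
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue, ActivityStatusValue
| order by TimeGenerated desc
```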
-
-For more information see [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
+For more information, see [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md).
## Next steps
-See these security operations guide articles:
[Azure AD security operations overview](security-operations-introduction.md) [Security operations for user accounts](security-operations-user-accounts.md)
-[Security operations for privileged accounts](security-operations-privileged-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
-[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+[Security operations for privileged accounts](security-operations-privileged-accounts.md)
[Security operations for applications](security-operations-applications.md)
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-user-accounts.md
Previously updated : 08/19/2022 Last updated : 09/06/2022
-# Azure Active Directory security operations for user accounts
+# Azure Active Directory security operations for user accounts
-User identity is one of the most important aspects of protecting your organization and data. This article provides guidance for monitoring account creation, deletion, and account usage. The first portion covers how to monitor for unusual account creation and deletion. The second portion covers how to monitor for unusual account usage.
+User identity is one of the most important aspects of protecting your organization and data. This article provides guidance for monitoring account creation, deletion, and account usage. The first portion covers how to monitor for unusual account creation and deletion. The second portion covers how to monitor for unusual account usage.
If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding.
This article covers general user accounts. For privileged accounts, see Security
## Define a baseline
-To discover anomalous behavior, you first must define what normal and expected behavior is. Defining what expected behavior for your organization is, helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
+To discover anomalous behavior, you first must define what normal and expected behavior is. Defining expected behavior for your organization helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
-Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
+Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization. * **User account creation** – evaluate the following:
- * Strategy and principles for tools and processes used for creating and managing user accounts. For example, are there standard attributes, formats that are applied to user account attributes.
+ * Strategy and principles for tools and processes used for creating and managing user accounts. For example, are there standard attributes or formats that are applied to user account attributes?
* Approved sources for account creation. For example, originating in Active Directory (AD), Azure Active Directory or HR systems like Workday.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as
* Provisioning of guest accounts and alert parameters for accounts created outside of entitlement management or other normal processes.
- * Strategy and alert parameters for accounts created, modified, or disabled by an account that is not an approved user administrator.
+ * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved user administrator.
* Monitoring and alert strategy for accounts missing standard attributes, such as employee ID or not following organizational naming conventions.
Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as
* The forests, domains, and organizational units (OUs) in scope for synchronization. Who are the approved administrators who can change these settings and how often is the scope checked?
- * The types of accounts that are synchronized. For example, user accounts and or service accounts.
+ * The types of accounts that are synchronized. For example, user accounts and/or service accounts.
* The process for creating privileged on-premises accounts and how the synchronization of this type of account is controlled.
For more information for securing and monitoring on-premises accounts, see [Prot
* The process to create and maintain a list of trusted individuals and/or processes expected to create and manage cloud user accounts.
- * The process to create and maintained an alert strategy for non-approved cloud-based accounts.
+ * The process to create and maintain an alert strategy for non-approved cloud-based accounts.
## Where to look
-The log files you use for investigation and monitoring are:
+The log files you use for investigation and monitoring are:
* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md) * [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
The log files you use for investigation and monitoring are:
* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md)
-From the Azure portal you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** ΓÇô enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+
+* **[Sigma rule templates](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hub integration.
+* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Azure AD logs can be integrated with other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** ΓÇô enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, as well as the results of policies, including device state. This workbook enables you to view an impact summary, and identify the impact over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
+Much of what you will monitor and alert on are the effects of your Conditional Access policies. You can use the [Conditional Access insights and reporting workbook](../conditional-access/howto-conditional-access-insights-reporting.md) to examine the effects of one or more Conditional Access policies on your sign-ins, and the results of policies, including device state. This workbook enables you to view a summary, and identify the effects over a specific time period. You can also use the workbook to investigate the sign-ins of a specific user.
- The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
+ The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
## Account creation
Anomalous account creation can indicate a security issue. Short lived accounts,
Account creation and deletion outside of normal identity management processes should be monitored in Azure AD. Short-lived accounts are accounts created and deleted in a short time span. This type of account creation and quick deletion could mean a bad actor is trying to avoid detection by creating accounts, using them, and then deleting the account.
-Short-lived account patterns might indicate non-approved people or processes might have the right to create and delete accounts that fall outside of established processes and policies. This type of behavior removes visible markers from the directory.
+Short-lived account patterns might indicate that non-approved people or processes have the right to create and delete accounts outside of established processes and policies. This type of behavior removes visible markers from the directory.
-If the data trail for account creation and deletion is not discovered quickly, the information required to investigate an incident may no longer exist. For example, accounts might be deleted and then purged from the recycle bin. Audit logs are retained for 30 days. However, you can export your logs to Azure Monitor or a security information and event management (SIEM) solution for longer term retention.
+If the data trail for account creation and deletion is not discovered quickly, the information required to investigate an incident may no longer exist. For example, accounts might be deleted and then purged from the recycle bin. Audit logs are retained for 30 days. However, you can export your logs to Azure Monitor or a security information and event management (SIEM) solution for longer term retention.
-| What to monitor | Risk Level | Where | Filter/sub-filter | Notes |
-||||--|-|
-| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) |
-| Accounts created and deleted by non-approved users or processes. | Medium | Azure AD Audit logs | Initiated by (actor) ΓÇô USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actor are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
-| Accounts from non-approved sources. | Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry is not from an approved domain or is a known blocked domain, configure to send an alert. |
-| Accounts assigned to a privileged role. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
+|What to monitor|Risk Level|Where|Filter/sub-filter|Notes|
+||||||
+| Account creation and deletion events within a close time frame. | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br> | Search for user principal name (UPN) events. Look for accounts created and then deleted in under 24 hours.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedandDeletedinShortTimeframe.yaml) |
+| Accounts created and deleted by non-approved users or processes. | Medium| Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>and-or<br>Activity: Delete user<br>Status = success | If the actors are non-approved users, configure to send an alert. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
+| Accounts from non-approved sources. | Medium | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Target(s) = USER PRINCIPAL NAME | If the entry isn't from an approved domain or is a known blocked domain, configure to send an alert.<br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Accountcreatedfromnon-approvedsources.yaml) |
+| Accounts assigned to a privileged role.| High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>-and-<br>Activity: Delete user<br>Status = success<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
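As a hedged illustration of the first row, the following Kusto sketch correlates user creation and deletion events in the `AuditLogs` table. It's a simplified version of the pattern the linked Sentinel template implements, not a replacement for it; tune the window and normalize UPNs to match your tenant.

```kusto
// Sketch: accounts created and then deleted within 24 hours (short-lived accounts).
// Note: deleted-user UPNs can carry an object ID prefix in some tenants; normalize if needed.
let window = 24h;
let adds = AuditLogs
    | where OperationName == "Add user" and Result == "success"
    | extend TargetUpn = tolower(tostring(TargetResources[0].userPrincipalName))
    | project AddTime = TimeGenerated, TargetUpn, AddedBy = tostring(InitiatedBy.user.userPrincipalName);
let deletes = AuditLogs
    | where OperationName == "Delete user" and Result == "success"
    | extend TargetUpn = tolower(tostring(TargetResources[0].userPrincipalName))
    | project DeleteTime = TimeGenerated, TargetUpn, DeletedBy = tostring(InitiatedBy.user.userPrincipalName);
adds
| join kind=inner deletes on TargetUpn
| where DeleteTime between (AddTime .. (AddTime + window))
| project TargetUpn, AddTime, DeleteTime, AddedBy, DeletedBy
```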
-Both privileged and non-privileged accounts should be monitored and alerted. However, since privileged accounts have administrative permissions, they should have higher priority in your monitor, alert, and respond processes.
+Both privileged and non-privileged accounts should be monitored and alerted on. However, since privileged accounts have administrative permissions, they should have higher priority in your monitor, alert, and respond processes.
### Accounts not following naming policies
-User accounts not following naming policies might have been created outside of organizational policies.
+User accounts not following naming policies might have been created outside of organizational policies.
A best practice is to have a naming policy for user objects. Having a naming policy makes management easier and helps provide consistency. The policy can also help discover when users have been created outside of approved processes. A bad actor might not be aware of your naming standards, which can make it easier to detect an account provisioned outside of your organizational processes.
Organizations tend to have specific formats and attributes that are used for cre
* User account UPN = Firstname.Lastname@contoso.com
-User accounts also frequently have an attribute that identifies a real user. For example, EMPID = XXXNNN. The following are suggestions to help you think about and define what normal is for your organization, as well as thing to consider when defining your baseline for log entries where accounts don't follow your organization's naming convention:
+Frequently, user accounts have an attribute that identifies a real user. For example, EMPID = XXXNNN. Use the following suggestions to help define normal for your organization and to set a baseline for log entries where accounts don't follow your naming convention:
-* Accounts that don't follow the naming convention. For example, `nnnnnnn@contoso.com` versus `firstname.lastname@contoso.com`.
+* Accounts that don't follow the naming convention. For example, `nnnnnnn@contoso.com` versus `firstname.lastname@contoso.com`.
-* Accounts that don't have the standard attributes populated or are not in the correct format. For example, not having a valid employee ID.
+* Accounts that don't have the standard attributes populated or aren't in the correct format. For example, not having a valid employee ID.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| User accounts that do not have expected attributes defined.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID |
-| User accounts created using incorrect naming format.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that does not follow your naming policy. |
-| Privileged accounts that do not follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where sign in name does not match your organizations format. For example, ADM_ as a prefix. |
-| Privileged accounts that do not follow naming policy.| High| Azure AD directory| [List Azure AD role assignments](../roles/view-assignments.md)| List roles assignments for Azure AD roles alert where UPN does not match your organizations format. For example, ADM_ as a prefix. |
--
+| User accounts that don't have expected attributes defined.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/Useraccountcreatedwithoutexpectedattributesdefined.yaml) |
+| User accounts created using incorrect naming format.| Low| Azure AD Audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that does not follow your naming policy. <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAccountCreatedUsingIncorrectNamingFormat.yaml) |
+| Privileged accounts that don't follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. For example, ADM_ as a prefix. |
+| Privileged accounts that don't follow naming policy.| High| Azure AD directory| [List Azure AD role assignments](../roles/view-assignments.md)| List role assignments for Azure AD roles and alert where the UPN doesn't match your organization's format. For example, ADM_ as a prefix. |
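A simple way to operationalize the naming checks above is to compare newly created UPNs against a regular expression that encodes your convention. The following Kusto sketch uses `firstname.lastname@contoso.com` purely as a placeholder pattern; substitute your own convention and attribute checks.

```kusto
// Sketch: flag newly created accounts whose UPN doesn't match the naming convention.
// The regex below (firstname.lastname@contoso.com) is only an illustrative placeholder.
AuditLogs
| where OperationName == "Add user" and Result == "success"
| extend NewUserUpn = tolower(tostring(TargetResources[0].userPrincipalName))
| where not(NewUserUpn matches regex @"^[a-z]+\.[a-z]+@contoso\.com$")
| project TimeGenerated, NewUserUpn, CreatedBy = tostring(InitiatedBy.user.userPrincipalName)
```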
For more information on parsing, see:
-* For Azure AD Audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md)
+* Azure AD Audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md)
-* For Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md)
+* Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md)
-* For Azure Active Directory - [List Azure AD role assignments](../roles/view-assignments.md)
+* Azure Active Directory - [List Azure AD role assignments](../roles/view-assignments.md)
### Accounts created outside normal processes Having standard processes to create users and privileged accounts is important so that you can securely control the lifecycle of identities. If users are provisioned and deprovisioned outside of established processes, it can introduce security risks. Operating outside of established processes can also introduce identity management problems. Potential risks include:
-* User and privileged accounts might not be governed to adhere to organizational policies. This can lead to a wider attack surface on accounts that are not managed correctly.
+* User and privileged accounts might not be governed to adhere to organizational policies. This can lead to a wider attack surface on accounts that aren't managed correctly.
-* It becomes harder to detect when bad actors create accounts for malicious purposes. By having valid accounts created outside of established procedures, it becomes harder to detect when accounts are created, or permissions modified for malicious purposes.
+* It becomes harder to detect when bad actors create accounts for malicious purposes. By having valid accounts created outside of established procedures, it becomes harder to detect when accounts are created, or permissions modified for malicious purposes.
We recommend that user and privileged accounts only be created following your organization's policies. For example, an account should be created with the correct naming standards, organizational information, and under the scope of the appropriate identity governance. Organizations should have rigorous controls for who has the rights to create, manage, and delete identities. Roles to create these accounts should be tightly managed and the rights only available after following an established workflow to approve and obtain these permissions. | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - | - | - | - |
-| User accounts created or deleted by non-approved users or processes.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>and-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Initiated by (actor) = USER PRINCIPAL NAME| Alert on accounts created by non-approved users or processes. Prioritize accounts created with heightened privileges.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
+| User accounts created or deleted by non-approved users or processes.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>and-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Initiated by (actor) = USER PRINCIPAL NAME| Alert on accounts created by non-approved users or processes. Prioritize accounts created with heightened privileges.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AccountCreatedDeletedByNonApprovedUser.yaml) |
| User accounts created or deleted from non-approved sources.| Medium| Azure AD Audit logs| Activity: Add user<br>Status = success<br>-or-<br>Activity: Delete user<br>Status = success<br>-and-<br>Target(s) = USER PRINCIPAL NAME| Alert when the domain is non-approved or known blocked domain. |
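One way to approximate the table above is to maintain an allow list of the accounts and processes that are approved to provision identities, and alert on everything else. The following Kusto sketch uses placeholder service-account UPNs; replace them with your approved provisioning actors.

```kusto
// Sketch: user accounts created or deleted by actors outside an approved list.
// The UPNs in approvedActors are placeholders for your provisioning accounts and processes.
let approvedActors = dynamic(["svc-provisioning@contoso.com", "hr-sync@contoso.com"]);
AuditLogs
| where OperationName in ("Add user", "Delete user") and Result == "success"
| extend Actor = tolower(tostring(InitiatedBy.user.userPrincipalName))
| where isnotempty(Actor) and Actor !in (approvedActors)
| project TimeGenerated, OperationName, Actor, Target = tostring(TargetResources[0].userPrincipalName)
```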
+## Unusual sign-ins
-## Unusual sign ins
-
-Seeing failures for user authentication is normal. But seeing patterns or blocks of failures can be an indicator that something is happening with a user's Identity. For example, in the case of Password spray or Brute Force attacks, or when a user account is compromised. It is critical that you monitor and alert when patterns emerge. This helps ensure you can protect the user and your organization's data.
+Seeing failures for user authentication is normal. But seeing patterns or blocks of failures can be an indicator that something is happening with a user's identity. For example, during password spray or brute force attacks, or when a user account is compromised. It's critical that you monitor and alert when patterns emerge. This helps ensure you can protect the user and your organization's data.
-Success appears to say all is well. But it can mean that a bad actor has successfully accessed a service. Monitoring successful logins helps you detect user accounts that are gaining access but are not user accounts that should have access. User authentication successes are normal entries in Azure AD Sign-Ins logs. We recommend you monitor and alert to detect when patterns emerge. This helps ensure you can protect user accounts and your organization's data.
+Success appears to say all is well. But it can mean that a bad actor has successfully accessed a service. Monitoring successful sign-ins helps you detect user accounts that are gaining access but shouldn't have it. User authentication successes are normal entries in Azure AD Sign-ins logs. We recommend you monitor and alert to detect when patterns emerge. This helps ensure you can protect user accounts and your organization's data.
-
-As you design and operationalize a log monitoring and alerting strategy, consider the tools available to you through the Azure portal. Identity Protection enables you to automate the detection, protection, and remediation of identity-based risks. Identity protection uses intelligence-fed machine learning and heuristic systems to detect risk and assign a risk score for users and sign ins. Customers can configure policies based on a risk level for when to allow or deny access or allow the user to securely self-remediate from a risk. The following Identity Protection risk detections inform risk levels today:
+As you design and operationalize a log monitoring and alerting strategy, consider the tools available to you through the Azure portal. Identity Protection enables you to automate the detection, protection, and remediation of identity-based risks. Identity protection uses intelligence-fed machine learning and heuristic systems to detect risk and assign a risk score for users and sign-ins. Customers can configure policies based on a risk level for when to allow or deny access or allow the user to securely self-remediate from a risk. The following Identity Protection risk detections inform risk levels today:
| What to monitor | Risk Level | Where | Filter/sub-filter | Notes | | - | - | - | - | - |
As you design and operationalize a log monitoring and alerting strategy, conside
| Suspicious inbox forwarding sign-in risk detection| Varies| Azure AD Risk Detection logs| UX: Suspicious inbox forwarding<br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) | | Azure AD threat intelligence sign-in risk detection| High| Azure AD Risk Detection logs| UX: Azure AD threat intelligence<br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Azure AD Identity Protection](../identity-protection/concept-identity-protection-risks.md) |
-For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md).
-
+For more information, visit [What is Identity Protection](../identity-protection/overview-identity-protection.md).
### What to look for Configure monitoring on the data within the Azure AD Sign-ins Logs to ensure that alerting occurs and adheres to your organization's security policies. Some examples of this are:
-* **Failed Authentications**: As humans we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, Password Spray normally preys on easier passwords against many accounts, while Brute Force attempts many passwords against targeted accounts.
+* **Failed Authentications**: As humans, we all get our passwords wrong from time to time. However, many failed authentications can indicate that a bad actor is trying to obtain access. Attacks differ in ferocity but can range from a few attempts per hour to a much higher rate. For example, password spray normally preys on easier passwords against many accounts, while brute force attempts many passwords against targeted accounts.
-* **Interrupted Authentications**: An Interrupt in Azure AD represents an injection of an additional process to satisfy authentication, such as when enforcing a control in a CA policy. This is a normal event and can happen when applications are not configured correctly. But when you see many interrupts for a user account it could indicate something is happening with that account.
+* **Interrupted Authentications**: An Interrupt in Azure AD represents an injection of a process to satisfy authentication, such as when enforcing a control in a CA policy. This is a normal event and can happen when applications aren't configured correctly. But when you see many interrupts for a user account it could indicate something is happening with that account.
- * For example, if you filtered on a user in Sign-in logs and see a large volume of sign in status = Interrupted and Conditional Access = Failure. Digging deeper it may show in authentication details that the password is correct, but that strong authentication is required. This could mean the user is not completing multi-factor authentication (MFA) which could indicate the user's password is compromised and the bad actor is unable to fulfill MFA.
+ * For example, you filter on a user in the Sign-in logs and see a large volume of sign-in status = Interrupted and Conditional Access = Failure. Digging deeper, the authentication details may show that the password is correct, but that strong authentication is required. This could mean the user isn't completing multi-factor authentication (MFA), which could indicate the user's password is compromised and the bad actor is unable to fulfill MFA.
-* **Smart lock out**: Azure AD provides a smart lockout service which introduces the concept of familiar and non-familiar locations to the authentication process. A user account visiting a familiar location might authenticate successfully while a bad actor unfamiliar with the same location is blocked after several attempts. Look for accounts that have been locked out and investigate further.
+* **Smart lock-out**: Azure AD provides a smart lock-out service which introduces the concept of familiar and non-familiar locations to the authentication process. A user account visiting a familiar location might authenticate successfully while a bad actor unfamiliar with the same location is blocked after several attempts. Look for accounts that have been locked out and investigate further.
-* **IP Changes**: It is normal to see users originating from different IP addresses. However, Zero Trust states never trust and always verify. Seeing a large volume of IP addresses and failed sign ins can be an indicator of intrusion. Look for a pattern of many failed authentications taking place from multiple IP addresses. Note, virtual private network (VPN) connections can cause false positives. Regardless of the challenges, we recommend you monitor for IP address changes and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
+* **IP changes**: It is normal to see users originating from different IP addresses. However, Zero Trust states never trust and always verify. Seeing a large volume of IP addresses and failed sign-ins can be an indicator of intrusion. Look for a pattern of many failed authentications taking place from multiple IP addresses. Note, virtual private network (VPN) connections can cause false positives. Regardless of the challenges, we recommend you monitor for IP address changes and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
-* **Locations**: Generally, you expect a user account to be in the same geographical location. You also expect sign ins from locations where you have employees or business relations. When the user account comes from a different international location in less time than it would take to travel there, it can indicate the user account is being abused. Note, VPNs can cause false positives, we recommend you monitor for user accounts signing in from geographically distant locations and if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks.
+* **Locations**: Generally, you expect a user account to be in the same geographical location. You also expect sign-ins from locations where you have employees or business relations. When the user account comes from a different international location in less time than it would take to travel there, it can indicate the user account is being abused. Note, VPNs can cause false positives; we recommend you monitor for user accounts signing in from geographically distant locations and, if possible, use Azure AD Identity Protection to automatically detect and mitigate these risks. A query sketch combining the failed-authentication and IP-change indicators follows this list.
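To combine the failed-authentication and IP-change indicators described above, you can group failures by user and count distinct source addresses. The following Kusto sketch uses error code 50126 (invalid username or password) and arbitrary thresholds that you should replace with values from your own baseline.

```kusto
// Sketch: users with many failed sign-ins spread across multiple IP addresses in one day.
// Thresholds are illustrative starting points; tune them against your baseline.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == "50126"
| summarize Failures = count(), DistinctIPs = dcount(IPAddress), SampleIPs = make_set(IPAddress, 20)
    by UserPrincipalName
| where Failures > 20 and DistinctIPs > 5
| order by Failures desc
```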
-For this risk area we recommend you monitor both standard user accounts and privileged accounts but prioritize investigations of privileged accounts. Privileged accounts are the most important accounts in any Azure AD tenant. For specific guidance for privileged accounts, see Security operations ΓÇô privileged accounts.
+For this risk area, we recommend you monitor standard user accounts and privileged accounts but prioritize investigations of privileged accounts. Privileged accounts are the most important accounts in any Azure AD tenant. For specific guidance for privileged accounts, see Security operations – privileged accounts.
### How to detect
-You use Azure Identity Protection and the Azure AD sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine the following:
+You use Azure Identity Protection and the Azure AD sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine:
-* the parameters that you consider normal for your user base.
+* the parameters you consider normal for your user base.
* the average number of password attempts over time before the user calls the service desk or performs a self-service password reset.
You use Azure Identity Protection and the Azure AD sign-in logs to help discover
* how many MFA attempts you want to allow before alerting, and if it will be different for user accounts and privileged accounts.
-* if legacy authentication is enabled and your roadmap for discontinuing usage.
+* if legacy authentication is enabled and your roadmap for discontinuing usage.
* the known egress IP addresses for your organization.
You use Azure Identity Protection and the Azure AD sign-in logs to help discover
* whether there are groups of users that remain stationary within a network location or country.
-* Identify any other indicators for unusual sign ins that are specific to your organization. For example days or times of the week or year that your organization does not operate.
+* any other indicators of unusual sign-ins specific to your organization. For example, days or times of the week or year when your organization doesn't operate.
-Once you have scoped what normal is for the types of accounts in your environment, consider the following to help determine which scenarios you want to monitor for and alert on, and to fine-tune your alerting.
+After you scope what normal is for the accounts in your environment, consider the following list to help determine scenarios you want to monitor and alert on, and to fine-tune your alerting.
* Do you need to monitor and alert if Identity Protection is configured?
Once you have scoped what normal is for the types of accounts in your environmen
Configure Identity Protection to help ensure protection is in place that supports your security baseline policies. For example, blocking users if risk = high. This risk level indicates with a high degree of confidence that a user account is compromised. For more information on setting up sign-in risk policies and user risk policies, visit [Identity Protection policies](../identity-protection/concept-identity-protection-policies.md). For more information on setting up Conditional Access, visit [Conditional Access: Sign-in risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md).
-The following are listed in order of importance based on the impact and severity of the entries.
+The following are listed in order of importance based on the effect and severity of the entries.
### Monitoring external user sign-ins | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has sucessfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID is not equal to Home Tenant ID |
-|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?
-|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.
+| Users authenticating to other Azure AD tenants.| Low| Azure AD Sign-ins log| Status = success<br>Resource tenantID != Home Tenant ID| Detects when a user has successfully authenticated to another Azure AD tenant with an identity in your organization's tenant.<br>Alert if Resource TenantID isn't equal to Home Tenant ID <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/UsersAuthenticatingtoOtherAzureADTenants.yaml) |
+|User state changed from Guest to Member|Medium|Azure AD Audit logs|Activity: Update user<br>Category: UserManagement<br>UserType changed from Guest to Member|Monitor and alert on change of user type from Guest to Member. Was this expected?<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserStatechangedfromGuesttoMember.yaml)
+|Guest users invited to tenant by non-approved inviters|Medium|Azure AD Audit logs|Activity: Invite external user<br>Category: UserManagement<br>Initiated by (actor): User Principal Name|Monitor and alert on non-approved actors inviting external users.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/GuestUsersInvitedtoTenantbyNewInviters.yaml)
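For the first row in the table above, a hedged Kusto sketch follows. It assumes the `SigninLogs` table in your workspace exposes `HomeTenantId` and `ResourceTenantId`; if it doesn't, the linked Sentinel hunting query is the better starting point.

```kusto
// Sketch: members of your tenant successfully signing in to other Azure AD tenants.
// Verify that HomeTenantId and ResourceTenantId are populated in your SigninLogs schema.
SigninLogs
| where ResultType == "0"  // success
| where isnotempty(ResourceTenantId) and isnotempty(HomeTenantId)
| where ResourceTenantId != HomeTenantId
| summarize SignIns = count(), ResourceTenants = make_set(ResourceTenantId, 10)
    by UserPrincipalName, AppDisplayName
| order by SignIns desc
```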
+### Monitoring for failed unusual sign-ins | What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
-| Smart lock-out events.| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 ΓÇô IdsLocked| Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml) |
-| Interrupts| Medium - if Isolated Incident<br>High - if a number of accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge was not satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suite your organizational behaviors and limit false alerts from being generated.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
+| Failed sign-in attempts.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code 50126 - <br>Error validating credentials due to invalid username or password.| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
+| Smart lock-out events.| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked| Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SmartLockouts.yaml) |
+| Interrupts| Medium - if Isolated Incident<br>High - if many accounts are experiencing the same pattern or a VIP.| Azure AD Sign-ins log| 500121, Authentication failed during strong authentication request. <br>-or-<br>50097, Device authentication is required or 50074, Strong Authentication is required. <br>-or-<br>50155, DeviceAuthenticationFailed<br>-or-<br>50158, ExternalSecurityChallenge - External security challenge wasn't satisfied<br>-or-<br>53003 and Failure reason = blocked by CA| Monitor and alert on interrupts.<br>Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml) |
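As an example of baselining the smart lock-out row above, the following Kusto sketch counts error 50053 events per account and source address; the threshold is a placeholder to replace with your own baseline value.

```kusto
// Sketch: smart lock-out events (error 50053) per account over the last day.
// Set the threshold from your own baseline before alerting on it.
let threshold = 10;
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == "50053"
| summarize Lockouts = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by UserPrincipalName, IPAddress
| where Lockouts >= threshold
| order by Lockouts desc
```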
-
-The following are listed in order of importance based on the impact and severity of the entries.
+The following are listed in order of importance based on the effect and severity of the entries.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
-| Failed authentications from countries you do not operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. |
-| Failed authentications for legacy protocols or protocols that are not used .| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
-| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
-| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. I.e., total failures for today is >10 % on the same day the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
-| Authentication occurring at times and days of the week when countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
-| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked it is still important to log and alert on this activity.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
-
+| Multi-factor authentication (MFA) fraud alerts.| High| Azure AD Sign-ins log| Status = failed<br>-and-<br>Details = MFA Denied<br>| Monitor and alert on any entry.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/MFARejectedbyUser.yaml) |
+| Failed authentications from countries you don't operate out of.| Medium| Azure AD Sign-ins log| Location = \<unapproved location\>| Monitor and alert on any entries. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationAttemptfromNewCountry.yaml) |
+| Failed authentications for legacy protocols or protocols that aren't used.| Medium| Azure AD Sign-ins log| Status = failure<br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
+| Failures blocked by CA.| Medium| Azure AD Sign-ins log| Error code = 53003 <br>-and-<br>Failure reason = blocked by CA| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
+| Increased failed authentications of any type.| Medium| Azure AD Sign-ins log| Capture increases in failures across the board. That is, today's failure total is more than 10% higher than on the same day the previous week.| If you don't have a set threshold, monitor and alert if failures increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SpikeInFailedSignInAttempts.yaml) |
+| Authentication occurring at times and days of the week when countries don't conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>-and-<br>Location = \<location\><br>-and-<br>Day\Time = \<not normal working hours\>| Monitor and alert on any entries.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AnomolousSignInsBasedonTime.yaml) |
+| Account disabled/blocked for sign-ins| Low| Azure AD Sign-ins log| Status = Failure<br>-and-<br>error code = 50057, The user account is disabled.| This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked, it is important to log and alert on this activity.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-BlockedAccounts.yaml) |
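The legacy-protocol rows in this table and the next one can be approximated with a single query over `ClientAppUsed`. The protocol names below are common values, not an exhaustive or authoritative list; check which values appear in your own sign-in logs.

```kusto
// Sketch: authentications over legacy protocols, which should be rare or absent
// once legacy authentication is disabled. ClientAppUsed values are typical examples only.
SigninLogs
| where TimeGenerated > ago(7d)
| where ClientAppUsed in ("Exchange ActiveSync", "IMAP4", "POP3", "Authenticated SMTP", "MAPI Over HTTP", "Other clients")
| summarize Attempts = count(), Successes = countif(ResultType == "0")
    by UserPrincipalName, ClientAppUsed
| order by Successes desc
```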
### Monitoring for successful unusual sign-ins
- | What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
+| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPricipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. |
-| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor this periodically and ensure this is the expected behavior.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml) |
+| Authentications of privileged accounts outside of expected controls.| High| Azure AD Sign-ins log| Status = success<br>-and-<br>UserPrincipalName = \<Admin account\><br>-and-<br>Location = \<unapproved location\><br>-and-<br>IP Address = \<unapproved IP\><br>Device Info= \<unapproved Browser, Operating System\><br>| Monitor and alert on successful authentication for privileged accounts outside of expected controls. Three common controls are listed. <br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/AuthenticationsofPrivilegedAccountsOutsideofExpectedControls.yaml)<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| When only single-factor authentication is required.| Low| Azure AD Sign-ins log| Status = success<br>Authentication requirement = Single-factor authentication| Monitor periodically and ensure expected behavior.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Discover privileged accounts not registered for MFA.| High| Azure Graph API| Query for IsMFARegistered eq false for administrator accounts. <br>[List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http)| Audit and investigate to determine if intentional or an oversight. A sample request is sketched after this table. |
-| Successful authentications from countries your organization does not operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the city names you provide. |
-| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 ΓÇô Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml) |
-| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml) |
+| Successful authentications from countries your organization doesn't operate out of.| Medium| Azure AD Sign-ins log| Status = success<br>Location = \<unapproved country\>| Monitor and alert on any entries not equal to the locations you provide.<br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication, session blocked by CA.| Medium| Azure AD Sign-ins log| Status = success<br>-and-<br>error code = 53003 – Failure reason, blocked by CA| Monitor and investigate when authentication is successful, but session is blocked by CA.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Successful authentication after you have disabled legacy authentication.| Medium| Azure AD Sign-ins log| status = success <br>-and-<br>Client app = Other Clients, POP, IMAP, MAPI, SMTP, ActiveSync| If your organization has disabled legacy authentication, monitor and alert when successful legacy authentication has taken place.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/9bd30c2d4f6a2de17956cd11536a83adcbfc1757/Hunting%20Queries/SigninLogs/LegacyAuthAttempt.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
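For the **Discover privileged accounts not registered for MFA** row above, a minimal sketch of the Graph request looks like the following. It assumes the beta `credentialUserRegistrationDetails` report referenced in the table is available in your tenant; the call returns every user who isn't registered for MFA, so you still need to cross-reference the results against your administrator role assignments.

```http
GET https://graph.microsoft.com/beta/reports/credentialUserRegistrationDetails?$filter=isMfaRegistered eq false
```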
-
-On periodic basis, we recommend you review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. Additionally, review for successful authentication increases or at unexpected times based on the location.
+We recommend you periodically review authentications to medium business impact (MBI) and high business impact (HBI) applications where only single-factor authentication is required. For each, you want to determine if single-factor authentication was expected or not. In addition, review for increases in successful authentications, and for successful authentications at unexpected times or from unexpected locations.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - | - |- |- |- |
-| Authentications to MBI and HBI application using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml) |
-| Authentications at days and times of the week or year that countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications days and times of the week or year that countries do not conduct normal business operations.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml) |
-| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. I.e., total successes for today is >10 % on the same day the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml) |
+| Authentications to MBI and HBI applications using single-factor authentication.| Low| Azure AD Sign-ins log| status = success<br>-and-<br>Application ID = \<HBI app\> <br>-and-<br>Authentication requirement = single-factor authentication.| Review and validate this configuration is intentional.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-NewSingleFactorAuth.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Authentications at days and times of the week or year that countries do not conduct normal business operations.| Low| Azure AD Sign-ins log| Capture interactive authentication occurring outside of normal operating days\time. <br>Status = success<br>Location = \<location\><br>Date\Time = \<not normal working hours\>| Monitor and alert on authentications at days and times of the week or year that countries do not conduct normal business operations.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccounts-UnusualLogonTimes.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Measurable increase of successful sign ins.| Low| Azure AD Sign-ins log| Capture increases in successful authentication across the board. That is, today's success total is more than 10% higher than the total for the same day of the previous week.| If you don't have a set threshold, monitor and alert if successful authentications increase by 10% or greater.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/UserAccountsMeasurableincreaseofsuccessfulsignins.yaml)<br><br>[Sigma rules template](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
## Next steps+ See these security operations guide articles: [Azure AD security operations overview](security-operations-introduction.md)
-[Security operations for user accounts](security-operations-user-accounts.md)
+[Security operations for consumer accounts](security-operations-consumer-accounts.md)
[Security operations for privileged accounts](security-operations-privileged-accounts.md)
See these security operations guide articles:
[Security operations for applications](security-operations-applications.md) [Security operations for devices](security-operations-devices.md)
-
+ [Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
To get further information than just the runs summary for a workflow, you're als
To view a status list of users processed by a workflow, which are UserProcessingResults, you'd make the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults
-```
-
-By default **userProcessingResults** returns only information from the last 7 days. To get information as far back as 30 days, you would run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=<Date range for processing results>
-```
-
-by default **userProcessingResults** returns only information from the last 7 days. To filter information as far back as 30 days, you would run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/userProcessingResults?$filter=<Date range for processing results>
-```
-
-An example of a call to get **userProcessingResults** for a month would be as follows:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=< startedDateTime ge 2022-05-23T00:00:00Z and startedDateTime le 2022-06-22T00:00:00Z
-```
+To view a list of user processing results by using the Microsoft Graph API, see: [List userProcessingResults](/graph/api/identitygovernance-workflow-list-userprocessingresults).
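As a quick illustration, the call has the following shape (a sketch of the beta endpoint; the linked reference documents the supported versions, parameters, and responses):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{workflowId}/userProcessingResults
```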
### User processing results using Microsoft Graph
-When multiple user events are processed by a workflow, running the **userProcessingResults** may give incomprehensible information. To get a summary of information such as total users and tasks, and failed users and tasks, Lifecycle Workflows provides a call to get count totals.
-
-To view a summary in count form, you would run the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(<Date Range>)
-```
-
-An example to get the summary between May 1, and May 30, you would run the following call:
+To view a summary of user processing results by using the Microsoft Graph API, see: [userProcessingResult: summary](/graph/api/identitygovernance-userprocessingresult-summary).
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-### List task processing results of a given user processing result
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults/
-```
## Run workflow history via Microsoft Graph ### List runs using Microsoft Graph
-With Microsoft Graph, you're able to get full details of workflow and user processing run information.
-
-To view a list of runs, you'd make the following API call:
+To view the runs of a workflow by using the Microsoft Graph API, see: [runs](/graph/api/resources/identitygovernance-run).
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs
-```
### Get a summary of runs using Microsoft Graph
-To get a summary of runs for a workflow, which includes detailed information for counts of failed runs and tasks, along with successful runs and tasks for a time range, you'd make the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=<time>,endDateTime=<time>)
-```
-An example to get a summary of runs of a workflow through the time interval of May 2022 would be as follows:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=202205-31T00:00:00Z)
-```
+To view a run summary by using the Microsoft Graph API, see: [run summary of a lifecycle workflow](/graph/api/identitygovernance-run-summary).
### List user and task processing results of a given run using Microsoft Graph
-With Lifecycle Workflows, you're able to check the status of each user and task who had a workflow processed for them as part of a run.
-
-
-You're also able to use **userProcessingResults** with the run call to get users processed for a run by making the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults
-```
+To get the user processing results for a run of a lifecycle workflow by using the Microsoft Graph API, see: [Get userProcessingResult (for a run of a lifecycle workflow)](/graph/api/identitygovernance-userprocessingresult-get).
-This API call will also return a **userProcessingResults ID** value, which can be used to retrieve task processing information in the following call:
+To list the task processing results for a user processing result by using the Microsoft Graph API, see: [List taskProcessingResults (for a userProcessingResult)](/graph/api/identitygovernance-userprocessingresult-list-taskprocessingresults).
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId> /runs/<runId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults
-```
> [!NOTE] > A workflow must have activity in the past 7 days to get **userProcessingResults ID**. If there has not been any activity in that time-frame, the **userProcessingResults** call will not return a value.
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
If you are using the Azure portal to create a workflow, you can customize existi
## Create a workflow using Microsoft Graph
-Workflows can be created using Microsoft Graph API. Creating a workflow using the Graph API allows you to automatically set it to enabled. Setting it to enabled is done using the `isEnabled` parameter.
-
-The table below shows the parameters that must be defined during workflow creation:
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver. Category of tasks within a workflow must also contain the category of the workflow to run. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to ΓÇ£true" then the workflow will run. |
-|IsSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnbaled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: A time-based attribute and an integer parameter defining when a workflow will run between -60 and a scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). |
----
-To create a joiner workflow, in Microsoft Graph, use the following request and body:
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows
-Content-type: application/json
-```
-
-```Request body
-{
- "category": "joiner",
- "displayName": "<Unique workflow name string>",
- "description": "<Unique workflow description>",
- "isEnabled":true,
- "tasks":[
- {
- "category": "joiner",
- "isEnabled": true,
- "taskTemplateId": "<Unique Task template>",
- "displayName": "<Unique task name>",
- "description": "<Task template description>",
- "arguments": "<task arguments>"
- }
- ],
- "executionConditions": {
- "@odata.type" : "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition",
- "trigger": {
- "@odata.type" : "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute":"<time-based trigger argument>",
- "arguments": -7
- },
- "scope": {
- "@odata.type" : "microsoft.graph.identityGovernance.ruleBasedScope",
- "rule": "employeeType eq '<Employee type attribute>' AND department -eq '<department attribute>'"
- }
- }
-}
-
-> [!NOTE]
-> time based trigger arguments can be from -60 to 60. The negative value denotes **Before** a time based argument, while a positive value denotes **After**. For example the -7 in the workflow example above denotes the workflow will run 1 week before the time-based argument happens.
-
-```
-
-To change this workflow from joiner to leaver, replace the category parameters to "leaver". To get a list of the task definitions that can be added to your workflow run the following call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions
-```
-
-The response to the code will look like:
-
-```Response body
-{
- "@odata.context": "https://graph.microsoft-ppe.com/testppebetalcwpp4/$metadata#identityGovernance/lifecycleWorkflows/taskDefinitions",
- "@odata.count": 13,
- "value": [
- {
- "category": "joiner,leaver",
- "description": "Add user to a group",
- "displayName": "Add User To Group",
- "id": "22085229-5809-45e8-97fd-270d28d66910",
- "version": 1,
- "parameters": [
- {
- "name": "groupID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner,leaver",
- "description": "Disable user account in the directory",
- "displayName": "Disable User Account",
- "id": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Enable user account in the directory",
- "displayName": "Enable User Account",
- "id": "6fc52c9d-398b-4305-9763-15f42c1676fc",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Run a custom task extension",
- "displayName": "run a Custom Task Extension",
- "id": "4262b724-8dba-4fad-afc3-43fcbb497a0e",
- "version": 1,
- "parameters":
- {
- "name": "customtaskextensionID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner,leaver",
- "description": "Remove user from membership of selected Azure AD groups",
- "displayName": "Remove user from selected groups",
- "id": "1953a66c-751c-45e5-8bfe-01462c70da3c",
- "version": 1,
- "parameters": [
- {
- "name": "groupID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "joiner",
- "description": "Generate Temporary Access Password and send via email to user's manager",
- "displayName": "Generate TAP And Send Email",
- "id": "1b555e50-7f65-41d5-b514-5894a026d10d",
- "version": 1,
- "parameters": [
- {
- "name": "tapLifetimeMinutes",
- "values": [],
- "valueType": "string"
- },
- {
- "name": "tapIsUsableOnce",
- "values": [
- "true",
- "false"
- ],
- "valueType": "enum"
- }
- ]
- },
- {
- "category": "joiner",
- "description": "Send welcome email to new hire",
- "displayName": "Send Welcome Email",
- "id": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Add user to a team",
- "displayName": "Add User To Team",
- "id": "e440ed8d-25a1-4618-84ce-091ed5be5594",
- "version": 1,
- "parameters": [
- {
- "name": "teamID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "leaver",
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "version": 1,
- "parameters": []
- },
- {
- "category": "joiner,leaver",
- "description": "Remove user from membership of selected Teams",
- "displayName": "Remove user from selected Teams",
- "id": "06aa7acb-01af-4824-8899-b14e5ed788d6",
- "version": 1,
- "parameters": [
- {
- "name": "teamID",
- "values": [],
- "valueType": "string"
- }
- ]
- },
- {
- "category": "leaver",
- "description": "Remove user from all Azure AD groups memberships",
- "displayName": "Remove user from all groups",
- "id": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "version": 1,
- "parameters": []
- },
- {
- "category": "leaver",
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "version": 1,
- "parameters": []
- },
- {
- "category": "leaver",
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "version": 1,
- "parameters": []
- }
- ]
-}
-
-```
-For further details on task definitions and their parameters, see [Lifecycle Workflow Tasks](lifecycle-workflow-tasks.md).
-
+To create a workflow by using the Microsoft Graph API, see [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows).
## Next steps -- [Create workflow (lifecycle workflow)](/graph/api/identitygovernance-lifecycleworkflowscontainer-post-workflows?view=graph-rest-beta) - [Manage a workflow's properties](manage-workflow-properties.md) - [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
After deleting workflows, you can view them on the **Deleted Workflows (Preview)
## Delete a workflow using Microsoft Graph
- You're also able to delete, view deleted, and restore deleted Lifecycle workflows using Microsoft Graph.
+
+To delete a workflow by using the Microsoft Graph API, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta).
++
Workflows can be deleted by running the following call: ```http DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id> ``` ## View deleted workflows using Microsoft Graph
-You can view a list of deleted workflows by running the following call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows
-```
+
+To view a list of deleted workflows by using the Microsoft Graph API, see: [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems).
+ ## Permanently delete a workflow using Microsoft Graph
-Deleted workflows can be permanently deleted by running the following call:
-```http
-DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>
-```
+
+To permanently delete a workflow by using the Microsoft Graph API, see: [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete).
## Restore deleted workflows using Microsoft Graph
-Deleted workflows are available to be restored for 30 days before they're permanently deleted. To restore a deleted workflow, run the following API call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>/restore
-```
+To restore a deleted workflow by using the Microsoft Graph API, see: [Restore a deleted workflow](/graph/api/identitygovernance-workflow-restore).
> [!NOTE] > Permanently deleted workflows are not able to be restored. ## Next steps -- [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta) - [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md) - [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
Azure AD, in conjunction with Azure Monitor, provides several reports to help yo
* An administrator, or a catalog owner, can [retrieve the list of users who have access package assignments](entitlement-management-access-package-assignments.md), via the Azure portal, Graph or PowerShell. * You can also send the audit logs to Azure Monitor and view a history of [changes to the access package](entitlement-management-logs-and-reporting.md#view-events-for-an-access-package), in the Azure portal, or via PowerShell.
-* You can view the last 30 days of sign ins to an application in the [sign ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http).
+* You can view the last 30 days of sign ins to an application in the [sign ins report](../reports-monitoring/howto-find-activity-reports.md#sign-ins-report) in the Azure portal, or via [Graph](/graph/api/signin-list?view=graph-rest-1.0&tabs=http&preserve-view=true).
* You can also send the [sign in logs to Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) to archive sign in activity for up to two years. ## Monitor to adjust entitlement management policies and access as needed
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
To edit the properties of a workflow using the Azure portal, you'll do the follo
## Edit the properties of a workflow using Microsoft Graph
-To view the list of current workflows you'll run the following API call:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/
-```
-
-Lifecycle workflows can have their basic information such as "displayName", "description", and "isEnabled" edited by running this patch call and body.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"<Unique workflow name string>",
- "description":"<workflow description>",
- "isEnabled":<ΓÇ£trueΓÇ¥ or ΓÇ£falseΓÇ¥>,
-}
-
-```
+To update a workflow by using the Microsoft Graph API, see: [Update workflow](/graph/api/identitygovernance-workflow-update).
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
To edit the execution conditions of a workflow using the Azure portal, you'll do
## Create a new version of an existing workflow using Microsoft Graph
-As stated above, creating a new version of a workflow is required to change any parameter that isn't "displayName", "description", or "isEnabled". Unlike in the Azure portal, to create a new version of a workflow using Microsoft Graph requires some additional steps. First, run the API call with the changes to the body of the workflow you want to update by doing the following:
--- Get the body of the workflow you want to create a new version of by running the API call:
- ```http
- GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>
- ```
-- Copy the body of the returned workflow excluding the **id**, **"odata.context**, and **tasks@odata.context** portions of the returned workflow body. -- Make the changes in tasks and execution conditions you want for the new version of the workflow.-- Run the following **createNewVersion** API call along with the updated body of the workflow. The workflow body is wrapped in a **Workflow:{}** block.
- ```http
- POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/createNewVersion
- Content-type: application/json
-
- {
- "workflow": {
- "displayName":"New version of a workflow",
- "description":"This is a new created version of a workflow",
- "isEnabled":"true",
- "tasks":[
- {
- "isEnabled":"true",
- "taskTemplateId":"70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "displayName":"Send welcome email to new hire",
- "description":"Sends welcome email to a new hire",
- "executionSequence": 1,
- "arguments":[]
- },
- {
- "isEnabled":"true",
- "taskTemplateId":"22085229-5809-45e8-97fd-270d28d66910",
- "displayName":"Add user to group",
- "description":"Adds user to a group.",
- "executionSequence": 2,
- "arguments":[
- {
- "name":"groupID",
- "value":"<group id value>"
- }
- ]
- }
- ],
- "executionConditions": {
- "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "(department eq 'sales')"
- },
- "trigger": {
- "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeHireDate",
- "offsetInDays": -2
- }
- }
- }
- ```
+To create a new version of a workflow by using the Microsoft Graph API, see: [workflow: createNewVersion](/graph/api/identitygovernance-workflow-createnewversion).
### List workflow versions using Microsoft Graph
-Once a new version of a workflow is created, you can always find other versions by running the following call:
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions
-```
-Or to get a specific version:
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions/<version number>
-```
-
-### Reorder Tasks in a workflow using Microsoft Graph
-
-If you want to reorder tasks in a workflow, so that certain tasks run before others, you'll follow these steps:
- 1. Use a GET call to return the body of the workflow in which you want to reorder the tasks.
- ```http
- GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>
- ```
- 1. Copy the body of the workflow and paste it in the body section for the new API call.
-
- 1. Tasks are run in the order they appear within the workflow. To update the task copy the one you want to run first in the workflow body, and place it above the tasks you want to run after it in the workflow.
-
- 1. Run the **createNewVersion** API call.
+To list workflow versions by using the Microsoft Graph API, see: [List versions (of a lifecycle workflow)](/graph/api/identitygovernance-workflow-list-versions).
## Next steps - - [Check the status of a workflow](check-status-workflow.md) - [Customize workflow schedule](customize-workflow-schedule.md)
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
Use the following steps to run a workflow on-demand.
## Run a workflow on-demand using Microsoft Graph
-Running a workflow on-demand using Microsoft Graph requires users to manually be added by their user ID with a POST call.
-
-To run a workflow on-demand in Microsoft Graph, use the following request and body:
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/activate
-Content-type: application/json
-```
-
-```Request body
-{
- "subjects":[
- {"id":"<userid>"},
- {"id":"<userid>"}
- ]
-}
-
-```
+To run a workflow on-demand by using the Microsoft Graph API, see: [workflow: activate (run a workflow on-demand)](/graph/api/identitygovernance-workflow-activate).
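For illustration, a sketch of the on-demand activation request (beta endpoint) is shown below; the `subjects` array carries the object IDs of the users the workflow should run for. Refer to the linked reference for the authoritative request format.

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{workflowId}/activate
Content-type: application/json

{
  "subjects": [
    { "id": "{user-object-id}" },
    { "id": "{user-object-id}" }
  ]
}
```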
## Next steps -- [workflow: activate (run a workflow on-demand)](/graph/api/identitygovernance-workflow-activate?view=graph-rest-beta) - [Customize the schedule of workflows](customize-workflow-schedule.md) - [Delete a Lifecycle workflow](delete-lifecycle-workflow.md)
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
The manager attribute is used for email notification tasks. It's used by the li
:::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png":::
-For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context).
+For more information about updating manager information for a user in the Graph API, see the [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http&preserve-view=true) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context).
### Enabling the Temporary Access Pass (TAP) A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. - Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent a attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: [https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch](how-to-connect-syncservice-features.md#blocksoftmatch)
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indica
| | | | | Azure AD threat intelligence | Offline | This risk detection indicates some activity that is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | | Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baselines sign-in behavior for workload identities in your tenant in between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
-| Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baselines sign-in behavior for workload identities in your tenant in between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the accountΓÇÖs risk history (via UI or API). | | Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. | | Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that might be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but has not disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
## Identify risky workload identities
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
https://management.azure.com/subscriptions/{resourceID of user-assigned identity
| Parameter | Example |Description | ||||
-| $filter | ```'type' eq 'microsoft.cognitiveservices/account' and contains(name, 'test')``` | An OData expression that allows you to filter any of the available fields: name, type, resourceGroup, subscriptionId, subscriptionDisplayName<br/><br/>The following operations are supported: ```and```, ```or```, ```eq``` and ```contains``` |
+| $filter | ```type eq 'microsoft.cognitiveservices/account' and contains(name, 'test')``` | An OData expression that allows you to filter any of the available fields: name, type, resourceGroup, subscriptionId, subscriptionDisplayName<br/><br/>The following operations are supported: ```and```, ```or```, ```eq``` and ```contains``` |
| $orderby | ```name asc``` | An OData expression that allows you to order by any of the available fields | | $skip | 50 | The number of items you want to skip while paging through the results. | | $top | 10 | The number of resources to return. 0 will return only a count of the resources. |
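To illustrate how these parameters combine on a single request, here's a sketch that filters, orders, and pages the associated resources. The `listAssociatedResources` action and the `api-version` value are assumptions based on the preview REST API for user-assigned identities; substitute the exact URL and API version shown earlier in this article.

```http
POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity-name}/listAssociatedResources?$filter=type eq 'microsoft.cognitiveservices/account' and contains(name, 'test')&$orderby=name asc&$skip=0&$top=10&api-version=2021-09-30-preview
```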
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, the following versions of Confluence are supported:
- Confluence: 5.0 to 5.10 - Confluence: 6.0.1 to 6.15.9-- Confluence: 7.0.1 to 7.17.0
+- Confluence: 7.0.1 to 7.19.0
> [!NOTE] > Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/copy-metadataurl.png) +++
+1. The Name ID attribute in Azure AD can be mapped to any desired user attribute by editing the Attributes & Claims section.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to edit Attributes and Claims.](common/edit-attribute.png)
+
+ a. After clicking on Edit, any desired user attribute can be mapped by clicking on Unique User Identifier (Name ID).
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing the NameID in Attributes and Claims.](common/attribute-nameID.png)
+
+ b. On the next screen, the desired attribute, such as user.userprincipalname, can be selected as an option from the Source Attribute dropdown menu.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to select Attributes and Claims.](common/attribute-select.png)
+
+ c. The selection can then be saved by clicking on the Save button at the top.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to save Attributes and Claims.](common/attribute-save.png)
+
+ d. The user.userprincipalname source attribute in Azure AD is now mapped to the Name ID attribute in Azure AD, which the SSO plugin compares with the username attribute in Atlassian.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot showing how to review Attributes and Claims.](common/attribute-review.png)
+
+ > [!NOTE]
+ > The SSO service provided by Microsoft Azure supports SAML authentication, which can identify users by different attributes such as givenname (first name), surname (last name), email (email address), and user principal name (username). We recommend not using email as an authentication attribute because email addresses are not always verified by Azure AD. The plugin compares the value of the Atlassian username attribute with the Name ID attribute in Azure AD to determine a valid user authentication.
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
active-directory Lucid All Products Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-all-products-provisioning-tutorial.md
# Tutorial: Configure Lucid (All Products) for automatic user provisioning
-This tutorial describes the steps you need to perform in both Lucid (All Products) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Lucid (All Products)](https://www.lucid.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Lucid (All Products) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Lucid (All Products)](https://lucid.co/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The plug-in supports the following versions of Jira and Confluence:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) * Confluence: 5.0 to 5.10 * Confluence: 6.0.1 to 6.15.9
-* Confluence: 7.0.1 to 7.17.0
+* Confluence: 7.0.1 to 7.19.0
## Installation
The plug-in supports these versions:
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md) * Confluence: 5.0 to 5.10 * Confluence: 6.0.1 to 6.15.9
-* Confluence: 7.0.1 to 7.17.0
+* Confluence: 7.0.1 to 7.19.0
### Is the plug-in free or paid?
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
Content-type: application/json
{ "id": "f5bf2fc6-7135-4d94-a6fe-c26e4543bc5a",
- "servicePrincipal": "90e10a26-94cd-49d6-8cd7-cacb10f00686",
+ "verifiableCredentialServicePrincipalId": "90e10a26-94cd-49d6-8cd7-cacb10f00686",
+ "verifiableCredentialRequestServicePrincipalId": "870e10a26-94cd-49d6-8cd7-cacb10f00fe",
+ "verifiableCredentialAdminServicePrincipalId": "760e10a26-94cd-49d6-8cd7-cacb10f00ab",
"status": "Enabled" } ```
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Verified ID architecture and the component
## Prerequisites - You need an Azure tenant with an active subscription. If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) permission for the directory you want to configure.
+- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) permission for the directory you want to configure. If you're not a global administrator, you also need the [application administrator](../../active-directory/roles/permissions-reference.md#application-administrator) permission to complete the app registration, including granting admin consent.
+- Ensure that you have the [contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the Azure subscription or the resource group that you will deploy Azure Key Vault in.
## Create a key vault
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the **Verifiable Credentials Service Request** service principal, and select it.
+1. Search for the **Verifiable Credentials Service Request** and **Verifiable Credentials Service** service principals, and select them.
![Screenshot that shows how to select the service principal.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png)
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Azure Advisor helps you optimize and reduce your overall Azure spend by identify
1. On the **Advisor** dashboard, select the **Cost** tab.
-## Optimize virtual machine spend by resizing or shutting down underutilized instances
+## Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances
-Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines.
+Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines or virtual machine scale sets.
-Advisor uses machine-learning algorithms to identify low utilization and to identify the ideal recommendation to ensure optimal usage of virtual machines. The recommended actions are shut down or resize, specific to the resource being evaluated.
+Advisor uses machine-learning algorithms to identify low utilization and to recommend the ideal action to ensure optimal usage of virtual machines and virtual machine scale sets. The recommended actions are shut down or resize, specific to the resource being evaluated.
### Shutdown recommendations
-Advisor identifies resources that have not been used at all over the last 7 days and makes a recommendation to shut them down.
+Advisor identifies resources that haven't been used at all over the last 7 days and makes a recommendation to shut them down.
-- Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** is not considered since weΓÇÖve found that **CPU** and **Outbound Network utilization** are sufficient.
+- Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we've found that **CPU** and **Outbound Network utilization** are sufficient.
- The last 7 days of utilization data are analyzed-- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins)
+- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances.
- A shutdown recommendation is created if: - P95th of the maximum value of CPU utilization summed across all cores is less than 3%. - P100 of average CPU in last 3 days (sum over all cores) <= 2%
Advisor identifies resources that have not been used at all over the last 7 days
### Resize SKU recommendations
-Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates).
+Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU.
- Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**. - The last 7 days of utilization data are analyzed-- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins)-- An appropriate SKU is determined based on the following criteria:
- - Performance of the workloads on the new SKU should not be impacted.
+- Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations.
+- An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria:
+ - Performance of the workloads on the new SKU shouldn't be impacted.
- Target for user-facing workloads: - P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU - P100 of Memory utilization at 60% or lower on the recommended SKU - Target for non user-facing workloads: - P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU - P100 of Memory utilization at 80% or lower on the new SKU
- - The new SKU has the same Accelerated Networking and Premium Storage capabilities
- - The new SKU is supported in the current region of the Virtual Machine with the recommendation
- - The new SKU is less expensive
+ - The new SKU, if applicable, has the same Accelerated Networking and Premium Storage capabilities
+ - The new SKU, if applicable, is supported in the current region of the Virtual Machine with the recommendation
+ - The new SKU, if applicable, is less expensive
+ - Instance count recommendations also take into account if the virtual machine scale set is being managed by Service Fabric or AKS. For service fabric managed resources, recommendations take into account reliability and durability tiers.
- Advisor determines if a workload is user-facing by analyzing its CPU utilization characteristics. The approach is based on findings by Microsoft Research. You can find more details here: [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/).-- Advisor recommends not just smaller SKUs in the same family (for example D3v2 to D2v2) but also SKUs in a newer version (for example D3v2 to D2v3) or a different family (for example D3v2 to E3v2) based on the best fit and the cheapest costs with no performance impacts.
+- Based on the best fit and the cheapest costs with no performance impacts, Advisor not only recommends smaller SKUs in the same family (for example D3v2 to D2v2), but also SKUs in a newer version (for example D3v2 to D2v3), or a different family (for example D3v2 to E3v2).
+- For virtual machine scale set resources, Advisor prioritizes instance count recommendations over SKU change recommendations because instance count changes are easily actionable, resulting in faster savings.
### Burstable recommendations We evaluate if workloads are eligible to run on specialized SKUs called **Burstable SKUs** that support variable workload performance requirements and are less expensive than general purpose SKUs. Learn more about burstable SKUs here: [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md). -- A burstable SKU recommendation is made if:
+A burstable SKU recommendation is made if:
+ - The average **CPU utilization** is less than a burstable SKUs' baseline performance - If the P95 of CPU is less than two times the burstable SKUs' baseline performance
- - If the current SKU does not have accelerated networking enabled (burstable SKUs donΓÇÖt support accelerated networking yet)
+ - If the current SKU doesn't have accelerated networking enabled, since burstable SKUs don't support accelerated networking yet
- If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days-- The result is a recommendation suggesting that the user resize their current VM to a burstable SKU (with the same number of cores) to take advantage of the low costs and the fact that the workload has low average utilization but high spikes in cases, which can be best served by the B-series SKU. +
+The resulting recommendation suggests that a user resize their current virtual machine or virtual machine scale set to a burstable SKU with the same number of cores. This suggestion lets the user take advantage of the lower cost, and suits workloads with low average utilization but occasional high spikes, which are best served by B-series SKUs.
-Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information.
-To be more selective about the actioning on underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis.
+Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU/instance count information.
+To be more selective about the actioning on underutilized virtual machines or virtual machine scale sets, you can adjust the CPU utilization rule on a per-subscription basis.
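For example, the low-CPU threshold used by this rule can be changed for the current subscription with the Azure CLI; a minimal sketch (Advisor accepts only a small set of threshold values, such as 5, 10, 15, or 20 percent):

```azurecli-interactive
# Raise the average CPU utilization threshold for the "shut down or resize" rule to 20%
az advisor configuration update --low-cpu-threshold 20
```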
-There are cases where the recommendations cannot be adopted or might not be applicable, such as some of these common scenarios (there may be other cases):
-- Virtual machine has been provisioned to accommodate upcoming traffic-- Virtual machine uses other resources not considered by the resize algo, i.e. metrics other than CPU, Memory and Network
+In some cases, recommendations can't be adopted or might not be applicable, as in these common scenarios (there may be other cases):
+- Virtual machine or virtual machine scale set has been provisioned to accommodate upcoming traffic
+- Virtual machine or virtual machine scale set uses other resources not considered by the resize algorithm, such as metrics other than CPU, Memory and Network
- Specific testing being done on the current SKU, even if not utilized efficiently-- Need to keep VM SKUs homogeneous -- VM being utilized for disaster recovery purposes
+- Need to keep virtual machine or virtual machine scale set SKUs homogeneous
+- Virtual machine or virtual machine scale set being utilized for disaster recovery purposes
-In such cases simply use the Dismiss/Postpone options associated with the recommendation.
+In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
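Dismissing or postponing can also be scripted; a hedged sketch using the Azure CLI, where the recommendation resource ID is a placeholder you'd take from the output of `az advisor recommendation list`, and the exact postpone/dismiss semantics of `--days` may differ slightly from the portal options:

```azurecli-interactive
# Postpone a specific recommendation for 30 days; omit --days to disable it until it's re-enabled
az advisor recommendation disable --ids <recommendation-resource-id> --days 30
```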
-We are constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
+We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
## Next steps
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
To learn more, visit [How to filter Advisor recommendations using tags](advisor-
## January 2022
-[**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
+[**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to increase the quality, robustness, and applicability.
Improvements include:
![vm-right-sizing-recommendation](media/advisor-overview/advisor-vm-right-sizing.png)
-Read the [How-to guide](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) to learn more.
+Read the [How-to guide](advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances) to learn more.
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable. ```azurecli-interactive
-SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)
+SP_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv)
``` Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
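For reference, that update step typically looks like the following sketch, reusing the resource group and cluster names from earlier in the article together with the `$SP_ID` and `$SP_SECRET` variables:

```azurecli-interactive
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal "$SP_ID" \
    --client-secret "$SP_SECRET"
```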
In this article, the service principal for the AKS cluster itself and the Azure
[az-ad-sp-credential-list]: /cli/azure/ad/sp/credential#az_ad_sp_credential_list [az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset [node-image-upgrade]: ./node-image-upgrade.md
-[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
+[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
This article shows you how to install the Network Policy engine and create Kuber
## Before you begin
-You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Overview of Network Policy
Azure provides two ways to implement Network Policy. You choose a Network Policy
* Azure's own implementation, called *Azure Network Policy Manager (NPM)*. * *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera].
-Azure NPM for Linux uses Linux *IPTables* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable filter rules.
+Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
## Differences between Azure NPM and Calico Network Policy and their capabilities | Capability | Azure NPM | Calico Network Policy | ||-|--|
-| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
+| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 |
| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) |
| Compliance with Kubernetes specification | All policy types supported | All policy types supported |
| Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. |
| Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. |
| Logging | Logs available with **kubectl logs -n kube-system <network-policy-pod>** command | For more information, see [Calico component logs][calico-logs] |
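As the logging row above indicates, Azure NPM logs are read straight from the pods in the kube-system namespace; a minimal sketch, assuming the NPM pods carry the `k8s-app=azure-npm` label (the label may differ between versions):

```bash
# Find the Azure NPM pods, then read logs from one of them
kubectl get pods -n kube-system -l k8s-app=azure-npm
kubectl logs -n kube-system <network-policy-pod>
```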
+## Limitations
+
+Azure Network Policy Manager (NPM) doesn't support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux.
+* In Windows, Azure NPM doesn't support the following:
+ * named ports
+ * SCTP protocol
+ * negative match label or namespace selectors (e.g. all labels except "debug=true")
+ * "except" CIDR blocks (a CIDR with exceptions)
+
+>[!NOTE]
+> * Azure NPM pod logs will record an error if an unsupported policy is created.
+ ## Create an AKS cluster and enable Network Policy To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies.
The following example script:
Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-### Create an AKS cluster with Azure NPM enabled
+### Create an AKS cluster with Azure NPM enabled - Linux only
-In this section, we will work on creating a cluster with Linux node pools and Azure NPM enabled.
+In this section, we'll work on creating a cluster with Linux node pools and Azure NPM enabled.
To begin, you should replace the values for *$RESOURCE_GROUP_NAME* and *$CLUSTER_NAME* variables.
az aks create \
--network-policy azure ```
+### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview)
+
+In this section, we'll work on creating a cluster with Windows node pools and Azure NPM enabled.
+
+Before creating the cluster, run the following commands:
+
+```azurecli
+ az extension add --name aks-preview
+ az extension update --name aks-preview
+ az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
+ az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
+ az provider register -n Microsoft.ContainerService
+```
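Feature registration can take several minutes. Before creating the cluster, you can confirm both flags show as `Registered`; a small sketch using the standard `az feature show` command:

```azurecli
az feature show --namespace Microsoft.ContainerService --name AKSWindows2022Preview --query properties.state -o tsv
az feature show --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview --query properties.state -o tsv
```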
+
+> [!NOTE]
+> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only
+>
+
+Now, you should replace the values for *$RESOURCE_GROUP_NAME*, *$CLUSTER_NAME* and *$WINDOWS_USERNAME* variables.
+
+```azurecli-interactive
+RESOURCE_GROUP_NAME=myResourceGroup-NP
+CLUSTER_NAME=myAKSCluster
+WINDOWS_USERNAME=myWindowsUserName
+LOCATION=canadaeast
+```
+
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username. Set it to `$WINDOWS_USERNAME` (remember that the commands in this article are entered into a BASH shell).
+
+```azurecli-interactive
+echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
+```
+
+Use the following command to create a cluster:
+
+```azurecli
+az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 1 \
+ --windows-admin-username $WINDOWS_USERNAME \
+ --network-plugin azure \
+ --network-policy azure
+```
+
+It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example:
+
+```azurecli
+az aks nodepool add \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --cluster-name $CLUSTER_NAME \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
+```
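Once the command completes, you can confirm that the Windows node pool was added; a minimal sketch reusing the same variables:

```azurecli
az aks nodepool list \
    --resource-group $RESOURCE_GROUP_NAME \
    --cluster-name $CLUSTER_NAME \
    --output table
```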
++ ### Create an AKS cluster for Calico network policies Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the Network Policy. Using *calico* as the Network Policy enables Calico networking on both Linux and Windows node pools.
When the cluster is ready, configure `kubectl` to connect to your Kubernetes clu
```azurecli-interactive az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME ```
-To begin verification of Network Policy, we will create a sample application and set traffic rules.
+To begin verification of Network Policy, we'll create a sample application and set traffic rules.
Firstly, let's create a namespace called *demo* to run the example pods:
Firstly, let's create a namespace called *demo* to run the example pods:
kubectl create namespace demo ```
-We will now create two pods in the cluster named *client* and *server*.
+We'll now create two pods in the cluster named *client* and *server*.
>[!NOTE] > If you want to schedule the *client* or *server* on a particular node, add the following bit before the *--command* argument in the pod creation [kubectl run][kubectl-run] command:
Now, in the client's shell, verify connectivity with the server by executing the
/agnhost connect <server-ip>:80 --timeout=3s --protocol=tcp ```
-Connectivity with traffic will be blocked since the server is labeled with app=server, but the client is not labeled. The connect command above will yield this output:
+Connectivity with traffic will be blocked since the server is labeled with app=server, but the client isn't labeled. The connect command above will yield this output:
```output TIMEOUT
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
-[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
+[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
Azure Active Directory B2C is a cloud identity management solution for consumer-
In this tutorial, you'll learn the configuration required in your API Management service to integrate with Azure Active Directory B2C. As noted later in this article, if you are using the deprecated legacy developer portal, some steps will differ.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+ > [!IMPORTANT] > * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)). > * If you previously configured an Azure AD B2C app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal).
-For information about enabling access to the developer portal by using classic Azure Active Directory, see [How to authorize developer accounts using Azure Active Directory](api-management-howto-aad.md).
- ## Prerequisites * An Azure Active Directory B2C tenant in which to create an application. For more information, see [Azure Active Directory B2C overview](../active-directory-b2c/overview.md).
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
In this article, you'll learn how to:
> * Enable access to the developer portal for users from Azure Active Directory (Azure AD). > * Manage groups of Azure AD users by adding external groups that contain the users.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+ > [!IMPORTANT] > * This article has been updated with steps to configure an Azure AD app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)). > * If you previously configured an Azure AD app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal).
+
## Prerequisites
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
API Management provides the capability to secure access to APIs (i.e., client to API Management) using client certificates. You can validate certificates presented by the connecting client and check certificate properties against desired values using policy expressions.
-For information about securing access to the back-end service of an API using client certificates (i.e., API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md)
+For information about securing access to the back-end service of an API using client certificates (i.e., API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
+
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+ > [!IMPORTANT] > To receive and verify client certificates over HTTP/2 in the Developer, Basic, Standard, or Premium tiers you must turn on the "Negotiate client certificate" setting on the "Custom domains" blade as shown below.
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
# Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory
-In this article, you'll learn high level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
+In this article, you'll learn high level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
+
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
## Prerequisites
api-management Self Hosted Gateway V0 V1 Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md
Your service is affected by this change if:
* Your service is in the Developer or Premium service tier. * You have deployed a self-hosted gateway using the version v0 or v1 of the self-hosted gateway [container image](../self-hosted-gateway-migration-guide.md#using-the-new-configuration-api).
+### Assessing impact with Azure Advisor
+
+In order to make the migration easier, we have introduced new Azure Advisor recommendations:
+
+- **Use self-hosted gateway v2** recommendation - Identifies Azure API Management instances where the usage of self-hosted gateway v0.x or v1.x was identified.
+- **Use Configuration API v2 for self-hosted gateways** recommendation - Identifies Azure API Management instances where the usage of Configuration API v1 for self-hosted gateway was identified.
+
+We highly recommend that customers use the ["All Recommendations" overview in Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/All) to determine if a migration is required. Use the filtering options to see if one of the above recommendations is present.
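If you prefer the CLI to the portal, the same Advisor recommendations can be listed with `az advisor`; a hedged sketch (these gateway recommendations are assumed to surface under the Operational Excellence category, which may vary):

```azurecli-interactive
az advisor recommendation list --category OperationalExcellence --output table
```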
+ ## What is the deadline for the change? **Support for the v1 configuration API and for the v0 and v1 container images of the self-hosted gateway will retire on 1 October 2023.**
If you have questions, get answers from community experts in [Microsoft Q&A](htt
## Next steps
-See all [upcoming breaking changes and feature retirements](overview.md).
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Developer Portal Basic Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md
In the developer portal for Azure API Management, the default authentication method for users is to provide a username and password. In this article, learn how to set up users with basic authentication credentials to the developer portal.
+For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+ ## Prerequisites
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
+For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
++ ## Aims We're going to see how API Management can be used in a simplified scenario with Azure Functions and Azure AD B2C. You'll create a JavaScript (JS) app calling an API, that signs in users with Azure AD B2C. Then you'll use API Management's validate-jwt, CORS, and Rate Limit By Key policy features to protect the Backend API.
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API10:2019 Insufficient logging and monito
## Next steps
+* [Authentication and authorization in API Management](authentication-authorization-overview.md)
* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline) * [Security controls by Azure policy](security-controls-policy.md) * [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
api-management Troubleshoot Response Timeout And Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/troubleshoot-response-timeout-and-errors.md
General strategies for mitigating SNAT port exhaustion are discussed in [Trouble
### Scale your APIM instance
-Each API Management instance is allocated a number of SNAT ports, based on APIM units. You can allocate additional SNAT ports by scaling your API Management instance with additional units. For more info, see [Scale your API Management service](upgrade-and-scale.md#scale-your-api-management-service)
+Each API Management instance is allocated a number of SNAT ports, based on APIM units. You can allocate additional SNAT ports by scaling your API Management instance with additional units. For more info, see [Scale your API Management service](upgrade-and-scale.md#scale-your-api-management-instance).
> [!NOTE] > SNAT port usage is currently not available as a metric for autoscaling API Management units.
See [API Management access restriction policies](api-management-access-restricti
## See also * [Azure Load Balancer: Troubleshooting outbound connections failures](../load-balancer/troubleshoot-outbound-connection.md)
-* [Azure App Service: Troubleshooting intermittent outbound connection errors](../app-service/troubleshoot-intermittent-outbound-connection-errors.md)
+* [Azure App Service: Troubleshooting intermittent outbound connection errors](../app-service/troubleshoot-intermittent-outbound-connection-errors.md)
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Upgrade and scale an Azure API Management instance | Microsoft Docs
-description: This topic describes how to upgrade and scale an Azure API Management instance.
-
+description: This article describes how to upgrade and scale an Azure API Management instance.
-+ -- Previously updated : 04/20/2020+ Last updated : 09/14/2022 # Upgrade and scale an Azure API Management instance
-Customers can scale an Azure API Management instance by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per month. This number does not represent a call limit, but rather a maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
+Customers can scale an Azure API Management instance in a dedicated service tier by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per second. This number doesn't represent a call limit, but rather an estimated maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
+
-Capacity and price of each unit depends on the **tier** in which the unit exists. You can choose between four tiers: **Developer**, **Basic**, **Standard**, **Premium**. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance does not allow adding more units, you need to upgrade to a higher-level tier.
+> [!NOTE]
+> API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
-The price of each unit and the available features (for example, multi-region deployment) depends on the tier that you chose for your API Management instance. The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article, explains the price per unit and features you get in each tier.
+The throughput and price of each unit depend on the [service tier](api-management-features.md) in which the unit exists. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance doesn't allow adding more units, you need to upgrade to a higher-level tier.
>[!NOTE]
->The [pricing details](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) article shows approximate numbers of unit capacity in each tier. To get more accurate numbers, you need to look at a realistic scenario for your APIs. See the [Capacity of an Azure API Management instance](api-management-capacity.md) article.
+>See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) for features, scale limits, and estimated throughput in each tier. To get more accurate throughput numbers, you need to look at a realistic scenario for your APIs. See [Capacity of an Azure API Management instance](api-management-capacity.md).
## Prerequisites To follow the steps from this article, you must:
-+ Have an active Azure subscription.
-
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-
-+ Have an API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
++ Have an API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). + Understand the concept of [Capacity of an Azure API Management instance](api-management-capacity.md). - ## Upgrade and scale
-You can choose between four tiers: **Developer**, **Basic**, **Standard**, and **Premium**. The **Developer** tier should be used to evaluate the service; it should not be used for production. The **Developer** tier does not have SLA and you cannot scale this tier (add/remove units).
+You can choose between four dedicated tiers: **Developer**, **Basic**, **Standard**, and **Premium**.
-**Basic**, **Standard**, and **Premium** are production tiers that have SLA and can be scaled. The **Basic** tier is the cheapest tier with an SLA and it can be scaled up to two units, **Standard** tier can be scaled to up to four units. You can add any number of units to the **Premium** tier.
+* The **Developer** tier should be used to evaluate the service; it shouldn't be used for production. The **Developer** tier doesn't have SLA and you can't scale this tier (add/remove units).
-The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region. The initial region is designated as the **primary** region. Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the **primary** region and five units in some other region. You can tailor the number of units to the traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
+* **Basic**, **Standard**, and **Premium** are production tiers that have SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing).
-You can upgrade and downgrade to and from any tier. Upgrading or downgrading can remove some features - for example, VNETs or multi-region deployment, when downgrading to Standard or Basic from the Premium tier.
+* The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region (the **primary** region).
-> [!NOTE]
-> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it is done.
+ Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the primary region and five units in some other region. You can tailor the number of units to the traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
+
+* You can upgrade and downgrade to and from any dedicated service tier. Downgrading can remove some features. For example, downgrading to Standard or Basic from the Premium tier can remove virtual networks or multi-region deployment.
> [!NOTE]
-> API Management service in the **Consumption** tier scales automatically based on the traffic.
+> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it is done.
-## Scale your API Management service
+## Scale your API Management instance
![Scale API Management service in Azure portal](./media/upgrade-and-scale/portal-scale.png)
-1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-2. Select **Locations** from the menu.
-3. Click on the row with the location you want to scale.
-4. Specify the new number of **units** - either use the slider or type the number.
-5. Click **Apply**.
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Locations** from the menu.
+1. Select the row with the location you want to scale.
+1. Specify the new number of **Units** - use the slider if available, or type the number.
+1. Select **Apply**.
+
+> [!NOTE]
+> In the Premium service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md#-deploy-api-management-service-to-an-additional-location).
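Scaling can also be scripted; a hedged sketch that assumes the generic `--set` argument is available on `az apim update` in your CLI version (the instance name, resource group, and unit count are placeholders, and `sku.name` can be set the same way when changing tiers as described in the next section):

```azurecli-interactive
# Scale an API Management instance in its primary location to 2 units (illustrative values)
az apim update --name contoso-apim --resource-group myResourceGroup --set sku.capacity=2
```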
## Change your API Management service tier
-1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-2. Click on the **Pricing tier** in the menu.
-3. Select the desired service tier from the dropdown. Use the slider to specify the scale of your API Management service after the change.
-4. Click **Save**.
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Pricing tier** in the menu.
+1. Select the desired service tier from the dropdown. Use the slider to specify the number of units for your API Management service after the change.
+1. Select **Save**.
## Downtime during scaling up and down
-If you are scaling from or to the Developer tier, there will be downtime. Otherwise, there is no downtime.
+If you're scaling from or to the Developer tier, there will be downtime. Otherwise, there is no downtime.
## Compute isolation
-If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, please [create a support ticket](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
+If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Next steps
app-service Deploy Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md
The code examples in this section assume you are deploying an ASP.NET web app. Y
Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/ecosystems/ecosystems).
-# [Classic](#tab/yaml/)
+# [YAML](#tab/yaml/)
1. Sign in to your Azure DevOps organization and navigate to your project.
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
-description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature
+ Title: Use the migration feature to migrate your App Service Environment to App Service Environment v3
+description: Learn how to migrate your App Service Environment to App Service Environment v3 using the migration feature
Previously updated : 4/27/2022 Last updated : 9/15/2022 zone_pivot_groups: app-service-cli-portal
-# Use the migration feature to migrate App Service Environment v2 to App Service Environment v3
+# Use the migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3
-An App Service Environment v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+App Service Environment v1 and v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the
## 1. Get your App Service Environment ID
-Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource group with your values for the App Service Environment you want to migrate.
+Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource groups with your values for the App Service Environment you want to migrate. "ASE_RG" and "VNET_RG" will be the same if your virtual network and App Service Environment are in the same resource group.
```azurecli ASE_NAME=<Your-App-Service-Environment-name>
-ASE_RG=<Your-Resource-Group>
+ASE_RG=<Your-ASE-Resource-Group>
+VNET_RG=<Your-VNet-Resource-Group>
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv) ```
The following command will check whether your App Service Environment is support
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation" ```
-If there are no errors, your migration is supported and you can continue to the next step.
+If there are no errors, your migration is supported, and you can continue to the next step.
## 3. Generate IP addresses for your new App Service Environment v3
Run the following command to check the status of this step.
az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status ```
-If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to get your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to view your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
```azurecli az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01"
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021
## 4. Update dependent resources with new IPs
-Don't move on to migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
+Using the new IPs, update any of your resources or networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't migrate until you've completed this step.
## 5. Delegate your App Service Environment subnet App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com). ```azurecli
-az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
```
-## 6. Migrate to App Service Environment v3
+## 6. Prepare your configurations
+
+You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). This can be done by setting the `zoneRedundant` property to "true". Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). If you don't want to configure zone redundancy, don't include the `zoneRedundant` parameter.
+
+If your existing App Service Environment uses a custom domain suffix, you'll need to [configure one for your new App Service Environment v3 during the migration process](./migrate.md#choose-your-app-service-environment-v3-configurations). Migration will fail if you don't configure a custom domain suffix and are using one currently. Migration will also fail if you attempt to add a custom domain suffix during migration to an environment that doesn't have one configured currently. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+
+If your migration doesn't include a custom domain suffix and you aren't enabling zone redundancy, you can move on to migration.
+
+In order to set these configurations, create a file called "parameters.json" with the following details based on your scenario. Don't include the custom domain suffix properties if this feature doesn't apply to your migration. Be sure to pay attention to the value of the `zoneRedundant` property as this configuration is irreversible after migration. Ensure the value of the `kind` property is set based on your existing App Service Environment version. Accepted values for the `kind` property are "ASEV1" and "ASEV2".
+
+If you're migrating without a custom domain suffix and are enabling zone redundancy:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "zoneRedundant": true
+ }
+}
+```
+
+If you're using a user assigned managed identity for your custom domain suffix configuration and **are enabling zone redundancy**:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "zoneRedundant": true,
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity"
+ }
+ }
+}
+```
+
+If you're using a system assigned managed identity for your custom domain suffix configuration and **aren't enabling zone redundancy**:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "sample-ase-migration",
+ "kind": "ASEV2",
+ "location": "westcentralus",
+ "properties": {
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "SystemAssigned"
+ }
+ }
+}
+```
+
+## 7. Migrate to App Service Environment v3
+
+Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. During that time, there will be about one hour of application downtime. Scaling, deployments, and modifications to your existing App Service Environment will be blocked during this step.
-Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+Only include the "body" parameter in the command if you're enabling zone redundancy and/or are configuring a custom domain suffix. If neither of those configurations apply to your migration, you can remove the parameter from the command.
```azurecli
-az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration"
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --body @parameters.json
``` Run the following command to check the status of your migration. The status will show as "Migrating" while in progress.
Run the following command to check the status of your migration. The status will
az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status ```
-Once you get a status of "Ready", migration is done and you have an App Service Environment v3. Your apps will now be running in your new environment.
+Once you get a status of "Ready", migration is done, and you have an App Service Environment v3. Your apps will now be running in your new environment.
Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
From the [Azure portal](https://portal.azure.com), navigate to the **Migration** page for the App Service Environment you'll be migrating. You can do this by clicking on the banner at the top of the **Overview** page for your App Service Environment or by clicking the **Migration** item on the left-hand side.
-![migration access points](./media/migration/portal-overview.png)
:::image type="content" source="./media/migration/portal-overview.png" alt-text="Migration access points."::: On the migration page, the platform will validate if migration is supported for your App Service Environment. If your environment isn't supported for migration, a banner will appear at the top of the page and include an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you may see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you won't be able to use the migration feature. If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
If migration is supported for your App Service Environment, you'll be able to pr
## 2. Generate IP addresses for your new App Service Environment v3
-Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time. If after 15 minutes you don't see your new IP addresses, select refresh as shown in the sample to allow your new IP addresses to appear.
-
+Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time.
## 3. Update dependent resources with new IPs
App Service Environment v3 requires the subnet it's in to have a single delegati
:::image type="content" source="./media/migration/subnet-delegation-ux.png" alt-text="Subnet delegation using the portal.":::
-## 5. Migrate to App Service Environment v3
+## 5. Choose your configurations
+
+You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). Select **Enabled** if you'd like to configure zone redundancy.
++
+If your environment is in a region that doesn't support zone redundancy, the checkbox will be disabled. If you need a zone redundant App Service Environment v3, use one of the manual migration options and create your new App Service Environment v3 in one of the regions that supports zone redundancy.
+
+If your existing App Service Environment uses a [custom domain suffix](./migrate.md#choose-your-app-service-environment-v3-configurations), you'll be required to configure one for your new App Service Environment v3. You'll be shown the custom domain suffix configuration options if this situation applies to you. You won't be able to migrate until you provide the required information. If you'd like to use a custom domain suffix but don't currently have one configured, you can configure one once migration is complete. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
++
+After you add your custom domain suffix details, the "Migrate" button will be enabled.
-Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours and during that time there will be about one hour of application downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
-When migration is complete, you'll have an App Service Environment v3 and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
+## 6. Migrate to App Service Environment v3
+
+Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. This step takes up to three hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+
+When migration is complete, you'll have an App Service Environment v3, and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
+
+If your migration included a custom domain suffix, for App Service Environment v3, the custom domain will no longer be shown in the **Essentials** section of the **Overview** page of the portal as it is for App Service Environment v1/v2. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
+ ::: zone-end
When migration is complete, you'll have an App Service Environment v3 and all of
> [!div class="nextstepaction"] > [App Service Environment v3 Networking](networking.md)+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 7/29/2022 Last updated : 9/15/2022 # Migration to App Service Environment v3 using the migration feature
-App Service can now automate migration of your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [manual migration options documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
App Service can now automate migration of your App Service Environment v2 to an
## Supported scenarios
-At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+At this time, App Service Environment migrations to v3 using the migration feature are supported in the following regions:
- Australia East - Australia Central
At this time, App Service Environment migrations to v3 using the migration featu
- West US - West US 3
+The following App Service Environment configurations can be migrated using the migration feature. The table gives the App Service Environment v3 configuration you'll end up with when using the migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process.
+
+|Configuration |App Service Environment v3 Configuration |
+||--|
+|[Internal Load Balancer (ILB)](create-ilb-ase.md) App Service Environment v2 |ILB App Service Environment v3 |
+|[External (ELB/internet facing with public IP)](create-external-ase.md) App Service Environment v2 |ELB App Service Environment v3 |
+|ILB App Service Environment v2 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+|ILB App Service Environment v1 |ILB App Service Environment v3 |
+|ELB App Service Environment v1 |ELB App Service Environment v3 |
+|ILB App Service Environment v1 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+
+If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, custom domain suffix can be configured at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+ You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment. ## Migration feature limitations
-With the current version of the migration feature, your new App Service Environment will be placed in the existing subnet that was used for your old environment. Internet facing App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+The following are limitations when using the migration feature:
+
+- Your new App Service Environment v3 will be placed in the existing subnet that was used for your old environment.
+- You can't change the region your App Service Environment is located in.
+- ELB App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+- If your existing App Service Environment uses a custom domain suffix, you'll have to configure custom domain suffix for your App Service Environment v3 during the migration process.
+ - If you no longer want to use a custom domain suffix, you can remove it once the migration is complete.
-Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
+App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
-- Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
- Monitoring your traffic with Network Watcher or NSG Flow.
- Configuring an IP-based TLS/SSL binding with your apps.
-The following scenarios aren't supported in this version of the feature:
+The following scenarios aren't supported by the migration feature. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
-- App Service Environment v2 -> Zone Redundant App Service Environment v3
-- App Service Environment v1
-- App Service Environment v1 -> Zone Redundant App Service Environment v3
-- ILB App Service Environment v2 with a custom domain suffix
-- ILB App Service Environment v1 with a custom domain suffix
-- Internet facing App Service Environment v2 with IP SSL addresses
-- Internet facing App Service Environment v1 with IP SSL addresses
+- App Service Environment v1 in a [Classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic)
+- ELB App Service Environment v2 with IP SSL addresses
+- ELB App Service Environment v1 with IP SSL addresses
- [Zone pinned](zone-redundancy.md) App Service Environment v2
- App Service Environment in a region not listed in the supported regions
-The migration feature doesn't plan on supporting App Service Environment v1 within a Classic VNet. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into this category.
- The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you won't be able to migrate until you make the needed updates.

### Troubleshooting
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. |
|Migration cannot be called on this ASE, please contact support for help migrating. |Support will need to be engaged for migrating this App Service Environment. This is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
-|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2s that are zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate cannot be called if IP SSL is enabled on any of the sites|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate is not available for this kind|App Service Environment v1 can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Full migration cannot be called before IP addresses are generated|You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
-|Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2 that is zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Full migration cannot be called before IP addresses are generated. |You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
+|Migration to ASEv3 is not allowed for this ASE. |You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. |
-|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
+|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You'll be able to migrate once these operations are complete. |
## Overview of the migration process using the migration feature
Migration consists of a series of steps that must be followed in order. Key poin
### Generate IP addresses for your new App Service Environment v3
-The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses) addresses. While these IPs are getting created, activity with your existing App Service Environment won't be interrupted, however, you won't be able to scale or make changes to your existing environment. This process will take about 15 minutes to complete.
+The platform will create the [new inbound IP (if you're migrating an ELB App Service Environment) and the new outbound IP](networking.md#addresses) addresses. While these IPs are getting created, activity with your existing App Service Environment won't be interrupted, however, you won't be able to scale or make changes to your existing environment. This process will take about 15 minutes to complete.
When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the migration step.

### Update dependent resources with new IPs
-Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and so on, in preparation for the migration. For public internet facing App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs, in preparation for the migration. For ELB App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
### Delegate your App Service Environment subnet

App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration won't succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
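
If you manage the network with the Azure CLI, the delegation can be inspected and set along these lines; this is only a sketch, and the resource group, virtual network, and subnet names are placeholders:

```console
# Inspect the current delegations on the App Service Environment subnet
az network vnet subnet show \
  --resource-group <resource-group-name> \
  --vnet-name <vnet-name> \
  --name <ase-subnet-name> \
  --query delegations

# Delegate the subnet to Microsoft.Web/hostingEnvironments
az network vnet subnet update \
  --resource-group <resource-group-name> \
  --vnet-name <vnet-name> \
  --name <ase-subnet-name> \
  --delegations Microsoft.Web/hostingEnvironments
```
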
+### Choose your App Service Environment v3 configurations
+
+Your App Service Environment v3 can be deployed across availability zones in the regions that support it. This architecture is known as [zone redundancy](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can only be configured during App Service Environment creation. If you want your new App Service Environment v3 to be zone redundant, enable the configuration during the migration process. Any App Service Environment that is using the migration feature to migrate can be configured as zone redundant as long as you're using a [region that supports zone redundancy for App Service Environment v3](./overview.md#regions). If your existing environment is in a region that doesn't support zone redundancy, the configuration option will be disabled and you won't be able to configure it. The migration feature doesn't support changing regions. If you'd like to use a different region, use one of the [manual migration options](migration-alternatives.md).
+
+> [!NOTE]
+> Enabling zone redundancy can lead to additional charges. Review the [zone redundancy pricing model](../../availability-zones/migrate-app-service-environment.md#pricing) for more information.
+>
+
+If your existing App Service Environment uses a custom domain suffix, you'll be prompted to configure a custom domain suffix for your new App Service Environment v3. You'll need to provide the custom domain name, managed identity, and certificate. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). You must configure a custom domain suffix for your new environment even if you no longer want to use it. Once migration is complete, you can remove the custom domain suffix configuration if needed.
+
+If your migration includes a custom domain suffix, the custom domain won't be displayed in the **Essentials** section of the **Overview** page of the portal for App Service Environment v3 as it is for App Service Environment v1/v2. Instead, go to the **Custom domain suffix** page, where you can confirm that your custom domain suffix is configured correctly.
+ ### Migrate to App Service Environment v3
-After updating all dependent resources with your new IPs and properly delegating your subnet, you should continue with migration as soon as possible.
+After completing the previous steps, you should continue with migration as soon as possible.
-During migration, which requires up to a three hour service window, the following events will occur:
+During migration, scaling and environment configurations are blocked. The migration requires a service window of up to three hours for App Service Environment v2 to v3 migrations, or up to six hours for v1 to v3 migrations depending on environment size. The following events will occur during the service window:
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3.
-- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2.
-- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime during this period.
+- All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 SKU.
+- All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration).
-- The public addresses that are used by the App Service Environment will change to the IPs identified during the previous step.
+- The public addresses that are used by the App Service Environment will change to the IPs generated during the IP generation step.
-As in the IP generation step, you won't be able to scale or modify your App Service Environment or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
+As in the IP generation step, you won't be able to scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
> [!NOTE] > Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
There's no cost to migrate your App Service Environment. You'll stop being charg
- **What if migrating my App Service Environment is not currently supported?**
  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md). This doc will be updated as additional regions and supported scenarios become available.
- **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the three hour service window during the migration step so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime during the three to six hour service window during the migration step, so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?**
  No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed.
- **What if my App Service Environment has a custom domain suffix?**
- You won't be able migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+ The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
- **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+ Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md).
- **What properties of my App Service Environment will change?**
- You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+ You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **What happens if migration fails or there is an unexpected issue during the migration?**
  If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments.
- **What happens to my old App Service Environment?**
- If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. A rollback to the old environment will not be possible.
+ If you decide to migrate an App Service Environment using the migration feature, the old environment gets shut down, deleted, and all of your apps are migrated to a new environment. Your old environment will no longer be accessible. A rollback to the old environment won't be possible.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
  After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.

## Next steps

> [!div class="nextstepaction"]
-> [Migrate App Service Environment v2 to App Service Environment v3](how-to-migrate.md)
+> [Migrate your App Service Environment to App Service Environment v3](how-to-migrate.md)
> [!div class="nextstepaction"] > [Manually migrate to App Service Environment v3](migration-alternatives.md)
There's no cost to migrate your App Service Environment. You'll stop being charg
> [!div class="nextstepaction"] > [Using an App Service Environment v3](using.md)+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 5/4/2022 Last updated : 9/15/2022 # Migrate to App Service Environment v3
App Service Environment v3 uses Isolated v2 App Service plans that are priced an
The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
-The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given in the following screenshot.
+The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a backup and use that to restore the app to an App Service in your App Service Environment v3.
-![back up and restore sample](./media/migration/back-up-restore-sample.png)
|Benefits |Limitations |
|--|--|
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing).
-![clone sample](./media/migration/portal-clone-sample.png)
|Benefits |Limitations |
|--|--|
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
## Manually create your apps on an App Service Environment v3
-If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
+If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
-You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, navigate to your App Service and go to **Export template** under **Automation**.
-![export from toc](./media/migration/export-toc.png)
You can also export templates for multiple resources directly from your resource group by going to your resource group, selecting the resources you want a template for, and then selecting **Export template**.
-![export template sample](./media/migration/export-template-sample.png)
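
As a rough CLI equivalent of the portal export, something like the following could be used; the resource group and app names are placeholders, and the output file names are arbitrary:

```console
# Export a template for every resource in the resource group
az group export --name <resource-group-name> > ase-resources-template.json

# Export a template for a single app by passing its resource ID
appId=$(az webapp show --name <app-name> --resource-group <resource-group-name> --query id --output tsv)
az group export --name <resource-group-name> --resource-ids $appId > app-template.json
```
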
The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3:
-- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below
+- Update SKU parameters for App Service plan to an Isolated v2 plan:
```json
"type": "Microsoft.Web/serverfarms",
```
Once your migration and any testing with your new environment is complete, delet
- **Do I need to change anything about my apps to get them to run on App Service Environment v3?**
  No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3.
- **What if my App Service Environment has a custom domain suffix?**
- The migration feature doesn't support migration of App Service Environments with custom domain suffixes at this time. You won't be able to migrate until it's supported.
+ The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after that.
- **What if my App Service Environment is zone pinned?**
- Zone pinning isn't a supported feature on App Service Environment v3. Use [zone redundancy](overview-zone-redundancy.md) instead.
+ Zone pinning isn't a supported feature on App Service Environment v3.
- **What properties of my App Service Environment will change?**
  You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
Once your migration and any testing with your new environment is complete, delet
> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md) > [!div class="nextstepaction"]
-> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
+> [Migrate to App Service Environment v3 using the migration feature](migrate.md)
+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
In [Azure App Service](overview.md), you can easily restore app backups. You can also make on-demand custom backups or configure scheduled custom backups. You can restore a backup by overwriting an existing app by restoring to a new app or slot. This article shows you how to restore a backup and make custom backups.
-Back up and restore **Standard**, **Premium**, **Isolated**. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
+Backup and restore are supported in **Basic**, **Standard**, **Premium**, and **Isolated** tiers. For **Basic** tier, only the production slot can be backed up and restored. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
> [!NOTE]
-> Support for custom and automatic backups in **Basic** tier (production slot only) and in App Service environments (ASE) V2 and V3 is in preview. For App Service environments:
+> Support in App Service environments (ASE) V2 and V3 is in preview. For App Service environments:
>
> - Backups can be restored to a target app within the ASE itself, not in another ASE.
> - Backups can be restored to a target app in another App Service plan in the ASE.
There are two types of backups in App Service. Automatic backups made for your a
| Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). |
| [Storage account](../storage/index.yml) required | No. | Yes. |
| Backup frequency | Hourly, not configurable. | Configurable. |
-| Retention | 30 days, not configurable. | 0-30 days or indefinite. |
+| Retention | 30 days, not configurable. <br>- Days 1-3: hourly backups retained.<br>- Days 4-14: every third hourly backup retained.<br>- Days 15-30: every sixth hourly backup retained. | 0-30 days or indefinite. |
| Downloadable | No. | Yes, as Azure Storage blobs. |
| Partial backups | Not supported. | Supported. |
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
To test the custom domain, navigate to it in the browser.
## Renew the domain
-The App Service domain you bought is valid for one year from the time of purchase. By default, the domain is configured to renew automatically by charging your payment method for the next year. You can manually renew your domain name.
+The App Service domain you bought is valid for one year from the time of purchase. You can configure your domain to renew automatically, which charges your payment method when the domain renews the following year. You can also manually renew your domain name.
-If you want to turn off automatic renewal, or if you want to manually renew your domain, follow the steps here.
+If you want to configure automatic renewal, or if you want to manually renew your domain, follow the steps here.
1. In the search bar, search for and select **App Service Domains**.
If you want to turn off automatic renewal, or if you want to manually renew your
1. In the **App Service Domains** section, select the domain you want to configure.
-1. From the left navigation of the domain, select **Domain renewal**. To stop renewing your domain automatically, select **Off**. The setting takes effect immediately.
+1. From the left navigation of the domain, select **Domain renewal**. To start renewing your domain automatically, select **On**; otherwise, select **Off**. The setting takes effect immediately. If automatic renewal is enabled, on the day after your domain expiration date, Azure attempts to bill you for the domain name renewal.
![Screenshot that shows the option to automatically renew your domain.](./media/custom-dns-web-site-buydomains-web-app/dncmntask-cname-buydomains-autorenew.png)
If you want to turn off automatic renewal, or if you want to manually renew your
> When navigating away from the page, disregard the "Your unsaved edits will be discarded" error by clicking **OK**. >
-To manually renew your domain, select **Renew domain**. However, this button is not active until [90 days before the domain's expiration](#when-domain-expires).
+To manually renew your domain, select **Renew domain**. However, this button is not active until 90 days before the domain's expiration date.
-If your domain renewal is successful, you receive an email notification within 24 hours.
-
-## When domain expires
-
-Azure deals with expiring or expired App Service domains as follows:
-
-* If automatic renewal is disabled: 90 days before domain expiration, a renewal notification email is sent to you and the **Renew domain** button is activated in the portal.
-* If automatic renewal is enabled: On the day after your domain expiration date, Azure attempts to bill you for the domain name renewal.
-* If an error occurs during automatic renewal (for example, your card on file is expired), or if automatic renewal is disabled and you allow the domain to expire, Azure notifies you of the domain expiration and parks your domain name. You can [manually renew](#renew-the-domain) your domain.
-* On the 4th and 12th days day after expiration, Azure sends you additional notification emails. You can [manually renew](#renew-the-domain) your domain. On the 5th day after expiration, DNS resolution stops for the expired domain.
-* On the 19th day after expiration, your domain remains on hold but becomes subject to a redemption fee. You can call customer support to renew your domain name, subject to any applicable renewal and redemption fees.
-* On the 25th day after expiration, Azure puts your domain up for auction with a domain name industry auction service. You can call customer support to renew your domain name, subject to any applicable renewal and redemption fees.
-* On the 30th day after expiration, you're no longer able to redeem your domain.
+If your domain renewal is successful, you receive an email notification within 24 hours.
<a name="custom"></a>
availability-zones Migrate Workload Aks Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-workload-aks-mysql.md
+
+ Title: Migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support
+description: Learn how to migrate Azure Kubernetes Service and MySQL Flexible Server workloads to availability zone support.
+++ Last updated : 08/29/2022++++
+
+# Migrate Azure Kubernetes Service (AKS) and MySQL Flexible Server workloads to availability zone support
+
+This guide describes how to migrate an Azure Kubernetes Service and MySQL Flexible Server workload to complete availability zone support across all dependent services. For a complete list of all workload dependencies, see [Workload service dependencies](#workload-service-dependencies).
+
+Availability zone support for this workload must be enabled during the creation of your AKS cluster or MySQL Flexible Server. If you want availability zone support for an existing AKS cluster and MySQL Flexible Server, you'll need to redeploy those resources.
+
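As a sketch of what creation-time zone support looks like with the Azure CLI, assuming placeholder names and zone choices of your own:

```console
# AKS cluster whose default system node pool spans three availability zones
az aks create \
  --resource-group <resource-group-name> \
  --name <cluster-name> \
  --node-count 3 \
  --zones 1 2 3

# MySQL Flexible Server with zone-redundant high availability
az mysql flexible-server create \
  --resource-group <resource-group-name> \
  --name <server-name> \
  --high-availability ZoneRedundant \
  --zone 1 \
  --standby-zone 2
```
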
+This migration guidance focuses mainly on the infrastructure and availability considerations of running the following architecture on Azure:
++++
+## Workload service dependencies
+
+To provide full workload support for availability zones, each service dependency in the workload must support availability zones.
+
+There are two types of availability zone support: *zonal* and *zone-redundant*.
+
+The AKS and MySQL workload architecture consists of the following component dependencies:
+
+### Azure Kubernetes Service (AKS)
+
+- *Zonal*: The system node pool and user node pools are zonal when you pre-select the zones in which the node pools are deployed at creation time. We recommend that you pre-select all three zones for better resiliency. More user node pools that support availability zones can be added to an existing AKS cluster by supplying a value for the `zones` parameter (see the CLI sketch at the end of this section).
+
+- *Zone-redundant*: Kubernetes control plane components such as *etcd*, *API server*, *Scheduler*, and *Controller Manager* are automatically replicated or distributed across zones.
+
+ >[!NOTE]
+ >To enable zone-redundancy of the AKS cluster control plane components, you must define your default system node pool with zones when you create an AKS cluster. Adding more zonal node pools to an existing non-zonal AKS cluster won't make the AKS cluster zone-redundant, because that action doesn't distribute the control plane components across zones after-the-fact.
+
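A sketch of adding a zonal user node pool to an existing cluster; the pool and cluster names are placeholders, and the `--zones` values are the zones you pre-select:

```console
az aks nodepool add \
  --resource-group <resource-group-name> \
  --cluster-name <cluster-name> \
  --name userpool1 \
  --node-count 3 \
  --zones 1 2 3
```
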
+### Azure Database for MySQL Flexible Server
+
+- *Zonal*: The zonal availability mode means that a standby server is always available within the same zone as the primary server. While this option reduces failover time and network latency, it's less resilient because a single zone outage impacts both the primary and standby servers.
+
+- *Zone-redundant*: The zone-redundant availability mode means that a standby server is always available within another zone in the same region as the primary server. Two zones will be enabled for zone redundancy for the primary and standby servers. We recommend this configuration for better resiliency.
++
+### Azure Standard Load Balancer or Azure Application Gateway
+
+#### Standard Load Balancer
+To understand considerations related to Standard Load Balancer resources, see [Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md).
+
+- *Zone-redundant*: Choosing zone-redundancy is the recommended way to configure your Frontend IP with your existing Load Balancer. The zone-redundant front-end corresponds with the AKS cluster back-end pool, which is distributed across multiple zones (a CLI sketch follows this list).
+
+- *Zonal*: If you're pinning your node pools to specific zones such as zone 1 and 2, you can pre-select zone 1 and 2 for your Frontend IP in the existing Load Balancer. The reason why you may want to pin your node pools to specific zones could be due to the availability of specialized VM SKU series such as M-series.
+
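A sketch of creating the Standard SKU public IPs that back a zone-redundant or zonal frontend; the names are placeholders:

```console
# Zone-redundant frontend IP (recommended)
az network public-ip create \
  --resource-group <resource-group-name> \
  --name <frontend-ip-name> \
  --sku Standard \
  --zone 1 2 3

# Zonal frontend IP pinned to a single zone, for example zone 1
az network public-ip create \
  --resource-group <resource-group-name> \
  --name <frontend-ip-zone1-name> \
  --sku Standard \
  --zone 1
```
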
+#### Azure Application Gateway
+
+Using the Application Gateway Ingress Controller add-on with your AKS cluster is supported only on Application Gateway v2 SKUs (Standard and WAF). To understand further considerations related to Azure Application Gateway, see [Scaling Application Gateway v2 and WAF v2](../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Application Gateway resource be created in multiple zones, such as zone 1, 2, and 3. Select all three zones for best intra-region resiliency strategy. However, to correspond to your backend node pools, you may pin your node pools to specific zones by pre-selecting zone 1 and 2 during the creation of your App Gateway resource. The reason why you may want to pin your node pools to specific zones could be due to the availability of specialized VM SKU series such as `M-series`.
+
+#### Zone Redundant Storage (ZRS)
+
+- We recommend that your AKS cluster is configured with managed ZRS disks because they're zone-redundant resources. Volumes can be scheduled on all zones.
+
+- Kubernetes has been aware of Azure availability zones since version 1.12. You can deploy a `PersistentVolumeClaim` object referencing an Azure Managed Disk in a multi-zone AKS cluster. Kubernetes will take care of scheduling any pod that claims this PVC in the correct availability zone (a sketch follows this list).
+
+- For Azure Database for SQL, we recommend that the data and log files are hosted in zone-redundant storage (ZRS). These files are replicated to the standby server via the storage-level replication available with ZRS.
+
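A minimal sketch of a ZRS-backed volume claim, assuming the Azure Disk CSI driver and a custom storage class (the class and claim names are placeholders, and ZRS disks must be available in your region):

```console
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-zrs
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_ZRS   # zone-redundant managed disk SKU
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zrs-disk-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium-zrs
  resources:
    requests:
      storage: 16Gi
EOF
```
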
+#### Azure Firewall
+
+*Zonal*: To use the benefits of availability zones, we recommend that the Azure Firewall resource be created in multiple zones, such as zone 1, 2, and 3. We recommend that you select all three zones for the best intra-region resiliency strategy.
+
+#### Azure Bastion
+
+*Regional*: Azure Bastion is deployed within VNets or peered VNets and is associated with an Azure region. For more information, see [Bastion FAQ](../bastion/bastion-faq.md#dr).
+
+#### Azure Container Registry (ACR)
+
+*Zone-redundant*: We recommend that you create a zone-redundant registry in the Premium service tier. You can also create a zone-redundant registry replica by setting the `zoneRedundancy` property for the replica. To learn how to enable zone redundancy for your ACR, see [Enable zone redundancy in Azure Container Registry for resiliency and high availability](../container-registry/zone-redundancy.md).
+
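A sketch of creating a zone-redundant registry and replica, assuming an Azure CLI version that exposes the `--zone-redundancy` flag; the registry name and replica region are placeholders:

```console
# Zone-redundant Premium registry in the primary region
az acr create \
  --resource-group <resource-group-name> \
  --name <registry-name> \
  --sku Premium \
  --zone-redundancy Enabled

# Zone-redundant replica in another region
az acr replication create \
  --resource-group <resource-group-name> \
  --registry <registry-name> \
  --location <replica-region> \
  --zone-redundancy Enabled
```
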
+#### Azure Cache for Redis
+
+*Zone-redundant*: Azure Cache for Redis supports zone-redundant configurations in the Premium and Enterprise tiers. A zone-redundant cache places its nodes across different availability zones in the same region.
+
+#### Azure Active Directory (AD)
+
+*Global*: Azure AD is a global service with multiple levels of internal redundancy and automatic recoverability. Azure AD is deployed in over 30 datacenters around the world that provide availability zones where present. This number is growing rapidly as more regions are deployed.
+
+#### Azure Key Vault
+
+*Regional*: Azure Key Vault is deployed in a region. To maintain high durability of your keys and secrets, the contents of your key vault are replicated within the region and to a secondary region within the same geography.
+
+*Zone-redundant*: For Azure regions with availability zones and no region pair, Key Vault uses zone-redundant storage (ZRS) to replicate the contents of your key vault three times within the single location/region.
+
+## Workload considerations
+
+### Azure Kubernetes Service (AKS)
+
+- Pods can communicate with other pods regardless of the node or availability zone on which the pod lands. Your application may experience higher response time if the pods are located in different availability zones. While the extra round-trip latencies between pods are expected to fall within an acceptable range for most applications, there are application scenarios that require low latency, especially for a chatty communication pattern between pods.
+
+- We recommend that you test your application to ensure it performs well across availability zones.
+
+- For performance reasons such as low latency, pods can be co-located in the same data center within the same availability zone. To co-locate pods in the same data center within the same availability zone, you can create user node pools with a unique zone and proximity placement group. You can add a proximity placement group (PPG) to an existing AKS cluster by creating a new agent node pool and specifying the PPG. Use Pod Topology Spread Constraints to control how pods are spread in your AKS cluster across availability zones, nodes, and regions.
+
+- After pods that require low latency communication are co-located in the same availability zone, communications between the pods aren't direct. Instead, pod communications are channeled through a service that defines a logical set of pods in your AKS cluster. Pods can be configured to talk to the service, and the communication to the service will be automatically load-balanced to all the pods that are members of the service.
+
+- To take advantage of availability zones, node pools contain underlying VMs that are zonal resources. To support applications that have different compute or storage demands, you can create user node pools with specific VM sizes when you create the user node pool.
+
+ For example, you may decide to use the `Standard_M32ms` under the `M-series` for your user nodes because the microservices in your application require high throughput, low latency, and memory optimized VM sizes that provide high vCPU counts and large amounts of memory. Depending on the deployment region, when you select the VM size in the Azure portal, you may see that this VM size is supported only in zone 1 and 2. You can accept this resiliency configuration as a trade-off for high performance.
+
+- You can't change the VM size of a node pool after you create it. For more information on node pool limitations, see [Limitations](../aks/use-multiple-node-pools.md#limitations).
+
+### Azure Database for MySQL Flexible Server
+
+The implication of deploying your node pools in specific zones, such as zone 1 and 2, is that all service dependencies of your AKS cluster must also support zone 1 and 2. In this workload architecture, your AKS cluster has a service dependency on Azure Database for MySQL Flexible Servers with zone resiliency. You would select zone 1 for your primary server and zone 2 for your standby server to be co-located with your AKS user node pools.
+++
+### Azure Cache for Redis
+
+- Azure Cache for Redis distributes nodes in a zone-redundant cache in a round-robin manner over the availability zones that you've selected.
+
+- You can't update an existing Premium cache to use zone redundancy. To use zone redundancy, you must recreate the Azure Cache for Redis.
+
+- To achieve optimal resiliency, we recommend that you create your Azure Cache for Redis with three or more replicas so that you can distribute the replicas across three availability zones (see the sketch below).
+++
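A sketch of creating such a cache with the Azure CLI, assuming a CLI version that supports the `--zones` and `--replicas-per-master` flags; names and zone choices are placeholders:

```console
# Premium cache spread over three zones with two replicas (three nodes total)
az redis create \
  --resource-group <resource-group-name> \
  --name <cache-name> \
  --location <region> \
  --sku Premium \
  --vm-size P1 \
  --zones 1 2 3 \
  --replicas-per-master 2
```
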
+## Disaster recovery considerations
+
+*Availability zones* are used for better resiliency to achieve high availability of your workload within the primary region of your deployment.
+
+*Disaster Recovery* consists of recovery operations and practices defined in your business continuity plan. Your business continuity plan addresses both how your workload recovers during a disruptive event and how it fully recovers after the event. Consider extending your deployment to an alternative region.
+++
+For your application tier, please review the business continuity and disaster recovery considerations for AKS in this article.
+
+- Consider running multiple AKS clusters in alternative regions. The alternative region can use a secondary paired region. Or, where there's no region pairing for your primary region, you can select an alternative region based on your consideration for available services, capacity, geographical proximity, and data sovereignty. Please review the [Azure regions decision guide](/azure/cloud-adoption-framework/migrate/azure-best-practices/multiple-regions). Also review the [deployment stamp pattern](/azure/architecture/patterns/deployment-stamp).
+
+- You have the option of configuring your AKS clusters as active-active, active-standby, or active-passive.
+
+- For your database tier, disaster recovery features include geo-redundant backups with the ability to initiate geo-restore and deploying read replicas in a different region.
+
+- During an outage, you'll need to decide whether to initiate a recovery. You'll need to initiate recovery operations only when the outage is likely to last longer than your workload's recovery time objective (RTO). Otherwise, you'll wait for service recovery by checking the service status on the Azure Service Health Dashboard. On the Service Health blade of the Azure portal, you can view any notifications associated with your subscription.
+
+- When you do initiate recovery with the geo-restore feature in Azure Database for MySQL, a new database server is created using backup data that is replicated from another region.
++
+## Next Steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/overview.md
Last updated 02/08/2022 -+
# Resiliency in Azure
**Resiliency** is a system's ability to recover from failures and continue to function. It's not only about avoiding failures but also involves responding to failures in a way that minimizes downtime or data loss. Because failures can occur at various levels, it's important to have protection for all types based on your service availability requirements. Resiliency in Azure supports and advances capabilities that respond to outages in real time to ensure continuous service and data protection assurance for mission-critical applications that require near-zero downtime and high customer confidence.
-Azure includes built-in resiliency services that you can leverage and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
+Azure includes built-in resiliency services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
## Resiliency requirements

The required level of resilience for any Azure solution depends on several considerations. Availability and latency SLA and other business requirements drive the architectural choices and resiliency level and should be considered first. Availability requirements range from how much downtime is acceptable (and how much it costs your business) to the amount of money and time that you can realistically invest in making an application highly available.
-Building resilient systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and data centers. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the additional complexity and cost to reach that percentage? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
+Building resilient systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and data centers. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the extra complexity and cost to reach that percentage? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
## Building resiliency
-You should define your application's availability requirements at the beginning of planning. Many applications do not need 100% high availability; being aware of this can help to optimize costs during non-critical periods. Identify the type of failures an application can experience as well as the potential effect of each failure. A recovery plan should cover all critical services by finalizing recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. And perform testing of the end-to-end application environment to measure application resiliency and recovery against unexpected failure.
+You should define your application's availability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the type of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. And perform testing of the end-to-end application environment to measure application resiliency and recovery against unexpected failure.
The following checklist covers the scope of resiliency planning.
The following checklist covers the scope of resiliency planning.
## Regions and availability zones
-Regions and Availability Zones are a big part of the resiliency equation. Regions feature multiple, physically separate Availability Zones, connected by a high-performance network featuring less than 2ms latency between physical zones to help your data stay synchronized and accessible when things go wrong. You can leverage this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your resiliency strategy.
+Regions and Availability Zones are a big part of the resiliency equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your resiliency strategy.
-Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your resiliency strategy. See [Azure regions and availability zones](az-overview.md) for more information.
+Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. For more information, see [Azure regions and availability zones](az-overview.md).
## Shared responsibility
-Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. See [Business continuity management program in Azure](business-continuity-management-program.md) for more information.
+Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. For more information, see [Business continuity management program in Azure](business-continuity-management-program.md).
## Azure service dependencies
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
ms.devlang: csharp Previously updated : 11/17/2019 Last updated : 09/14/2022 Azure Functions
Azure Functions support running [in-process](../azure-functions/functions-dotnet
> [!TIP] > When you are updating multiple key-values in App Configuration, you normally don't want your application to reload configuration before all changes are made. You can register a *sentinel key* and update it only when all other configuration changes are completed. This helps to ensure the consistency of configuration in your application. >
- > You may also do following to minimize the risk of inconsistencies:
+ > You may also do the following to minimize the risk of inconsistencies:
> > * Design your application to be tolerable for transient configuration inconsistency > * Warm-up your application before bringing it online (serving requests)
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
In order to experience Azure Arc-enabled data services, you'll need to complete
The details in this article will guide your plan. 1. [Install client tools](install-client-tools.md).+
+1. Register the Microsoft.AzureArcData provider for the subscription where the Azure Arc-enabled data services will be deployed, as follows:
+ ```console
+ az provider register --namespace Microsoft.AzureArcData
+ ```
+ 1. Access a Kubernetes cluster. For demonstration, testing, and validation purposes, you can use an Azure Kubernetes Service cluster. To create a cluster, follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
Verify that:
kubectl cluster-info ``` - You have an Azure subscription that resources such as an Azure Arc data controller, Azure Arc-enabled SQL managed instance, or Azure Arc-enabled PostgreSQL server will be projected and billed to.
+- The Microsoft.AzureArcData provider is registered for the subscription where the Azure Arc-enabled data services will be deployed.
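You can verify the registration state before you deploy; for example, `az provider show --namespace Microsoft.AzureArcData --query registrationState --output tsv` returns `Registered` once the provider is ready.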
After you've prepared the infrastructure, deploy Azure Arc-enabled data services in the following way: 1. Create an Azure Arc-enabled data controller on one of the validated distributions of a Kubernetes cluster.
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
The Private Endpoint on your virtual network allows it to reach Azure Arc-enable
1. Let validation pass. 1. Select **Create**.
- :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal.":::
-
- > [!NOTE]
- > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters.
## Configure on-premises DNS forwarding
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
Title: Perform ongoing administration for Arc-enabled VMware vSphere description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere Previously updated : 08/25/2022 Last updated : 09/15/2022
There are two different sets of credentials stored on the Arc resource bridge. Y
- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade. - **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere
-To update the credentials of the account for Arc resource bridge, use the Azure CLI command [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). Run the command from a workstation that can access cluster configuration IP address of the Arc resource bridge locally:
+To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can access the cluster configuration IP address of the Arc resource bridge locally:
```azurecli
-az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig>
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance update-infracredentials vmware --kubeconfig kubeconfig
```
+For more details on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance/get-credentials#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
+ To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. You can run this command from anywhere with the `connectedvmware` CLI extension installed.
az connectedvmware vcenter connect --custom-location <name of the custom locatio
For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`Az arcappliance log`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command.
-The `az arcappliance log` command must be run from a workstation that can communicate with the Arc resource bridge either via the cluster configuration IP address or the IP address of the Arc resource bridge VM.
-
-To save the logs to a destination folder, run the following command. This command requires connectivity to cluster configuration IP address.
+To save the logs to a destination folder, run the following commands. These commands require connectivity to the cluster configuration IP address.
```azurecli
-az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory>
```
-If the Kubernetes cluster on the resource bridge isn't in functional state, you can use the following command. This command requires connectivity to IP address of the Azure Arc resource bridge VM via SSH
+If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH.
```azurecli
-az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
```
-During initial onboarding, SSH keys are saved to the workstation. If you're running this command from the workstation that was used for onboarding, no other steps are required.
-
-If you're running this command from a different workstation, make sure the following files are copied to the new workstation in the same location.
--- For a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
-
-- For a Linux workstation, `$HOME\.KVA\.ssh\logkey` and `$HOME\.KVA\.ssh\logkey.pub`- ## Next steps - [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 11/10/2021 Last updated : 09/15/2022
To deliver this experience, you need to deploy the [Azure Arc resource bridge](.
Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere version 6.7 and 7. > [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 2500 VMs. If your vCenter has more than 2500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9,500 VMs. If your vCenter has more than 9,500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point.
## Supported scenarios
You can use Azure Arc-enabled VMware vSphere (preview) in these supported region
- West Europe
+- Australia East
+
+- Canada Central
+ ## Next steps - [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect VMware vCenter Server to Azure Arc by using the helper script
description: In this quickstart, you'll learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 11/10/2021 Last updated : 09/05/2022 # Customer intent: As a VI admin, I want to connect my vCenter Server instance to Azure to enable self-service through Azure Arc.
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster. > [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 2,500 virtual machines (VMs). If your vCenter Server instance has more than 2,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 9,500 virtual machines (VMs). If your vCenter Server instance has more than 9,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point.
### vSphere account
Use the following instructions to run the script, depending on which operating s
1. Open a PowerShell window as an Administrator and go to the folder where you've downloaded the PowerShell script.
-> [!NOTE]
-> On Windows workstations, the script must be run in PowerShell window and not in PowerShell Integrated Script Editor (ISE) as PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run on PowerShell ISE, it could appear as though the script is stuck while it is waiting for input.
+ > [!NOTE]
+ > On Windows workstations, the script must be run in a PowerShell window and not in the PowerShell Integrated Script Editor (ISE), because PowerShell ISE doesn't display the input prompts from Azure CLI commands. If the script is run in PowerShell ISE, it could appear as though the script is stuck while it's waiting for input.
2. Run the following command to allow the script to run, because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.)
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere.
-## Save SSH keys and kubeconfig
-
-> [!IMPORTANT]
-> Performing [day 2 operations on the Arc resource bridge](day2-operations-resource-bridge.md) will require the SSH key to the resource bridge VM and kubeconfig to the Kubernetes cluster on it. It is important to store them to a secure location as it is not possible to retrieve them if the workstation used for the onboarding is deleted.
-
-You will find the kubeconfig file with the name `kubeconfig` in the folder where the onboarding script is downloaded and run.
-
-The SSH key pair will be available in the following location.
--- If you used a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
-
-- If you used a Linux workstation, `$HOME\.KVA\.ssh\logkey` and `$HOME\.KVA\.ssh\logkey.pub`- ## Next steps - [Browse and enable VMware vCenter resources in Azure](browse-and-enable-vcenter-resources-in-azure.md)
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-serverless-api.md
# Customize an HTTP endpoint in Azure Functions
-In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in a variety of languages, including Node.js, C#, and more. In this article, you'll customize an HTTP trigger to handle specific actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
+In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in various languages, including Node.js, C#, and more. You'll customize an HTTP trigger to handle specific actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
+ ## Prerequisites
In this section, you create a new proxy, which serves as a frontend to your over
### Setting up the frontend environment
-Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you will create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
+Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you'll create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend.
1. Navigate to your new frontend function app in the portal. 1. Select **Configuration** and choose **Application Settings**.
Next, you'll use a proxy to create a mock API for your solution. This proxy allo
To create this mock API, we'll create a new proxy, this time using the [App Service Editor](https://github.com/projectkudu/kudu/wiki/App-Service-Editor). To get started, navigate to your function app in the portal. Select **Platform features**, and under **Development Tools** find **App Service Editor**. The App Service Editor opens in a new tab.
-Select `proxies.json` in the left navigation. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. To learn more about this file, see [Proxies advanced configuration](./functions-proxies.md#advanced-configuration).
+Select `proxies.json` in the left navigation. This file stores the configuration for all of your proxies. If you use one of the [Functions deployment methods](./functions-continuous-deployment.md), you maintain this file in source control. To learn more about this file, see [Proxies advanced configuration](./legacy-proxies.md#advanced-configuration).
If you've followed along so far, your proxies.json should look like the following:
Next, you'll add your mock API. Replace your proxies.json file with the followin
} ```
-This code adds a new proxy, `GetUserByName`, without the `backendUri` property. Instead of calling another resource, it modifies the default response from Proxies using a response override. Request and response overrides can also be used in conjunction with a backend URL. This technique is particularly useful when proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn more about request and response overrides, see [Modifying requests and responses in Proxies](./functions-proxies.md).
+This code adds a new proxy, `GetUserByName`, without the `backendUri` property. Instead of calling another resource, it modifies the default response from Proxies using a response override. Request and response overrides can also be used with a backend URL. This technique is useful when proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn more about request and response overrides, see [Modifying requests and responses in Proxies](./legacy-proxies.md).
Test your mock API by calling the `<YourProxyApp>.azurewebsites.net/api/users/{username}` endpoint using a browser or your favorite REST client. Be sure to replace _{username}_ with a string value representing a username.
The following references may be helpful as you develop your API further:
[Create your first function]: ./functions-get-started.md
-[Working with Azure Functions Proxies]: ./functions-proxies.md
+[Working with Azure Functions Proxies]: ./legacy-proxies.md
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md
Title: Work with proxies in Azure Functions
-description: Overview of how to use Azure Functions Proxies
-
+ Title: Create serverless APIs using Azure Functions
+description: Describes how to use Azure Functions as the basis of a cohesive set of serverless APIs.
Previously updated : 11/08/2021 Last updated : 09/14/2022
-# Work with Azure Functions Proxies
-
-This article explains how to configure and work with Azure Functions Proxies. With this feature, you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
-
-Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
-
-> [!NOTE]
-> Proxies is available in Azure Functions [versions](./functions-versions.md) 1.x to 3.x.
->
-> You should also consider using [Azure API Management](../api-management/api-management-key-concepts.md) for your application. It provides the same capabilities as Functions Proxies as well as other tools for building and maintaining APIs, such as OpenAPI integration, rate limiting, and advanced policies.
-
-## <a name="create"></a>Create a proxy
-
-This section shows you how to create a proxy in the Functions portal.
-
-> [!NOTE]
-> Not all languages and operating system combinations support in-portal editing. If you're unable to create a proxy in the portal, you can instead manually create a _proxies.json_ file in the root of your function app project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).
-
-1. Open the [Azure portal], and then go to your function app.
-2. In the left pane, select **Proxies** and then select **+Add**.
-3. Provide a name for your proxy.
-4. Configure the endpoint that's exposed on this function app by specifying the **route template** and **HTTP methods**. These parameters behave according to the rules for [HTTP triggers].
-5. Set the **backend URL** to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value does not need to be static, and it can reference [application settings] and [parameters from the original client request].
-6. Click **Create**.
-
-Your proxy now exists as a new endpoint on your function app. From a client perspective, it is equivalent to an HttpTrigger in Azure Functions. You can try out your new proxy by copying the Proxy URL and testing it with your favorite HTTP client.
-
-## <a name="modify-requests-responses"></a>Modify requests and responses
-
-With Azure Functions Proxies, you can modify requests to and responses from the back-end. These transformations can use variables as defined in [Use variables].
-
-### <a name="modify-backend-request"></a>Modify the back-end request
-
-By default, the back-end request is initialized as a copy of the original request. In addition to setting the back-end URL, you can make changes to the HTTP method, headers, and query string parameters. The modified values can reference [application settings] and [parameters from the original client request].
-
-Back-end requests can be modified in the portal by expanding the *request override* section of the proxy detail page.
-
-### <a name="modify-response"></a>Modify the response
-
-By default, the client response is initialized as a copy of the back-end response. You can make changes to the response's status code, reason phrase, headers, and body. The modified values can reference [application settings], [parameters from the original client request], and [parameters from the back-end response].
-
-Back-end responses can be modified in the portal by expanding the *response override* section of the proxy detail page.
-
-## <a name="using-variables"></a>Use variables
-
-The configuration for a proxy does not need to be static. You can condition it to use variables from the original client request, the back-end response, or application settings.
-
-### <a name="reference-localhost"></a>Reference local functions
-You can use `localhost` to reference a function inside the same function app directly, without a roundtrip proxy request.
-
-`"backendUri": "https://localhost/api/httptriggerC#1"` will reference a local HTTP triggered function at the route `/api/httptriggerC#1`
-
-
->[!Note]
->If your function uses *function, admin or sys* authorization levels, you will need to provide the code and clientId, as per the original function URL. In this case the reference would look like: `"backendUri": "https://localhost/api/httptriggerC#1?code=<keyvalue>&clientId=<keyname>"` We recommend storing these keys in [application settings] and referencing those in your proxies. This avoids storing secrets in your source code.
-
-### <a name="request-parameters"></a>Reference request parameters
-
-You can use request parameters as inputs to the back-end URL property or as part of modifying requests and responses. Some parameters can be bound from the route template that's specified in the base proxy configuration, and others can come from properties of the incoming request.
-
-#### Route template parameters
-Parameters that are used in the route template are available to be referenced by name. The parameter names are enclosed in braces ({}).
-
-For example, if a proxy has a route template, such as `/pets/{petId}`, the back-end URL can include the value of `{petId}`, as in `https://<AnotherApp>.azurewebsites.net/api/pets/{petId}`. If the route template terminates in a wildcard, such as `/api/{*restOfPath}`, the value `{restOfPath}` is a string representation of the remaining path segments from the incoming request.
-
-#### Additional request parameters
-In addition to the route template parameters, the following values can be used in config values:
-
-* **{request.method}**: The HTTP method that's used on the original request.
-* **{request.headers.\<HeaderName\>}**: A header that can be read from the original request. Replace *\<HeaderName\>* with the name of the header that you want to read. If the header is not included on the request, the value will be the empty string.
-* **{request.querystring.\<ParameterName\>}**: A query string parameter that can be read from the original request. Replace *\<ParameterName\>* with the name of the parameter that you want to read. If the parameter is not included on the request, the value will be the empty string.
-
-### <a name="response-parameters"></a>Reference back-end response parameters
-
-Response parameters can be used as part of modifying the response to the client. The following values can be used in config values:
-
-* **{backend.response.statusCode}**: The HTTP status code that's returned on the back-end response.
-* **{backend.response.statusReason}**: The HTTP reason phrase that's returned on the back-end response.
-* **{backend.response.headers.\<HeaderName\>}**: A header that can be read from the back-end response. Replace *\<HeaderName\>* with the name of the header you want to read. If the header is not included on the response, the value will be the empty string.
-
-### <a name="use-appsettings"></a>Reference application settings
-
-You can also reference [application settings defined for the function app](./functions-how-to-use-azure-function-app-settings.md) by surrounding the setting name with percent signs (%).
-
-For example, a back-end URL of *https://%ORDER_PROCESSING_HOST%/api/orders* would have "%ORDER_PROCESSING_HOST%" replaced with the value of the ORDER_PROCESSING_HOST setting.
-
-> [!TIP]
-> Use application settings for back-end hosts when you have multiple deployments or test environments. That way, you can make sure that you are always talking to the right back-end for that environment.
+# Serverless REST APIs using Azure Functions
-## <a name="debugProxies"></a>Troubleshoot Proxies
+Azure Functions is an essential compute service that you use to build serverless REST-based APIs. HTTP triggers expose REST endpoints that can be called by your clients, like browsers, mobile apps, and other backend services. With [native support for routes](functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint), a single HTTP triggered function can expose a highly functional REST API. Functions also provides its own basic key-based authorization scheme to help limit access only to specific clients. For more information, see [Azure Functions HTTP trigger](functions-bindings-http-webhook-trigger.md).
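As a minimal sketch of what route support looks like, the following `function.json` binding maps one function to a parameterized route; the route, method, and binding names here are illustrative, not taken from a specific sample:

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "route": "products/{category}/{id?}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

With a binding like this, requests to `/api/products/shoes` and `/api/products/shoes/42` are handled by the same function, and `category` and `id` are available as route parameters.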
-By adding the flag `"debug":true` to any proxy in your `proxies.json` you will enable debug logging. Logs are stored in `D:\home\LogFiles\Application\Proxies\DetailedTrace` and accessible through the advanced tools (kudu). Any HTTP responses will also contain a `Proxy-Trace-Location` header with a URL to access the log file.
+In some scenarios, you may need your API to support a more complex set of REST behaviors. For example, you may need to combine multiple HTTP function endpoints into a single API. You might also want to pass requests through to one or more backend REST-based services. Finally, your APIs might require a higher degree of security that also lets you monetize their use.
-You can debug a proxy from the client side by adding a `Proxy-Trace-Enabled` header set to `true`. This will also log a trace to the file system, and return the trace URL as a header in the response.
+Today, the recommended approach to build more complex and robust APIs based on your functions is to use the comprehensive API services provided by [Azure API Management](../api-management/api-management-key-concepts.md).
+API Management uses a policy-based model to let you control routing, security, and OpenAPI integration. It also supports advanced policies like rate limiting and monetization. Previous versions of the Functions runtime instead used the legacy Functions Proxies feature.
-### Block proxy traces
-For security reasons you may not want to allow anyone calling your service to generate a trace. They will not be able to access the trace contents without your login credentials, but generating the trace consumes resources and exposes that you are using Function Proxies.
+## <a name="migration"></a>Moving from Functions Proxies to API Management
-Disable traces altogether by adding `"debug":false` to any particular proxy in your `proxies.json`.
+When moving from Functions Proxies to using API Management, you must integrate your function app with an API Management instance, and then configure the API Management instance to behave like the previous proxy. The following section provides links to the relevant articles that help you succeed in using API Management with Azure Functions.
-## Advanced configuration
+If you have challenges moving from Proxies or if Azure API Management doesn't address your specific scenarios, create an issue in the [Azure Functions repository](https://github.com/Azure/Azure-Functions). Make sure to tag the issue with the label `proxy-deprecation`.
-The proxies that you configure are stored in a *proxies.json* file, which is located in the root of a function app directory. You can manually edit this file and deploy it as part of your app when you use any of the [deployment methods](./functions-continuous-deployment.md) that Functions supports.
+## API Management integration
-> [!TIP]
-> If you have not set up one of the deployment methods, you can also work with the *proxies.json* file in the portal. Go to your function app, select **Platform features**, and then select **App Service Editor**. By doing so, you can view the entire file structure of your function app and then make changes.
+API Management lets you import an existing function app. After import, each HTTP triggered function endpoint becomes an API that you can modify and manage. You can also use API Management to generate an OpenAPI definition file for your APIs. During import, any endpoints with an `admin` [authorization level](functions-bindings-http-webhook-trigger.md#http-auth) are ignored. For more information about using API Management with Functions, see the following articles:
-*Proxies.json* is defined by a proxies object, which is composed of named proxies and their definitions. Optionally, if your editor supports it, you can reference a [JSON schema](http://json.schemastore.org/proxies) for code completion. An example file might look like the following:
+| Article | Description |
+| | |
+| [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md) | Shows how to create a new API Management instance from an existing function app in the Azure portal. Supports all languages. |
+| [Create serverless APIs in Visual Studio using Azure Functions and API Management integration](openapi-apim-integrate-visual-studio.md) | Shows how to use Visual Studio to create a C# project that uses the [OpenAPI extension](https://github.com/Azure/azure-functions-openapi-extension). The OpenAPI extension lets you define your .NET APIs by applying attributes directly to your C# code. |
+| [Quickstart: Create a new Azure API Management service instance by using the Azure portal](../api-management/get-started-create-service-instance.md) | Create a new API Management instance in the portal. After you create an API Management instance, you can connect it to your function app. Other non-portal creation methods are supported. |
+| [Import an Azure function app as an API in Azure API Management](../api-management/import-function-app-as-api.md) | Shows how to import an existing function app to expose existing HTTP trigger endpoints as a managed API. This article supports both creating a new API and adding the endpoints to an existing managed API. |
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
- }
- }
-}
-```
+After you have your function app endpoints exposed by using API Management, the following articles provide general information about how to manage your Functions-based APIs in the API Management instance.
-Each proxy has a friendly name, such as *proxy1* in the preceding example. The corresponding proxy definition object is defined by the following properties:
+| Article | Description |
+| | |
+| [Edit an API](../api-management/edit-api.md) | Shows you how to work with an existing API hosted in API Management. |
+| [Policies in Azure API Management](../api-management/api-management-howto-policies.md) | In API Management, publishers can change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. |
+| [API Management policy reference](../api-management/api-management-policies.md) | Reference that details all supported API Management policies. |
+| [API Management policy samples](/azure/api-management/policies/) | Helpful collection of samples using API Management policies in key scenarios. |
-* **matchCondition**: Required--an object defining the requests that trigger the execution of this proxy. It contains two properties that are shared with [HTTP triggers]:
- * _methods_: An array of the HTTP methods that the proxy responds to. If it is not specified, the proxy responds to all HTTP methods on the route.
- * _route_: Required--defines the route template, controlling which request URLs your proxy responds to. Unlike in HTTP triggers, there is no default value.
-* **backendUri**: The URL of the back-end resource to which the request should be proxied. This value can reference application settings and parameters from the original client request. If this property is not included, Azure Functions responds with an HTTP 200 OK.
-* **requestOverrides**: An object that defines transformations to the back-end request. See [Define a requestOverrides object].
-* **responseOverrides**: An object that defines transformations to the client response. See [Define a responseOverrides object].
+## Legacy Functions Proxies
-> [!NOTE]
-> The *route* property in Azure Functions Proxies does not honor the *routePrefix* property of the Function App host configuration. If you want to include a prefix such as `/api`, it must be included in the *route* property.
+The legacy [Functions Proxies feature](legacy-proxies.md) also provides a set of basic API functionality for version 3.x and older versions of the Functions runtime.
-### <a name="disableProxies"></a> Disable individual proxies
-You can disable individual proxies by adding `"disabled": true` to the proxy in the `proxies.json` file. This will cause any requests meeting the matchCondition to return 404.
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "Root": {
- "disabled":true,
- "matchCondition": {
- "route": "/example"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
- }
- }
-}
-```
+Some basic hints for how to perform equivalent tasks using API Management have been added to the [Functions Proxies article](legacy-proxies.md). We don't currently have documentation or tools to help you migrate an existing Functions Proxies implementation to API Management.
-### <a name="applicationSettings"></a> Application Settings
+## Next steps
-The proxy behavior can be controlled by several app settings. They are all outlined in the [Functions App Settings reference](./functions-app-settings.md)
-
-* [AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL](./functions-app-settings.md#azure_function_proxy_disable_local_call)
-* [AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES](./functions-app-settings.md#azure_function_proxy_backend_url_decode_slashes)
-
-### <a name="reservedChars"></a> Reserved Characters (string formatting)
-
-Proxies read all strings out of a JSON file, using \ as an escape symbol. Proxies also interpret curly braces. See a full set of examples below.
-
-|Character|Escaped Character|Example|
-|-|-|-|
-|{ or }|{{ or }}|`{{ example }}` --> `{ example }`
-| \ | \\\\ | `example.com\\text.html` --> `example.com\text.html`
-|"|\\\"| `\"example\"` --> `"example"`
-
-### <a name="requestOverrides"></a>Define a requestOverrides object
-
-The requestOverrides object defines changes made to the request when the back-end resource is called. The object is defined by the following properties:
-
-* **backend.request.method**: The HTTP method that's used to call the back-end.
-* **backend.request.querystring.\<ParameterName\>**: A query string parameter that can be set for the call to the back-end. Replace *\<ParameterName\>* with the name of the parameter that you want to set. Note that if an empty string is provided, the parameter is still included on the back-end request.
-* **backend.request.headers.\<HeaderName\>**: A header that can be set for the call to the back-end. Replace *\<HeaderName\>* with the name of the header that you want to set. Note that if an empty string is provided, the parameter is still included on the back-end request.
-
-Values can reference application settings and parameters from the original client request.
-
-An example configuration might look like the following:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>",
- "requestOverrides": {
- "backend.request.headers.Accept": "application/xml",
- "backend.request.headers.x-functions-key": "%ANOTHERAPP_API_KEY%"
- }
- }
- }
-}
-```
-
-### <a name="responseOverrides"></a>Define a responseOverrides object
-
-The requestOverrides object defines changes that are made to the response that's passed back to the client. The object is defined by the following properties:
-
-* **response.statusCode**: The HTTP status code to be returned to the client.
-* **response.statusReason**: The HTTP reason phrase to be returned to the client.
-* **response.body**: The string representation of the body to be returned to the client.
-* **response.headers.\<HeaderName\>**: A header that can be set for the response to the client. Replace *\<HeaderName\>* with the name of the header that you want to set. If you provide the empty string, the header is not included on the response.
-
-Values can reference application settings, parameters from the original client request, and parameters from the back-end response.
-
-An example configuration might look like the following:
-
-```json
-{
- "$schema": "http://json.schemastore.org/proxies",
- "proxies": {
- "proxy1": {
- "matchCondition": {
- "methods": [ "GET" ],
- "route": "/api/{test}"
- },
- "responseOverrides": {
- "response.body": "Hello, {test}",
- "response.headers.Content-Type": "text/plain"
- }
- }
- }
-}
-```
-> [!NOTE]
-> In this example, the response body is set directly, so no `backendUri` property is needed. The example shows how you might use Azure Functions Proxies for mocking APIs.
-
-[Azure portal]: https://portal.azure.com
-[HTTP triggers]: ./functions-bindings-http-webhook.md
-[Modify the back-end request]: #modify-backend-request
-[Modify the response]: #modify-response
-[Define a requestOverrides object]: #requestOverrides
-[Define a responseOverrides object]: #responseOverrides
-[application settings]: #use-appsettings
-[Use variables]: #using-variables
-[parameters from the original client request]: #request-parameters
-[parameters from the back-end response]: #response-parameters
+> [!div class="nextstepaction"]
+> [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md)
azure-functions Legacy Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/legacy-proxies.md
+
+ Title: Work with legacy Azure Functions Proxies
+description: Overview of how to use the legacy Proxies feature in Azure Functions
+ Last updated : 09/14/2022 ++
+# Work with legacy proxies
+
+> To make it easier to migrate from existing proxy implementations, this article links to equivalent API Management content, when available.
+
+This article explains how to configure and work with Azure Functions Proxies. With this feature, you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
+
+Standard Functions billing applies to proxy executions. For more information, see [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
+
+## <a name="create"></a>Create a proxy
+
+> [!IMPORTANT]
+> For equivalent content using API Management, see [Expose serverless APIs from HTTP endpoints using Azure API Management](functions-openapi-definition.md).
+
+Proxies are defined in the _proxies.json_ file in the root of your function app. The steps in this section show you how to use the Azure portal to create this file in your function app. Not all languages and operating system combinations support in-portal editing. If you can't modify your function app files in the portal, you can instead create and deploy the equivalent `proxies.json` file from the root of your local project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).
+
+1. Open the [Azure portal], and then go to your function app.
+1. In the left pane, select **Proxies** and then select **+Add**.
+1. Provide a name for your proxy.
+1. Configure the endpoint that's exposed on this function app by specifying the **route template** and **HTTP methods**. These parameters behave according to the rules for [HTTP triggers].
+1. Set the **backend URL** to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value doesn't need to be static, and it can reference [application settings] and [parameters from the original client request].
+1. Select **Create**.
+
+Your proxy now exists as a new endpoint on your function app. From a client perspective, it's the same as an HttpTrigger in Functions. You can try out your new proxy by copying the **Proxy URL** and testing it with your favorite HTTP client.
+
+## <a name="modify-requests-responses"></a>Modify requests and responses
+
+> [!IMPORTANT]
+> API Management lets you change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. For more information about API Management policies, see [Policies in Azure API Management](../api-management/api-management-howto-policies.md).
+
+With proxies, you can modify requests to and responses from the back-end. These transformations can use variables as defined in [Use variables].
+
+### <a name="modify-backend-request"></a>Modify the back-end request
+
+By default, the back-end request is initialized as a copy of the original request. In addition to setting the back-end URL, you can make changes to the HTTP method, headers, and query string parameters. The modified values can reference [application settings] and [parameters from the original client request].
+
+Back-end requests can be modified in the portal by expanding the *request override* section of the proxy detail page.
+
+### <a name="modify-response"></a>Modify the response
+
+By default, the client response is initialized as a copy of the back-end response. You can make changes to the response's status code, reason phrase, headers, and body. The modified values can reference [application settings], [parameters from the original client request], and [parameters from the back-end response].
+
+Back-end responses can be modified in the portal by expanding the *response override* section of the proxy detail page.
+
+## <a name="using-variables"></a>Use variables
+
+The configuration for a proxy doesn't need to be static. You can condition it to use variables from the original client request, the back-end response, or application settings.
+
+### <a name="reference-localhost"></a>Reference local functions
+You can use `localhost` to reference a function inside the same function app directly, without a roundtrip proxy request.
+
+`"backendUri": "https://localhost/api/httptriggerC#1"` will reference a local HTTP triggered function at the route `/api/httptriggerC#1`
+
+>[!Note]
+>If your function uses *function, admin or sys* authorization levels, you will need to provide the code and clientId, as per the original function URL. In this case, the reference would look like: `"backendUri": "https://localhost/api/httptriggerC#1?code=<keyvalue>&clientId=<keyname>"`. We recommend storing these keys in [application settings] and referencing those in your proxies. This avoids storing secrets in your source code.
+
+### <a name="request-parameters"></a>Reference request parameters
+
+You can use request parameters as inputs to the back-end URL property or as part of modifying requests and responses. Some parameters can be bound from the route template that's specified in the base proxy configuration, and others can come from properties of the incoming request.
+
+#### Route template parameters
+Parameters that are used in the route template are available to be referenced by name. The parameter names are enclosed in braces ({}).
+
+For example, if a proxy has a route template, such as `/pets/{petId}`, the back-end URL can include the value of `{petId}`, as in `https://<AnotherApp>.azurewebsites.net/api/pets/{petId}`. If the route template terminates in a wildcard, such as `/api/{*restOfPath}`, the value `{restOfPath}` is a string representation of the remaining path segments from the incoming request.
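As a sketch that ties this together (the app and function names are placeholders), a proxy that forwards the `{petId}` route parameter to a back-end might look like the following:

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "getPet": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/pets/{petId}"
      },
      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/pets/{petId}"
    }
  }
}
```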
+
+#### Additional request parameters
+In addition to the route template parameters, the following values can be used in config values:
+
+* **{request.method}**: The HTTP method that's used on the original request.
+* **{request.headers.\<HeaderName\>}**: A header that can be read from the original request. Replace *\<HeaderName\>* with the name of the header that you want to read. If the header isn't included on the request, the value will be the empty string.
+* **{request.querystring.\<ParameterName\>}**: A query string parameter that can be read from the original request. Replace *\<ParameterName\>* with the name of the parameter that you want to read. If the parameter isn't included on the request, the value will be the empty string.
+
+### <a name="response-parameters"></a>Reference back-end response parameters
+
+Response parameters can be used as part of modifying the response to the client. The following values can be used in config values:
+
+* **{backend.response.statusCode}**: The HTTP status code that's returned on the back-end response.
+* **{backend.response.statusReason}**: The HTTP reason phrase that's returned on the back-end response.
+* **{backend.response.headers.\<HeaderName\>}**: A header that can be read from the back-end response. Replace *\<HeaderName\>* with the name of the header you want to read. If the header isn't included on the response, the value will be the empty string.
+
+### <a name="use-appsettings"></a>Reference application settings
+
+You can also reference [application settings defined for the function app](./functions-how-to-use-azure-function-app-settings.md) by surrounding the setting name with percent signs (%).
+
+For example, a back-end URL of *https://%ORDER_PROCESSING_HOST%/api/orders* would have "%ORDER_PROCESSING_HOST%" replaced with the value of the ORDER_PROCESSING_HOST setting.
+
+> [!TIP]
+> Use application settings for back-end hosts when you have multiple deployments or test environments. That way, you can make sure that you are always talking to the right back-end for that environment.
+
+## <a name="debugProxies"></a>Troubleshoot Proxies
+
+By adding the flag `"debug":true` to any proxy in your `proxies.json`, you'll enable debug logging. Logs are stored in `D:\home\LogFiles\Application\Proxies\DetailedTrace` and accessible through the advanced tools (kudu). Any HTTP responses will also contain a `Proxy-Trace-Location` header with a URL to access the log file.
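For example, a proxy definition with debug logging enabled might look like the following sketch; the proxy name, route, and back-end URI are illustrative:

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "proxy1": {
      "debug": true,
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/{test}"
      },
      "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
    }
  }
}
```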
+
+You can debug a proxy from the client side by adding a `Proxy-Trace-Enabled` header set to `true`. This will also log a trace to the file system, and return the trace URL as a header in the response.
+
+### Block proxy traces
+
+For security reasons you may not want to allow anyone calling your service to generate a trace. They won't be able to access the trace contents without your sign-in credentials, but generating the trace consumes resources and exposes that you're using Function Proxies.
+
+Disable traces altogether by adding `"debug":false` to any particular proxy in your `proxies.json`.
+
+## Advanced configuration
+
+The proxies that you configure are stored in a *proxies.json* file, which is located in the root of a function app directory. You can manually edit this file and deploy it as part of your app when you use any of the [deployment methods](./functions-continuous-deployment.md) that Functions supports.
+
+> [!TIP]
+> If you have not set up one of the deployment methods, you can also work with the *proxies.json* file in the portal. Go to your function app, select **Platform features**, and then select **App Service Editor**. By doing so, you can view the entire file structure of your function app and then make changes.
+
+*Proxies.json* is defined by a proxies object, which is composed of named proxies and their definitions. Optionally, if your editor supports it, you can reference a [JSON schema](http://json.schemastore.org/proxies) for code completion. An example file might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
+ }
+ }
+}
+```
+
+Each proxy has a friendly name, such as *proxy1* in the preceding example. The corresponding proxy definition object is defined by the following properties:
+
+* **matchCondition**: Required--an object defining the requests that trigger the execution of this proxy. It contains two properties that are shared with [HTTP triggers]:
+ * _methods_: An array of the HTTP methods that the proxy responds to. If it isn't specified, the proxy responds to all HTTP methods on the route.
+ * _route_: Required--defines the route template, controlling which request URLs your proxy responds to. Unlike in HTTP triggers, there's no default value.
+* **backendUri**: The URL of the back-end resource to which the request should be proxied. This value can reference application settings and parameters from the original client request. If this property isn't included, Azure Functions responds with an HTTP 200 OK.
+* **requestOverrides**: An object that defines transformations to the back-end request. See [Define a requestOverrides object].
+* **responseOverrides**: An object that defines transformations to the client response. See [Define a responseOverrides object].
+
+> [!NOTE]
+> The *route* property in Azure Functions Proxies does not honor the *routePrefix* property of the Function App host configuration. If you want to include a prefix such as `/api`, it must be included in the *route* property.
+
+### <a name="disableProxies"></a> Disable individual proxies
+
+You can disable individual proxies by adding `"disabled": true` to the proxy in the `proxies.json` file. This will cause any requests meeting the matchCondition to return 404.
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "Root": {
+ "disabled":true,
+ "matchCondition": {
+ "route": "/example"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
+ }
+ }
+}
+```
+
+### <a name="applicationSettings"></a> Application Settings
+
+The proxy behavior can be controlled by several app settings. They're all outlined in the [Functions App Settings reference](./functions-app-settings.md).
+
+* [AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL](./functions-app-settings.md#azure_function_proxy_disable_local_call)
+* [AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES](./functions-app-settings.md#azure_function_proxy_backend_url_decode_slashes)
+
+### <a name="reservedChars"></a> Reserved Characters (string formatting)
+
+Proxies read all strings out of a JSON file, using \ as an escape symbol. Proxies also interpret curly braces. See a full set of examples below.
+
+|Character|Escaped Character|Example|
+|-|-|-|
+|{ or }|{{ or }}|`{{ example }}` --> `{ example }`
+| \ | \\\\ | `example.com\\text.html` --> `example.com\text.html`
+|"|\\\"| `\"example\"` --> `"example"`
+
+### <a name="requestOverrides"></a>Define a requestOverrides object
+
+The requestOverrides object defines changes made to the request when the back-end resource is called. The object is defined by the following properties:
+
+* **backend.request.method**: The HTTP method that's used to call the back-end.
+* **backend.request.querystring.\<ParameterName\>**: A query string parameter that can be set for the call to the back-end. Replace *\<ParameterName\>* with the name of the parameter that you want to set. If an empty string is provided, the parameter is still included on the back-end request.
+* **backend.request.headers.\<HeaderName\>**: A header that can be set for the call to the back-end. Replace *\<HeaderName\>* with the name of the header that you want to set. If an empty string is provided, the parameter is still included on the back-end request.
+
+Values can reference application settings and parameters from the original client request.
+
+An example configuration might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>",
+ "requestOverrides": {
+ "backend.request.headers.Accept": "application/xml",
+ "backend.request.headers.x-functions-key": "%ANOTHERAPP_API_KEY%"
+ }
+ }
+ }
+}
+```
+
+### <a name="responseOverrides"></a>Define a responseOverrides object
+
+The responseOverrides object defines changes that are made to the response that's passed back to the client. The object is defined by the following properties:
+
+* **response.statusCode**: The HTTP status code to be returned to the client.
+* **response.statusReason**: The HTTP reason phrase to be returned to the client.
+* **response.body**: The string representation of the body to be returned to the client.
+* **response.headers.\<HeaderName\>**: A header that can be set for the response to the client. Replace *\<HeaderName\>* with the name of the header that you want to set. If you provide the empty string, the header isn't included on the response.
+
+Values can reference application settings, parameters from the original client request, and parameters from the back-end response.
+
+An example configuration might look like the following:
+
+```json
+{
+ "$schema": "http://json.schemastore.org/proxies",
+ "proxies": {
+ "proxy1": {
+ "matchCondition": {
+ "methods": [ "GET" ],
+ "route": "/api/{test}"
+ },
+ "responseOverrides": {
+ "response.body": "Hello, {test}",
+ "response.headers.Content-Type": "text/plain"
+ }
+ }
+ }
+}
+```
+> [!NOTE]
+> In this example, the response body is set directly, so no `backendUri` property is needed. The example shows how you might use Azure Functions Proxies for mocking APIs.
+
+[Azure portal]: https://portal.azure.com
+[HTTP triggers]: ./functions-bindings-http-webhook.md
+[Modify the back-end request]: #modify-backend-request
+[Modify the response]: #modify-response
+[Define a requestOverrides object]: #requestOverrides
+[Define a responseOverrides object]: #responseOverrides
+[application settings]: #use-appsettings
+[Use variables]: #using-variables
+[parameters from the original client request]: #request-parameters
+[parameters from the back-end response]: #response-parameters
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
+| [Azure App Service](../../app-service/index.yml) | &#x2705; | &#x2705; |
| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | | [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
-| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; |
| [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; | **&ast;** FedRAMP High authorization for edge devices (such as Azure Data Box and Azure Stack Edge) applies only to Azure services that support on-premises, customer-managed devices. For example, FedRAMP High authorization for Azure Data Box covers datacenter infrastructure services and Data Box pod and disk service, which are the online software components supporting your Data Box hardware appliance. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure App Service](../../app-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Information Protection](/azure/information-protection/) **&ast;&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Monitor](../../azure-monitor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| Azure Monitor [Application Insights](../../azure-monitor/app/app-insights-overview.md) | | | | | &#x2705; |
-| Azure Monitor [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
**&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 08/04/2022 Last updated : 9/14/2022
-# Customer intent: As an IT manager, I want to understand if and when I should move from using legacy agents to Azure Monitor Agent.
+# Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
# Migrate to Azure Monitor Agent from Log Analytics agent
-[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines Azure Monitor and introduces a simplified, flexible method of configuring collection configuration called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
+[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in Azure and on premises. It introduces a simplified, flexible method of configuring data collection called [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent (AMA) and provides guidance on how to implement a successful migration.
> [!IMPORTANT] > The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent using the information in this article.
Azure Monitor Agent provides the following benefits over legacy agents:
Your migration plan to the Azure Monitor Agent should take into account: -- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to benefit from other important features in the new agent. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to discover what solutions and features you're using the legacy agent for.
+- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to **discover what solutions and features you're using today that depend on the legacy agent**.
If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. -- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a new environment with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
+- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a **new environment** with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
- Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition given the limitations:
+ Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. While this allows you to begin the transition, ensure you understand the limitations:
- Be careful in collecting duplicate data from the same machine, which could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
-
If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are **collecting data from different machines** or **sending the data to different destinations**. Collecting duplicate data also generates more charges for data ingestion and retention. - Running two telemetry agents on the same machine consumes double the resources, including, but not limited to CPU, memory, storage space, and network bandwidth.
+## Prerequisites
+Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for using Azure Monitor Agent. For on-premises servers and servers managed by other clouds, [installing the Azure Arc agent](/azure/azure-arc/servers/agent-overview) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Arc for this purpose comes at no added cost, and it's not mandatory to use Arc for server management overall (that is, you can continue using your existing on-premises management solutions). Once the Arc agent is installed, you can follow the same migration guidance below across Azure and on-premises machines.
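+
+For example, connecting an on-premises server to Azure Arc with the `azcmagent` CLI might look like the following sketch. The resource group, location, and IDs are placeholders, and an interactive sign-in is assumed:
+
+```bash
+# Run on the target server after installing the Connected Machine agent
+azcmagent connect \
+  --resource-group "myResourceGroup" \
+  --location "eastus" \
+  --subscription-id "00000000-0000-0000-0000-000000000000" \
+  --tenant-id "11111111-1111-1111-1111-111111111111"
+```
+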
+ ## Migration testing To ensure safe deployment during migration, begin testing with a few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. After you validate that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Alternatively you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator-preview) to convert existing legacy agent configuration into data collection rules.
+After you **validate** that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
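+
+A minimal validation query against the workspace might look like the following sketch (the table and column names are those of the documented Heartbeat table):
+
+```kusto
+// Machines that have reported through Azure Monitor Agent in the last 24 hours
+Heartbeat
+| where TimeGenerated > ago(24h)
+| where Category == "Azure Monitor Agent"
+| summarize LastHeartbeat = max(TimeGenerated) by Computer
+```
+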
## At-scale migration using Azure Policy We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper-preview) to find sources, such as virtual machines, virtual machine scale sets, and on-premises servers.
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-email-notification.md
# Smart Detection e-mail notification change >[!NOTE]
->You can migrate your Application Insight resources to alerts-bases smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
> > See [Smart Detection Alerts migration](./alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration.
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
You need a subscription with [Microsoft Azure](https://azure.com). Sign in with a Microsoft account, which you might have for Windows, Xbox Live, or other Microsoft cloud services. Your team might have an organizational subscription to Azure: ask the owner to add you to it using your Microsoft account. > [!NOTE]
-> It is *highly recommended* to use the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework or higher.
+> It is *highly recommended* to use the newer [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package and associated instructions from [here](./worker-service.md) for any Console Applications. This package is compatible with [Long Term Support (LTS) versions](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core and .NET Framework or higher.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
You may initialize and configure Application Insights from the code or using `Ap
> [!NOTE] > Instructions referring to **ApplicationInsights.config** are only applicable to apps that are targeting the .NET Framework, and do not apply to .NET Core applications.
-### Using config file
+### Using config file
-By default, Application Insights SDK looks for `ApplicationInsights.config` file in the working directory when `TelemetryConfiguration` is being created
+For .NET Framework-based applications, by default, the Application Insights SDK looks for the `ApplicationInsights.config` file in the working directory when `TelemetryConfiguration` is being created. Reading the config file is not supported on .NET Core.
```csharp TelemetryConfiguration config = TelemetryConfiguration.Active; // Reads ApplicationInsights.config file if present
You may get a full example of the config file by installing latest version of [M
### Configuring telemetry collection from code > [!NOTE]
-> Reading config file is not supported on .NET Core. You may consider using [Application Insights SDK for ASP.NET Core](./asp-net-core.md)
+> Reading the config file is not supported on .NET Core.
* During application start-up, create and configure a `DependencyTrackingTelemetryModule` instance. It must be a singleton and must be preserved for the application lifetime, as shown in the sketch below.
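
A minimal sketch of that start-up code might look like the following. It assumes the `Microsoft.ApplicationInsights` and `Microsoft.ApplicationInsights.DependencyCollector` packages; on .NET Framework you could use `TelemetryConfiguration.Active` instead of creating a new configuration, and the connection string value is a placeholder:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.ApplicationInsights.Extensibility;

var config = TelemetryConfiguration.CreateDefault();
config.ConnectionString = "<your-connection-string>";

// Create once at start-up and keep this instance alive for the application's lifetime.
var dependencyModule = new DependencyTrackingTelemetryModule();
dependencyModule.Initialize(config);

var client = new TelemetryClient(config);
client.TrackTrace("Dependency tracking initialized");
```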
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Title: Best practices for autoscale
-description: Autoscale patterns in Azure for Web Apps, Virtual Machine Scale sets, and Cloud Services
+description: Autoscale patterns in Azure for Web Apps, virtual machine scale sets, and Cloud Services
Previously updated : 04/22/2022 Last updated : 09/13/2022
In this example, you can have a situation in which the memory usage is over 90%
### Choose the appropriate statistic for your diagnostics metric For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*.
-### Choose the thresholds carefully for all metric types
-We recommend carefully choosing different thresholds for scale-out and scale-in based on practical situations.
-We *do not recommend* autoscale settings like the examples below with the same or similar threshold values for out and in conditions:
-
-* Increase instances by 1 count when Thread Count >= 600
-* Decrease instances by 1 count when Thread Count <= 600
-
-Let's look at an example of what can lead to a behavior that may seem confusing. Consider the following sequence.
-
-1. Assume there are two instances to begin with and then the average number of threads per instance grows to 625.
-2. Autoscale scales out adding a third instance.
-3. Next, assume that the average thread count across instance falls to 575.
-4. Before scaling down, autoscale tries to estimate what the final state will be if it scaled in. For example, 575 x 3 (current instance count) = 1,725 / 2 (final number of instances when scaled down) = 862.5 threads. This means autoscale would have to immediately scale out again even after it scaled in, if the average thread count remains the same or even falls only a small amount. However, if it scaled up again, the whole process would repeat, leading to an infinite loop.
-5. To avoid this situation (termed "flapping"), autoscale does not scale down at all. Instead, it skips and reevaluates the condition again the next time the service's job executes. The flapping state can confuse many people because autoscale wouldn't appear to work when the average thread count was 575.
-
-Estimation during a scale-in is intended to avoid "flapping" situations, where scale-in and scale-out actions continually go back and forth. Keep this behavior in mind when you choose the same thresholds for scale-out and in.
-
-We recommend choosing an adequate margin between the scale-out and in thresholds. As an example, consider the following better rule combination.
-
-* Increase instances by 1 count when CPU% >= 80
-* Decrease instances by 1 count when CPU% <= 60
-
-In this case
-
-1. Assume there are 2 instances to start with.
-2. If the average CPU% across instances goes to 80, autoscale scales out adding a third instance.
-3. Now assume that over time the CPU% falls to 60.
-4. Autoscale's scale-in rule estimates the final state if it were to scale-in. For example, 60 x 3 (current instance count) = 180 / 2 (final number of instances when scaled down) = 90. So autoscale does not scale-in because it would have to scale-out again immediately. Instead, it skips scaling down.
-5. The next time autoscale checks, the CPU continues to fall to 50. It estimates again - 50 x 3 instance = 150 / 2 instances = 75, which is below the scale-out threshold of 80, so it scales in successfully to 2 instances.
-
-> [!NOTE]
-> If the autoscale engine detects flapping could occur as a result of scaling to the target number of instances, it will also try to scale to a different number of instances between the current count and the target count. If flapping does not occur within this range, autoscale will continue the scale operation with the new target.
### Considerations for scaling threshold values for special metrics For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
We recommend you do NOT explicit set your agent to only use TLS 1.2 unless absol
## Next Steps
+- [Autoscale flapping](/azure/azure-monitor/autoscale/autoscale-flapping)
- [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
azure-monitor Autoscale Flapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-flapping.md
+
+ Title: Autoscale flapping
+description: "Flapping in Autoscale"
+++++ Last updated : 09/13/2022++
+#Customer intent: As a cloud administrator, I want to understand flapping so that I can configure autoscale correctly.
++
+# Flapping in Autoscale
+
+This article describes flapping in autoscale and how to avoid it.
+
+Flapping refers to a loop condition that causes a series of opposing scale events. Flapping happens when a scale event triggers the opposite scale event.
+
+Autoscale evaluates a pending scale-in action to see if it would cause flapping. In cases where flapping could occur, autoscale may skip the scale action and reevaluate at the next run, or autoscale may scale by less than the specified number of resource instances. The autoscale evaluation process occurs each time the autoscale engine runs, which is every 30 to 60 seconds, depending on the resource type.
+
+To ensure adequate resources, checking for potential flapping doesn't occur for scale-out events. Autoscale will only defer a scale-in event to avoid flapping.
+
+For example, let's assume the following rules:
+
+* Scale out increasing by 1 instance when average CPU usage is above 50%.
+* Scale in decreasing the instance count by 1 instance when average CPU usage is lower than 30%.
+
+ In the table below, at T0 usage is at 56%, so a scale-out action is triggered. Spreading the same load across 2 instances gives an average of 28% for the scale set. As 28% is less than the scale-in threshold, autoscale should scale back in. Scaling in would return the scale set to 56% CPU usage, which would trigger a scale-out action again.
+
+|Time| Instance count| CPU% |CPU% per instance| Scale event| Resulting instance count
+|---|---|---|---|---|---|
+T0|1|56%|56%|Scale out|2|
+T1|2|56%|28%|Scale in|1|
+T2|1|56%|56%|Scale out|2|
+T3|2|56%|28%|Scale in|1|
+
+If left uncontrolled, there would be an ongoing series of scale events. However, in this situation, the autoscale engine will defer the scale-in event at *T1* and reevaluate during the next autoscale run. The scale-in will only happen once the average CPU usage is below 30%.
+
+Flapping is often caused by:
+
+* Small or no margins between thresholds
+* Scaling by more than one instance
+* Scaling in and out using different metrics
+
+## Small or no margins between thresholds
+
+To avoid flapping, keep adequate margins between scaling thresholds.
+
+For example, the following rules, which have no margin between the thresholds, cause flapping.
+
+* Scale out when thread count >=600
+* Scale in when thread count < 600
++
+The table below shows a potential outcome of these autoscale rules:
+
+|Time| Instance count| Thread count|Thread count per instance| Scale event| Resulting instance count
+|---|---|---|---|---|---|
+T0|2|1250|625|Scale out|3|
+T1|3|1250|417|Scale in|2|
+
+* At time T0, there are two instances handling 1250 threads, or 625 threads per instance. Autoscale scales out to three instances.
+* Following the scale-out, at T1, we have the same 1250 threads, but with three instances, only 417 threads per instance. A scale-in event is triggered.
+* Before scaling in, autoscale evaluates what would happen if the scale-in event occurred. In this example, 1250 / 2 = 625, that is, 625 threads per instance. Autoscale would have to scale out again immediately after it scaled in. If it scaled out again, the process would repeat, leading to a flapping loop.
+* To avoid this situation, autoscale doesn't scale in. Autoscale skips the current scale event and reevaluates the rule in the next execution cycle.
+
+In this case, it looks like autoscale isn't working since no scale event takes place. Check the *Run history* tab on the autoscale setting page to see if there's any flapping.
++
+Setting an adequate margin between thresholds avoids the above scenario. For example,
+
+* Scale out when thread count >=600
+* Scale in when thread count < 400
++
+If the scale-in thread count is 400, the total thread count would have to drop to below 1200 before a scale event would take place. See the table below.
+
+|Time| Instance count| Thread count|Thread count per instance| Scale event| Resulting instance count
+|---|---|---|---|---|---|
+T0|2|1250|625|Scale out|3|
+T1|3|1250|417|no scale event|3|
+T2|3|1180|394|scale in|2|
T3|2|1180|590|no scale event|2|
+
+## Scaling by more than one instance
+
+To avoid flapping when scaling in or out by more than one instance, autoscale may scale by less than the number of instances specified in the rule.
+
+For example, the following rules can cause flapping:
+
+* Scale out by 20 when the request count >=200 per instance.
+* OR when CPU > 70% per instance.
+* Scale in by 10 when the request count <=50 per instance.
++
+The table below shows a potential outcome of these autoscale rules:
+
+|Time|Number of instances|CPU |Request count| Scale event| Resulting instances|Comments|
+|---|---|---|---|---|---|---|
+|T0|30|65%|3000, or 100 per instance.|No scale event|30|
|T1|30|65%|1500| Scale in by 3 instances |27|Scaling in by 10 would cause an estimated CPU rise above 70%, leading to a scale-out event.
+
+At time T0, the app is running with 30 instances, a total request count of 3000, and a CPU usage of 65% per instance.
+
+At T1, when the request count drops to 1500 requests, or 50 requests per instance, autoscale will try to scale in by 10 instances to 20. However, autoscale estimates that the CPU load for 20 instances will be above 70%, causing a scale-out event.
+
+To avoid flapping, the autoscale engine estimates the CPU usage for instance counts above 20 until it finds an instance count where all metrics are within the defined thresholds:
+
+* Keep the CPU below 70%.
+* Keep the number of requests per instance above 50.
+* Reduce the number of instances below 30.
+
+In this situation, autoscale may scale in by 3, from 30 to 27 instances, in order to satisfy the rules, even though the rule specifies a decrease of 10. A log message is written to the activity log with a description that includes *Scale down will occur with updated instance count to avoid flapping*.
+
+If autoscale can't find a suitable number of instances, it will skip the scale in event and reevaluate during the next cycle.
+
+> [!NOTE]
+> If the autoscale engine detects that flapping could occur as a result of scaling to the target number of instances, it will also try to scale to a lower number of instances between the current count and the target count. If flapping does not occur within this range, autoscale will continue the scale operation with the new target.
+
+## Log files
+
+Find flapping in the activity log with the following query:
+
+````KQL
+// Activity log, CategoryValue: Autoscale
+// Lists the latest autoscale operations from the activity log, where OperationNameValue == "Microsoft.Insights/AutoscaleSettings/Flapping/Action"
+AzureActivity
+|where CategoryValue =="Autoscale" and OperationNameValue =="Microsoft.Insights/AutoscaleSettings/Flapping/Action"
+|sort by TimeGenerated desc
+````
+
+Below is an example of an activity log record for flapping:
++
+````JSON
+{
+"eventCategory": "Autoscale",
+"eventName": "FlappingOccurred",
+"operationId": "ffd31c67-1438-47a5-bee4-1e3a102cf1c2",
+"eventProperties":
+ "{"Description":"Scale down will occur with updated instance count to avoid flapping.
+ Resource: '/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/ resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ ScaleableAppServicePlan'.
+ Current instance count: '6',
+ Intended new instance count: '1'.
+ Actual new instance count: '4'",
+ "ResourceName":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourcegroups/ed-rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan",
+ "OldInstancesCount":6,
+ "NewInstancesCount":4,
+ "ActiveAutoscaleProfile":{"Name":"Auto created scale condition",
+ "Capacity":{"Minimum":"1","Maximum":"30","Default":"1"},
+ "Rules":[{"MetricTrigger":{"Name":"Requests","Namespace":"microsoft.web/sites","Resource":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Average","TimeWindow":"PT1M","TimeAggregation":"Maximum","Operator":"GreaterThanOrEqual","Threshold":3.0,"Source":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM","Dimensions":[],"DividePerInstance":true},"ScaleAction":{"Direction":"Increase","Type":"ChangeCount","Value":"10","Cooldown":"PT1M"}},{"MetricTrigger":{"Name":"Requests","Namespace":"microsoft.web/sites","Resource":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","ResourceLocation":"West Central US","TimeGrain":"PT1M","Statistic":"Max","TimeWindow":"PT1M","TimeAggregation":"Maximum","Operator":"LessThan","Threshold":3.0,"Source":"/subscriptions/d1234567-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/ed-rg-001/providers/Microsoft.Web/sites/ScaleableWebApp1","MetricType":"MDM","Dimensions":[],"DividePerInstance":true},"ScaleAction":{"Direction":"Decrease","Type":"ChangeCount","Value":"5","Cooldown":"PT1M"}}]}}",
+"eventDataId": "b23ae911-55d0-4881-8684-fc74227b2ddb",
+"eventSubmissionTimestamp": "2022-09-13T07:20:41.1589076Z",
+"resource": "scaleableappserviceplan",
+"resourceGroup": "ED-RG-001",
+"resourceProviderValue": "MICROSOFT.WEB",
+"subscriptionId": "D1234567-9876-A1B2-A2B1-123A567B9F876",
+"activityStatusValue": "Succeeded"
+}
+````
+
+## Next steps
+
+To learn more about autoscale, see the following resources:
+
+* [Overview of common autoscale patterns](/azure/azure-monitor/autoscale/autoscale-common-scale-patterns)
+* [Automatically scale a virtual machine scale set](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Use autoscale actions to send email and webhook alert notifications](/azure/azure-monitor/autoscale/autoscale-webhook-email)
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
You can make changes in JSON directly, if necessary. These changes will be refle
### Cool-down period effects
-Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Autoscale best practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
+Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Flapping in Autoscale](autoscale-flapping.md) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
## Route traffic to healthy instances (App Service)
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
- Title: Continuous monitoring with Azure Monitor | Microsoft Docs
-description: Describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows.
--- Previously updated : 06/07/2022----
-# Continuous monitoring with Azure Monitor
-
-Continuous monitoring refers to the process and technology required to incorporate monitoring across each phase of your DevOps and IT operations lifecycles. It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production. Continuous monitoring builds on the concepts of continuous integration and continuous deployment (CI/CD). CI/CD helps you develop and deliver software faster and more reliably to provide continuous value to your users.
-
-[Azure Monitor](overview.md) is the unified monitoring solution in Azure that provides full-stack observability across applications and infrastructure in the cloud and on-premises. It works seamlessly with [Visual Studio and Visual Studio Code](https://visualstudio.microsoft.com/) during development and test. It integrates with [Azure DevOps](/azure/devops/user-guide/index) for release management and work item management during deployment and operations. It even integrates across the IT system management (ITSM) and SIEM tools of your choice to help track issues and incidents within your existing IT processes.
-
-This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout your workflows. Links to other documentation provide information on implementing different features.
-
-## Enable monitoring for all your applications
-
-To gain observability across your entire environment, you need to enable monitoring on all your web applications and services. This way, you can easily visualize end-to-end transactions and connections across all the components. For example:
--- [Azure DevOps projects](../devops-project/overview.md) give you a simplified experience with your existing code and Git repository. You can also choose from one of the sample applications to create a CI/CD pipeline to Azure.-- [Continuous monitoring in your DevOps release pipeline](./app/continuous-monitoring.md) allows you to gate or roll back your deployment based on monitoring data.-- [Status Monitor](./app/status-monitor-v2-overview.md) allows you to instrument a live .NET app on Windows with Application Insights, without having to modify or redeploy your code.-- If you have access to the code for your application, enable full monitoring with [Application Insights](./app/app-insights-overview.md) by installing the Azure Monitor Application Insights SDK for [.NET](./app/asp-net.md), [.NET Core](./app/asp-net-core.md), [Java](./app/java-in-process-agent.md), [Node.js](./app/nodejs-quick-start.md), or [any other programming languages](./app/platforms.md). Full monitoring allows you to specify custom events, metrics, or page views that are relevant to your application and your business.-
-## Enable monitoring for your entire infrastructure
-
-Applications are only as reliable as their underlying infrastructure. Having monitoring enabled across your entire infrastructure will help you achieve full observability and make it easier to discover a potential root cause when something fails. Azure Monitor helps you track the health and performance of your entire hybrid infrastructure including resources such as VMs, containers, storage, and network. For example, you can:
--- Get [platform metrics, activity logs, and diagnostics logs](data-sources.md) automatically from most of your Azure resources with no configuration.-- Enable deeper monitoring for VMs with [VM insights](vm/vminsights-overview.md).-- Enable deeper monitoring for Azure Kubernetes Service (AKS) clusters with [Container insights](containers/container-insights-overview.md).-- Add [monitoring solutions](./monitor-reference.md) for different applications and services in your environment.-
-[Infrastructure as code](/azure/devops/learn/what-is-infrastructure-as-code) is the management of infrastructure in a descriptive model, using the same versioning that DevOps teams use for source code. It adds reliability and scalability to your environment and allows you to use similar processes that are used to manage your applications. For example, you can:
--- Use [Azure Resource Manager templates](./logs/resource-manager-workspace.md) to enable monitoring and configure alerts over a large set of resources.-- Use [Azure Policy](../governance/policy/overview.md) to enforce different rules over your resources. Azure Policy ensures that those resources stay compliant with your corporate standards and service level agreements.-
-## Combine resources in Azure resource groups
-
-A typical application on Azure today includes multiple resources such as VMs and app services or microservices hosted on Azure Cloud Services, AKS clusters, or Azure Service Fabric. These applications frequently use dependencies like Azure Event Hubs, Azure Storage, Azure SQL, and Azure Service Bus. For example, you can:
--- Combine resources in Azure resource groups to get full visibility across all your resources that make up your different applications. [Resource group insights](./insights/resource-group-insights.md) provides a simple way to keep track of the health and performance of your entire full-stack application and enables drilling down into respective components for any investigations or debugging.-
-## Ensure quality through continuous deployment
-
-CI/CD allows you to automatically integrate and deploy code changes to your application based on the results of automated testing. It streamlines the deployment process and ensures the quality of any changes before they move into production. For example, you can:
--- Use [Azure Pipelines](/azure/devops/pipelines) to implement continuous deployment and automate your entire process from code commit to production based on your CI/CD tests.-- Use [quality gates](/azure/devops/pipelines/release/approvals/gates) to integrate monitoring into your pre-deployment or post-deployment. Quality gates ensure that you're meeting the key health and performance metrics, also known as KPIs, as your applications move from development to production. They also ensure that any differences in the infrastructure environment or scale aren't negatively affecting your KPIs.-- [Maintain separate monitoring instances](./app/separate-resources.md) between your different deployment environments, such as Dev, Test, Canary, and Prod. Separate monitoring instances ensure that collected data is relevant across the associated applications and infrastructure. If you need to correlate data across environments, you can use [multi-resource charts in metrics explorer](./essentials/metrics-charts.md) or create [cross-resource queries in Azure Monitor](logs/cross-workspace-query.md).-
-## Create actionable alerts with actions
-
-A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues. For example, you can:
--- Create [alerts in Azure Monitor](./alerts/alerts-overview.md) based on logs and metrics to identify predictable failure states. You should have a goal of making all alerts actionable, which means that they represent actual critical conditions and seek to reduce false positives. Use [dynamic thresholds](alerts/alerts-dynamic-thresholds.md) to automatically calculate baselines on metric data rather than defining your own static thresholds.-- Define actions for alerts to use the most effective means of notifying your administrators. Available [actions for notification](alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) are SMS, emails, push notifications, or voice calls.-- Use more advanced actions to [connect to your ITSM tool](alerts/itsmc-overview.md) or other alert management systems through [webhooks](alerts/activity-log-alerts-webhook.md).-- Remediate situations identified in alerts as well with [Azure Automation runbooks](../automation/automation-webhooks.md) or [Azure Logic Apps](/connectors/custom-connectors/create-webhook-trigger) that can be launched from an alert by using webhooks.-- Use [autoscaling](./autoscale/tutorial-autoscale-performance-schedule.md) to dynamically increase and decrease your compute resources based on collected metrics.-
-## Prepare dashboards and workbooks
-
-Ensuring that your development and operations have access to the same telemetry and tools allows them to view patterns across your entire environment and minimize your mean time to detect and mean time to restore. For example, you can:
--- Prepare [custom dashboards](./app/tutorial-app-dashboards.md) based on common metrics and logs for the different roles in your organization. Dashboards can combine data from all Azure resources.-- Prepare [workbooks](./visualize/workbooks-overview.md) to ensure knowledge sharing between development and operations. Workbooks could be prepared as dynamic reports with metric charts and log queries. They can also be troubleshooting guides prepared by developers to help customer support or operations handle basic problems.-
-## Continuously optimize
-
- Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which recommends continuously tracking your KPIs and user behavior metrics and then striving to optimize them through planning iterations. Azure Monitor helps you collect metrics and logs relevant to your business and add new data points in the next deployment as required. For example, you can:
--- Use tools in Application Insights to [track user behavior and engagement](./app/tutorial-users.md).-- Use [Impact analysis](./app/usage-impact.md) to help you prioritize which areas to focus on to drive to important KPIs.-
-## Next steps
--- Learn about the difference components of [Azure Monitor](overview.md).-- [Add continuous monitoring](./app/continuous-monitoring.md) to your release pipeline.
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
[Azure Monitor VM insights](../vm/vminsights-overview.md) monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. This article explains how to migrate from [Service Map](../vm/service-map.md) to Azure Monitor VM insights, which provides a map feature similar to Service Map, along with other benefits. > [!NOTE]
-> Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring the communication between services.
+> Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring processes and dependencies for your virtual machines.
The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-vm-insights-map-feature-different-from-service-map-).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/09/2022 Last updated : 09/15/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* France Central * Germany West Central * Japan East
+* Japan West
* North Central US * North Europe * Norway East
azure-resource-manager Create Troubleshooting Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/create-troubleshooting-template.md
Title: Create a troubleshooting template
description: Describes how to create a template to troubleshoot Azure resource deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 11/02/2021 Last updated : 09/14/2022 # Create a troubleshooting template
The following ARM template and Bicep file get information from an existing stora
"resources": [], "outputs": { "exampleOutput": {
- "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageName')), '2021-04-01')]",
+ "value": "[reference(resourceId(parameters('storageResourceGroup'), 'Microsoft.Storage/storageAccounts', parameters('storageName')), '2022-05-01')]",
"type": "object" } }
In Bicep, use the `existing` keyword and run the deployment from the resource gr
```bicep param storageName string
-resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2022-05-01' existing = {
name: storageName }
azure-resource-manager Enable Debug Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/enable-debug-logging.md
Title: Enable debug logging
description: Describes how to enable debug logging to troubleshoot Azure resources deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 06/20/2022 Last updated : 09/14/2022
The `DeploymentDebugLogLevel` parameter is available for other deployment scopes
# [Azure CLI](#tab/azure-cli)
-You can't enable debug logging with Azure CLI but you can get debug logging data using the `request` and `response` properties.
+You can't enable debug logging with Azure CLI but you can get the debug log's data using the `request` and `response` properties.
azure-resource-manager Find Error Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md
Title: Find error codes
description: Describes how to find error codes to troubleshoot Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 05/16/2022 Last updated : 09/14/2022
When an Azure resource deployment fails using Azure Resource Manager templates (
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history. A Bicep file with syntax errors doesn't compile into JSON and isn't shown in the activity log.
To identify syntax errors, you can use [Visual Studio Code](https://code.visuals
## Validation errors
-Templates are validated during the deployment process and error codes are displayed. Before you run a deployment, you can run validation tests with Azure PowerShell or Azure CLI to identify validation and preflight errors.
+Templates are validated during the deployment process and error codes are displayed. Before you run a deployment, you can identify validation and preflight errors by running validation tests with Azure PowerShell or Azure CLI.
# [Portal](#tab/azure-portal)
bicep build main.bicep
unexpected new line character. ```
-There are more PowerShell cmdlets available to validate deployment templates:
+### Other scopes
-- [Test-AzDeployment](/powershell/module/az.resources/test-azdeployment) for subscription level deployments.-- [Test-AzManagementGroupDeployment](/powershell/module/az.resources/test-azmanagementgroupdeployment)-- [Test-AzTenantDeployment](/powershell/module/az.resources/test-aztenantdeployment)
+There are Azure PowerShell cmdlets to validate deployment templates for the subscription, management group, and tenant scopes.
+
+| Scope | Cmdlets |
+| - | - |
+| Subscription | [Test-AzDeployment](/powershell/module/az.resources/test-azdeployment) |
+| Management group | [Test-AzManagementGroupDeployment](/powershell/module/az.resources/test-azmanagementgroupdeployment) |
+| Tenant | [Test-AzTenantDeployment](/powershell/module/az.resources/test-aztenantdeployment) |
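+
+For example, a subscription-scope validation might look like the following sketch (the location and template file name are placeholders):
+
+```azurepowershell
+Test-AzDeployment `
+  -Location "eastus" `
+  -TemplateFile "azuredeploy.json"
+```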
# [Azure CLI](#tab/azure-cli)
az deployment group validate \
unexpected new line character. ```
-There are more Azure CLI commands available to validate deployment templates:
+### Other scopes
+
+There are Azure CLI commands to validate deployment templates for the subscription, management group, and tenant scopes.
+
+| Scope | Commands |
+| - | - |
+| Subscription | [az deployment sub validate](/cli/azure/deployment/sub#az-deployment-sub-validate) |
+| Management group | [az deployment mg validate](/cli/azure/deployment/mg#az-deployment-mg-validate) |
+| Tenant | [az deployment tenant validate](/cli/azure/deployment/tenant#az-deployment-tenant-validate) |
-- [az deployment sub validate](/cli/azure/deployment/sub#az-deployment-sub-validate)-- [az deployment mg validate](/cli/azure/deployment/mg#az-deployment-mg-validate)-- [az deployment tenant validate](/cli/azure/deployment/tenant#az-deployment-tenant-validate)
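+
+For example, a subscription-scope validation with Azure CLI might look like the following sketch (the location and template file name are placeholders):
+
+```azurecli
+az deployment sub validate \
+  --location eastus \
+  --template-file azuredeploy.json
+```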
Get-AzResourceGroupDeployment `
-ResourceGroupName examplegroup ```
+### Other scopes
+
+There are Azure PowerShell cmdlets to get deployment information for the subscription, management group, and tenant scopes.
+
+| Scope | Cmdlets |
+| - | - |
+| Subscription | [Get-AzDeploymentOperation](/powershell/module/az.resources/get-azdeploymentoperation) <br> [Get-AzDeployment](/powershell/module/az.resources/get-azdeployment) |
+| Management group | [Get-AzManagementGroupDeploymentOperation](/powershell/module/az.resources/get-azmanagementgroupdeploymentoperation) <br> [Get-AzManagementGroupDeployment](/powershell/module/az.resources/get-azmanagementgroupdeployment) |
+| Tenant | [Get-AzTenantDeploymentOperation](/powershell/module/az.resources/get-aztenantdeploymentoperation) <br> [Get-AzTenantDeployment](/powershell/module/az.resources/get-aztenantdeployment) |
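+
+For example, listing the operations of a subscription-scope deployment might look like the following sketch (the deployment name is a placeholder):
+
+```azurepowershell
+Get-AzDeploymentOperation -DeploymentName "exampledeployment"
+```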
++ # [Azure CLI](#tab/azure-cli) To see a deployment's operations messages with Azure CLI, use [az deployment operation group list](/cli/azure/deployment/operation/group#az-deployment-operation-group-list).
az deployment group show \
--name exampledeployment ```
+### Other scopes
+
+There are Azure CLI commands to get deployment information for the subscription, management group, and tenant scopes.
+
+| Scope | Commands |
+| - | - |
+| Subscription | [az deployment operation sub list](/cli/azure/deployment/operation/sub#az-deployment-operation-sub-list) <br> [az deployment sub show](/cli/azure/deployment/sub#az-deployment-sub-show) |
+| Management group | [az deployment operation mg list](/cli/azure/deployment/operation/mg#az-deployment-operation-mg-list) <br> [az deployment mg show](/cli/azure/deployment/mg#az-deployment-mg-show) |
+| Tenant | [az deployment operation tenant list](/cli/azure/deployment/operation/tenant#az-deployment-operation-tenant-list) <br> [az deployment tenant show](/cli/azure/deployment/tenant#az-deployment-tenant-show) |
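+
+For example, listing the operations of a subscription-scope deployment with Azure CLI might look like the following sketch (the deployment name is a placeholder):
+
+```azurecli
+az deployment operation sub list --name exampledeployment
+```
+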
+ ## Next steps
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/overview.md
Title: Overview of ARM template and Bicep file troubleshooting
-description: Describes troubleshooting for Azure resource deployment with Azure Resource Manager templates (ARM templates) and Bicep files.
+ Title: Overview of deployment troubleshooting for Bicep files and ARM templates
+description: Describes deployment troubleshooting when you use Bicep files or Azure Resource Manager templates (ARM templates) to deploy Azure resources.
Previously updated : 10/26/2021 Last updated : 09/14/2022 # What is deployment troubleshooting?
-When you deploy Bicep files or Azure Resource Manager templates (ARM templates), you may get an error. This documentation helps you find possible solutions for the error.
+When you deploy Azure resources with Bicep files or Azure Resource Manager templates (ARM templates), you may get an error. There are troubleshooting tools available to help you resolve syntax errors before deployment. You can get more information about error codes and deployment errors from the Azure portal, Azure PowerShell, and Azure CLI. This documentation helps you find solutions to troubleshoot errors.
## Error types
-There are two types of errors you can get - **validation errors** and **deployment errors**.
+Validation errors occur before a deployment begins and are caused by incorrect syntax that can be identified by a code editor like Visual Studio Code. For example, a misspelled property name or a function that's missing an argument.
-Validation errors happen before the deployment is started. These errors can be determined without interacting with your current Azure environment. For example, validation makes you aware of syntax errors or missing arguments for a function before your deployment starts.
+Preflight validation errors occur when a deployment command is run but resources aren't deployed in Azure. For example, if an incorrect parameter value is used, the deployment command returns an error message.
Deployment errors can only be determined by attempting the deployment and interacting with your Azure environment. For example, a virtual machine (VM) requires a network interface card (NIC). If the NIC doesn't exist when the VM is deployed, you get a deployment error. ## Troubleshooting tools
-To help identify syntax errors before a deployment, use the latest version of [Visual Studio Code](https://code.visualstudio.com). Install the latest version of either:
+There are several troubleshooting tools available to resolve errors.
-* [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep)
-* [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
+### Syntax errors
-To troubleshoot deployments, it's helpful to learn about a resource provider's properties or API versions. For more information, see [Define resources with Bicep and ARM templates](/azure/templates).
+To help identify syntax errors before a deployment, use the latest version of [Visual Studio Code](https://code.visualstudio.com). Install the latest version of the extension for Bicep or ARM templates.
+
+- [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep)
+- [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
+
+To follow best practices for developing your deployment templates, use the following tools (a sample linter run follows this list):
-To follow best practices for developing your templates, use either:
+- [Bicep linter](../bicep/linter.md)
+- [ARM template test toolkit](../templates/test-toolkit.md)
+
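+For example, the Bicep linter runs when you build a Bicep file with the Azure CLI. The following sketch assumes a local file named `main.bicep`; linter warnings appear in the command output.
+
+```
+az bicep build --file main.bicep
+```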
+### Resource provider and API version
+
+To troubleshoot deployments, it's helpful to learn about a resource provider's properties or API versions. For more information, see [Define resources with Bicep and ARM templates](/azure/templates).
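+For example, the following hedged sketch lists the API versions that the `Microsoft.Storage` resource provider supports for storage accounts; the namespace and resource type are illustrative values.
+
+```
+az provider show \
+  --namespace Microsoft.Storage \
+  --query "resourceTypes[?resourceType=='storageAccounts'].apiVersions | [0]"
+```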
-* [Bicep linter](../bicep/linter.md)
-* [ARM template test toolkit](../templates/test-toolkit.md)
+### Error details
When you deploy, you can find the cause of errors from the Azure portal in a resource group's **Deployments** or **Activity log**. If you're using Azure PowerShell, use commands like [Get-AzResourceGroupDeploymentOperation](/powershell/module/az.resources/get-azresourcegroupdeploymentoperation) and [Get-AzActivityLog](/powershell/module/az.monitor/get-azactivitylog). For Azure CLI, use commands like [az deployment operation group](/cli/azure/deployment/operation/group) and [az monitor activity-log list](/cli/azure/monitor/activity-log#az-monitor-activity-log-list). ## Next steps
+- To learn more about how to find deployment error codes and troubleshoot deployment problems, see [Find error codes](find-error-code.md).
- For solutions based on the error code, see [Troubleshoot common Azure deployment errors](common-deployment-errors.md).-- For an introduction to finding the error code, see [Quickstart: Troubleshoot ARM template deployments](quickstart-troubleshoot-arm-deployment.md) or [Quickstart: Troubleshoot Bicep file deployments](quickstart-troubleshoot-bicep-deployment.md).
+- For an introduction to finding the error code, see [Quickstart: Troubleshoot ARM template JSON deployments](quickstart-troubleshoot-arm-deployment.md) or [Quickstart: Troubleshoot Bicep file deployments](quickstart-troubleshoot-bicep-deployment.md).
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
Title: Troubleshoot ARM template JSON deployments description: Learn how to troubleshoot Azure Resource Manager template (ARM template) JSON deployments. Previously updated : 12/08/2021 Last updated : 09/14/2022
This quickstart describes how to troubleshoot Azure Resource Manager template (A
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history.
The template fails preflight validation and the deployment isn't run. The `prefi
Storage names must be between 3 and 24 characters and use only lowercase letters and numbers. The prefix value created an invalid storage name. For more information, see [Resolve errors for storage account names](error-storage-account-name.md). To fix the preflight error, use a prefix that's 11 characters or less and contains only lowercase letters or numbers.
-Because the deployment didn't run there's no deployment history.
+Because the deployment didn't run, there's no deployment history.
:::image type="content" source="media/quickstart-troubleshoot-arm-deployment/preflight-no-deploy.png" alt-text="Screenshot of resource group overview that shows no deployment for preflight error.":::
The deployment begins and is visible in the deployment history. The deployment f
:::image type="content" source="media/quickstart-troubleshoot-arm-deployment/deployment-failed.png" alt-text="Screenshot of resource group overview that shows a failed deployment.":::
-To fix the deployment error you would change the reference function to use a valid resource. For more information, see [Resolve resource not found errors](error-not-found.md). For this quickstart, delete the comma that precedes `vnetResult` and all of `vnetResult`. Save the file and rerun the deployment.
+To fix the deployment error, change the reference function to use a valid resource. For more information, see [Resolve resource not found errors](error-not-found.md). For this quickstart, delete the comma that precedes `vnetResult` and all of `vnetResult`. Save the file and rerun the deployment.
```json "vnetResult": {
azure-resource-manager Quickstart Troubleshoot Bicep Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-bicep-deployment.md
Title: Troubleshoot Bicep file deployments description: Learn how to monitor and troubleshoot Bicep file deployments. Shows activity logs and deployment history. Previously updated : 11/04/2021 Last updated : 09/14/2022
This quickstart describes how to troubleshoot Bicep file deployment errors. You'
There are three types of errors that are related to a deployment: -- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. Your editor can identify these errors.
+- **Validation errors** occur before a deployment begins and are caused by syntax errors in your file. A code editor like Visual Studio Code can identify these errors.
- **Preflight validation errors** occur when a deployment command is run but resources aren't deployed. These errors are found without starting the deployment. For example, if a parameter value is incorrect, the error is found in preflight validation.-- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress.
+- **Deployment errors** occur during the deployment process and can only be found by assessing the deployment's progress in your Azure environment.
All types of errors return an error code that you use to troubleshoot the deployment. Validation and preflight errors are shown in the activity log but don't appear in your deployment history. A Bicep file with syntax errors doesn't compile into JSON and isn't shown in the activity log.
When you hover over `parameter`, you see an error message.
:::image type="content" source="media/quickstart-troubleshoot-bicep-deployment/declaration-not-recognized.png" alt-text="Screenshot of error message in Visual Studio Code.":::
-The message states: "This declaration type is not recognized. Specify a parameter, variable, resource, or output declaration." If you attempt to deploy this file, you'll get the same error message from the deployment command.
+The message states: _This declaration type is not recognized. Specify a parameter, variable, resource, or output declaration._ If you attempt to deploy this file, you'll get the same error message from the deployment command.
If you look at the documentation for a [parameter declaration](../bicep/parameters.md), you'll see the keyword is actually `param`. When you change that syntax, the validation error disappears. The `@allowed` decorator was also marked as an error, but that error is also resolved by changing the parameter declaration. The decorator was marked as an error because it expects a parameter declaration after the decorator. This condition wasn't true when the declaration was incorrect.
azure-signalr Concept Service Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-service-mode.md
Title: Service mode in Azure SignalR Service
-description: An overview of different service modes in Azure SignalR Service, explain their differences and applicable user scenarios
+description: An overview of service modes in Azure SignalR Service.
Previously updated : 08/19/2020 Last updated : 09/01/2022 # Service mode in Azure SignalR Service
-Service mode is an important concept in Azure SignalR Service. When you create a new SignalR resource, you will be asked to specify a service mode:
+Service mode is an important concept in Azure SignalR Service. SignalR Service currently supports three service modes: *Default*, *Serverless*, and *Classic*. Your SignalR Service resource will behave differently in each mode. In this article, you'll learn how to choose the right service mode based on your scenario.
+## Setting the service mode
-You can also change it later in the settings menu:
+You'll be asked to specify a service mode when you create a new SignalR resource in the Azure portal.
++
+You can also change the service mode later in the settings menu.
:::image type="content" source="media/concept-service-mode/update.png" alt-text="Update service mode":::
-Azure SignalR Service currently supports three service modes: **default**, **serverless** and **classic**. Your SignalR resource will behave differently in different modes. In this article, you'll learn their differences and how to choose the right service mode based on your scenario.
+Use `az signalr create` and `az signalr update` to set or change the service mode by using the [Azure SignalR CLI](/cli/azure/service-page/azure%20signalr).
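+For example, the following hedged sketch creates a SignalR resource in Serverless mode and later switches it to Default mode; the resource name, resource group, and SKU are placeholder values.
+
+```
+az signalr create \
+  --name contoso-signalr \
+  --resource-group myResourceGroup \
+  --sku Standard_S1 \
+  --service-mode Serverless
+
+az signalr update \
+  --name contoso-signalr \
+  --resource-group myResourceGroup \
+  --service-mode Default
+```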
## Default mode
-Default mode is the default value for service mode when you create a new SignalR resource. In this mode, your application works as a typical ASP.NET Core (or ASP.NET) SignalR application, where you have a web server that hosts a hub (called hub server hereinafter) and clients can have duplex real-time communication with the hub server. The only difference is instead of connecting client and server directly, client and server both connect to SignalR service and use the service as a proxy. Below is a diagram that illustrates the typical application structure in default mode:
+As the name implies, *Default* mode is the default service mode for SignalR Service. In Default mode, your application works as a typical [ASP.NET Core SignalR](/aspnet/core/signalr/introduction) or ASP.NET SignalR (deprecated) application. You have a web server application that hosts a hub, called a *hub server*, and clients have full duplex communication with the hub server. The difference between ASP.NET Core SignalR and Azure SignalR Service is instead of connecting client and hub server directly, client and server both connect to SignalR Service and use the service as a proxy. The following diagram shows the typical application structure in Default mode.
-So if you have a SignalR application and want to integrate with SignalR service, default mode should be the right choice for most cases.
+Default mode is usually the right choice when you have a SignalR application that you want to use with SignalR Service.
-### Connection routing in default mode
+### Connection routing in Default mode
-In default mode, there will be websocket connections between hub server and SignalR service (called server connections). These connections are used to transfer messages between server and client. When a new client is connected, SignalR service will route the client to one hub server (assume you have more than one server) through existing server connections. Then the client connection will stick to the same hub server during its lifetime. When client sends messages, they always go to the same hub server. With this behavior, you can safely maintain some states for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case that data packets go to different servers.
+In Default mode, there are WebSocket connections between the hub server and SignalR Service called *server connections*. These connections are used to transfer messages between a server and client. When a new client is connected, SignalR Service will route the client to one hub server (assuming you have more than one server) through existing server connections. The client connection will stick to the same hub server during its lifetime. This property is referred to as *connection stickiness*. When the client sends messages, they always go to the same hub server. With stickiness behavior, you can safely maintain some state for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case where data packets go to different servers.
> [!IMPORTANT]
-> This also means in default mode a client cannot connect without server being connected first. If all your hub servers are disconnected due to network interruption or server reboot, your client connections will get an error telling you no server is connected. So it's your responsibility to make sure at any time there is at least one hub server connected to SignalR service (for example, have multiple hub servers and make sure they won't go offline at the same time for things like maintenance).
+> In Default mode a client cannot connect without a hub server being connected to the service first. If all your hub servers are disconnected due to network interruption or server reboot, your client connections will get an error telling you no server is connected. It's your responsibility to make sure there is always at least one hub server connected to SignalR service. For example, you can design your application with multiple hub servers, and then make sure they won't all go offline at the same time.
-This routing model also means when a hub server goes offline, the connections routed that server will be dropped. So you should expect connection drop when your hub server is offline for maintenance and handle reconnect properly so that it won't have negative impact to your application.
+The default routing model also means when a hub server goes offline, the connections routed to that server will be dropped. You should expect connections to drop when your hub server is offline for maintenance, and handle reconnection to minimize the effects on your application.
+
+> [!NOTE]
+> In Default mode you can also use REST API, management SDK, and function binding to directly send messages to a client if you don't want to go through a hub server. In Default mode client connections are still handled by hub servers and upstream endpoints won't work in that mode.
## Serverless mode
-In Serverless mode, you don't have a hub server. Unlike default mode, the client doesn't require a hub server to be running. All connections are connected in a "serverless" mode and the Azure SignalR service is responsible for maintaining client connections like handling client pings (in default mode this is handled by hub servers).
+Unlike Default mode, Serverless mode doesn't require a hub server to be running, which is why this mode is named "serverless." SignalR Service is responsible for maintaining client connections. There's no guarantee of connection stickiness and HTTP requests may be less efficient than WebSockets connections.
+
+Serverless mode works with Azure Functions to provide real time messaging capability. Clients work with [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md), called *function binding*, to send messages as an output binding.
-Also there is no server connection in this mode (if you try to use service SDK to establish server connection, you will get an error). Therefore there is also no connection routing and server-client stickiness (as described in the default mode section). But you can still have server-side application to push messages to clients. This can be done in two ways, use [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) for one-time send, or through a websocket connection so that you can send multiple messages more efficiently (note this websocket connection is different than server connection).
+Because there's no server connection, if you try to use a server SDK to establish a server connection you'll get an error. SignalR Service will reject server connection attempts in Serverless mode.
+
+Serverless mode doesn't have connection stickiness, but you can still have a server-side application push messages to clients. There are two ways to push messages to clients in Serverless mode:
+
+- Use [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) for a one-time send event, or
+- Use a WebSocket connection so that you can send multiple messages more efficiently. This WebSocket connection is different than a server connection.
> [!NOTE]
-> Both REST API and websocket way are supported in SignalR service [management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). If you're using a language other than .NET, you can also manually invoke the REST APIs following this [spec](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md).
->
-> If you're using Azure Functions, you can use [SignalR service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md) (hereinafter called function binding) to send messages as an output binding.
+> Both REST API and WebSockets are supported in SignalR service [management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). If you're using a language other than .NET, you can also manually invoke the REST APIs following this [specification](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md).
-It's also possible for your server application to receive messages and connection events from clients. Service will deliver messages and connection events to preconfigured endpoints (called Upstream) using webhooks. Comparing to default mode, there is no guarantee of stickiness and HTTP requests may be less efficient than websocket connections.
+It's also possible for your server application to receive messages and connection events from clients. SignalR Service will deliver messages and connection events to pre-configured endpoints (called *upstream endpoints*) using web hooks. Upstream endpoints can only be configured in Serverless mode. For more information, see [Upstream settings](concept-upstream.md).
-For more information about how to configure upstream, see this [doc](./concept-upstream.md).
-Below is a diagram that illustrates how serverless mode works:
+The following diagram shows how Serverless mode works.
-> [!NOTE]
-> Please note in default mode you can also use REST API/management SDK/function binding to directly send messages to client if you don't want to go through hub server. But in default mode client connections are still handled by hub servers and upstream won't work in that mode.
## Classic mode
-Classic is a mixed mode of default and serverless mode. In this mode, connection mode is decided by whether there is hub server connected when client connection is established. If there is hub server, client connection will be routed to a hub server. Otherwise it will enter a serverless mode where client to server message cannot be delivered to hub server. This will cause some discrepancies, for example if all hub servers are unavailable for a short time, all client connections created during that time will be in serverless mode and cannot send messages to hub server.
- > [!NOTE]
-> Classic mode is mainly for backward compatibility for those applications created before there is default and serverless mode. It's strongly recommended to not use this mode anymore. For new applications, please choose default or serverless based on your scenario. For existing applications, it's also recommended to review your use cases and choose a proper service mode.
+> Classic mode is mainly for backward compatibility for applications created before the Default and Serverless modes were introduced. Don't use Classic mode except as a last resort. Use Default or Serverless for new applications, based on your scenario. You should consider redesigning existing applications to eliminate the need for Classic mode.
+
+Classic is a mixed mode of Default and Serverless modes. In Classic mode, connection type is decided by whether there's a hub server connected when the client connection is established. If there's a hub server, the client connection will be routed to a hub server. If a hub server isn't available, the client connection will be made in a limited serverless mode where client-to-server messages can't be delivered to a hub server. Classic mode serverless connections don't support some features such as upstream endpoints.
-Classic mode also doesn't support some new features like upstream in serverless mode.
+If all your hub servers are offline for any reason, connections will be made in Serverless mode. It's your responsibility to ensure that at least one hub server is always available.
## Choose the right service mode
-Now you should understand the differences between service modes and know how to choose between them. As you already learned in the previous section, classic mode is not encouraged and you should only choose between default and serverless. Here are some more tips that can help you make the right choice for new applications and retire classic mode for existing applications.
+Now you should understand the differences between service modes and know how to choose between them. As previously discussed, Classic mode isn't recommended for new or existing applications. Here are some more tips that can help you make the right choice for service mode and help you retire Classic mode for existing applications.
-* If you're already familiar with how SignalR library works and want to move from a self-hosted SignalR to use Azure SignalR Service, choose default mode. Default mode works exactly the same way as self-hosted SignalR (and you can use the same programming model in SignalR library), SignalR service just acts as a proxy between clients and hub servers.
+- Choose Default mode if you're already familiar with how SignalR library works and want to move from a self-hosted SignalR to use Azure SignalR Service. Default mode works exactly the same way as self-hosted SignalR, and you can use the same programming model in SignalR library. SignalR Service acts as a proxy between clients and hub servers.
-* If you're creating a new application and don't want to maintain hub server and server connections, choose serverless mode. This mode usually works together with Azure Functions so you don't need to maintain any server at all. You can still have duplex communications (with REST API/management SDK/function binding + upstream) but the programming model will be different than SignalR library.
+- Choose Serverless mode if you're creating a new application and don't want to maintain hub server and server connections. Serverless mode works together with Azure Functions so that you don't need to maintain any server at all. You can still have full duplex communications with REST API, management SDK, or function binding + upstream endpoint, but the programming model will be different than SignalR library.
-* If you have both hub servers to serve client connections and backend application to directly push messages to clients (for example through REST API), you should still choose default mode. Keep in mind that the key difference between default and serverless mode is whether you have hub servers and how client connections are routed. REST API/management SDK/function binding can be used in both modes.
+- Choose Default mode if you have *both* hub servers to serve client connections and a backend application to directly push messages to clients. The key difference between Default and Serverless mode is whether you have hub servers and how client connections are routed. REST API/management SDK/function binding can be used in both modes.
-* If you really have a mixed scenario, for example, you have two different hubs on the same SignalR resource, one used as a traditional SignalR hub and the other one used with Azure Functions and doesn't have hub server, you should really consider to separate them into two SignalR resources, one in default mode and one in serverless mode.
+- If you really have a mixed scenario, you should consider separating use cases into multiple SignalR Service instances with service mode set according to use. An example of a mixed scenario that requires Classic mode is where you have two different hubs on the same SignalR resource. One hub is used as a traditional SignalR hub and the other hub is used with Azure Functions. This example should be split into two resources, with one instance in Default mode and one in Serverless mode.
## Next steps
-To learn more about how to use default and serverless mode, read the following articles:
+See the following articles to learn more about how to use Default and Serverless modes.
-* [Azure SignalR Service internals](signalr-concept-internals.md)
+- [Azure SignalR Service internals](signalr-concept-internals.md)
-* [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
+- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
For more information on this vCenter version, see [VMware vCenter Server 6.7 Upd
>This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses. ## Post update
-Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
+Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is currently enabled only in [standard policy](backup-during-vm-creation.md#create-a-vm-with-backup-configured) from Vault tier. It's also supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
>[!Tip]
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
+
+ Title: Batch Certificate Migration Guide
+description: Describes the migration steps for the batch certificates and the end of support details.
++++ Last updated : 08/15/2022+
+# Batch Certificate Migration Guide
+
+Securing applications and critical information is essential. With a growing customer base and increasing demand for security, managing key information plays a significant role in protecting data. Many customers need to store secure data for an application, and that data must be managed to avoid leakage; only legitimate administrators or authorized users should be able to access it. Azure Batch offers certificates that are created and managed by the Batch service. Azure Batch also provides a Key Vault option, which is the standard Azure method for delivering more controlled, secure access management.
+
+Azure Batch provides the certificates feature at the account level. Customers must generate a certificate and upload it manually to Azure Batch through the portal. To be accessed, the certificate must be associated with a pool and installed for the 'Current User.' A certificate is usually valid for one year, and a similar procedure must be followed every year.
+
+For Azure Batch customers, secure access should be provided in a more standardized way that reduces manual intervention and limits exposure of the generated keys. Therefore, we'll retire the certificates feature on **29 February 2024** to reduce the maintenance effort and to guide customers toward Azure Key Vault as a standard, more modern method with advanced security. After the feature is retired, the certificate functionality may stop working properly. Additionally, requests to create pools with certificates will be rejected, and possibly resize-up requests as well.
+
+## Retirement alternatives
+
+Azure Key Vault is the Azure service for storing and managing secrets, certificates, tokens, keys, and other configuration values that authenticated applications and services use. The original idea was to remove secrets and keys that were hard-coded in application code.
+
+Azure Key Vault provides security at the transport layer by ensuring that any data flow from the key vault to the client application is encrypted. Azure Key Vault stores secrets and keys with encryption strong enough that even Microsoft itself can't see the keys or secrets in any way.
+
+Azure Key Vault provides a secure way to store information and define fine-grained access control. All secrets can be managed from one dashboard. Azure Key Vault can store keys that are software-protected or protected by hardware security modules (HSMs). In addition, it has a mechanism to auto-renew Key Vault certificates.
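+As a hedged illustration, the following sketch creates a certificate with the default policy, which includes an automatic renewal action; the vault and certificate names are placeholder values.
+
+```
+az keyvault certificate create \
+  --vault-name "azuresecureKeyVault" \
+  --name "batchcert" \
+  --policy "$(az keyvault certificate get-default-policy)"
+```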
+
+## Migration steps
+
+Azure Key Vault can be created in three ways:
+
+1. Using Azure portal
+
+2. Using PowerShell
+
+3. Using CLI
+
+**Create an Azure Key Vault by using the Azure portal:**
+
+__Prerequisite__: Valid Azure subscription and owner/contributor access on Key Vault service.
+
+ 1. Log in to the Azure portal.
+
+ 2. In the top-level search box, look for **Key Vaults**.
+
+ 3. In the Key Vault dashboard, select **Create** and provide the details: subscription, resource group, key vault name, pricing tier (standard or premium), and region. After you provide these details, select **Review + create**. This creates the Key Vault account.
+
+ 4. Key Vault names need to be unique across the globe. Once any user has taken a name, it won't be available for other users.
+
+ 5. Now go to the newly created Azure Key Vault. There you can see the vault name and the vault URI used to access the vault.
+
+**Create an Azure Key Vault by using Azure PowerShell:**
+
+ 1. Sign in to Azure by using the following PowerShell command: `Login-AzAccount`
+
+ 2. Create an 'azuresecure' resource group in the 'eastus' location. You can change the name and location as needed.
+```
+ New-AzResourceGroup -Name "azuresecure" -Location "EastUS"
+```
+ 3. Create the Azure Key Vault using the cmdlet. You need to provide the key vault name, resource group, and location.
+```
+ New-AzKeyVault -Name "azuresecureKeyVault" -ResourceGroupName "azuresecure" -Location "East US"
+```
+
+ 4. You've now created the Azure Key Vault by using the PowerShell cmdlet.
+
+**Create an Azure Key Vault by using the Azure CLI:**
+
+ 1. Create an 'azuresecure' resource group in the 'eastus' location. You can change the name and location as needed. Use the following bash command.
+```
+ az group create --name "azuresecure" -l "eastus"
+```
+
+ 2. Create the Azure Key Vault using the bash command. You need to provide the key vault name, resource group, and location.
+```
+ az keyvault create --name "azuresecureKeyVault" --resource-group "azuresecure" --location "eastus"
+```
+ 3. You've now created the Azure Key Vault by using the Azure CLI.
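+If you move to a Batch account in user subscription pool allocation mode (see the FAQ below), the key vault is referenced when the Batch account is created. The following hedged sketch reuses the names from the steps above; the Batch account name is hypothetical, the `--keyvault` value may need to be the full key vault resource ID in some environments, and the key vault typically also needs an access policy that grants the Batch service access.
+
+```
+az batch account create \
+  --name "azuresecurebatch" \
+  --resource-group "azuresecure" \
+  --location "eastus" \
+  --keyvault "azuresecureKeyVault"
+```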
+
+## FAQ
+
+ 1. Are certificates or Azure Key Vault recommended?
+ Azure Key Vault is recommended and essential for protecting data in the cloud.
+
+ 2. Does user subscription mode support Azure Key Vault?
+ Yes. It's mandatory to create a key vault when creating a Batch account in user subscription mode.
+
+ 3. Are there best practices to use Azure Key Vault?
+ Best practices are covered [here](../key-vault/general/best-practices.md).
+
+## Next steps
+
+For more information, see [Certificate Access Control](../key-vault/certificates/certificate-access-control.md).
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
+
+ Title: Batch Pools without Public IP Addresses Classic Retirement Migration Guide
+description: Describes the migration steps for the batch pool without public ip addresses and the end of support details.
++++ Last updated : 09/01/2022+
+# Batch Pools without Public IP Addresses Classic Retirement Migration Guide
+
+By default, all the compute nodes in an Azure Batch virtual machine (VM) configuration pool are assigned a public IP address. This address is used by the Batch service to schedule tasks and for communication with compute nodes, including outbound access to the internet. To restrict access to these nodes and reduce the discoverability of these nodes from the internet, we released [Batch pools without public IP addresses (classic)](./batch-pool-no-public-ip-address.md).
+
+In late 2021, we launched a simplified compute node communication model for Azure Batch. The new communication model improves security and simplifies the user experience. Batch pools no longer require inbound internet access or outbound access to Azure Storage; they require only outbound access to the Batch service. As a result, Batch pools without public IP addresses (classic), which is currently in public preview, will be retired on **31 March 2023** and will be replaced with simplified compute node communication pools without public IPs.
+
+## Retirement alternatives
+
+[Simplified Compute Node Communication Pools without Public IPs](./simplified-node-communication-pool-no-public-ip.md) requires using simplified compute node communication. It provides customers with enhanced security for their workload environments through network isolation and helps protect against data exfiltration from Azure Batch accounts. Its key benefits include:
+
+* Allow creating simplified node communication pool without public IP addresses.
+* Support Batch private pool using a new private endpoint (sub-resource nodeManagement) for Azure Batch account.
+* Simplified private link DNS zone for Batch account private endpoints: changed from **privatelink.\<region>.batch.azure.com** to **privatelink.batch.azure.com**.
+* Mutable public network access for Batch accounts.
+* Firewall support for Batch account public endpoints: configure IP address network rules to restrict public network access with Batch accounts.
+
+## Migration steps
+
+Batch pools without public IP addresses (classic) will retire on **31 March 2023** and will be updated to simplified compute node communication pools without public IPs. For existing pools that use the previous preview version of Batch pools without public IP addresses (classic), it's only possible to migrate pools created in a virtual network. To migrate the pool, follow the opt-in process for simplified compute node communication; a hedged Azure CLI sketch of steps 2 through 4 follows the list:
+
+1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication).
+
+ ![Support Request](../batch/media/certificates/opt-in.png)
+
+2. Create a private endpoint for Batch node management in the virtual network.
+
+ ![Create Endpoint](../batch/media/certificates/private-endpoint.png)
+
+3. Scale down the pool to zero nodes.
+
+ ![Scale Down](../batch/media/certificates/scale-down-pool.png)
+
+4. Scale out the pool again. The pool is then automatically migrated to the new version of the preview.
+
+ ![Scale Out](../batch/media/certificates/scale-out-pool.png)
+
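+The following hedged Azure CLI sketch shows what steps 2 through 4 can look like when scripted. All resource names, the pool ID, and the subscription ID are placeholder values, and the target node counts depend on your workload.
+
+```
+# Step 2: create a node management private endpoint for the Batch account in the virtual network.
+az network private-endpoint create \
+  --resource-group myResourceGroup \
+  --name mybatch-nodemgmt-pe \
+  --vnet-name myVNet \
+  --subnet mySubnet \
+  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Batch/batchAccounts/mybatchaccount" \
+  --group-id nodeManagement \
+  --connection-name mybatch-nodemgmt-connection
+
+# Steps 3 and 4: scale the pool down to zero nodes, then scale it out again.
+az batch account login --resource-group myResourceGroup --name mybatchaccount
+az batch pool resize --pool-id mypool --target-dedicated-nodes 0 --target-low-priority-nodes 0
+az batch pool resize --pool-id mypool --target-dedicated-nodes 2 --target-low-priority-nodes 0
+```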
+## FAQ
+
+* How can I migrate my Batch pool without public IP addresses (classic) to simplified compute node communication pools without public IPs?
+
+ You can only migrate your pool to simplified compute node communication pools if it was created in a virtual network. Otherwise, you'd need to create a new simplified compute node communication pool without public IPs.
+
+* What differences will I see in billing?
+
+ Compared with Batch pools without public IP addresses (classic), simplified compute node communication pools without public IPs will reduce costs because the Batch pool deployments won't need to create the following network resources: load balancer, network security groups, and private link service. However, there will be a [cost associated with private link](https://azure.microsoft.com/pricing/details/private-link/) or other outbound network connectivity used by pools, as controlled by the user, to allow communication with the Batch service without public IP addresses.
+
+* Will there be any performance changes?
+
+ No known performance differences compared to Batch pools without public IP addresses (classic).
+
+* How can I connect to my pool nodes for troubleshooting?
+
+ Similar to Batch pools without public IP addresses (classic). Because there's no public IP address for the Batch pool, users will need to connect to their pool nodes from within the virtual network. You can create a jump box VM in the virtual network or use other remote connectivity solutions like [Azure Bastion](../bastion/bastion-overview.md).
+
+* Will there be any change to how my workloads are downloaded from Azure Storage?
+
+ Similar to Batch pools without public IP addresses (classic), users will need to provide their own internet outbound connectivity if their workloads need access to other resources like Azure Storage.
+
+* What if I don't migrate to simplified compute node communication pools without public IPs?
+
+ After **31 March 2023**, we'll stop supporting Batch pools without public IP addresses (classic). The functionality of an existing pool in that configuration may break, for example scale-out operations may fail, or the pool may be actively scaled down to zero at any point in time after that date.
+
+## Next steps
+
+For more information, refer to [Simplified compute node communication](./simplified-compute-node-communication.md).
batch Batch Tls 101 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-tls-101-migration-guide.md
+
+ Title: Batch Tls 1.0 Migration Guide
+description: Describes the migration steps for the batch TLS 1.0 and the end of support details.
++++ Last updated : 08/16/2022+
+# Batch TLS 1.0 Migration Guide
+
+Transport Layer Security (TLS) versions 1.0 and 1.1 are known to be susceptible to attacks such as BEAST and POODLE, and to have other Common Vulnerabilities and Exposures (CVE) weaknesses. They also don't support the modern encryption methods and cipher suites recommended by Payment Card Industry (PCI) compliance standards. There's an industry-wide push toward the exclusive use of TLS version 1.2 or later.
+
+To follow security best practices and remain in compliance with industry standards, Azure Batch will retire Batch TLS 1.0/1.1 on **31 March 2023**. Most customers have already migrated to TLS 1.2. Customers who continue to use TLS 1.0/1.1 can be identified via existing BatchOperation telemetry. Customers will need to adjust their existing workflows to ensure that they're using TLS 1.2. Failure to migrate to TLS 1.2 will break existing Batch workflows.
+
+## Migration strategy
+
+Customers must update client code before the TLS 1.0/1.1 retirement.
+
+- Customers using native WinHTTP for client code can follow this [guide](https://support.microsoft.com/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392).
+
+- Customers using the .NET Framework for their client code should upgrade to .NET Framework 4.7 or later, which enforces TLS 1.2 by default.
+
+- For customers on .NET Framework who can't upgrade to 4.7 or later, follow this [guide](https://docs.microsoft.com/dotnet/framework/network-programming/tls) to enforce TLS 1.2.
+
+For TLS best practices, refer to [TLS best practices for .NET framework](https://docs.microsoft.com/dotnet/framework/network-programming/tls).
+
+## FAQ
+
+* Why must we upgrade to TLS 1.2?<br>
+ TLS 1.0/1.1 has security issues that are fixed in TLS 1.2. TLS 1.2 has been available since 2008 and is the current default version in most frameworks.
+
+* What happens if I don't upgrade?<br>
+ After the feature retirement, your client application won't work until you upgrade.<br>
+
+* Will upgrading to TLS 1.2 affect performance?<br>
+ Upgrading to TLS 1.2 won't affect performance.<br>
+
+* How do I know if I'm using TLS 1.0/1.1?<br>
+ You can check the Audit Log to determine the TLS version you're using.
+
+## Next steps
+
+For more information, see [How to enable TLS 1.2 on clients](https://docs.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2-client).
batch Job Pool Lifetime Statistics Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/job-pool-lifetime-statistics-migration-guide.md
+
+ Title: Batch job pool lifetime statistics migration guide
+description: Describes the migration steps for the batch job pool lifetime statistics and the end of support details.
++++ Last updated : 08/15/2022+
+# Batch Job Pool Lifetime Statistics Migration Guide
+
+The Azure Batch service currently supports an API for jobs and pools to retrieve lifetime statistics. The API is used to get lifetime statistics for all the pools or jobs in the specified Batch account, or for a specified pool or job. The API collects the statistical data from when the Batch account was created until the last time it was updated, or over the entire lifetime of the specified job or pool. The Job/Pool lifetime statistics API is helpful for customers to analyze and evaluate their usage.
+
+To make the statistical data available to customers, the Batch service allocates pools and schedules jobs with an in-house MapReduce implementation to perform periodic background roll-up of statistics. The aggregation is performed for all accounts, pools, and jobs in each region, regardless of whether a customer needs or queries the stats for their account, pool, or job. The operating cost includes eleven VMs allocated in each region to execute the MapReduce aggregation jobs. For busy regions, we had to increase the pool size further to accommodate the extra aggregation load.
+
+The MapReduce aggregation logic was implemented with legacy code, and no new features are being added or improved because of technical challenges with that code. Still, the legacy code and its hosting repo need to be updated frequently to accommodate the ever-growing load in production and to meet security and compliance requirements. In addition, because the API provides lifetime statistics, the data keeps growing, which demands more storage and causes performance issues, even though most customers aren't using the API. The Batch service currently absorbs all the compute and storage usage charges associated with the MapReduce pools and jobs.
+
+The API was designed and is maintained to help customers troubleshoot. However, few customers use it in practice, and those who do are typically interested in details for no more than a month. Log, job, and pool data can now be collected on an as-needed basis by using Azure portal logs, alerts, log export, and other methods. Therefore, we're retiring the Job/Pool Lifetime Statistics API.
+
+Job/Pool Lifetime Statistics API will be retired on **30 April 2023**. Once complete, the API will no longer work and will return an appropriate HTTP response error code back to the client.
+
+## FAQ
+
+* Is there an alternate way to view logs of Pool/Jobs?
+
+ The Azure portal has various options to enable logs, namely system logs and diagnostic logs. For more information, see [Monitor Batch solutions](./monitoring-overview.md).
+
+* Can customers extract logs to their system if the API doesn't exist?
+
+ The Azure portal log feature allows customers to extract output and error logs to their workspace. For more information, see [Monitor with Application Insights](./monitor-application-insights.md).
+
+## Next steps
+
+For more information, refer to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md).
batch Low Priority Vms Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/low-priority-vms-retirement-migration-guide.md
+
+ Title: Low priority vms retirement migration guide
+description: Describes the migration steps for the low priority vms retirement and the end of support details.
++++ Last updated : 08/10/2022+
+# Low Priority VMs Retirement Migration Guide
+
+Azure Batch offers Low priority and Spot virtual machines (VMs). The virtual machines are computing instances allocated from spare capacity, offered at a highly discounted rate compared to "on-demand" VMs.
+
+Low priority VMs enable the customer to take advantage of unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. At any point in time when Azure needs the capacity back, we'll evict low-priority VMs. Therefore, the low-priority offering is excellent for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept. In addition, low-priority VMs can easily be deployed through our virtual machine scale set offering.
+
+Low priority VMs are a deprecated feature, and they will never become generally available (GA). Spot VMs are the official preemptible offering from the Compute platform, and they're generally available. Therefore, we'll retire Low priority VMs on **30 September 2025**. After that, we'll stop supporting Low priority VMs, and existing Low priority pools may no longer work or be provisioned.
+
+## Retirement alternative
+
+As of May 2020, Azure offers Spot VMs in addition to Low Priority VMs. Like Low Priority, the Spot option allows the customer to purchase spare capacity at a deeply discounted price in exchange for the possibility that the VM may be evicted. Unlike Low Priority, you can use the Azure Spot option for single VMs and scale sets. Virtual machine scale sets scale up to meet demand, and when used with Spot VMs, will only allocate when capacity is available.
+
+The Spot VMs can be evicted when Azure needs the capacity or when the price goes above your maximum price. In addition, the customer can choose to get a 30-second eviction notice and attempt to redeploy.
+
+The other key difference is that Azure Spot pricing is variable and based on the capacity for size or SKU in an Azure region. Prices change slowly to provide stabilization. The price will never go above pay-as-you-go rates.
+
+When it comes to eviction, you have two policy options to choose between, as illustrated in the hedged sketch after this list:
+
+* Stop/Deallocate (default) - when evicted, the VM is deallocated, but you keep (and pay for) underlying disks. This is ideal for cases where the state is stored on disks.
+* Delete - when evicted, the VM and underlying disks are deleted.
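+The following hedged sketch shows how these choices map to single Spot VM creation with the Azure CLI; the resource names are placeholders, the image alias is an assumption, and a `--max-price` of `-1` caps the price at the pay-as-you-go rate.
+
+```
+az vm create \
+  --resource-group myResourceGroup \
+  --name mySpotVM \
+  --image Ubuntu2204 \
+  --priority Spot \
+  --eviction-policy Deallocate \
+  --max-price -1
+```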
+
+While similar in idea, there are a few key differences between these two purchasing options:
+
+| | **Low Priority VMs** | **Spot VMs** |
+||||
+| **Availability** | **Azure Batch** | **Single VMs, Virtual machine scale sets** |
+| **Pricing** | **Fixed pricing** | **Variable pricing with ability to set maximum price** |
+| **Eviction/Preemption** | **Preempted when Azure needs the capacity. Tasks on preempted node VMs are re-queued and run again.** | **Evicted when Azure needs the capacity or if the price exceeds your maximum. If evicted for price and afterward the price goes below your maximum, the VM will not be automatically restarted.** |
+
+## Migration steps
+
+Customers in User Subscription mode can include Spot VMs by using the steps below; a hedged CLI sketch follows the list:
+
+1. In the Azure portal, select the Batch account and view the existing pool or create a new pool.
+2. Under **Scale**, users can choose 'Target dedicated nodes' or 'Target Spot/low-priority nodes.'
+
+ ![Scale Target Nodes](../batch/media/certificates/lowpriorityvms-scale-target-nodes.png)
+
+3. Navigate to the existing Pool and select 'Scale' to update the number of Spot nodes required based on the job scheduled.
+4. Click **Save**.
+
+Customers in Batch Managed mode must recreate the Batch account, pool, and jobs under User Subscription mode to take advantage of spot VMs.
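If you prefer to script the resize described in step 3, the following sketch (the Batch account name, pool ID, and node counts are placeholder assumptions) uses the Azure CLI against a User Subscription mode Batch account:

```azurecli-interactive
# Sign in to the Batch account, then resize the pool to run on Spot/low-priority nodes only.
az batch account login --name mybatchaccount --resource-group myResourceGroup
az batch pool resize \
  --pool-id mypool \
  --target-dedicated-nodes 0 \
  --target-low-priority-nodes 10
```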
+
+## FAQ
+
+* How do I create a new Batch account, job, or pool?
+
+ Refer to the quick start [link](./batch-account-create-portal.md) on creating a new Batch account/pool/task.
+
+* Are Spot VMs available in Batch Managed mode?
+
+ No, Spot VMs are available in User Subscription mode - Batch accounts only.
+
+* What is the pricing and eviction policy of Spot VMs? Can I view pricing history and eviction rates?
+
+ Refer to [Spot VMs](../virtual-machines/spot-vms.md) for more information on using Spot VMs. Yes, you can see historical pricing and eviction rates per size in a region in the portal.
+
+## Next steps
+
+Use the [CLI](../virtual-machines/linux/spot-cli.md), [portal](../virtual-machines/spot-portal.md), [ARM template](../virtual-machines/linux/spot-template.md), or [PowerShell](../virtual-machines/windows/spot-powershell.md) to deploy Azure Spot Virtual Machines.
+
+You can also deploy a [scale set with Azure Spot Virtual Machine instances](../virtual-machine-scale-sets/use-spot.md).
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
In this how-to guide, you'll learn how to upload and install all the required co
- A [network set up for your infrastructure deployment](prepare-network.md). - A deployment of S/4HANA infrastructure. - The SSH private key for the virtual machines in the SAP system. You generated this key during the infrastructure deployment.-- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (STONITH device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).
+- If you're installing a Highly Available (HA) SAP system, get the Service Principal identifier (SPN ID) and password to authorize the Azure fence agent (fencing device) against Azure resources. For more information, see [Use Azure CLI to create an Azure AD app and configure it to access Media Services API](/azure/media-services/previous/media-services-cli-create-and-configure-aad-app). For an example, see the Red Hat documentation for [Creating an Azure Active Directory Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure).
To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal.
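For example, the following is a minimal sketch (the display name and validity period are assumptions); grant the role required by the fence agent afterwards, as described in the linked documentation:

```azurecli-interactive
# Create a service principal with a two-year client secret.
# The output includes the appId (SPN ID) and password used by the fence agent.
az ad sp create-for-rbac --name "sap-fence-agent" --years 2
```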
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **SAP FQDN**, provide a fully qualified domain name (FQDN) for your SAP system. For example, `sap.contoso.com`.
- 1. For High Availability (HA) systems only, enter the client identifier for the STONITH Fencing Agent service principal for **Fencing client ID**.
+ 1. For High Availability (HA) systems only, enter the client identifier for the Fencing Agent service principal for **Fencing client ID**.
- 1. For High Availability (HA) systems only, enter the password for the STONITH Fencing Agent service principal for **Fencing client password**.
+ 1. For High Availability (HA) systems only, enter the password for the Fencing Agent service principal for **Fencing client password**.
1. For **SSH private key**, provide the SSH private key that you created or selected as part of your infrastructure deployment.
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Autoscale feature is available for the following
* [Computer Vision](computer-vision/index.yml) * [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
-* [Form Recognizer](/azure/applied-ai-services/form-recognizer/overview?tabs=v3-0)
+* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0)
### Can I test this feature using a free subscription?
No, the autoscale feature is not available to free tier subscriptions.
- [Plan and Manage costs for Azure Cognitive Services](./plan-manage-costs.md). - [Optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/certified-session-border-controllers.md
If you have any questions about the SBC certification program for Communication
|Vendor|Product|Software version| |: |: |:
-|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant SBC|7.40A
+|[AudioCodes](https://www.audiocodes.com/media/lbjfezwn/mediant-sbc-with-microsoft-azure-communication-services.pdf)|Mediant SBC VE|7.40A
|[Metaswitch](https://manuals.metaswitch.com/Perimeta/V4.9/AzureCommunicationServicesIntegrationGuide/Source/notices.html)|Perimeta SBC|4.9| |[Oracle](https://www.oracle.com/technical-resources/documentation/acme-packet.html)|Oracle Acme Packet SBC|8.4| |Ribbon Communications|[SBC SWe / SBC 5400 / SBC 7000](https://support.sonus.net/display/ALLDOC/Ribbon+Configurations+with+Azure+Communication+Services+Direct+Routing)|9.02| ||[SBC SWe Lite / SBC 1000 / SBC 2000](https://support.sonus.net/display/UXDOC90/Best+Practice+-+Configure+SBC+Edge+for+Azure+Communication+Services+Direct+Routing)|9.0
+|[TE-SYSTEMS](https://community.te-systems.de/community-download/files?fileId=9624)|anynode|4.6|
Note that the certification is granted to a major version. That means that SBC firmware with any number following the certified major version is supported.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
The following list presents the set of features that are currently available in
| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ | | | Answer a group call | ✔️ | ✔️ | | | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
-| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect (forward) a call to one or more endpoints | ✔️ | ✔️ |
| | Reject an incoming call | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
-| | Blind Transfer** a call to another endpoint | ✔️ | ✔️ |
+| | Blind Transfer* a call to another endpoint | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | | Query scenarios | Get the call state | ✔️ | ✔️ | | | Get a participant in a call | ✔️ | ✔️ | | | List all participants in a call | ✔️ | ✔️ |
-*Redirecting a call to a phone number is currently not supported.
-
-**Transfer of VoIP call to a phone number is currently not supported.
+*Transfer of VoIP call to a phone number is currently not supported.
## Architecture
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
Note that the tutorial above assumes that your Azure blob storage container allo
For downloading the files you upload to Azure blob storage, you can use shared access signatures (SAS). A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data.
-The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](/azure/storage/common/storage-sas-overview).
+The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](../../storage/common/storage-sas-overview.md).
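As a minimal sketch (the storage account, container, blob name, and account key are placeholder assumptions), you can generate a time-limited, read-only SAS URL for a single blob with the Azure CLI:

```azurecli-interactive
# Generate a read-only SAS URL for one blob, valid for 30 minutes.
expiry=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')
az storage blob generate-sas \
  --account-name mystorageaccount \
  --account-key "<storage-account-key>" \
  --container-name file-uploads \
  --name report.pdf \
  --permissions r \
  --expiry "$expiry" \
  --https-only \
  --full-uri \
  --output tsv
```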
UI Library requires a React environment to be setup. Next we will do that. If you already have a React App, you can skip this section.
You may also want to:
- [Add chat to your app](../quickstarts/chat/get-started.md) - [Creating user access tokens](../quickstarts/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)-- [Learn about authentication](../concepts/authentication.md)
+- [Learn about authentication](../concepts/authentication.md)
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
+
+ Title: Enable soft delete policy
+description: Learn how to enable a soft delete policy in your Azure Container Registry for recovering accidentally deleted artifacts for a set retention period.
+ Last updated : 04/19/2022+++
+# Enable soft delete policy in Azure Container Registry (Preview)
+
+Azure Container Registry (ACR) allows you to enable the *soft delete policy* to recover any accidentally deleted artifacts for a set retention period.
++++
+This feature is available in all the service tiers (also known as SKUs). For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md).
+
+> [!NOTE]
+>The soft-deleted artifacts are billed as per the active SKU pricing for storage.
+
+This article gives you an overview of the soft delete policy and walks you through the step-by-step process to enable it by using the Azure CLI and the Azure portal.
+
+You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+## Prerequisites
+
+* The user requires the following permissions (at the registry level) to perform soft delete operations; a sample custom role definition that grants them follows the table:
+
+ | Permission | Description |
+ |||
+ | Microsoft.ContainerRegistry/registries/deleted/read | List soft-deleted artifacts |
+ | Microsoft.ContainerRegistry/registries/deleted/restore/action | Restore soft-deleted artifact |
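For example, here's a hedged sketch of an Azure custom role that grants just these two permissions (the role name and subscription ID are placeholder assumptions):

```azurecli-interactive
az role definition create --role-definition '{
  "Name": "ACR Soft Delete Operator (example)",
  "IsCustom": true,
  "Description": "List and restore soft-deleted artifacts in a container registry.",
  "Actions": [
    "Microsoft.ContainerRegistry/registries/deleted/read",
    "Microsoft.ContainerRegistry/registries/deleted/restore/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```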
+
+## About soft delete policy
+
+The soft delete policy can be enabled/disabled at your convenience.
+
+Once you enable the soft delete policy, ACR manages deleted artifacts as soft-deleted artifacts with a set retention period. You can then list, filter, and restore the soft-deleted artifacts. Once the retention period is complete, all the soft-deleted artifacts are auto-purged.
+
+## Retention period
+
+The default retention period is seven days. You can set the retention period to any value between one and 90 days, and update or change it at any time. The soft-deleted artifacts expire once the retention period is complete.
+
+## Auto-purge
+
+The auto-purge runs every 24 hours. The auto-purge always considers the current value of `retention days` before permanently deleting the soft deleted artifacts.
+For example, after five days of soft deleting the artifact, if the user changes the value of retention days from seven to 14 days, the artifact will only expire after 14 days from the initial soft delete.
+
+## Preview limitations
+
+* ACR currently doesn't support manually purging soft deleted artifacts.
+* The soft delete policy doesn't support a geo-replicated registry.
+* ACR doesn't allow enabling both the retention policy and the soft delete policy. See [retention policy for untagged manifests.](container-registry-retention-policy.md)
+
+## Enable soft delete policy for registry - CLI
+
+1. Update the soft delete policy for a given `MyRegistry` ACR with a retention period set between 1 and 90 days.
+
+ ```azurecli-interactive
+ az acr config soft-delete update -r MyRegistry --days 7 --status <enabled/disabled>
+ ```
+
+2. Show configured soft delete policy for a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr config soft-delete show -r MyRegistry
+ ```
+
+### List the soft-delete artifacts- CLI
+
+The `az acr repository list-deleted` command enables fetching and listing of the soft-deleted repositories. For more information, use `--help`.
+
+1. List the soft deleted repositories in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr repository list-deleted -n MyRegistry
+ ```
+
+The `az acr manifest list-deleted` command enables fetching and listing of the soft-deleted manifests.
+
+2. List the soft deleted manifests of a `hello-world` repository in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted -r MyRegistry -n hello-world
+ ```
+
+The `az acr manifest list-deleted-tags` command enables fetching and listing of the soft-deleted tags.
+
+3. List the soft-deleted tags of a `hello-world` repository in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted-tags -r MyRegistry -n hello-world
+ ```
+
+4. Filter the soft-deleted tags of a `hello-world` repository to match tag `latest` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-deleted-tags -r MyRegistry -n hello-world:latest
+ ```
+
+### Restore the soft delete artifacts - CLI
+
+The `az acr manifest restore` commands restore a single image by tag and digest.
+
+1. Restore the image of a `hello-world` repository by tag `latest` and digest `sha256:abc123` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123
+ ```
+
+2. Restore the most recently deleted manifest of a `hello-world` repository by tag `latest` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest
+ ```
+
+A force restore overwrites the existing tag with the same name in the repository. If the soft delete policy is enabled during a force restore, the overwritten tag is soft deleted. You can force restore with the `--force` (`-f`) argument.
+
+3. Force restore the image of a `hello-world` repository by tag `latest` and digest `sha256:abc123` in a given `MyRegistry` ACR.
+
+ ```azurecli-interactive
+ az acr manifest restore -r MyRegistry -n hello-world:latest -d sha256:abc123 -f
+ ```
+
+> [!IMPORTANT]
+>* Restoring a [manifest list](push-multi-architecture-images.md#manifest-list) won't recursively restore any underlying soft deleted manifests.
+>* If you're restoring soft-deleted [ORAS artifacts](container-registry-oras-artifacts.md), restoring a subject doesn't recursively restore the referrer chain. The subject has to be restored first; only then can a referrer manifest be restored. Otherwise, an error is thrown.
+
+## Enable soft delete policy for registry - Portal
+
+You can also enable a registry's soft delete policy in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+2. On the **Overview** tab, verify the status of **Soft Delete** (Preview).
+3. If the **Status** is **Disabled**, select **Update**.
++++++
+4. Select the checkbox to **Enable Soft Delete**.
+5. Select the number of days (between `0` and `90`) to retain the soft-deleted artifacts.
+6. Select **Save** to save your changes.
++++++
+### Restore the soft deleted artifacts - Portal
+
+1. Navigate to your Azure Container Registry.
+2. In the **Menu** section, select **Services**, and then select **Repositories**.
+3. In **Repositories**, select your preferred repository.
+4. Select **Manage deleted artifacts** to see all the soft-deleted artifacts.
+
+> [!NOTE]
+> Once you enable the soft delete policy and perform actions such as untagging a manifest or deleting an artifact, you can find these tags and artifacts under **Manage deleted artifacts** before the retention period expires.
++++++
+5. Filter for the deleted artifact that you want to restore.
+6. Select the artifact, and then select **Restore** in the right column.
+7. A **Restore Artifact** window appears.
++++++
+8. Select the tag to restore. Here, you can also choose to recover any additional tags.
+9. Select **Restore**.
++++++
+### Restore from soft deleted repositories - Portal
+
+1. Navigate to your Azure Container Registry.
+2. In the **Menu** section, select **Services**.
+3. In the **Services** tab, select **Repositories**.
+4. In the **Repositories** tab, select **Manage Deleted Repositories**.
++++++
+5. Filter for the deleted repository in **Soft Deleted Repositories** (Preview).
++++++
+6. Select the deleted repository, and filter for the deleted artifact under **Manage deleted artifacts**.
+7. Select the artifact, and then select **Restore** in the right column.
+8. A **Restore Artifact** window appears.
++++++
+9. Select the tag to restore. Here, you can also choose to recover any additional tags.
+10. Select **Restore**.
++++++
+> [!IMPORTANT]
+>* Importing a soft-deleted image at both the source and target resources is blocked.
+>* Pushing an image to a soft-deleted repository restores the soft-deleted repository.
+>* Pushing an image that shares the same manifest digest with a soft-deleted image isn't allowed. Instead, restore the soft-deleted image.
+
+## Next steps
+
+* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry.
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
The **Impact** category, along with the **Potential yearly savings**, are design
High impact recommendations include: - [Buy reserved virtual machine instances to save money over pay-as-you-go costs](../../advisor/advisor-reference-cost-recommendations.md#buy-virtual-machine-reserved-instances-to-save-money-over-pay-as-you-go-costs)-- [Optimize virtual machine spend by resizing or shutting down underutilized instances](../../advisor/advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances)
+- [Optimize virtual machine spend by resizing or shutting down underutilized instances](../../advisor/advisor-cost-recommendations.md#optimize-virtual-machine-vm-or-virtual-machine-scale-set-vmss-spend-by-resizing-or-shutting-down-underutilized-instances)
- [Use Standard Storage to store Managed Disks snapshots](../../advisor/advisor-reference-cost-recommendations.md#use-standard-storage-to-store-managed-disks-snapshots) Medium impact recommendations include:
data-factory Concepts Parameters Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-parameters-variables.md
Last updated 09/13/2022
# Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics + This article helps you understand the difference between pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics and how to use them to control your pipeline. ## Pipeline parameters
data-factory Connector Google Sheets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md
+
+ Title: Transform data in Google Sheets (Preview)
+
+description: Learn how to transform data in Google Sheets (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 08/30/2022++
+# Transform data in Google Sheets (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Google Sheets (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Google Sheets connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|| --|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
+
+## Create a Google Sheets linked service using UI
+
+Use the following steps to create a Google Sheets linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Google Sheets (Preview) and select the Google Sheets (Preview) connector.
+
+ :::image type="content" source="media/connector-google-sheets/google-sheets-connector.png" alt-text="Screenshot showing selecting Google Sheets connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-google-sheets/configure-google-sheets-linked-service.png" alt-text="Screenshot of configuration for Google Sheets linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Google Sheets.
+
+## Linked service properties
+
+The following properties are supported for the Google Sheets linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **GoogleSheets**. | Yes |
+| apiToken | Specify an API token for the Google Sheets. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "GoogleSheetsLinkedService",
+ "properties": {
+ "type": "GoogleSheets",
+ "typeProperties": {
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read resources from Google Sheets. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
++
+### Source transformation
+
+The below table lists the properties supported by Google Sheets source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| SpreadSheet ID | The spreadsheet ID in your Google Sheets. Make sure the general access of the spreadsheet is set as **Anyone with the link**. | Yes | String | spreadSheetId |
+| Sheet name | The name of the sheet in the spreadsheet. | Yes | String | sheetName |
+| Start cell | The start cell of the sheet from where the data is required, for example A2, B4. | Yes | String | startCell |
+| End cell | The end cell of the sheet up to which the data is required, for example F10, S600. | Yes | String | endCell |
+
+#### Google Sheets source script example
+
+When you use Google Sheets as source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'googlesheets',
+ format: 'rest',
+ spreadSheetId: $spreadSheetId,
+ startCell: 'A2',
+ endCell: 'F10',
+ sheetName: 'Sheet1') ~> GoogleSheetsSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 08/30/2022 Last updated : 09/14/2022
For a list of data stores that are supported as sources/sinks, see [Supported da
Specifically, this generic REST connector supports: - Copying data from a REST endpoint by using the **GET** or **POST** methods and copying data to a REST endpoint by using the **POST**, **PUT** or **PATCH** methods.-- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Service Principal**, and **user-assigned managed identity**.
+- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Service Principal**, **OAuth2 Client Credential**, **System Assigned Managed Identity** and **User Assigned Managed Identity**.
- **[Pagination](#pagination-support)** in the REST APIs. - For REST as source, copying the REST JSON response [as-is](#export-json-response-as-is) or parse it by using [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). Only response payload in **JSON** is supported.
For different authentication types, see the corresponding sections for details.
- [Basic authentication](#use-basic-authentication) - [Service Principal authentication](#use-service-principal-authentication) - [OAuth2 Client Credential authentication](#use-oauth2-client-credential-authentication)
+- [System-assigned managed identity authentication](#managed-identity)
- [User-assigned managed identity authentication](#use-user-assigned-managed-identity-authentication) - [Anonymous authentication](#using-authentication-headers)
Set the **authenticationType** property to **Basic**. In addition to the generic
} ``` + ### Use Service Principal authentication Set the **authenticationType** property to **AadServicePrincipal**. In addition to the generic properties that are described in the preceding section, specify the following properties:
Set the **authenticationType** property to **OAuth2ClientCredential**. In additi
} ```
+### <a name="managed-identity"></a> Use system-assigned managed identity authentication
+
+Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
+
+**Example**
+
+```json
+{
+ "name": "RESTLinkedService",
+ "properties": {
+ "type": "RestService",
+ "typeProperties": {
+ "url": "<REST endpoint e.g. https://www.example.com/>",
+ "authenticationType": "ManagedServiceIdentity",
+ "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ### Use user-assigned managed identity authentication Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
To configure it programmatically, add the `additionalColumns` property in your c
} ] ```
+>[!TIP]
+>After configuring additional columns, remember to map them to your destination sink in the Mapping tab.
## Auto create sink tables
ddos-protection Ddos Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-disaster-recovery-guidance.md
A: The virtual network and the resources in the affected region remains inaccess
![Simple Virtual Network Diagram.](../virtual-network/media/virtual-network-disaster-recovery-guidance/vnet.png)
-**Q: What can I to do re-create the same virtual network in a different region?**
+**Q: What can I do to re-create the same virtual network in a different region?**
A: Virtual networks are fairly lightweight resources. You can invoke Azure APIs to create a VNet with the same address space in a different region. To recreate the same environment that was present in the affected region, you make API calls to redeploy the resources in the VNets that you had. If you have on-premises connectivity, such as in a hybrid deployment, you have to deploy a new VPN Gateway, and connect to your on-premises network.
To create a virtual network, see [Create a virtual network](../virtual-network/m
## Next steps -- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
+- Learn how to [create a DDoS protection plan](manage-ddos-protection.md).
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
For more information on this reference architecture, see the [Extend Azure HDIns
documentation.
-> [!NOTE]
-> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
- ## Hub-and-spoke network topology with Azure Firewall and Azure Bastion This reference architecture details a hub-and-spoke topology with Azure Firewall inside the hub as a DMZ for scenarios that require central control over security aspects. Azure Firewall is a managed firewall as a service and is placed in its own subnet. Azure Bastion is deployed and placed in its own subnet.
Azure DDoS Protection Standard is enabled on the hub virtual network. Therefore,
DDoS Protection Standard is designed for services that are deployed in a virtual network. For more information, see [Deploy dedicated Azure service into virtual networks](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network). > [!NOTE]
-> DDoS Protection Standard protects the Public IPs of Azure resource. DDoS Protection Basic, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (e.g. Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
+> DDoS Protection Standard protects the Public IPs of Azure resource. DDoS infrastructure protection, which requires no configuration and is enabled by default, only protects the Azure underlying platform infrastructure (e.g. Azure DNS). For more information, see [Azure DDoS Protection Standard overview](ddos-protection-overview.md).
For more information about hub-and-spoke topology, see [Hub-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli). ## Next steps
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
na Previously updated : 08/25/2022 Last updated : 09/14/2022
Telemetry for an attack is provided through Azure Monitor in real time. While [m
You can view DDoS telemetry for a protected public IP address through three different resource types: DDoS protection plan, virtual network, and public IP address. +
+### Metrics
+
+The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
+- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP – traffic that was not filtered.
+- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system – representing the sum of the packets dropped and forwarded.
> [!NOTE] > While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
+The following [metrics](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection Standard. These metrics are also exportable via diagnostic settings (see [View and configure DDoS diagnostic logging](diagnostic-logging.md)).
+
+| Metric | Metric Display Name | Unit | Aggregation Type | Description |
+| | | | | |
+| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS |
+| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS |
+| BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS |
+| DDoSTriggerSYNPackets | Inbound SYN packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound SYN packets to trigger DDoS mitigation |
+| DDoSTriggerTCPPackets | Inbound TCP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound TCP packets to trigger DDoS mitigation |
+| DDoSTriggerUDPPackets | Inbound UDP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound UDP packets to trigger DDoS mitigation |
+| IfUnderDDoSAttack | Under DDoS attack or not | Count | Maximum | Under DDoS attack or not |
+| PacketsDroppedDDoS | Inbound packets dropped DDoS | CountPerSecond | Maximum | Inbound packets dropped DDoS |
+| PacketsForwardedDDoS | Inbound packets forwarded DDoS | CountPerSecond | Maximum | Inbound packets forwarded DDoS |
+| PacketsInDDoS | Inbound packets DDoS | CountPerSecond | Maximum | Inbound packets DDoS |
+| TCPBytesDroppedDDoS | Inbound TCP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound TCP bytes dropped DDoS |
+| TCPBytesForwardedDDoS | Inbound TCP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound TCP bytes forwarded DDoS |
+| TCPBytesInDDoS | Inbound TCP bytes DDoS | BytesPerSecond | Maximum | Inbound TCP bytes DDoS |
+| TCPPacketsDroppedDDoS | Inbound TCP packets dropped DDoS | CountPerSecond | Maximum | Inbound TCP packets dropped DDoS |
+| TCPPacketsForwardedDDoS | Inbound TCP packets forwarded DDoS | CountPerSecond | Maximum | Inbound TCP packets forwarded DDoS |
+| TCPPacketsInDDoS | Inbound TCP packets DDoS | CountPerSecond | Maximum | Inbound TCP packets DDoS |
+| UDPBytesDroppedDDoS | Inbound UDP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound UDP bytes dropped DDoS |
+| UDPBytesForwardedDDoS | Inbound UDP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound UDP bytes forwarded DDoS |
+| UDPBytesInDDoS | Inbound UDP bytes DDoS | BytesPerSecond | Maximum | Inbound UDP bytes DDoS |
+| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
+| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
+| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
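As a minimal sketch (the resource group and public IP name are placeholder assumptions), you can pull any of these metrics for a protected public IP address with the Azure CLI:

```azurecli-interactive
# Look up the public IP resource ID, then query the attack indicator metric.
pipId=$(az network public-ip show \
  --resource-group MyResourceGroup \
  --name MyPublicIP \
  --query id --output tsv)

az monitor metrics list \
  --resource "$pipId" \
  --metric "IfUnderDDoSAttack" \
  --aggregation Maximum \
  --interval PT1M
```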
+ ### View metrics from DDoS protection plan
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
When you select a data collection tier in Microsoft Defender for Cloud, the secu
The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-You maybe charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Learn more about [alert suppression rules](alerts-suppression-rules.md).
File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
-FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can deploy through Defender for Cloud.
+FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can [deploy through Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
Learn more about [File Integrity Monitoring with the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
## August 2022 -- **Sensor software version 22.2.5**: Minor version with stability improvements-- [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)-- [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview)
+|Service area |Updates |
+|||
+|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
### New alert columns with timestamp data
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
Event Grid also provides [diagnostic logs schemas](diagnostic-logs.md) and [metr
## More information
-You may find more information availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](/azure/event-grid/event-grid-faq).
+You may find more information about availability zone resiliency and disaster recovery in Azure Event Grid in our [FAQ](./event-grid-faq.yml).
## Next steps -- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
+- If you want to implement your own disaster recovery plan for Azure Event Grid topics and domains, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md).
event-grid Configure Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-custom-topic.md
You can use similar steps to enable an identity for an event grid domain.
1. On the left menu, select **Configuration** under **Settings**. 1. 2. For **Data residency**, select whether you don't want any data to be replicated to another region (**Regional**) or you want the metadata to be replicated to a predefined secondary region (**Cross-Geo**).
- The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+ The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./event-grid-faq.yml).
If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
See the following samples to learn about publishing events to and consuming even
- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/) - [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/) - [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)-- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md
This article shows how to create a custom topic or a domain in Azure Event Grid. ## Prerequisites
-If you new to Azure Event Grid, read through [Event Grid overview](overview.md) before starting this tutorial.
+If you're new to Azure Event Grid, read through [Event Grid overview](overview.md) before starting this tutorial.
[!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)]
On the **Security** page of the **Create Topic** or **Create Event Grid Domain*
:::image type="content" source="./media/create-custom-topic/data-residency.png" alt-text="Screenshot showing the Data residency section of the Advanced page in the Create Topic wizard.":::
- The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+ The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./event-grid-faq.yml).
If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md). 3. Select **Next: Tags** to move to the **Tags** page.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 06/09/2022 Last updated : 09/14/2022 # Subscribe to events published by a partner with Azure Event Grid
Here are the steps that a subscriber needs to perform to receive events from a p
You must grant your consent to the partner to create partner topics in a resource group that you designate. This authorization has an expiration time. It's effective for the time period you specify between 1 to 365 days. > [!IMPORTANT]
-> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic.
+> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
> [!NOTE] > Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
Following example shows the way to create a partner configuration resource that
1. Specify authorization expiration time. 1. Select **Add**.
- :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
+ :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
+
+ > [!IMPORTANT]
+ > Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
1. Back on the **Create Partner Configuration** page, verify that the partner is added to the partner authorization list at the bottom. 1. Select **Review + create** at the bottom of the page.
event-hubs Apache Kafka Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-configurations.md
Property | Recommended Values | Permitted Range | Notes
Property | Recommended Values | Permitted Range | Notes |:|--:|
-`retries` | > 0 | | Default is 2. We recommend that you keep this value.
+`retries` | 2 | | Default is 2147483647.
`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs will internally default to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* `partitioner` | `consistent_random` | See librdkafka documentation | `consistent_random` is default and best. Empty and null keys are handled ideally for most cases. `compression.codec` | `none` || Compression currently not supported.
firewall-manager Check Point Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/check-point-overview.md
Check Point unifies multiple security services under one umbrella. Integrated se
Threat Emulation (sandboxing) protects users from unknown and zero-day threats. Check Point SandBlast Zero-Day Protection is a cloud-hosted sand-boxing technology where files are quickly quarantined and inspected. It runs in a virtual sandbox to discover malicious behavior before it enters your network. It prevents threats before the damage is done to save staff valuable time responding to threats.
+>[!NOTE]
+> This offering provides limited features compared to the [Check Point NVA integration with Virtual WAN](../virtual-wan/about-nva-hub.md#partners). We strongly recommend using this NVA integration to secure your network traffic.
## Deployment example Watch the following video to see how to deploy Check Point CloudGuard Connect as a trusted Azure security partner.
Watch the following video to see how to deploy Check Point CloudGuard Connect as
## Next steps -- [Deploy a security partner provider](deploy-trusted-security-partner.md)
+- [Deploy a security partner provider](deploy-trusted-security-partner.md)
frontdoor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md
This article summarizes best practices for using Azure Front Door.
### Avoid combining Traffic Manager and Front Door
-For most solutions, you should use *either* Front Door *or* [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview).
+For most solutions, you should use *either* Front Door *or* [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md).
Traffic Manager is a DNS-based load balancer. It sends traffic directly to your origin's endpoints. In contrast, Front Door terminates connections at points of presence (PoPs) near to the client and establishes separate long-lived connections to the origins. The products work differently and are intended for different use cases.
For more information, see [Select the certificate for Azure Front Door to deploy
### Use the same domain name on Front Door and your origin
-Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](/azure/app-service/configure-common#configure-general-settings) and [authentication and authorization](/azure/app-service/overview-authentication-authorization) might not work correctly.
+Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly.
Before you rewrite the `Host` header of your requests, carefully consider whether your application is going to work correctly.
For more information, see [Supported HTTP methods for health probes](health-prob
## Next steps
-Learn how to [create an Front Door profile](create-front-door-portal.md).
+Learn how to [create a Front Door profile](create-front-door-portal.md).
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
keywords: hadoop getting started,hadoop linux,hadoop quickstart,hive getting sta
Previously updated : 02/24/2020 Last updated : 09/15/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Azure portal and run a Hive job
hdinsight Apache Hbase Provision Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-provision-vnet.md
description: Get started using HBase in Azure HDInsight. Learn how to create HDI
Previously updated : 12/23/2019 Last updated : 09/15/2022 # Create Apache HBase clusters on HDInsight in Azure Virtual Network
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
description: Learn how to set up HBase replication from one HDInsight version to
Previously updated : 12/06/2019 Last updated : 09/15/2022 # Set up Apache HBase cluster replication in Azure virtual networks
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
description: Learn how to query data from Azure Data Lake Storage Gen1 and to st
Previously updated : 04/24/2020 Last updated : 09/15/2022 # Use Data Lake Storage Gen1 with Azure HDInsight clusters
hdinsight Hdinsight Os Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-os-patching.md
description: Learn how to configure OS patching schedule for Linux-based HDInsig
Previously updated : 08/30/2021 Last updated : 09/15/2022 # Configure the OS patching schedule for Linux-based HDInsight clusters
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 09/01/2021 Last updated : 09/15/2022 # Create an HDInsight cluster that uses Availability Zones (Preview)
hdinsight Apache Hadoop Connect Hive Power Bi Directquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md
description: Use Microsoft Power BI to visualize Interactive Query Hive data fro
Previously updated : 06/17/2019 Last updated : 09/15/2022 # Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query in HDInsight
In this article, you learned how to visualize data from HDInsight using Microsof
* [Connect Excel to Apache Hadoop by using Power Query](../hadoop/apache-hadoop-connect-excel-power-query.md). * [Connect to Azure HDInsight and run Apache Hive queries using Data Lake Tools for Visual Studio](../hadoop/apache-hadoop-visual-studio-tools-get-started.md). * [Use Azure HDInsight Tool for Visual Studio Code](../hdinsight-for-vscode.md).
-* [Upload Data to HDInsight](./../hdinsight-upload-data.md).
+* [Upload Data to HDInsight](./../hdinsight-upload-data.md).
hdinsight Apache Kafka Connector Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
description: Learn how to use Apache Kafka on HDInsight with Azure IoT Hub. The
Previously updated : 11/26/2019 Last updated : 09/15/2022 # Use Apache Kafka on HDInsight with Azure IoT Hub
hdinsight Apache Spark Run Machine Learning Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-run-machine-learning-automl.md
Title: Run Azure Machine Learning workloads on Apache Spark in HDInsight
description: Learn how to run Azure Machine Learning workloads with automated machine learning (AutoML) on Apache Spark in Azure HDInsight. Previously updated : 12/13/2019 Last updated : 09/15/2022 # Run Azure Machine Learning workloads with automated machine learning on Apache Spark in HDInsight
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
The SDKs are available in **multiple languages** providing the flexibility to ch
| Language | Package | Source | Quickstarts | Samples | Reference | | :-- | :-- | :-- | :-- | :-- | :-- |
-| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) | | **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) | | **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
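To give a feel for the device SDKs, here's a minimal sketch of sending one telemetry message with the Python package (`azure-iot-device`); the connection string is a placeholder, and the snippet is illustrative rather than one of the linked samples.

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; copy the real one from your device in the Azure portal.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
client.send_message(Message('{"temperature": 21.5}'))  # one telemetry message
client.shutdown()
```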
iot-develop Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/libraries-sdks.md
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| Language | Package | Code Repository | Samples | Quickstart | Reference | ||||||| | C - Device | [vcpkg 1.3.9](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/setting_up_vcpkg.md) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
-| .NET - Device | [NuGet 1.31.0](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Samples/device/PnpDeviceSamples) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
+| .NET - Device | [NuGet 1.41.2](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
| Java - Device | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-jav) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) | | Python - Device | [pip 2.3.0](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/python/api/azure-iot-device/azure.iot.device) | | Node - Device | [npm 1.17.2](https://www.npmjs.com/package/azure-iot-device)  | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples/javascript/) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/javascript/api/azure-iot-device/) |
The IoT Plug and Play libraries and SDKs enable developers to build IoT solution
| Platform | Package | Code Repository | Samples | Quickstart | Reference | |||||||
-| .NET - IoT Hub service | [NuGet 1.27.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/iot-hub/Samples/service/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) |
+| .NET - IoT Hub service | [NuGet 1.38.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/solutions/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) |
| Java - IoT Hub service | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client/1.26.0) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | N/A | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | | Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | N/A | [Reference](/javascript/api/azure-iothub/) | | Python - IoT Hub service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | N/A | [Reference](/python/api/azure-iot-hub/) |
iot-develop Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md
Add a module called **my-module** to the **my-module-device**:
If you haven't already done so, clone the Azure IoT Hub Device C# SDK GitHub repository to your local machine:
-Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository into this location:
+Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository into this location:
```cmd
-git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ## Prepare the project To open and prepare the sample project:
-1. Open the *azure-iot-sdk-csharp\iot-hub\Samples\device\PnpDeviceSamples\Thermostat\Thermostat.csproj* project file in Visual Studio 2019.
+1. Open the *azure-iot-sdk-csharp\iothub\device\samples\solutions\PnpDeviceSamples\Thermostat\Thermostat.csproj* project file in Visual Studio 2019.
1. In Visual Studio, navigate to **Project > Thermostat Properties > Debug**. Then add the following environment variables to the project:
To open and prepare the sample project:
| IOTHUB_DEVICE_SECURITY_TYPE | connectionString | | IOTHUB_MODULE_CONNECTION_STRING | The module connection string you made a note of previously |
- To learn more about the sample configuration, see the [sample readme](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/readme.md).
+ To learn more about the sample configuration, see the [sample readme](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples#readme).
## Modify the code
iot-develop Tutorial Multiple Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-multiple-components.md
zone_pivot_groups: programming-languages-set-twenty-six
:::zone pivot="programming-language-ansi-c" :::zone-end
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
This tutorial also assumes that the device update takes place in a secure enviro
This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md). > [!NOTE]
-> The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-samples-csharp](https://github.com/Azure-Samples/azure-iot-samples-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
+> The sample used in this tutorial is written in C. There is also a [C# device provisioning symmetric key sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/How%20To/SymmetricKeySample) available. To use this sample, download or clone the [azure-iot-sdk-csharp](https://github.com/Azure/azure-iot-sdk-csharp) repository and follow the in-line instructions in the sample code. You can follow the instructions in this tutorial to create a symmetric key enrollment group using the portal and to find the ID Scope and enrollment group primary and secondary keys needed to run the sample. You can also create individual enrollments using the sample.
## Prerequisites
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
If the custom allocation policy webhook wishes to return some data to the device
This feature is available in C, C#, JAVA and Node.js client SDKs. To learn more about the Azure IoT SDKs available for IoT Hub and the IoT Hub Device Provisioning service, see [Microsoft Azure IoT SDKs]( https://github.com/Azure/azure-iot-sdks).
-[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
## IoT Edge support
iot-dps How To Verify Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-verify-certificates.md
Now, you need to sign the *Verification Code* with the private key associated wi
Microsoft provides tools and samples that can help you create a signed verification certificate: - The **Azure IoT Hub C SDK** provides PowerShell (Windows) and Bash (Linux) scripts to help you create CA and leaf certificates for development and to perform proof-of-possession using a verification code. You can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate. -- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
+- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
> [!IMPORTANT] > In addition to performing proof-of-possession, the PowerShell and Bash scripts cited previously also allow you to create root certificates, intermediate certificates, and leaf certificates that can be used to authenticate and provision devices. These certificates should be used for development only. They should never be used in a production environment.
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The DPS device SDKs provide implementations of the [Register](/rest/api/iot-dps/
| Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-ansi-c&tabs=windows)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) | | Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=windows)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) | | Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/azure-iot-provisioning-device) |
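As a rough sketch of the Register flow from the device side, the following Python snippet uses `azure-iot-device` to register a device with DPS using a symmetric key; the ID scope, registration ID, and key values are placeholders, and this is not one of the linked samples.

```python
from azure.iot.device import ProvisioningDeviceClient

# Placeholder values; take these from your DPS instance and enrollment.
client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="<your-registration-id>",
    id_scope="<your-id-scope>",
    symmetric_key="<your-device-key>",
)

result = client.register()  # invokes the DPS Register operation and waits for assignment
print(result.status, result.registration_state.assigned_hub)
```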
The DPS service SDKs help you build backend applications to manage enrollments a
| Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=symmetrickey)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) | | Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-nodejs&tabs=symmetrickey)|[Reference](/javascript/api/azure-iot-provisioning-service) |
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
In this section, you'll prepare a development environment that's used to build t
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ::: zone-end
To update and run the provisioning sample with your device information:
:::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information":::
-3. Open a command prompt and go to the *SymmetricKeySample* in the cloned samples repository:
+3. Open a command prompt and go to the *SymmetricKeySample* folder in the cloned SDK repository:
```cmd
- cd azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample
+ cd ".\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample\"
``` 4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters will be used in this article when running the sample. Review the code in this file. No changes are needed.
To update and run the provisioning sample with your device information:
7. You should now see something similar to the following output. A "TestMessage" string is sent to the hub as a test message. ```output
- D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
+ D:\azure-iot-sdk-csharp\provisioning\device\samples\How To\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
Initializing the device provisioning client... Initialized for registration Id symm-key-csharp-device-01.
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll prepare a development environment used to build the [Azu
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ::: zone-end
In this section, you'll build and execute a sample that reads the endorsement ke
1. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ cd ".\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample\"
``` 2. Type the following command to build and run the TPM device provisioning sample. Copy the endorsement key returned from your TPM 2.0 hardware security module to use later when enrolling your device.
In this section, you'll configure sample code to use the [Advanced Message Queui
3. In a command prompt, change directories to the project directory for the TPM device provisioning sample. ```cmd
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\TpmSample
+ cd ".\azure-iot-sdk-csharp\provisioning\device\samples\How To\TpmSample\"
``` 4. Run the following command to register your device. Replace `<IdScope>` with the value for the DPS you just copied and `<RegistrationId>` with the value you used when creating the device enrollment.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-csharp"
-1. In your Windows command prompt, clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
```cmd
- git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
+ git clone https://github.com/Azure/azure-iot-sdk-csharp.git
``` ::: zone-end
The C# sample code is set up to use X.509 certificates that are stored in a pass
1. Copy the PKCS12 formatted certificate file to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the sample repo. ```bash
- cp certificate.pfx ./azure-iot-samples-csharp/provisioning/Samples/device/X509Sample
+ cp certificate.pfx "./azure-iot-sdk-csharp/provisioning/device/samples/Getting Started/X509Sample"
``` You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
If you plan to explore the Azure IoT Hub Device Provisioning Service tutorials,
The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) has scripts that can help you create root CA, intermediate CA, and device certificates, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
-The [Group certificate verification sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/service/GroupCertificateVerificationSample) in the [Azure IoT Samples for C# (.NET)](https://github.com/Azure-Samples/azure-iot-samples-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
+The [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample) in the [Azure IoT SDK for C# (.NET)](https://github.com/Azure/azure-iot-sdk-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
:::zone-end
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
Since the IP address of an IoT hub can change without notice, always use the FQD
Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](../container-registry/container-registry-firewall-access-rules.md).
-You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](/azure/container-registry/container-registry-firewall-access-rules#enable-dedicated-data-endpoints).
+You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](../container-registry/container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints).
> [!NOTE] > To provide a consistent FQDN between the REST and data endpoints, beginning **June 15, 2020** the Microsoft Container Registry data endpoint will change from `*.cdn.mscr.io` to `*.data.mcr.microsoft.com`
These constraints can be applied to individual modules by using create options i
## Next steps * Learn more about [IoT Edge automatic deployment](module-deployment-monitoring.md).
-* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
+* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Once the run completes, you can register the model that was created from the bes
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-
+A CLI example isn't available for this step. Use the Python SDK instead.
``` # [Python SDK](#tab/python)
For a detailed description on task specific hyperparameters, please refer to [Hy
If you want to use tiling and control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](./how-to-use-automl-small-object-detect.md). -
+### Test the deployment
+See the [Test the deployment](./tutorial-auto-train-image-models.md#test-the-deployment) section to test the deployment and visualize the detections from the model.
## Example notebooks
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
The following snippet creates the autoscale profile:
> [!NOTE] > For more, see the [reference page for autoscale](/cli/azure/monitor/autoscale) +
+# [Python](#tab/python)
+
+Import modules:
+```python
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.monitor import MonitorManagementClient
+from azure.mgmt.monitor.models import AutoscaleProfile, ScaleRule, MetricTrigger, ScaleAction, Recurrence, RecurrentSchedule
+import random
+import datetime
+```
+
+Define variables for the workspace, endpoint, and deployment:
+
+```python
+subscription_id = "<YOUR-SUBSCRIPTION-ID>"
+resource_group = "<YOUR-RESOURCE-GROUP>"
+workspace = "<YOUR-WORKSPACE>"
+
+endpoint_name = "<YOUR-ENDPOINT-NAME>"
+deployment_name = "blue"
+```
+
+Get Azure ML and Azure Monitor clients:
+
+```python
+credential = DefaultAzureCredential()
+ml_client = MLClient(
+ credential, subscription_id, resource_group, workspace
+)
+
+mon_client = MonitorManagementClient(
+ credential, subscription_id
+)
+```
+
+Get the endpoint and deployment objects:
+
+```python
+deployment = ml_client.online_deployments.get(
+ deployment_name, endpoint_name
+)
+
+endpoint = ml_client.online_endpoints.get(
+ endpoint_name
+)
+```
+
+Create an autoscale profile:
+
+```python
+# Set a unique name for the autoscale settings of this deployment. A random number is appended below to make the name unique.
+autoscale_settings_name = f"autoscale-{endpoint_name}-{deployment_name}-{random.randint(0,1000)}"
+
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = []
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) In [Azure Machine Learning studio](https://ml.azure.com), select your workspace and then select __Endpoints__ from the left side of the page. Once the endpoints are listed, select the one you want to configure.
The rule is part of the `my-scale-settings` profile (`autoscale-name` matches th
> [!NOTE] > For more information on the CLI syntax, see [`az monitor autoscale`](/cli/azure/monitor/autoscale). +
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_out = ScaleRule(
+ metric_trigger = MetricTrigger(
+ metric_name="CpuUtilizationPercentage",
+ metric_resource_uri = deployment.id,
+ time_grain = datetime.timedelta(minutes = 1),
+ statistic = "Average",
+ operator = "GreaterThan",
+ time_aggregation = "Last",
+ time_window = datetime.timedelta(minutes = 5),
+ threshold = 70
+ ),
+ scale_action = ScaleAction(
+ direction = "Increase",
+ type = "ChangeCount",
+ value = 2,
+ cooldown = datetime.timedelta(hours = 1)
+ )
+)
+```
+This rule refers to the last 5-minute average of `CpuUtilizationPercentage`, as defined by the arguments `metric_name`, `time_window`, and `time_aggregation`. When the value of the metric is greater than the `threshold` of 70, two more VM instances are allocated.
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out
+ ]
+ )
+ ]
+ }
+)
+```
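+If you want to confirm what was stored, you can read the settings back with the same Monitor client (a quick check, not part of the original walkthrough):
+
+```python
+settings = mon_client.autoscale_settings.get(resource_group, autoscale_settings_name)
+for profile in settings.profiles:
+    print(profile.name, [rule.metric_trigger.metric_name for rule in profile.rules])
+```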
+ # [Studio](#tab/azure-studio) In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
When load is light, a scaling in rule can reduce the number of VM instances. The
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_in_on_cpu_util" :::
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_in = ScaleRule(
+    metric_trigger = MetricTrigger(
+        metric_name="CpuUtilizationPercentage",
+        metric_resource_uri = deployment.id,
+        time_grain = datetime.timedelta(minutes = 1),
+        statistic = "Average",
+        # Scale in when the average CPU utilization drops below the threshold.
+        operator = "LessThan",
+        time_aggregation = "Last",
+        time_window = datetime.timedelta(minutes = 5),
+        threshold = 30
+    ),
+    scale_action = ScaleAction(
+        # Remove one instance at a time.
+        direction = "Decrease",
+        type = "ChangeCount",
+        value = 1,
+        cooldown = datetime.timedelta(hours = 1)
+    )
+)
+```
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out,
+ rule_scale_in
+ ]
+ )
+ ]
+ }
+)
+```
+
# [Studio](#tab/azure-studio) In the __Rules__ section, select __Add a rule__. The __Scale rule__ page is displayed. Use the following information to populate the fields on this page:
The previous rules applied to the deployment. Now, add a rule that applies to th
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_up_on_request_latency" ::: +
+# [Python](#tab/python)
+
+Create the rule definition:
+
+```python
+rule_scale_out_endpoint = ScaleRule(
+ metric_trigger = MetricTrigger(
+ metric_name="RequestLatency",
+ metric_resource_uri = endpoint.id,
+ time_grain = datetime.timedelta(minutes = 1),
+ statistic = "Average",
+ operator = "GreaterThan",
+ time_aggregation = "Last",
+ time_window = datetime.timedelta(minutes = 5),
+ threshold = 70
+ ),
+ scale_action = ScaleAction(
+ direction = "Increase",
+ type = "ChangeCount",
+ value = 1,
+ cooldown = datetime.timedelta(hours = 1)
+ )
+)
+
+```
+This rule's `metric_resource_uri` field now refers to the endpoint rather than the deployment.
+
+Update the `my-scale-settings` profile to include this rule:
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="my-scale-settings",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 5,
+ "default" : 2
+ },
+ rules = [
+ rule_scale_out,
+ rule_scale_in,
+ rule_scale_out_endpoint
+ ]
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) From the bottom of the page, select __+ Add a scale condition__.
You can also create rules that apply only on certain days or at certain times. I
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="weekend_profile" :::
+# [Python](#tab/python)
+
+```python
+mon_client.autoscale_settings.create_or_update(
+ resource_group,
+ autoscale_settings_name,
+ parameters = {
+ "location" : endpoint.location,
+ "target_resource_uri" : deployment.id,
+ "profiles" : [
+ AutoscaleProfile(
+ name="Default",
+ capacity={
+ "minimum" : 2,
+ "maximum" : 2,
+ "default" : 2
+ },
+ recurrence = Recurrence(
+ frequency = "Week",
+ schedule = RecurrentSchedule(
+ time_zone = "Pacific Standard Time",
+ days = ["Saturday", "Sunday"],
+ hours = [],
+ minutes = []
+ )
+ )
+ )
+ ]
+ }
+)
+```
+ # [Studio](#tab/azure-studio) From the bottom of the page, select __+ Add a scale condition__. On the new scale condition, use the following information to populate the fields:
From the bottom of the page, select __+ Add a scale condition__. On the new scal
If you are not going to use your deployments, delete them:
+# [Azure CLI](#tab/azure-cli)
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="delete_endpoint" :::
+# [Python](#tab/python)
+
+```python
+mon_client.autoscale_settings.delete(
+ resource_group,
+ autoscale_settings_name
+)
+
+ml_client.online_endpoints.begin_delete(endpoint_name)
+```
+
+# [Studio](#tab/azure-studio)
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select an endpoint by checking the circle next to the model name.
+1. Select **Delete**.
+
+Alternatively, you can delete a managed online endpoint directly in the [endpoint details page](how-to-use-managed-online-endpoint-studio.md#view-managed-online-endpoints).
+
+
+ ## Next steps To learn more about autoscale with Azure Monitor, see the following articles:
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ Learn how to use the Visual Studio Code (VS Code) debugger to test and debug online endpoints locally before deploying them to Azure. Azure Machine Learning local endpoints help you test and debug your scoring script, environment configuration, code configuration, and machine learning model locally.
The following table provides an overview of scenarios to help you choose what wo
## Prerequisites
+# [Azure CLI](#tab/cli)
+ This guide assumes you have the following items installed locally on your PC. - [Docker](https://docs.docker.com/engine/install/)
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location> ```
+# [Python](#tab/python)
+
+This guide assumes you have the following items installed locally on your PC.
+
+- [Docker](https://docs.docker.com/engine/install/)
+- [VS Code](https://code.visualstudio.com/#alt-downloads)
+- [Azure CLI](/cli/azure/install-azure-cli)
+- [Azure CLI `ml` extension (v2)](how-to-configure-cli.md)
+- [Azure ML Python SDK (v2)](https://aka.ms/sdk-v2-install)
+
+For more information, see the guide on [how to prepare your system to deploy managed online endpoints](how-to-deploy-managed-online-endpoints.md#prepare-your-system).
+
+The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory, `sdk/endpoints/online/managed`.
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples
+cd sdk/endpoints/online/managed
+```
+
+Import the required modules:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ CodeConfiguration,
+ Environment,
+)
+from azure.identity import DefaultAzureCredential, AzureCliCredential
+```
+
+Set up variables for the workspace and endpoint:
+
+```python
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace_name = "<AML_WORKSPACE_NAME>"
+
+endpoint_name = "<ENDPOINT_NAME>"
+```
+
+
+ ## Launch development container
+# [Azure CLI](#tab/cli)
Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container). To debug online endpoints locally in VS Code, use the `--vscode-debug` flag when creating or updating an Azure Machine Learning online deployment. The following command uses a deployment example from the examples repo:
You'll use a few VS Code extensions to debug your deployments in the dev contain
> [!IMPORTANT] > Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container. +
+# [Python](#tab/python)
+
+Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
+
+Get a handle to the workspace:
+
+```python
+credential = AzureCliCredential()
+ml_client = MLClient(
+ credential,
+ subscription_id=subscription_id,
+ resource_group_name=resource_group,
+ workspace_name=workspace_name,
+)
+```
+
+To debug online endpoints locally in VS Code, set the `vscode-debug` and `local` flags when creating or updating an Azure Machine Learning online deployment. The following code mirrors a deployment example from the examples repo:
+
+```python
+deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=endpoint_name,
+ model=Model(path="../model-1/model"),
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ environment=Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+)
+
+deployment = ml_client.online_deployments.begin_create_or_update(
+ deployment,
+ local=True,
+ vscode_debug=True,
+)
+```
+
+> [!IMPORTANT]
+> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the VS Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
+
+A Docker image is built locally. Any environment configuration or model file errors are surfaced at this stage of the process.
+
+> [!NOTE]
+> The first time you launch a new or updated dev container it can take several minutes.
+
+Once the image successfully builds, your dev container opens in a VS Code window.
+
+You'll use a few VS Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
+
+- Inference Debug
+- [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
+- [Jupyter](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter)
+- [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
+
+> [!IMPORTANT]
+> Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container.
+++++++ ## Start debug session Once your environment is set up, use the VS Code debugger to test and debug your deployment locally.
For more information on the VS Code debugger, see [Debugging in VS Code](https:/
## Debug your endpoint
+# [Azure CLI](#tab/cli)
+ Now that your application is running in the debugger, try making a prediction to debug your scoring script. Use the `ml` extension `invoke` command to make a request to your local endpoint.
In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples f
At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions). +
+# [Python](#tab/python)
+
+Now that your application is running in the debugger, try making a prediction to debug your scoring script.
+
+Use the `invoke` method on your `MLClient` object to make a request to your local endpoint.
+
+```python
+endpoint = ml_client.online_endpoints.get(name=endpoint_name, local=True)
+
+request_file_path = "../model-1/sample-request.json"
+
+endpoint.invoke(endpoint_name, request_file_path, local=True)
+```
+
+In this case, the request file (`../model-1/sample-request.json` in the example above) is a JSON file that contains input data samples for the model to make predictions on, similar to the following JSON:
+
+```json
+{"data": [
+ [1,2,3,4,5,6,7,8,9,10],
+ [10,9,8,7,6,5,4,3,2,1]
+]}
+```
+
+> [!TIP]
+> The scoring URI is the address where your endpoint listens for requests. The `as_dict` method of endpoint objects returns information similar to `show` in the Azure CLI. The endpoint object can be obtained through `.get`.
+>
+> ```python
+> endpoint = ml_client.online_endpoints.get(endpoint_name, local=True)
+> endpoint.as_dict()
+> ```
+>
+> The output should look similar to the following:
+>
+> ```json
+> {
+> "auth_mode": "aml_token",
+> "location": "local",
+> "name": "my-new-endpoint",
+> "properties": {},
+> "provisioning_state": "Succeeded",
+> "scoring_uri": "http://localhost:5001/score",
+> "tags": {},
+> "traffic": {},
+> "type": "online"
+>}
+>```
+>
+>The scoring URI can be found in the `scoring_uri` key.
+
+At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions).
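+For orientation, here's a minimal sketch of what such a scoring script (*score.py*) might look like, assuming a scikit-learn model serialized as *model.pkl* (the file name is illustrative, not taken from the sample):
+
+```python
+import json
+import os
+
+import joblib
+
+
+def init():
+    # AZUREML_MODEL_DIR points at the registered model files inside the container.
+    global model
+    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")  # hypothetical file name
+    model = joblib.load(model_path)
+
+
+def run(raw_data):
+    # Breakpoints set here are hit for every request sent with invoke.
+    data = json.loads(raw_data)["data"]
+    return model.predict(data).tolist()
+```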
++
+
++ ## Edit your endpoint
+# [Azure CLI](#tab/cli)
+ As you debug and troubleshoot your application, there are scenarios where you need to update your scoring script and configurations. To apply changes to your code:
az ml online-deployment update --file <DEPLOYMENT-YAML-SPECIFICATION-FILE> --loc
Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+# [Python](#tab/python)
+
+As you debug and troubleshoot your application, there are scenarios where you need to update your scoring script and configurations.
+
+To apply changes to your code:
+
+1. Update your code
+1. Restart your debug session using the `Developer: Reload Window` command in the command palette. For more information, see the [command palette documentation](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette).
+
+> [!NOTE]
+> Since the directory containing your code and endpoint assets is mounted onto the dev container, any changes you make in the dev container are synced with your local file system.
+
+For more extensive changes involving updates to your environment and endpoint configuration, use your `MLClient`'s `online_deployments.update` method. Doing so will trigger a full image rebuild with your changes.
+
+```python
+new_deployment = ManagedOnlineDeployment(
+ name="green",
+ endpoint_name=endpoint_name,
+ model=Model(path="../model-2/model"),
+ code_configuration=CodeConfiguration(
+ code="../model-2/onlinescoring", scoring_script="score.py"
+ ),
+ environment=Environment(
+ conda_file="../model-2/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=2,
+)
+
+ml_client.online_deployments.update(new_deployment, local=True, vscode_debug=True)
+```
+
+Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+++
+
+ ## Next steps - [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
Previously updated : 10/21/2021 Last updated : 09/13/2022 # Use GitHub Actions with Azure Machine Learning- Get started with [GitHub Actions](https://docs.github.com/en/actions) to train a model on Azure Machine Learning.
-> [!NOTE]
-> GitHub Actions for Azure Machine Learning are provided as-is, and are not fully supported by Microsoft. If you encounter problems with a specific action, open an issue in the repository for the action. For example, if you encounter a problem with the aml-deploy action, report the problem in the [https://github.com/Azure/aml-deploy](https://github.com/Azure/aml-deploy) repo.
+This article will teach you how to create a GitHub Actions workflow that builds and deploys a machine learning model to [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning). You'll train a [scikit-learn](https://scikit-learn.org/) linear regression model on the NYC Taxi dataset.
-## Prerequisites
+GitHub Actions uses a workflow YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-## Workflow file overview
+## Prerequisites
-A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
-The file has four sections:
+* A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
-|Section |Tasks |
-|||
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
-|**Connect** | 1. Connect to the machine learning workspace. <br /> 2. Connect to a compute target. |
-|**Job** | 1. Submit a training job. |
-|**Deploy** | 1. Register model in Azure Machine Learning registry. 1. Deploy the model. |
+## Step 1. Get the code
-## Create repository
+Fork the following repo at GitHub:
-Create a new repository off the [ML Ops with GitHub Actions and Azure Machine Learning template](https://github.com/machine-learning-apps/ml-template-azure).
+```
+https://github.com/azure/azureml-examples
+```
-1. Open the [template](https://github.com/machine-learning-apps/ml-template-azure) on GitHub.
-2. Select **Use this template**.
+## Step 2. Authenticate with Azure
- :::image type="content" source="media/how-to-github-actions-machine-learning/gh-actions-use-template.png" alt-text="Select use this template":::
-3. Create a new repository from the template. Set the repository name to `ml-learning` or a name of your choice.
+You'll need to first define how to authenticate with Azure. You can use a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or [OpenID Connect](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect).
+### Generate deployment credentials
-## Generate deployment credentials
+# [Service principal](#tab/userlevel)
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+Create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
```azurecli-interactive az ad sp create-for-rbac --name "myML" --role contributor \
In the example above, replace the placeholders with your subscription ID, resour
} ```
-## Configure the GitHub secret
+# [OpenID Connect](#tab/openid)
-1. In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-2. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+1. If you don't have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
-## Connect to the workspace
+ ```azurecli-interactive
+ az ad app create --display-name myApp
+ ```
-Use the **Azure Machine Learning Workspace action** to connect to your Azure Machine Learning workspace.
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
-```yaml
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
+ You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-By default, the action expects a `workspace.json` file. If your JSON file has a different name, you can specify it with the `parameters_file` input parameter. If there is not a file, a new one will be created with the repository name.
+1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
+ This command generates JSON output with a different `objectId`, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
-```yaml
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- parameters_file: "alternate_workspace.json"
-```
-The action writes the workspace Azure Resource Manager (ARM) properties to a config file, which will be picked by all future Azure Machine Learning GitHub Actions. The file is saved to `GITHUB_WORKSPACE/aml_arm_config.json`.
+ ```azurecli-interactive
+ az ad sp create --id $appId
+ ```
-## Connect to a Compute Target in Azure Machine Learning
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
-Use the [Azure Machine Learning Compute action](https://github.com/Azure/aml-compute) to connect to a compute target in Azure Machine Learning. If the compute target exists, the action will connect to it. Otherwise the action will create a new compute target. The [AML Compute action](https://github.com/Azure/aml-compute) only supports the Azure ML compute cluster and Azure Kubernetes Service (AKS).
+ ```azurecli-interactive
+ az role assignment create --role contributor --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal
+ ```
-```yaml
- - name: Connect/Create Azure Machine Learning Compute Target
- id: aml_compute_training
- uses: Azure/aml-compute@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
-## Submit training job
+1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-Use the [Azure Machine Learning Training action](https://github.com/Azure/aml-run) to submit a ScriptRun, an Estimator or a Pipeline to Azure Machine Learning.
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-```yaml
- - name: Submit training run
- id: aml_run
- uses: Azure/aml-run@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-```
+
-## Register model in registry
+### Create secrets
-Use the [Azure Machine Learning Register Model action](https://github.com/Azure/aml-registermodel) to register a model to Azure Machine Learning.
+# [Service principal](#tab/userlevel)
-```yaml
- - name: Register model
- id: aml_registermodel
- uses: Azure/aml-registermodel@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- run_id: ${{ steps.aml_run.outputs.run_id }}
- experiment_name: ${{ steps.aml_run.outputs.experiment_name }}
-```
+1. In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Actions**. Select **New repository secret**.
-## Deploy model to Azure Machine Learning to ACI
+2. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZ_CREDS`.
-Use the [Azure Machine Learning Deploy action](https://github.com/Azure/aml-deploy) to deploys a model and create an endpoint for the model. You can also use the Azure Machine Learning Deploy to deploy to Azure Kubernetes Service. See [this sample workflow](https://github.com/Azure-Samples/mlops-enterprise-template) for a model that deploys to Azure Kubernetes Service.
+ # [OpenID Connect](#tab/openid)
-```yaml
- - name: Deploy model
- id: aml_deploy
- uses: Azure/aml-deploy@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- model_name: ${{ steps.aml_registermodel.outputs.model_name }}
- model_version: ${{ steps.aml_registermodel.outputs.model_version }}
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
-```
+1. In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Actions**. Select **New repository secret**.
-## Complete example
-
-Train your model and deploy to Azure Machine Learning.
-
-```yaml
-# Actions train a model on Azure Machine Learning
-name: Azure Machine Learning training and deployment
-on:
- push:
- branches:
- - master
- # paths:
- # - 'code/*'
-jobs:
- train:
- runs-on: ubuntu-latest
- steps:
- # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- - name: Check Out Repository
- id: checkout_repository
- uses: actions/checkout@v2
-
- # Connect or Create the Azure Machine Learning Workspace
- - name: Connect/Create Azure Machine Learning Workspace
- id: aml_workspace
- uses: Azure/aml-workspace@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-
- # Connect or Create a Compute Target in Azure Machine Learning
- - name: Connect/Create Azure Machine Learning Compute Target
- id: aml_compute_training
- uses: Azure/aml-compute@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
++++
+## Step 3. Update `setup.sh` to connect to your Azure Machine Learning workspace
+
+You'll need to update the CLI setup file variables to match your workspace.
+
+1. In your cloned repository, go to `azureml-examples/cli/`.
+1. Edit `setup.sh` and update these variables in the file.
+
+ |Variable | Description |
+ |||
+ |GROUP | Name of resource group |
+ |LOCATION | Location of your workspace (example: `eastus2`) |
+ |WORKSPACE | Name of Azure ML workspace |
+
+## Step 4. Update `pipeline.yml` with your compute cluster name
+
+You'll use a `pipeline.yml` file to deploy your Azure ML pipeline. This is a machine learning pipeline and not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your compute cluster name.
+
+1. In your cloned repository, go to `azureml-examples/cli/jobs/pipelines/nyc-taxi/pipeline.yml`.
+1. Each time you see `compute: azureml:cpu-cluster`, update the value of `cpu-cluster` with your compute cluster name. For example, if your cluster is named `my-cluster`, your new value would be `azureml:my-cluster`. There are five updates.
+
+## Step 5: Run your GitHub Actions workflow
+
+Your workflow authenticates with Azure, sets up the Azure Machine Learning CLI, and uses the CLI to train a model in Azure Machine Learning.
+
+# [Service principal](#tab/userlevel)
++
+Your workflow file is made up of a trigger section and jobs:
+- A trigger starts the workflow in the `on` section. The workflow runs by default on a cron schedule and when a pull request is made from matching branches and paths. Learn more about [events that trigger workflows](https://docs.github.com/actions/using-workflows/events-that-trigger-workflows).
+- In the jobs section of the workflow, you check out code and log into Azure with your service principal secret.
+- The jobs section also includes a setup action that installs and sets up the [Machine Learning CLI (v2)](how-to-configure-cli.md). Once the CLI is installed, the run job action runs your Azure Machine Learning `pipeline.yml` file to train a model with NYC taxi data.
++
+### Enable your workflow
+
+1. In your cloned repository, open `.github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml` and verify that your workflow looks like this.
+
+ ```yaml
+ name: cli-jobs-pipelines-nyc-taxi-pipeline
+ on:
+ workflow_dispatch:
+ schedule:
+ - cron: "0 0/4 * * *"
+ pull_request:
+ branches:
+ - main
+ - sdk-preview
+ paths:
+ - cli/jobs/pipelines/nyc-taxi/**
+ - .github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml
+ - cli/run-pipeline-jobs.sh
+ - cli/setup.sh
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: check out repo
+ uses: actions/checkout@v2
+ - name: azure login
+ uses: azure/login@v1
+ with:
+ creds: ${{secrets.AZ_CREDS}}
+ - name: setup
+ run: bash setup.sh
+ working-directory: cli
+ continue-on-error: true
+ - name: run job
+ run: bash -x ../../../run-job.sh pipeline.yml
+ working-directory: cli/jobs/pipelines/nyc-taxi
+ ```
+
+1. Select **View runs**.
+1. Enable workflows by selecting **I understand my workflows, go ahead and enable them**.
+1. Select the **cli-jobs-pipelines-nyc-taxi-pipeline workflow** and choose to **Enable workflow**.
+ :::image type="content" source="media/how-to-github-actions-machine-learning/enable-github-actions-ml-workflow.png" alt-text="Screenshot of enable GitHub Actions workflow.":::
+1. Select **Run workflow** and choose the option to **Run workflow** now.
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-run-workflow.png" alt-text="Screenshot of run GitHub Actions workflow.":::
+
+ # [OpenID Connect](#tab/openid)
+
+Your workflow file is made up of a trigger section and jobs:
+- A trigger starts the workflow in the `on` section. The workflow runs by default on a cron schedule and when a pull request is made from matching branches and paths. Learn more about [events that trigger workflows](https://docs.github.com/actions/using-workflows/events-that-trigger-workflows).
+- In the jobs section of the workflow, you check out code and log into Azure with the Azure login action using OpenID Connect.
+- The jobs section also includes a setup action that installs and sets up the [Machine Learning CLI (v2)](how-to-configure-cli.md). Once the CLI is installed, the run job action runs your Azure Machine Learning `pipeline.yml` file to train a model with NYC taxi data.
+
+### Enable your workflow
+
+1. In your cloned repository, open `.github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml` and verify that your workflow looks like this.
+
+ ```yaml
+ name: cli-jobs-pipelines-nyc-taxi-pipeline
+ on:
+ workflow_dispatch:
+ schedule:
+ - cron: "0 0/4 * * *"
+ pull_request:
+ branches:
+ - main
+ - sdk-preview
+ paths:
+ - cli/jobs/pipelines/nyc-taxi/**
+ - .github/workflows/cli-jobs-pipelines-nyc-taxi-pipeline.yml
+ - cli/run-pipeline-jobs.sh
+ - cli/setup.sh
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - name: check out repo
+ uses: actions/checkout@v2
+ - name: azure login
+ uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ - name: setup
+ run: bash setup.sh
+ working-directory: cli
+ continue-on-error: true
+ - name: run job
+ run: bash -x ../../../run-job.sh pipeline.yml
+ working-directory: cli/jobs/pipelines/nyc-taxi
+ ```
- # Submit a training run to the Azure Machine Learning
- - name: Submit training run
- id: aml_run
- uses: Azure/aml-run@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
-
- # Register model in Azure Machine Learning model registry
- - name: Register model
- id: aml_registermodel
- uses: Azure/aml-registermodel@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- run_id: ${{ steps.aml_run.outputs.run_id }}
- experiment_name: ${{ steps.aml_run.outputs.experiment_name }}
-
- # Deploy model in Azure Machine Learning to ACI
- - name: Deploy model
- id: aml_deploy
- uses: Azure/aml-deploy@v1
- with:
- azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
- model_name: ${{ steps.aml_registermodel.outputs.model_name }}
- model_version: ${{ steps.aml_registermodel.outputs.model_version }}
+1. Select **View runs**.
+1. Enable workflows by selecting **I understand my workflows, go ahead and enable them**.
+1. Select the **cli-jobs-pipelines-nyc-taxi-pipeline workflow** and choose to **Enable workflow**.
-```
+ :::image type="content" source="media/how-to-github-actions-machine-learning/enable-github-actions-ml-workflow.png" alt-text="Screenshot of enable GitHub Actions workflow.":::
+
+1. Select **Run workflow** and choose the option to **Run workflow** now.
+
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-run-workflow.png" alt-text="Screenshot of run GitHub Actions workflow.":::
++
+## Step 6: Verify your workflow run
+
+1. Open your completed workflow run and verify that the build job ran successfully. You'll see a green checkmark next to the job.
+1. Open Azure Machine Learning studio and navigate to the **nyc-taxi-pipeline-example**. Verify that each part of your job (prep, transform, train, predict, score) completed and that you see a green checkmark.
+
+ :::image type="content" source="media/how-to-github-actions-machine-learning/github-actions-machine-learning-nyc-taxi-complete.png" alt-text="Screenshot of successful Machine Learning Studio run.":::
## Clean up resources
When your resource group and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
## Next steps

> [!div class="nextstepaction"]
-> [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
-> [Create and run machine learning pipelines with Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md)
+> [Create production ML pipelines with Python SDK](tutorial-pipeline-python-sdk.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
description: Learn how to troubleshoot issues with environment image builds and
--++ Last updated 03/01/2022
-# Troubleshoot environment image builds
-Learn how to troubleshoot issues with Docker environment image builds and package installations.
+# Troubleshoot environment image builds using log error messages
-## Prerequisites
+In this article, learn how to troubleshoot common problems you may encounter with environment image builds.
-* An Azure subscription. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
-* The [Azure CLI](/cli/azure/install-azure-cli).
-* The [CLI extension for Azure Machine Learning](v1/reference-azure-machine-learning-cli.md).
-* To debug locally, you must have a working Docker installation on your local system.
+## Azure Machine Learning environments
-## Docker image build failures
-
-For most image build failures, you'll find the root cause in the image build log.
-Find the image build log from the Azure Machine Learning portal (20\_image\_build\_log.txt) or from your Azure Container Registry task job logs.
-
-It's usually easier to reproduce errors locally. Check the kind of error and try one of the following `setuptools`:
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens.
+They specify the base docker image, Python packages, and software settings around your training and scoring scripts.
+Environments are managed and versioned assets within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across various compute targets.
-- Install a conda dependency locally: `conda install suspicious-dependency==X.Y.Z`.-- Install a pip dependency locally: `pip install suspicious-dependency==X.Y.Z`.-- Try to materialize the entire environment: `conda create -f conda-specification.yml`.
+## Types of environments
-> [!IMPORTANT]
-> Make sure that the platform and interpreter on your local compute cluster match the ones on the remote compute cluster.
-
-### Timeout
-
-The following network issues can cause timeout errors:
--- Low internet bandwidth-- Server issues-- Large dependencies that can't be downloaded with the given conda or pip timeout settings
-
-Messages similar to the following examples will indicate the issue:
-
-```
-('Connection broken: OSError("(104, \'ECONNRESET\')")', OSError("(104, 'ECONNRESET')"))
-```
-```
-ReadTimeoutError("HTTPSConnectionPool(host='****', port=443): Read timed out. (read timeout=15)",)
-```
-
-If you get an error message, try one of the following possible solutions:
-
-- Try a different source, such as mirrors, Azure Blob Storage, or other Python feeds, for the dependency.-- Update conda or pip. If you're using a custom Docker file, update the timeout settings.-- Some pip versions have known issues. Consider adding a specific version of pip to the environment dependencies.-
-### Package not found
-
-The following errors are most common for image build failures:
--- Conda package couldn't be found:-
- ```
- ResolvePackageNotFound:
- - not-existing-conda-package
- ```
--- Specified pip package or version couldn't be found:-
- ```
- ERROR: Could not find a version that satisfies the requirement invalid-pip-package (from versions: none)
- ERROR: No matching distribution found for invalid-pip-package
- ```
--- Bad nested pip dependency:-
- ```
- ERROR: No matching distribution found for bad-package==0.0 (from good-package==1.0)
- ```
-
-Check that the package exists on the specified sources. Use [pip search](https://pip.pypa.io/en/stable/reference/pip_search/) to verify pip dependencies:
--- `pip search azureml-core`-
-For conda dependencies, use [conda search](https://docs.conda.io/projects/conda/en/latest/commands/search.html):
--- `conda search conda-forge::numpy`-
-For more options, try:
-- `pip search -h`-- `conda search -h`-
-#### Installer notes
-
-Make sure that the required distribution exists for the specified platform and Python interpreter version.
-
-For pip dependencies, go to `https://pypi.org/project/[PROJECT NAME]/[VERSION]/#files` to see if the required version is available. Go to https://pypi.org/project/azureml-core/1.11.0/#files to see an example.
-
-For conda dependencies, check the package on the channel repository.
-For channels maintained by Anaconda, Inc., check the [Anaconda Packages page](https://repo.anaconda.com/pkgs/).
-
-### Pip package update
-
-During an installation or an update of a pip package, the resolver might need to update an already-installed package to satisfy the new requirements.
-Uninstallation can fail for various reasons related to the pip version or the way the dependency was installed.
-The most common scenario is that a dependency installed by conda couldn't be uninstalled by pip.
-For this scenario, consider uninstalling the dependency by using `conda remove mypackage`.
-
-```
- Attempting uninstall: mypackage
- Found existing installation: mypackage X.Y.Z
-ERROR: Cannot uninstall 'mypackage'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
-```
-### Installer issues
-
-Certain installer versions have issues in the package resolvers that can lead to a build failure.
-
-If you're using a custom base image or Dockerfile, we recommend using conda version 4.5.4 or later.
+Environments can broadly be divided into three categories: curated, user-managed, and system-managed.
-A pip package is required to install pip dependencies. If a version isn't specified in the environment, the latest version will be used.
-We recommend using a known version of pip to avoid transient issues or breaking changes that the latest version of the tool might cause.
+Curated environments are pre-created environments that are managed by Azure Machine Learning (AzureML) and are available by default in every workspace provisioned.
-Consider pinning the pip version in your environment if you see the following message:
+Intended to be used as is, they contain collections of Python packages and settings to help you get started with various machine learning frameworks.
+These pre-created environments also allow for faster deployment time.
- ```
- Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.
- ```
+In user-managed environments, you're responsible for setting up your environment and installing every package that your training script needs on the compute target.
+Also be sure to include any dependencies needed for model deployment.
+These environments have two subtypes: BYOC (bring your own container), where you bring a Docker image to AzureML, and Docker build context, where AzureML materializes the image from the content you provide.
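For illustration, here's a minimal sketch of a user-managed (BYOC) environment with the v1 Python SDK. The image path is a placeholder, not a real registry:

```python
from azureml.core import Environment

# Hypothetical BYOC environment: AzureML uses the image as-is and
# doesn't build a conda environment on top of it.
env = Environment(name="my-byoc-env")
env.docker.base_image = "myregistry.azurecr.io/training/base:1.0"  # placeholder image
env.python.user_managed_dependencies = True  # you manage all Python packages yourself
```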
-Pip subprocess error:
- ```
- ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, update the hashes as well. Otherwise, examine the package contents carefully; someone may have tampered with them.
- ```
+You use system-managed environments when you want conda to manage the Python environment for you.
+A new isolated conda environment is materialized from your conda specification on top of a base Docker image. By default, common properties are added to the derived image.
+Note that environment isolation implies that Python dependencies installed in the base image won't be available in the derived image.
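As a sketch (package names and versions are examples only), a system-managed environment defined with the v1 Python SDK might look like this:

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# System-managed: AzureML builds an isolated conda environment from this
# specification on top of the base image.
env = Environment(name="my-system-managed-env")
conda_dep = CondaDependencies()
conda_dep.add_conda_package("python=3.8")         # pin the interpreter
conda_dep.add_pip_package("scikit-learn==1.0.2")  # example pinned package
env.python.conda_dependencies = conda_dep
```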
-Pip installation can be stuck in an infinite loop if there are unresolvable conflicts in the dependencies.
-If you're working locally, downgrade the pip version to < 20.3.
-In a conda environment created from a YAML file, you'll see this issue only if conda-forge is the highest-priority channel. To mitigate the issue, explicitly specify pip < 20.3 (!=20.3 or =20.2.4 pin to other version) as a conda dependency in the conda specification file.
+## Create and manage environments
-### ModuleNotFoundError: No module named 'distutils.dir_util'
+You can create and manage environments from clients like the AzureML Python SDK, the AzureML CLI, the AzureML studio UI, and the VS Code extension.
-When setting up your environment, sometimes you'll run into the issue **ModuleNotFoundError: No module named 'distutils.dir_util'**. To fix it, run the following command:
+"Anonymous" environments are automatically registered in your workspace when you submit an experiment without registering or referencing an already existing environment.
+They won't be listed but may be retrieved by version or label.
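For example (a sketch that assumes a workspace `config.json` and a registered environment name), registered environments appear when you list them, while anonymous environments can only be fetched directly:

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()  # assumes a config.json for your workspace

# Registered environments show up here; anonymous ones don't.
for name, env in Environment.list(workspace=ws).items():
    print(name, env.version)

# Retrieve a specific environment by name and version (placeholder values).
env = Environment.get(workspace=ws, name="my-training-env", version="1")
```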
-```bash
-apt-get install -y --no-install-recommends python3 python3-distutils && \
-ln -sf /usr/bin/python3 /usr/bin/python
-```
+AzureML builds environment definitions into Docker images.
+It also caches the environments in Azure Container Registry associated with your AzureML Workspace so they can be reused in subsequent training jobs and service endpoint deployments.
+Multiple environments with the same definition may result in the same image, so the cached image is reused.
+Running a training script remotely requires the creation of a Docker image.
-When working with a Dockerfile, run it as part of a RUN command.
+## Reproducibility and vulnerabilities
-```dockerfile
-RUN apt-get update && \
- apt-get install -y --no-install-recommends python3 python3-distutils && \
- ln -sf /usr/bin/python3 /usr/bin/python
+Over time vulnerabilities are discovered and Docker images that correspond to AzureML environments may be flagged by the scanning tools.
+Updates for AzureML based images are released regularly, with a commitment of no unpatched vulnerabilities older than 30 days in the latest version of the image.
+It's your responsibility to evaluate the threat and address vulnerabilities in environments.
+Not all the vulnerabilities are exploitable, so you need to use your judgment when choosing between reproducibility and resolving vulnerabilities.
+> [!IMPORTANT]
+> There's no guarantee that the same set of python dependencies will be materialized with an image rebuild or for a new environment with the same set of Python dependencies.
+
+## *Environment definition problems*
+
+### Environment name issues
+#### **"Curated prefix not allowed"**
+Terminology:
+
+"Curated": environments Microsoft creates and maintains.
+
+"Custom": environments you create and maintain.
+
+- The name of your custom environment uses terms reserved only for curated environments
+- Don't start your environment name with *Microsoft* or *AzureML*--these prefixes are reserved for curated environments
+- To customize a curated environment, you must clone and rename the environment (see the sketch after this list)
+- For more information about proper curated environment usage, see [create and manage reusable environments](https://aka.ms/azureml/environment/create-and-manage-reusable-environments)
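A minimal sketch of that clone-and-rename flow with the v1 Python SDK, assuming a workspace object from `config.json`; the curated environment name shown is an example and may differ in your workspace:

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()

# Fetch a curated environment (name is an assumption; list your workspace to confirm),
# then clone it under a custom name that avoids the reserved prefixes.
curated = Environment.get(workspace=ws, name="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu")
custom = curated.clone("my-sklearn-env")
custom.register(workspace=ws)
```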
+
+#### **"Environment name is too long"**
+- Environment names can be up to 255 characters in length
+- Consider renaming and shortening your environment name
+
+### Docker issues
+To create a new environment, you must use one of the following approaches:
+1. Base image
+ - Provide base image name, repository from which to pull it, credentials if needed
+ - Provide a conda specification
+2. Base Dockerfile (V1 only, Deprecated)
+ - Provide a Dockerfile
+ - Provide a conda specification
+3. Docker build context
+ - Provide the location of the build context (URL)
+ - The build context must contain at least a Dockerfile, but may contain other files as well
++
+#### **"Missing Docker definition"**
+- An environment has a `DockerSection` that must be populated with either a base image, base Dockerfile, or build context
+- This section configures settings related to the final Docker image built to the specifications of the environment and whether to use Docker containers to build the environment
+- See [DockerSection](https://aka.ms/azureml/environment/environment-docker-section)
+
+#### **"Missing Docker build context location"**
+- If you're specifying a Docker build context as part of your environment build, you must provide the path of the build context directory (see the sketch after this list)
+- See [BuildContext](https://aka.ms/azureml/environment/build-context-class)
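For the v2 SDK, here's a minimal sketch that supplies a build context location; the directory name, environment name, and the angle-bracket values are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Environment, BuildContext
from azure.identity import DefaultAzureCredential

# Placeholders: substitute your own subscription, resource group, and workspace.
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# The build context directory must contain at least a Dockerfile.
env = Environment(
    name="my-build-context-env",
    build=BuildContext(path="docker-context", dockerfile_path="Dockerfile"),
)
ml_client.environments.create_or_update(env)
```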
+
+#### **"Too many Docker options"**
+Only one of the following options can be specified:
+
+*V1*
+- `base_image`
+- `base_dockerfile`
+- `build_context`
+- See [DockerSection](https://aka.ms/azureml/environment/docker-section-class)
+
+*V2*
+- `image`
+- `build`
+- See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
+
+#### **"Missing Docker option"**
+*V1*
+- You must specify one of: base image, base Dockerfile, or build context
+
+*V2:*
+- You must specify one of: image or build context
+
+#### **"Container registry credentials missing either username or password"**
+- To access the base image in the container registry specified, you must provide both a username and password. One is missing.
+- Note that providing credentials in this way is deprecated. For the current method of providing credentials, see the *secrets in base image registry* section.
+
+#### **"Multiple credentials for base image registry"**
+- When specifying credentials for a base image registry, you must specify only one set of credentials.
+- The following authentication types are currently supported:
+ - Basic (username/password)
+ - Registry identity (clientId/resourceId)
+- If you're using workspace connections to specify credentials, [delete one of the connections](https://aka.ms/azureml/environment/delete-connection-v1)
+- If you've specified credentials directly in your environment definition, choose either username/password or registry identity
+to use, and set the other credentials you won't use to `null`
+ - Specifying credentials in this way is deprecated. It's recommended that you use workspace connections. See
+ *secrets in base image registry* below
+
+#### **"Secrets in base image registry"**
+- If you specify a base image in your `DockerSection`, you must specify the registry address from which the image will be pulled,
+and credentials to authenticate to the registry, if needed.
+- Historically, credentials have been specified in the environment definition. However, this isn't secure and should be
+avoided.
+- Users should set credentials using workspace connections. For instructions on how to
+do this, see [set_connection](https://aka.ms/azureml/environment/set-connection-v1)
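As a rough sketch only, and assuming the v1 `Workspace.set_connection` parameters described in the linked guidance, registering base-image registry credentials as a workspace connection might look like this; all values, including the category, auth type, and value format, are placeholders to verify against the linked documentation:

```python
from azureml.core import Workspace

ws = Workspace.from_config()

# Placeholder values; confirm the expected category/authType/value format
# in the set_connection documentation before using this pattern.
ws.set_connection(
    name="my_acr_connection",
    category="ACR",
    target="myregistry.azurecr.io",
    authType="Basic",
    value='{"username": "<username>", "password": "<password>"}',
)
```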
+
+#### **"Deprecated Docker attribute"**
+- The following `DockerSection` attributes are deprecated:
+ - `enabled`
+ - `arguments`
+ - `shared_volumes`
+ - `gpu_support`
+ - Azure Machine Learning now automatically detects and uses NVIDIA Docker extension when available.
+ - `shm_size`
+- Use [DockerConfiguration](https://aka.ms/azureml/environment/docker-configuration-class) instead
+- See [DockerSection deprecated variables](https://aka.ms/azureml/environment/docker-section-class)
+
+#### **"Dockerfile length over limit"**
+- The specified Dockerfile can't exceed the maximum Dockerfile size of 100KB
+- Consider shortening your Dockerfile to get it under this limit
+
+### Docker build context issues
+#### **"Missing Dockerfile path"**
+- In the Docker build context, a Dockerfile path must be specified
+- This is the path to the Dockerfile relative to the root of Docker build context directory
+- See [Build Context class](https://aka.ms/azureml/environment/build-context-class)
+
+#### **"Not allowed to specify attribute with Docker build context"**
+- If a Docker build context is specified, then the following items can't also be specified in the
+environment definition:
+ - Environment variables
+ - Conda dependencies
+ - R
+ - Spark
+
+#### **"Location type not supported/Unknown location type"**
+- The following are accepted location types:
+ - Git
+ - Git URLs can be provided to AzureML, but images can't yet be built using them. Use a storage
+ account until builds have Git support
+ - [How to use git repository as build context](https://aka.ms/azureml/environment/git-repo-as-build-context)
+ - Storage account
+
+#### **"Invalid location"**
+- The specified location of the Docker build context is invalid
+- If the build context is stored in a git repository, the path of the build context must be specified as a git URL
+- If the build context is stored in a storage account, the path of the build context must be specified as
+ - `https://storage-account.blob.core.windows.net/container/path/`
+
+### Base image issues
+#### **"Base image is deprecated"**
+- The following base images are deprecated:
+ - `azureml/base`
+ - `azureml/base-gpu`
+ - `azureml/base-lite`
+ - `azureml/intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04`
+ - `azureml/intelmpi2018.3-cuda9.0-cudnn7-ubuntu16.04`
+ - `azureml/intelmpi2018.3-ubuntu16.04`
+ - `azureml/o16n-base/python-slim`
+ - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu16.04`
+ - `azureml/openmpi3.1.2-ubuntu16.04`
+ - `azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04`
+ - `azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04`
+- AzureML can't provide troubleshooting support for failed builds with deprecated images.
+- Deprecated images are also at risk for vulnerabilities since they're no longer updated or maintained.
+It's best to use newer, non-deprecated versions.
+
+#### **"No tag or digest"**
+- For the environment to be reproducible, one of the following must be included on a provided base image:
+ - Version tag
+ - Digest
+- See [image with immutable identifier](https://aka.ms/azureml/environment/pull-image-by-digest)
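For example, a short sketch of pinning the base image in a v1 environment definition; the image path and digest are placeholders:

```python
from azureml.core import Environment

env = Environment(name="my-pinned-env")

# Pin by version tag...
env.docker.base_image = "myregistry.azurecr.io/training/base:1.0"
# ...or, for stronger reproducibility, pin by digest (placeholder digest shown).
env.docker.base_image = "myregistry.azurecr.io/training/base@sha256:<digest>"
```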
+
+### Environment variable issues
+#### **"Misplaced runtime variables"**
+- An environment definition shouldn't contain runtime variables
+- Use the `environment_variables` attribute on the [RunConfiguration object](https://aka.ms/azureml/environment/environment-variables-on-run-config) instead
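For instance, a minimal sketch of setting runtime variables on the run configuration instead of the environment; the variable name is a placeholder:

```python
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration

# Keep runtime variables off the environment definition...
env = Environment(name="my-env")

# ...and set them on the run configuration instead.
run_config = RunConfiguration()
run_config.environment = env
run_config.environment_variables = {"MY_RUNTIME_FLAG": "1"}  # placeholder variable
```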
+
+### Python issues
+#### **"Python section missing"**
+*V1*
+
+- An environment definition must have a Python section
+- Conda dependencies are specified in this section, and Python (along with its version) should be one of them
+```json
+"python": {
+ "baseCondaEnvironment": null,
+ "condaDependencies": {
+ "channels": [
+ "anaconda",
+ "conda-forge"
+ ],
+ "dependencies": [
+ "python=3.6.2"
+ ],
+ },
+ "condaDependenciesFile": null,
+ "interpreterPath": "python",
+ "userManagedDependencies": false
+}
```
+- See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section)
-Running this command installs the correct module dependencies to configure your environment.
-
-### Build failure when using Spark packages
-
-Configure the environment to not precache the packages.
+#### **"Python version missing"**
+- A Python version must be specified in the environment definition
+- A Python version can be added by adding Python as a conda package, specifying the version:
```python
-env.spark.precache_packages = False
-```
-
-## Service-side failures
-
-See the following scenarios to troubleshoot possible service-side failures.
-
-### You're unable to pull an image from a container registry, or the address couldn't be resolved for a container registry
-
-Possible issues:
-- The path name to the container registry might not be resolving correctly. Check that image names use double slashes and the direction of slashes on Linux versus Windows hosts is correct.-- If a container registry behind a virtual network is using a private endpoint in [an unsupported region](../private-link/private-link-overview.md#availability), configure the container registry by using the service endpoint (public access) from the portal and retry.-- After you put the container registry behind a virtual network, run the [Azure Resource Manager template](./how-to-network-security-overview.md) so the workspace can communicate with the container registry instance.-
-### You get a 401 error from a workspace container registry
-
-Resynchronize storage keys by using [ws.sync_keys()](/python/api/azureml-core/azureml.core.workspace.workspace#sync-keys--).
-
-### The environment keeps throwing a "Waiting for other conda operations to finish…" error
-
-When an image build is ongoing, conda is locked by the SDK client. If the process crashed or was canceled incorrectly by the user, conda stays in the locked state. To resolve this issue, manually delete the lock file.
-
-### Your custom Docker image isn't in the registry
-
-Check if the [correct tag](./how-to-use-environments.md#create-an-environment) is used and that `user_managed_dependencies = True`. `Environment.python.user_managed_dependencies = True` disables conda and uses the user's installed packages.
+from azureml.core.environment import Environment, CondaDependencies
-### You get one of the following common virtual network issues
--- Check that the storage account, compute cluster, and container registry are all in the same subnet of the virtual network.-- When your container registry is behind a virtual network, it can't directly be used to build images. You'll need to use the compute cluster to build images.-- Storage might need to be placed behind a virtual network if you:
- - Use inferencing or private wheel.
- - See 403 (not authorized) service errors.
- - Can't get image details from Azure Container Registry.
-
-### The image build fails when you're trying to access network protected storage
--- Azure Container Registry tasks don't work behind a virtual network. If the user has their container registry behind a virtual network, they need to use the compute cluster to build an image.-- Storage should be behind a virtual network in order to pull dependencies from it.-
-### You can't run experiments when storage has network security enabled
-
-If you're using default Docker images and enabling user-managed dependencies, use the MicrosoftContainerRegistry and AzureFrontDoor.FirstParty [service tags](./how-to-network-security-overview.md) to allowlist Azure Container Registry and its dependencies.
-
- For more information, see [Enabling virtual networks](./how-to-network-security-overview.md).
-
-### Error response from daemon: get "https://viennaglobal.azurecr.io": context deadline exceeded
-
-This error happens when you have configured the workspace to build images using a compute cluster, and the compute cluster is configured for no public IP address. Using a compute cluster to build images is required if your Azure Container Registry is behind a virtual network. For more information, see [Enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
-
-To resolve this error, use the following steps:
-
-1. [Create a new compute cluster that has a public IP address](how-to-create-attach-compute-cluster.md).
-1. [Configure the workspace to build images using the compute cluster created in step 1](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
--
-## Next steps
--- [Train a machine learning model to categorize flowers](how-to-train-scikit-learn.md)-- [Train a machine learning model by using a custom Docker image](how-to-train-with-custom-image.md)
+myenv = Environment(name="myenv")
+conda_dep = CondaDependencies()
+conda_dep.add_conda_package("python==3.8")
+myenv.python.conda_dependencies = conda_dep
+```
+- See [Add conda package](https://aka.ms/azureml/environment/add-conda-package-v1)
+
+#### **"Multiple Python versions"**
+- Only one Python version can be specified in the environment definition
+
+#### **"Python version not supported"**
+- The Python version provided in the environment definition isn't supported
+- Consider using a newer version of Python
+- See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
+
+#### **"Python version not recommended"**
+- The Python version used in the environment definition is deprecated, and its use should be avoided
+- Consider using a newer version of Python as the specified version will eventually be unsupported
+- See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
+
+#### **"Failed to validate Python version"**
+- The provided Python version may have been formatted improperly or specified with incorrect syntax
+- See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
+
+### Conda issues
+#### **"Missing conda dependencies"**
+- The [environment definition](https://aka.ms/azureml/environment/environment-class-v1)
+has a [PythonSection](https://aka.ms/azureml/environment/environment-python-section)
+that contains a `user_managed_dependencies` bool and a `conda_dependencies` object
+- If `user_managed_dependencies` is set to `True`, you're responsible for ensuring that all the necessary packages are available in the
+Python environment in which you choose to run the script
+- If `user_managed_dependencies` is set to `False` (the default), Azure ML will create a Python environment for you based on `conda_dependencies`.
+The environment is built once and is reused as long as the conda dependencies remain unchanged
+- You'll receive a *"missing conda dependencies"* error when `user_managed_dependencies` is set to `False` and you haven't provided a conda specification.
+- See [how to create a conda file manually](https://aka.ms/azureml/environment/how-to-create-conda-file)
+- See [CondaDependencies class](https://aka.ms/azureml/environment/conda-dependencies-class)
+- See [how to set a conda specification on the environment definition](https://aka.ms/azureml/environment/set-conda-spec-on-environment-definition)
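As a sketch, one way to avoid this error is to attach a conda specification file so the environment isn't user-managed but still has dependencies to build from; `conda.yml` is a placeholder path:

```python
from azureml.core import Environment

# Builds the environment from the conda specification; user_managed_dependencies
# defaults to False, so AzureML materializes the conda environment for you.
env = Environment.from_conda_specification(name="my-env", file_path="conda.yml")
```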
+
+#### **"Invalid conda dependencies"**
+- Make sure the conda dependencies specified in your conda specification are formatted correctly
+- See [how to create a conda file manually](https://aka.ms/azureml/environment/how-to-create-conda-file)
+
+#### **"Missing conda channels"**
+- If no conda channels are specified, conda will use defaults that might change
+- For reproducibility of your environment, specify channels from which to pull dependencies
+- See [how to manage conda channels](https://aka.ms/azureml/environment/managing-conda-channels)
+for more information
+
+#### **"Base conda environment not recommended"**
+- Partial environment updates can lead to dependency conflicts and/or unexpected runtime errors,
+so the use of base conda environments isn't recommended
+- Instead, specify all packages needed for your environment in the `conda_dependencies` section of your
+environment definition
+ - See [from_conda_specification](https://aka.ms/azureml/environment/set-conda-spec-on-environment-definition)
+ - See [CondaDependencies class](https://aka.ms/azureml/environment/conda-dependencies-class)
+- If you're using V2, add a conda specification to your [build context](https://aka.ms/azureml/environment/environment-build-context)
+
+#### **"Unpinned dependencies"**
+- For reproducibility, specify dependency versions for the packages in your conda specification
+- If versions aren't specified, there's a chance that the conda or pip package resolver will choose a different
+version of a package on subsequent builds of an environment. This can lead to unexpected errors and incorrect behavior
+- See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
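A small sketch of pinning versions in the conda dependencies (package names and versions are examples only); the same pattern pins pip itself, which also addresses the pip issues below:

```python
from azureml.core.conda_dependencies import CondaDependencies

conda_dep = CondaDependencies()
conda_dep.add_conda_package("python=3.8")    # pin the interpreter
conda_dep.add_conda_package("pip=21.3.1")    # pin the pip resolver version
conda_dep.add_pip_package("numpy==1.21.6")   # pin pip-installed packages
```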
+
+### Pip issues
+#### **"Pip not specified"**
+- For reproducibility, pip should be specified as a dependency in your conda specification, and it should be pinned
+- See [how to set a conda dependency](https://aka.ms/azureml/environment/add-conda-package-v1)
+
+#### **"Pip not pinned"**
+- For reproducibility, specify the pip resolver version in your conda dependencies
+- If the pip version isn't specified, there's a chance different versions of pip will be used on subsequent
image builds of the environment
+ - This could cause the build to fail if the different pip versions resolve your packages differently
+ - To avoid this and to achieve reproducibility of your environment, specify the pip version
+- See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
+- See [how to set pip as a dependency](https://aka.ms/azureml/environment/add-conda-package-v1)
+
+### Deprecated environment property issues
+#### **"R section is deprecated"**
+- The Azure Machine Learning SDK for R will be deprecated by the end of 2021 to make way for an improved R training and deployment
+experience using Azure Machine Learning CLI 2.0
+- See the [samples repository](https://aka.ms/azureml/environment/train-r-models-cli-v2) to get started with the Public Preview edition of the 2.0 CLI
+
+## *Image build problems*
+
+### Miscellaneous issues
+#### **"Build log unavailable"**
+- Build logs are optional and not available for all environments since the image might already exist
+
+#### **"ACR unreachable"**
+- There was a failure communicating with the workspace's container registry
+- If your scenario involves a VNet, you may need to build images using a compute cluster
+- See [secure a workspace using virtual networks](https://aka.ms/azureml/environment/acr-private-endpoint)
+
+### Docker pull issues
+#### **"Failed to pull Docker image"**
+- Possible issues:
+ - The path name to the container registry might not be resolving correctly
+ - For a registry `my-registry.io` and image `test/image` with tag `3.2`, a valid image path would be `my-registry.io/test/image:3.2`
+ - See [registry path documentation](https://aka.ms/azureml/environment/docker-registries)
+ - If a container registry behind a virtual network is using a private endpoint in an [unsupported region](https://aka.ms/azureml/environment/private-link-availability),
+ configure the container registry by using the service endpoint (public access) from the portal and retry
+ - After you put the container registry behind a virtual network, run the [Azure Resource Manager template](https://aka.ms/azureml/environment/secure-resources-using-vnet)
+ so the workspace can communicate with the container registry instance
+ - The image you're trying to reference doesn't exist in the container registry you specified
+ - Check that the correct tag is used and that `user_managed_dependencies` is set to `True`.
+ Setting [user_managed_dependencies](https://aka.ms/azureml/environment/environment-python-section)
+ to `True` disables conda and uses the user's installed packages.
+ - You haven't provided credentials for a private registry you're trying to pull the image from, or the provided credentials are incorrect
+ - Set [workspace connections](https://aka.ms/azureml/environment/set-connection-v1) for the container registry if needed
+
+### Conda issues during build
+#### **"Bad spec"**
+- Failed to create or update the conda environment due to an invalid package specification
+ - See [package match specifications](https://aka.ms/azureml/environment/conda-package-match-specifications)
+ - See [how to create a conda file manually](https://aka.ms/azureml/environment/how-to-create-conda-file)
+
+#### **"Communications error"**
+- Failed to communicate with a conda channel or package repository
+- Retrying the image build may work if the issue is transient
+
+#### **"Compile error"**
+- Failed to build a package required for the conda environment
+- Another version of the failing package may work. If it doesn't, review the image build log, hunt for a solution, and update the environment definition.
+
+#### **"Missing command"**
+- Failed to build a package required for the conda environment due to a missing command
+- Identify the missing command from the image build log, determine how to add it to your image, and then update the environment definition.
+
+#### **"Conda timeout"**
+- Failed to create or update the conda environment because it took too long
+- Consider removing unnecessary packages and pinning specific versions
+- See [understanding and improving conda's performance](https://aka.ms/azureml/environment/improve-conda-performance)
+
+#### **"Out of memory"**
+- Failed to create or update the conda environment due to insufficient memory
+- Consider removing unnecessary packages and pinning specific versions
+- See [understanding and improving conda's performance](https://aka.ms/azureml/environment/improve-conda-performance)
+
+#### **"Package not found"**
+- One or more packages specified in your conda specification couldn't be found
+- Ensure that all packages you've specified exist, and can be found using the channels you've specified in your conda specification
+- If you don't specify conda channels, conda will use defaults that are subject to change
+ - For reproducibility, specify channels from which to pull dependencies
+- See [managing channels](https://aka.ms/azureml/environment/managing-conda-channels)
+
+#### **"Missing Python module"**
+- Check the Python modules specified in your environment definition and correct any misspellings or incorrect pinned versions.
+
+#### **"No matching distribution"**
+- Failed to find Python package matching a specified distribution
+- Search for the distribution you're looking for and ensure it exists: [pypi](https://aka.ms/azureml/environment/pypi)
+
+#### **"Cannot build mpi4py"**
+- Failed to build wheel for mpi4py
+- Review and update your build environment or use a different installation method
+- See [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py)
+
+#### **"Interactive auth was attempted"**
+- Failed to create or update the conda environment because pip attempted interactive authentication
+- Instead, provide authentication via [workspace connection](https://aka.ms/azureml/environment/set-connection-v1)
+
+#### **"Forbidden blob"**
+- Failed to create or update the conda environment because a blob contained in the associated storage account was inaccessible
+- Either open up permissions on the blob or add/replace the SAS token in the URL
+
+#### **"Horovod build"**
+- Failed to create or update the conda environment because horovod failed to build
+- See [horovod installation](https://aka.ms/azureml/environment/install-horovod)
+
+#### **"Conda command not found"**
+- Failed to create or update the conda environment because the conda command is missing
+- For system-managed environments, conda should be in the path in order to create the user's environment
+from the provided conda specification
+
+#### **"Incompatible Python version"**
+- Failed to create or update the conda environment because a package specified in the conda environment isn't compatible with the specified python version
+- Update the Python version or use a different version of the package
+
+#### **"Conda bare redirection"**
+- Failed to create or update the conda environment because a package was specified on the command line using ">" or "<"
+without using quotes. Consider adding quotes around the package specification
+
+### Pip issues during build
+#### **"Failed to install packages"**
+- Failed to install Python packages
+- Review the image build log for more information on this error
+
+#### **"Cannot uninstall package"**
+- Pip failed to uninstall a Python package that was installed via the OS's package manager
+- Consider creating a separate environment using conda instead
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
Last updated 01/18/2022
# Hyperparameters for computer vision tasks in automated machine learning
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
+> * [v1](v1/reference-automl-images-hyperparameters-v1.md)
+> * [v2 (current version)](reference-automl-images-hyperparameters.md)
+ Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments. With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| - |-|-|
| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
| `validation_iou_threshold` | IOU threshold for box matching when computing validation metrics. <br>Must be a float in the range [0.1, 1]. | 0.5 |
-| `img_size` | Image size for train and validation. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 640 |
-| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` |
+| `image_size` | Image size for train and validation. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 640 |
+| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `extra_large`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` |
| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 |
-| `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
-| `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
+| `box_score_threshold` | During inference, only return proposals with a score greater than `box_score_threshold`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
+| `nms_iou_threshold` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
This table summarizes hyperparameters specific to the `maskrcnn_*` models for instance segmentation during inference.
The following table describes the hyperparameters that are model agnostic.
| `number_of_epochs` | Number of training epochs. <br>Must be a positive integer. | 15 <br> (except `yolov5`: 30) |
| `training_batch_size` | Training batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 2 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 2 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
| `validation_batch_size` | Validation batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`:10)<br><br>Object detection: 1 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 1 <br> <br> *Note: The defaults are largest batch size that can be used on 12 GiB GPU memory*.|
-| `grad_accumulation_step` | Gradient accumulation means running a configured number of `grad_accumulation_step` without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. <br> Must be a positive integer. | 1 |
+| `gradient_accumulation_step` | Gradient accumulation means running a configured number of `gradient_accumulation_step` without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. <br> Must be a positive integer. | 1 |
| `early_stopping` | Enable early stopping logic during training. <br> Must be 0 or 1.| 1 |
| `early_stopping_patience` | Minimum number of epochs or validation evaluations with<br>no primary metric improvement before the run is stopped.<br> Must be a positive integer. | 5 |
| `early_stopping_delay` | Minimum number of epochs or validation evaluations to wait<br>before primary metric improvement is tracked for early stopping.<br> Must be a positive integer. | 5 |
| `learning_rate` | Initial learning rate. <br>Must be a float in the range [0, 1]. | Multi-class: 0.01 <br>(except *vit-variants*: <br> `vits16r224`: 0.0125<br>`vitb16r224`: 0.0125<br>`vitl16r224`: 0.001) <br><br> Multi-label: 0.035 <br>(except *vit-variants*:<br>`vits16r224`: 0.025<br>`vitb16r224`: 0.025 <br>`vitl16r224`: 0.002) <br><br> Object detection: 0.005 <br>(except `yolov5`: 0.01) <br><br> Instance segmentation: 0.005 |
-| `lr_scheduler` | Type of learning rate scheduler. <br> Must be `warmup_cosine` or `step`. | `warmup_cosine` |
+| `learning_rate_scheduler` | Type of learning rate scheduler. <br> Must be `warmup_cosine` or `step`. | `warmup_cosine` |
| `step_lr_gamma` | Value of gamma when learning rate scheduler is `step`.<br> Must be a float in the range [0, 1]. | 0.5 |
| `step_lr_step_size` | Value of step size when learning rate scheduler is `step`.<br> Must be a positive integer. | 5 |
| `warmup_cosine_lr_cycles` | Value of cosine cycle when learning rate scheduler is `warmup_cosine`. <br> Must be a float in the range [0, 1]. | 0.45 |
The following table describes the hyperparameters that are model agnostic.
|`nesterov`| Enable `nesterov` when optimizer is `sgd`. <br> Must be 0 or 1.| 1 |
|`beta1` | Value of `beta1` when optimizer is `adam` or `adamw`. <br> Must be a float in the range [0, 1]. | 0.9 |
|`beta2` | Value of `beta2` when optimizer is `adam` or `adamw`.<br> Must be a float in the range [0, 1]. | 0.999 |
-|`amsgrad` | Enable `amsgrad` when optimizer is `adam` or `adamw`.<br> Must be 0 or 1. | 0 |
+|`ams_gradient` | Enable `ams_gradient` when optimizer is `adam` or `adamw`.<br> Must be 0 or 1. | 0 |
|`evaluation_frequency`| Frequency to evaluate validation dataset to get metric scores. <br> Must be a positive integer. | 1 |
|`checkpoint_frequency`| Frequency to store model checkpoints. <br> Must be a positive integer. | Checkpoint at epoch with best primary metric on validation.|
-|`checkpoint_run_id`| The run id of the experiment that has a pretrained checkpoint for incremental training.| no default |
-|`checkpoint_dataset_id`| FileDataset id containing pretrained checkpoint(s) for incremental training. Make sure to pass `checkpoint_filename` along with `checkpoint_dataset_id`.| no default |
-|`checkpoint_filename`| The pretrained checkpoint filename in FileDataset for incremental training. Make sure to pass `checkpoint_dataset_id` along with `checkpoint_filename`.| no default |
+|`checkpoint_run_id`| The run ID of the experiment that has a pretrained checkpoint for incremental training.| no default |
|`layers_to_freeze`| How many layers to freeze for your model. For instance, passing 2 as value for `seresnext` means freezing layer0 and layer1 referring to the below supported model layer info. <br> Must be a positive integer. <br><br>`'resnet': [('conv1.', 'bn1.'), 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'mobilenetv2': ['features.0.', 'features.1.', 'features.2.', 'features.3.', 'features.4.', 'features.5.', 'features.6.', 'features.7.', 'features.8.', 'features.9.', 'features.10.', 'features.11.', 'features.12.', 'features.13.', 'features.14.', 'features.15.', 'features.16.', 'features.17.', 'features.18.'],`<br>`'seresnext': ['layer0.', 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'vit': ['patch_embed', 'blocks.0.', 'blocks.1.', 'blocks.2.', 'blocks.3.', 'blocks.4.', 'blocks.5.', 'blocks.6.','blocks.7.', 'blocks.8.', 'blocks.9.', 'blocks.10.', 'blocks.11.'],`<br>`'yolov5_backbone': ['model.0.', 'model.1.', 'model.2.', 'model.3.', 'model.4.','model.5.', 'model.6.', 'model.7.', 'model.8.', 'model.9.'],`<br>`'resnet_backbone': ['backbone.body.conv1.', 'backbone.body.layer1.', 'backbone.body.layer2.','backbone.body.layer3.', 'backbone.body.layer4.']` | no default | ## Image classification (multi-class and multi-label) specific hyperparameters
The following table summarizes hyperparameters for image classification (multi-class and multi-label) tasks.
| Parameter name | Description | Default | | - |-|--|
-| `weighted_loss` | 0 for no weighted loss.<br>1 for weighted loss with sqrt.(class_weights) <br> 2 for weighted loss with class_weights. <br> Must be 0 or 1 or 2. | 0 |
-| `valid_resize_size` | Image size to which to resize before cropping for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  |
-| `valid_crop_size` | Image crop size that's input to your neural network for validation dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
-| `train_crop_size` | Image crop size that's input to your neural network for train dataset. <br> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `weighted_loss` | <li> 0 for no weighted loss. <li> 1 for weighted loss with sqrt(class_weights). <li> 2 for weighted loss with class_weights. <li> Must be 0, 1, or 2. | 0 |
+| `validation_resize_size` | <li> Image size to which to resize before cropping for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  |
+| `validation_crop_size` | <li> Image crop size that's input to your neural network for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `validation_crop_size` and `training_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `training_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `validation_crop_size` and `training_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
## Object detection and instance segmentation task specific hyperparameters The following hyperparameters are for object detection and instance segmentation tasks. > [!WARNING]
-> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolo5` supported hyperparmeters.
+> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.
| Parameter name | Description | Default | | - |-|--|
The following hyperparameters are for object detection and instance segmentation
| `validation_iou_threshold` | IOU threshold for box matching when computing validation metrics. <br>Must be a float in the range [0.1, 1]. | 0.5 | | `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 | | `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 |
-| `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
-| `nms_iou_thresh` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
-| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
+| `box_score_threshold` | During inference, only return proposals with a classification score greater than `box_score_threshold`. <br> Must be a float in the range [0, 1].| 0.3 |
+| `nms_iou_threshold` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
+| `box_detections_per_image` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default | | `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
-| `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/ inference. <br> Must be float in the range of [0, 1] | 0.25 |
+| `tile_predictions_nms_threshold` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/inference. <br> Must be a float in the range [0, 1] | 0.25 |
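+
+For reference, the following is a minimal, hedged sketch (not part of the original article) of how a few of these renamed v2 parameters might be fixed on an AutoML image job with the `azure-ai-ml` Python SDK. The compute name, experiment name, data paths, target column, and parameter values are hypothetical placeholders, and the sketch assumes `set_training_parameters` accepts the v2 names as keyword arguments.
+
+```python
+# Hypothetical sketch: fix a few of the renamed v2 training parameters on an
+# AutoML image object detection job (azure-ai-ml, SDK v2).
+from azure.ai.ml import Input, automl
+from azure.ai.ml.constants import AssetTypes
+
+image_object_detection_job = automl.image_object_detection(
+    compute="gpu-cluster",                        # hypothetical compute target
+    experiment_name="automl-image-demo",          # hypothetical experiment name
+    training_data=Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder"),
+    validation_data=Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder"),
+    target_column_name="label",                   # hypothetical label column
+)
+
+# Note the v2 names from the tables above, for example learning_rate_scheduler
+# instead of lr_scheduler and gradient_accumulation_step instead of grad_accumulation_step.
+image_object_detection_job.set_training_parameters(
+    number_of_epochs=20,
+    learning_rate_scheduler="warmup_cosine",
+    gradient_accumulation_step=2,
+    early_stopping=1,
+)
+```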
## Next steps
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
When doing a hyperparameter sweep, it can be useful to visualize the different c
Alternatively, you can go directly to the HyperDrive parent run and navigate to its 'Child runs' tab:
- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+# [Azure CLI](#tab/cli)
+
+```yaml
+CLI example not available, please use Python SDK.
+```
++
+# [Python SDK](#tab/python)
```python hd_job = ml_client.jobs.get(returned_job.name + '_HD')
Once the run completes, you can register the model that was created from the bes
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
- to be supported
+CLI example not available, please use Python SDK.
```
az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fri
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-
+CLI example not available, please use Python SDK.
``` # [Python SDK](#tab/python)
Now that you have scored a test image, you can visualize the bounding boxes for
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ```yaml
-
+CLI example not available, please use Python SDK.
``` # [Python SDK](#tab/python)
machine-learning Reference Automl Images Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-hyperparameters-v1.md
+
+ Title: Hyperparameters for AutoML computer vision tasks (v1)
+
+description: Learn which hyperparameters are available for computer vision tasks with automated ML (v1).
+++++++ Last updated : 01/18/2022++
+# Hyperparameters for computer vision tasks in automated machine learning (v1)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning you are using:"]
+> * [v1](reference-automl-images-hyperparameters-v1.md)
+> * [v2 (current version)](../reference-automl-images-hyperparameters.md)
+
+Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
+
+With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
+
+## Model-specific hyperparameters
+
+This table summarizes hyperparameters specific to the `yolov5` algorithm.
+
+| Parameter name | Description | Default |
+| - |-|-|
+| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
+| `validation_iou_threshold` | IOU threshold for box matching when computing validation metrics. <br>Must be a float in the range [0.1, 1]. | 0.5 |
+| `img_size` | Image size for train and validation. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 640 |
+| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` |
+| `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 |
+| `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
+| `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
+
+This table summarizes hyperparameters specific to the `maskrcnn_*` models for instance segmentation during inference.
+
+| Parameter name | Description | Default |
+| - |-|-|
+| `mask_pixel_score_threshold` | Score cutoff for considering a pixel as part of the mask of an object. | 0.5 |
+| `max_number_of_polygon_points` | Maximum number of (x, y) coordinate pairs in polygon after converting from a mask. | 100 |
+| `export_as_image` | Export masks as images. | False |
+| `image_type` | Type of image to export mask as (options are jpg, png, bmp). | JPG |
+
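+As a hedged illustration only (not from the original article), model-specific hyperparameters like the `yolov5` ones above are typically swept through the v1 SDK's `hyperparameter_sampling` argument. The workspace, compute, and dataset names below are hypothetical placeholders.
+
+```python
+# Hypothetical sketch: sweep a few yolov5-specific hyperparameters with the
+# Azure ML Python SDK v1 (azureml-train-automl).
+from azureml.core import Workspace
+from azureml.train.automl import AutoMLImageConfig
+from azureml.train.hyperdrive import RandomParameterSampling, choice, uniform
+from azureml.automl.core.shared.constants import ImageTask
+
+ws = Workspace.from_config()
+compute_target = ws.compute_targets["gpu-cluster"]       # hypothetical compute name
+training_dataset = ws.datasets["fridge-items-train"]     # hypothetical dataset names
+validation_dataset = ws.datasets["fridge-items-val"]
+
+parameter_space = {
+    "model_name": choice("yolov5"),
+    "model_size": choice("small", "medium"),             # yolov5-specific, see the table above
+    "learning_rate": uniform(0.0001, 0.01),
+}
+
+automl_image_config = AutoMLImageConfig(
+    task=ImageTask.IMAGE_OBJECT_DETECTION,
+    compute_target=compute_target,
+    training_data=training_dataset,
+    validation_data=validation_dataset,
+    hyperparameter_sampling=RandomParameterSampling(parameter_space),
+    iterations=4,
+)
+```
+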
+## Model agnostic hyperparameters
+
+The following table describes the hyperparameters that are model agnostic.
+
+| Parameter name | Description | Default|
+| | - | |
+| `number_of_epochs` | Number of training epochs. <br>Must be a positive integer. | 15 <br> (except `yolov5`: 30) |
+| `training_batch_size` | Training batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`: 10)<br><br>Object detection: 2 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 2 <br> <br> *Note: The defaults are the largest batch size that can be used on 12 GiB GPU memory*.|
+| `validation_batch_size` | Validation batch size.<br> Must be a positive integer. | Multi-class/multi-label: 78 <br>(except *vit-variants*: <br> `vits16r224`: 128 <br>`vitb16r224`: 48 <br>`vitl16r224`: 10)<br><br>Object detection: 1 <br>(except `yolov5`: 16) <br><br> Instance segmentation: 1 <br> <br> *Note: The defaults are the largest batch size that can be used on 12 GiB GPU memory*.|
+| `grad_accumulation_step` | Gradient accumulation means running a configured number of steps (`grad_accumulation_step`) without updating the model weights, accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. <br> Must be a positive integer. | 1 |
+| `early_stopping` | Enable early stopping logic during training. <br> Must be 0 or 1.| 1 |
+| `early_stopping_patience` | Minimum number of epochs or validation evaluations with<br>no primary metric improvement before the run is stopped.<br> Must be a positive integer. | 5 |
+| `early_stopping_delay` | Minimum number of epochs or validation evaluations to wait<br>before primary metric improvement is tracked for early stopping.<br> Must be a positive integer. | 5 |
+| `learning_rate` | Initial learning rate. <br>Must be a float in the range [0, 1]. | Multi-class: 0.01 <br>(except *vit-variants*: <br> `vits16r224`: 0.0125<br>`vitb16r224`: 0.0125<br>`vitl16r224`: 0.001) <br><br> Multi-label: 0.035 <br>(except *vit-variants*:<br>`vits16r224`: 0.025<br>`vitb16r224`: 0.025 <br>`vitl16r224`: 0.002) <br><br> Object detection: 0.005 <br>(except `yolov5`: 0.01) <br><br> Instance segmentation: 0.005 |
+| `lr_scheduler` | Type of learning rate scheduler. <br> Must be `warmup_cosine` or `step`. | `warmup_cosine` |
+| `step_lr_gamma` | Value of gamma when learning rate scheduler is `step`.<br> Must be a float in the range [0, 1]. | 0.5 |
+| `step_lr_step_size` | Value of step size when learning rate scheduler is `step`.<br> Must be a positive integer. | 5 |
+| `warmup_cosine_lr_cycles` | Value of cosine cycle when learning rate scheduler is `warmup_cosine`. <br> Must be a float in the range [0, 1]. | 0.45 |
+| `warmup_cosine_lr_warmup_epochs` | Value of warmup epochs when learning rate scheduler is `warmup_cosine`. <br> Must be a positive integer. | 2 |
+| `optimizer` | Type of optimizer. <br> Must be either `sgd`, `adam`, `adamw`. | `sgd` |
+| `momentum` | Value of momentum when optimizer is `sgd`. <br> Must be a float in the range [0, 1]. | 0.9 |
+| `weight_decay` | Value of weight decay when optimizer is `sgd`, `adam`, or `adamw`. <br> Must be a float in the range [0, 1]. | 1e-4 |
+|`nesterov`| Enable `nesterov` when optimizer is `sgd`. <br> Must be 0 or 1.| 1 |
+|`beta1` | Value of `beta1` when optimizer is `adam` or `adamw`. <br> Must be a float in the range [0, 1]. | 0.9 |
+|`beta2` | Value of `beta2` when optimizer is `adam` or `adamw`.<br> Must be a float in the range [0, 1]. | 0.999 |
+|`amsgrad` | Enable `amsgrad` when optimizer is `adam` or `adamw`.<br> Must be 0 or 1. | 0 |
+|`evaluation_frequency`| Frequency to evaluate validation dataset to get metric scores. <br> Must be a positive integer. | 1 |
+|`checkpoint_frequency`| Frequency to store model checkpoints. <br> Must be a positive integer. | Checkpoint at epoch with best primary metric on validation.|
+|`checkpoint_run_id`| The run ID of the experiment that has a pretrained checkpoint for incremental training.| no default |
+|`checkpoint_dataset_id`| FileDataset ID containing pretrained checkpoint(s) for incremental training. Make sure to pass `checkpoint_filename` along with `checkpoint_dataset_id`.| no default |
+|`checkpoint_filename`| The pretrained checkpoint filename in FileDataset for incremental training. Make sure to pass `checkpoint_dataset_id` along with `checkpoint_filename`.| no default |
+|`layers_to_freeze`| How many layers to freeze for your model. For instance, passing 2 as value for `seresnext` means freezing layer0 and layer1 referring to the below supported model layer info. <br> Must be a positive integer. <br><br>`'resnet': [('conv1.', 'bn1.'), 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'mobilenetv2': ['features.0.', 'features.1.', 'features.2.', 'features.3.', 'features.4.', 'features.5.', 'features.6.', 'features.7.', 'features.8.', 'features.9.', 'features.10.', 'features.11.', 'features.12.', 'features.13.', 'features.14.', 'features.15.', 'features.16.', 'features.17.', 'features.18.'],`<br>`'seresnext': ['layer0.', 'layer1.', 'layer2.', 'layer3.', 'layer4.'],`<br>`'vit': ['patch_embed', 'blocks.0.', 'blocks.1.', 'blocks.2.', 'blocks.3.', 'blocks.4.', 'blocks.5.', 'blocks.6.','blocks.7.', 'blocks.8.', 'blocks.9.', 'blocks.10.', 'blocks.11.'],`<br>`'yolov5_backbone': ['model.0.', 'model.1.', 'model.2.', 'model.3.', 'model.4.','model.5.', 'model.6.', 'model.7.', 'model.8.', 'model.9.'],`<br>`'resnet_backbone': ['backbone.body.conv1.', 'backbone.body.layer1.', 'backbone.body.layer2.','backbone.body.layer3.', 'backbone.body.layer4.']` | no default |
+
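+Fixed (non-swept) values for these model-agnostic parameters are commonly passed through the `arguments` list of `AutoMLImageConfig`, as in the hedged sketch below (not from the original article; the workspace, compute, dataset, and parameter values are hypothetical).
+
+```python
+# Hypothetical sketch: fix a few model-agnostic hyperparameters for every run
+# in the sweep by passing them as command-line style arguments (SDK v1).
+from azureml.core import Workspace
+from azureml.train.automl import AutoMLImageConfig
+from azureml.train.hyperdrive import GridParameterSampling, choice
+from azureml.automl.core.shared.constants import ImageTask
+
+ws = Workspace.from_config()
+compute_target = ws.compute_targets["gpu-cluster"]       # hypothetical compute name
+training_dataset = ws.datasets["fridge-items-train"]     # hypothetical dataset names
+validation_dataset = ws.datasets["fridge-items-val"]
+
+# Parameter names match the table above, prefixed with "--".
+fixed_arguments = ["--early_stopping", 1, "--evaluation_frequency", 2, "--grad_accumulation_step", 2]
+
+automl_image_config = AutoMLImageConfig(
+    task=ImageTask.IMAGE_CLASSIFICATION,
+    compute_target=compute_target,
+    training_data=training_dataset,
+    validation_data=validation_dataset,
+    hyperparameter_sampling=GridParameterSampling({"model_name": choice("seresnext")}),
+    arguments=fixed_arguments,
+    iterations=1,
+)
+```
+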
+## Image classification (multi-class and multi-label) specific hyperparameters
+
+The following table summarizes hyperparameters for image classification (multi-class and multi-label) tasks.
+
+| Parameter name | Description | Default |
+| - |-|--|
+| `weighted_loss` | 0 for no weighted loss.<br>1 for weighted loss with sqrt(class_weights). <br> 2 for weighted loss with class_weights. <br> Must be 0, 1, or 2. | 0 |
+| `valid_resize_size` | <li> Image size to which to resize before cropping for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> Training run may get into CUDA OOM if the size is too big*. | 256  |
+| `valid_crop_size` | <li> Image crop size that's input to your neural network for validation dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+| `train_crop_size` | <li> Image crop size that's input to your neural network for train dataset. <li> Must be a positive integer. <br> <br> *Notes: <li> `seresnext` doesn't take an arbitrary size. <li> *ViT-variants* should have the same `valid_crop_size` and `train_crop_size`. <li> Training run may get into CUDA OOM if the size is too big*. | 224 |
+
+## Object detection and instance segmentation task specific hyperparameters
+
+The following hyperparameters are for object detection and instance segmentation tasks.
+
+> [!WARNING]
+> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.
+
+| Parameter name | Description | Default |
+| - |-|--|
+| `validation_metric_type` | Metric computation method to use for validation metrics. <br> Must be `none`, `coco`, `voc`, or `coco_voc`. | `voc` |
+| `validation_iou_threshold` | IOU threshold for box matching when computing validation metrics. <br>Must be a float in the range [0.1, 1]. | 0.5 |
+| `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 |
+| `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 |
+| `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
+| `nms_iou_thresh` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
+| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 |
+| `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](../how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default |
+| `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
+| `tile_predictions_nms_thresh` | The IOU threshold to use to perform NMS while merging predictions from tiles and image. Used in validation/inference. <br> Must be a float in the range [0, 1] | 0.25 |
+
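+For example (a hedged sketch, not from the original article), the tiling parameters above can be fixed through the same `arguments` mechanism shown in the earlier sketches; the values are illustrative only.
+
+```python
+# Hypothetical: enable tiling-based small object detection by appending the tiling
+# parameters to the AutoMLImageConfig arguments list from the sketches above.
+# Per the note in the table, tile_grid_size is passed as a string.
+tiling_arguments = ["--tile_grid_size", "(3, 2)", "--tile_overlap_ratio", 0.25]
+```
+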
+## Next steps
+
+* Learn how to [Set up AutoML to train computer vision models with Python (preview)](how-to-auto-train-image-models-v1.md).
+
+* [Tutorial: Train an object detection model (preview) with AutoML and Python](tutorial-auto-train-image-models-v1.md).
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
+
+ Title: Data encryption with customer managed keys – Azure Database for MySQL – Flexible Server Preview
+description: Learn how data encryption with customer-managed keys for Azure Database for MySQL flexible server enables you to bring your own key (BYOK) for data protection at rest
+++ Last updated : 09/15/2022+++++
+# Customer managed keys data encryption – Azure Database for MySQL – Flexible Server Preview
++
+With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and in full control of key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys.
+
+Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault instance](../../key-vault/general/security-features.md). Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Key Vault doesn't allow direct access to a stored key, but instead provides encryption/decryption services using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md).
+
+> [!Note]
+> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable CMK on a flexible server that has geo redundancy enabled.
+
+## Terminology and description
+
+**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
+
+**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Because the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted: deleting the KEK makes the DEKs unrecoverable. The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
+
+## Benefits
+
+Data encryption with customer-managed keys for Azure Database for MySQL Flexible server provides the following benefits:
+
+- You fully control data access: removing the key makes the database inaccessible
+- Full control over the key-lifecycle, including rotation of the key to align with corporate policies
+- Central management and organization of keys in Azure Key Vault
+- Ability to implement separation of duties between security officers, DBAs, and system administrators
+
+## How does data encryption with a customer-managed key work?
+
+Managed identities in Azure Active Directory (Azure AD) provide Azure services with an alternative to storing credentials in code by provisioning an automatically assigned identity that can be used to authenticate to any service supporting Azure AD authentication, such as Azure Key Vault (AKV). Azure Database for MySQL Flexible server currently supports only User-assigned Managed Identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
+
+To configure the CMK for an Azure Database for MySQL flexible server, you link the UMI to the server and specify the Azure Key Vault and key to use.
+
+The UMI must have the following access to the key vault:
+
+- **Get**: For retrieving the public part and properties of the key in the key vault.
+- **List**: List the versions of the key stored in a Key Vault.
+- **Wrap Key**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL Flexible server.
+- **Unwrap Key**: To be able to decrypt the DEK. Azure Database for MySQL Flexible server needs the decrypted DEK to encrypt/decrypt the data.
+
+When you configure a flexible server to use a CMK stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the flexible server sends the protected DEK to the key vault for decryption.
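+
+To make the wrap/unwrap flow concrete, here's a minimal, illustrative sketch using the `azure-keyvault-keys` Python SDK. It is not how the service itself is implemented, and the vault URL and key name are hypothetical; it only demonstrates the **Wrap Key** and **Unwrap Key** operations the UMI needs.
+
+```python
+# Illustrative only: wrap and unwrap a locally generated DEK with a KEK in Key Vault.
+import os
+
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys.crypto import CryptographyClient, KeyWrapAlgorithm
+
+credential = DefaultAzureCredential()
+kek_id = "https://contoso-vault.vault.azure.net/keys/mysql-kek"   # hypothetical KEK identifier
+crypto_client = CryptographyClient(kek_id, credential)
+
+dek = os.urandom(32)  # a 256-bit symmetric DEK, generated locally for illustration
+
+# Wrap Key permission: encrypt the DEK under the KEK; only the wrapped form is persisted.
+wrapped = crypto_client.wrap_key(KeyWrapAlgorithm.rsa_oaep_256, dek)
+
+# Unwrap Key permission: recover the plaintext DEK when it's needed to decrypt data.
+unwrapped = crypto_client.unwrap_key(KeyWrapAlgorithm.rsa_oaep_256, wrapped.encrypted_key)
+assert unwrapped.key == dek
+```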
++
+After logging is enabled, auditors can use Azure Monitor to review Key Vault audit event logs. To enable logging of [Key Vault auditing events](../../key-vault/key-vault-insights-overview.md), see Monitoring your key vault service with Key Vault insights.
+
+> [!Note]
+> Permission changes can take up to 10 minutes to take effect on the key vault. This includes revoking access permissions to the TDE protector in AKV; users within this time frame may still have access permissions.
+
+**Requirements for configuring data encryption for Azure Database for MySQL Flexible server**
+
+Before you attempt to configure Key Vault, be sure to address the following requirements.
+
+- The Key Vault and Azure Database for MySQL flexible server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and flexible server interactions aren't supported. If you move Key Vault resources after performing the configuration, you'll need to reconfigure data encryption.
+- The Key Vault and Azure Database for MySQL flexible server must reside in the same region.
+- Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with retention period set to 90 days to protect from data loss should an accidental key (or Key Vault) deletion occur. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through the Azure portal or by using PowerShell or the Azure CLI.
+- Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault and set the retention period to 90 days. When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. You can enable this feature by using PowerShell or the Azure CLI, and only after you've enabled soft-delete.
+
+Before you attempt to configure the CMK, be sure to address the following requirements.
+
+- The customer-managed key to be used for encrypting the DEK can only be an asymmetric RSA 2048 key.
+- The key activation date (if set) must be a date and time in the past. The expiration date must not be set.
+- The key must be in the **Enabled** state.
+- The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with retention period set to 90 days. This implicitly sets the required key attribute recoveryLevel: "Recoverable".
+- The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection).
+- If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key?tabs=HTTP) into the key vault, make sure to provide it in the supported file formats (.pfx, .byok, .backup).
+
+> [!Note]
+> For detailed, step-by-step instructions about how to configure data encryption for an Azure Database for MySQL flexible server via the Azure portal, see [Configure data encryption for MySQL Flexible server](how-to-data-encryption-portal.md).
+
+## Recommendations for configuring data encryption
+
+As you configure Key Vault to use data encryption by using a customer-managed key, keep in mind the following recommendations.
+
+- Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion.
+- Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated.
+- Keep a copy of the customer-managed key in a secure place or escrow it to the escrow service.
+- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey?view=azps-8.3.0).
+
+## Inaccessible customer-managed key condition
+
+When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons.
+
+- If you delete the key vault, the Azure Database for MySQL Flexible server can't access the key and moves to the _Inaccessible_ state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_.
+- If you delete the key from the key vault, the Azure Database for MySQL Flexible server can't access the key and moves to the _Inaccessible_ state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_.
+- If the key stored in Azure Key Vault expires, the key becomes invalid and the Azure Database for MySQL Flexible server transitions into the _Inaccessible_ state. Extend the key expiry date using the [CLI](/cli/azure/keyvault/key?view=azure-cli-latest#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the Flexible server _Available_.
+
+## Accidental key access revocation from Key Vault
+
+It might happen that someone with sufficient access rights to Key Vault accidentally disables flexible server access to the key by:
+
+- Revoking the key vault's _get, list, wrap key_ and _unwrap key_ permissions from the server
+- Deleting the key
+- Deleting the key vault
+- Changing the key vault's firewall rules
+- Deleting, in Azure AD, the user-assigned managed identity that the flexible server uses for encryption with a customer managed key
+
+## Monitor the customer-managed key in Key Vault
+
+To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features:
+
+- [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the Customer Key in the customer-managed Key Vault fails, entries are added to the activity log. You can reinstate access as soon as possible if you create alerts for these events.
+- [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.
+
+## Replica with a customer managed key in Key Vault
+
+Once an Azure Database for MySQL flexible server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server is also encrypted. When you encrypt an Azure Database for MySQL flexible server that already has one or more replicas with a customer managed key, we recommend configuring the replicas as well by adding the managed identity and key.
+
+## Restore with a customer managed key in Key Vault
+
+When attempting to restore an Azure Database for MySQL flexible server, you're given the option to select the user-assigned managed identity and key to encrypt the restored server.
+
+To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers:
+
+- Initiate the restore or read replica creation process from the source Azure Database for MySQL Flexible server.
+- On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the User managed identity is given _Get, List, Wrap key_ and _Unwrap key_ permissions to the key stored in Key Vault.
+
+## Next steps
+- [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md)
+- [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)
+- [Azure Key Vault instance](../../key-vault/general/security-features.md)
+- [Security in encryption rest](../../security/fundamentals/encryption-atrest.md)
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
+
+ Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure CLI Preview
+description: Learn how to set up and manage data encryption for your Azure Database for MySQL flexible server using Azure CLI.
+++ Last updated : 09/15/2022+++++
+# Data encryption for Azure Database for MySQL - Flexible Server with Azure CLI Preview
++
+This tutorial shows you how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure CLI preview.
+
+In this tutorial you'll learn how to:
+
+- Create a MySQL flexible server with data encryption
+- Update an existing MySQL flexible server with data encryption
+- Use an Azure Resource Manager template to enable data encryption
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
+
+ > [!Note]
+ > With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
+
+- Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).
+
+- Sign in to your Azure account by using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to the subscription ID for your Azure account:
+
+```azurecli-interactive
+az login
+```
+
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the az account set command:
+
+```azurecli-interactive
+az account set --subscription <subscription id>
+```
+
+- In Azure Key Vault, create a key vault and a key. The key vault must have the following properties to use as a customer-managed key:
+
+[Soft delete](../../key-vault/general/soft-delete-overview.md):
+
+```azurecli-interactive
+az resource update --id $(az keyvault show --name <key_vault_name> -o tsv | awk '{print $1}') --set properties.enableSoftDelete=true
+```
+
+[Purge protected](../../key-vault/general/soft-delete-overview.md#purge-protection):
+
+```azurecli-interactive
+az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --enable-purge-protection true
+```
+
+Retention days set to 90 days:
+
+```azurecli-interactive
+az keyvault update --name <key_vault_name> --resource-group <resource_group_name> --retention-days 90
+```
+
+The key must have the following attributes to use as a customer-managed key:
+
+ - No expiration dates
+ - Not disabled
+ - Perform **List**, **Get**, **Wrap**, **Unwrap** operations
+ - **recoverylevel** attribute set to Recoverable (this requires soft-delete enabled with retention period set to 90 days)
+ - **Purge protection** enabled
+
+You can verify the above attributes of the key by using the following command:
+
+```azurecli-interactive
+az keyvault key show --vault-name <key_vault_name> -n <key_name>
+```
+
+> [!Note]
+> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable CMK on a flexible server that has geo redundancy enabled.
+
+## Update an existing MySQL flexible server with data encryption
+
+Set or change key and identity for data encryption:
+
+```azurecli-interactive
+az mysql flexible-server update --resource-group testGroup --name testserver --key <key identifier of newKey> --identity newIdentity
+```
+
+Set or change key, identity, backup key and backup identity for data encryption with geo redundant backup:
+
+```azurecli-interactive
+az mysql flexible-server update --resource-group testGroup --name testserver --key <key identifier of newKey> --identity newIdentity --backup-key <key identifier of newBackupKey> --backup-identity newBackupIdentity
+```
+
+Disable data encryption for flexible server:
+
+```azurecli-interactive
+az mysql flexible-server update --resource-group testGroup --name testserver --disable-data-encryption
+```
+
+## Use an Azure Resource Manager template to enable data encryption
+
+The parameters **identityUri** and **primaryKeyUri** are the resource IDs of the user-assigned managed identity and the key used for data encryption, respectively.
+
+```json
+{
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "administratorLogin": {
+ "type": "string"
+ },
+ "administratorLoginPassword": {
+ "type": "securestring"
+ },
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string"
+ },
+ "vCores": {
+ "type": "int",
+ "defaultValue": 4
+ },
+ "storageSizeGB": {
+ "type": "int"
+ },
+ "haEnabled": {
+ "type": "string",
+ "defaultValue": "Disabled"
+ },
+ "availabilityZone": {
+ "type": "string"
+ },
+ "standbyAvailabilityZone": {
+ "type": "string"
+ },
+ "version": {
+ "type": "string"
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "backupRetentionDays": {
+ "type": "int"
+ },
+ "geoRedundantBackup": {
+ "type": "string"
+ },
+ "vmName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms"
+ },
+ "storageIops": {
+ "type": "int"
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled"
+ },
+ "autoIoScaling": {
+ "type": "string",
+ "defaultValue": "Disabled"
+ },
+ "vnetData": {
+ "type": "object",
+ "metadata": {
+ "description": "Vnet data is an object which contains all parameters pertaining to vnet and subnet"
+ },
+ "defaultValue": {
+ "virtualNetworkName": "testVnet",
+ "subnetName": "testSubnet",
+ "virtualNetworkAddressPrefix": "10.0.0.0/16",
+ "virtualNetworkResourceGroupName": "[resourceGroup().name]",
+ "location": "eastus2",
+ "subscriptionId": "[subscription().subscriptionId]",
+ "subnetProperties": {},
+ "isNewVnet": false,
+ "subnetNeedsUpdate": false,
+ "Network": {}
+ }
+ },
+ "identityUri": {
+ "type": "string",
+ "metadata": {
+ "description": "The resource ID of the identity used for data encryption"
+ }
+ },
+ "primaryKeyUri": {
+ "type": "string",
+ "metadata": {
+ "description": "The resource ID of the key used for data encryption"
+ }
+ }
+ },
+ "variables": {
+ "api": "2021-05-01",
+ "identityData": "[if(empty(parameters('identityUri')), json('null'), createObject('type', 'UserAssigned', 'UserAssignedIdentities', createObject(parameters('identityUri'), createObject())))]",
+ "dataEncryptionData": "[if(or(empty(parameters('identityUri')), empty(parameters('primaryKeyUri'))), json('null'), createObject('type', 'AzureKeyVault', 'primaryUserAssignedIdentityId', parameters('identityUri'), 'primaryKeyUri', parameters('primaryKeyUri')))]"
+ },
+ "resources": [
+ {
+ "apiVersion": "[variables('api')]",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverName')]",
+ "identity": "[variables('identityData')]",
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "Network": "[if(empty(parameters('vnetData').Network), json('null'), parameters('vnetData').Network)]",
+ "Storage": {
+ "StorageSizeGB": "[parameters('storageSizeGB')]",
+ "Iops": "[parameters('storageIops')]",
+ "Autogrow": "[parameters('storageAutogrow')]",
+ "AutoIoScaling": "[parameters('autoIoScaling')]"
+ },
+ "Backup": {
+ "backupRetentionDays": "[parameters('backupRetentionDays')]",
+ "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
+ },
+ "availabilityZone": "[parameters('availabilityZone')]",
+ "highAvailability": {
+ "mode": "[parameters('haEnabled')]",
+ "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
+ },
+ "dataEncryption": "[variables('dataEncryptionData')]"
+ },
+ "sku": {
+ "name": "[parameters('vmName')]",
+ "tier": "[parameters('serverEdition')]",
+ "capacity": "[parameters('vCores')]"
+ },
+ "tags": "[parameters('tags')]",
+ "type": "Microsoft.DBforMySQL/flexibleServers"
+ }
+ ]
+}
+```
+
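+As a hedged illustration (not from the original article), the template above could be deployed with the `azure-mgmt-resource` Python SDK as sketched below; the subscription ID, resource group, file name, and parameter values are hypothetical placeholders.
+
+```python
+# Hypothetical sketch: deploy the ARM template above with the Azure SDK for Python.
+import json
+
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+subscription_id = "<subscription id>"          # placeholder, as in the CLI examples
+resource_group = "testGroup"                   # hypothetical resource group
+
+client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
+
+with open("mysql-flexible-cmk.json") as f:     # the template shown above, saved locally
+    template = json.load(f)
+
+parameter_values = {
+    "administratorLogin": "mysqladmin",
+    "administratorLoginPassword": "<secure password>",
+    "location": "eastus2",
+    "serverName": "testserver",
+    "serverEdition": "GeneralPurpose",
+    "storageSizeGB": 64,
+    "availabilityZone": "1",
+    "standbyAvailabilityZone": "2",
+    "version": "8.0.21",
+    "backupRetentionDays": 7,
+    "geoRedundantBackup": "Disabled",
+    "storageIops": 360,
+    "identityUri": "<resource ID of the user-assigned managed identity>",
+    "primaryKeyUri": "<key identifier of the Key Vault key>",
+}
+
+poller = client.deployments.begin_create_or_update(
+    resource_group,
+    "mysql-cmk-deployment",
+    {
+        "properties": {
+            "mode": "Incremental",
+            "template": template,
+            "parameters": {k: {"value": v} for k, v in parameter_values.items()},
+        }
+    },
+)
+print(poller.result().properties.provisioning_state)
+```
+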
+## Next steps
+
+- [Customer managed keys data encryption (Preview)](concepts-customer-managed-key.md)
+- [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)
+
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
+
+ Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure portal Preview
+description: Learn how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure portal.
+++ Last updated : 09/15/2022+++++
+# Data encryption for Azure Database for MySQL - Flexible Server by using the Azure portal Preview
++
+This tutorial shows you how to set up and manage data encryption for your Azure Database for MySQL flexible server.
+
+In this tutorial, you learn how to:
+
+- Set data encryption for Azure Database for MySQL flexible server.
+- Configure data encryption for restore.
+- Configure data encryption for replica servers.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free) before you begin.
+
+ > [!Note]
+ > With an Azure free account, you can now try Azure Database for MySQL - Flexible Server for free for 12 months. For more information, see [Try Flexible Server for free](how-to-deploy-on-azure-free-account.md).
+
+## Set the right permissions for key operations
+
+1. In Key Vault, select **Access policies**, and then select **Create**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/1-mysql-key-vault-access-policy.jpeg" alt-text="Screenshot of Key Vault Access Policy in the Azure portal.":::
+
+2. On the **Permissions** tab, select the following key permissions: **Get**, **List**, **Wrap Key**, **Unwrap Key**.
+
+3. On the **Principal** tab, select the User-assigned Managed Identity.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/2-mysql-principal-tab.jpeg" alt-text="Screenshot of the principal tab in the Azure portal.":::
+
+4. Select **Create**.
+
+> [!Note]
+> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable CMK on a flexible server that has geo redundancy enabled.
+
+## Configure customer managed key
+
+To set up the customer managed key, perform the following steps.
+
+1. In the portal, navigate to your Azure Database for MySQL flexible server, and then, under **Security** , select **Data encryption**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/3-mysql-data-encryption.jpeg" alt-text="Screenshot of the data encryption page.":::
+
+2. On the **Data encryption** page, under **No identity assigned**, select **Change identity**.
+
+3. In the **Select user assigned managed identity** dialog box, select the **demo-umi** identity, and then select **Add**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/4-mysql-assigned-managed-identity-demo-uni.jpeg" alt-text="Screenshot of selecting the demo-umi from the assigned managed identity page.":::
+
+4. To the right of **Key selection method**, either select **Select a key** and specify a key vault and key pair, or select **Enter a key identifier**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/5-mysql-select-key.jpeg" alt-text="Screenshot of the Select Key page in the Azure portal.":::
+
+5. Select **Save**.
+
+## Using Data encryption for restore
+
+To use data encryption as part of a restore operation, perform the following steps.
+
+1. In the Azure portal, navigate to the Overview page for your server, and then select **Restore**.
+ 1. On the **Security** tab, specify the identity and the key.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/6-mysql-navigate-overview-page.jpeg" alt-text="Screenshot of overview page.":::
+
+2. Select **Change identity**, select the **User assigned managed identity**, and then select **Add**.
+ To select the key, you can either select a **key vault** and **key pair** or enter a **key identifier**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/7-mysql-change-identity.jpeg" alt-text="Screenshot of the change identity page.":::
+
+## Using Data encryption for replica servers
+
+After your Azure Database for MySQL flexible server is encrypted with a customer-managed key stored in Key Vault, any newly created copy of the server will also be encrypted.
+
+1. To configure replication, under **Settings**, select **Replication**, and then select **Add replica**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/8-mysql-replication.jpeg" alt-text="Screenshot of the Replication page.":::
+
+2. In the Add Replica server to Azure Database for MySQL dialog box, select the appropriate **Compute + storage** option, and then select **OK**.
+
+ :::image type="content" source="media/how-to-data-encryption-portal/9-mysql-compute-storage.jpeg" alt-text="Screenshot of the Compute + Storage page.":::
+
+ > [!Important]
+ > When you encrypt an Azure Database for MySQL flexible server that already has one or more replicas with a customer managed key, we recommend configuring the replicas as well by adding the managed identity and key.
+
+## Next steps
+
+- [Customer managed keys data encryption (Preview)](concepts-customer-managed-key.md)
+- [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Previously updated : 08/16/2022 Last updated : 09/16/2022 # What's new in Azure Database for MySQL - Flexible Server?
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## September 2022
+
+- **Customer managed keys data encryption – Azure Database for MySQL – Flexible Server (Preview)**
+
+ With data encryption with customer-managed keys (CMKs) for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. With customer managed keys (CMKs), the customer is responsible for and in full control of key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. [Learn More](concepts-customer-managed-key.md)
+ ## August 2022 - **Server logs for Azure Database for MySQL - Flexible Server**
This article summarizes new releases and features in Azure Database for MySQL -
Business Critical tier for Azure Database for MySQL – Flexible Server now supports the Ev5 compute series in more regions. Learn more about [Boost Azure MySQL Business Critical flexible server performance by 30% with the Ev5 compute series!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698) -- **Server paramaters that are now configurable**
+- **Server parameters that are now configurable**
List of dynamic server parameters that are now configurable - [lc_time_names](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_lc_time_names)
Learn more about [Boost Azure MySQL Business Critical flexible server performanc
- [performance_schema_max_digest_length](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-system-variables.html#sysvar_performance_schema_max_digest_length) - [performance_schema_max_sql_text_length](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-system-variables.html#sysvar_performance_schema_max_sql_text_length) - - **Known Issues**
- - When you try to connect to the server, you will receive error "ERROR 9107 (HY000): Only Azure Active Directory accounts are allowed to connect to server".
-
- Server parameter aad_auth_only was exposed in this month's deployment. Enabling server parameter aad_auth_only will block all non Azure Active Directory MySQL connections to your Azure Database for MySQL Flexible server. We are currently working on additional configurations required for AAD authentication to be fully functional, and the feature will be available in the upcoming deployments. Do not enable the aad_auth_only parameter until then.
-
+ - When you try to connect to the server, you receive error "ERROR 9107 (HY000): Only Azure Active Directory accounts are allowed to connect to server".
+ Server parameter aad_auth_only was exposed in this month's deployment. Enabling server parameter aad_auth_only will block all non Azure Active Directory MySQL connections to your Azure Database for MySQL Flexible server. We're currently working on additional configurations required for Azure Active Directory authentication to be fully functional, and the feature will be available in the upcoming deployments. Don't enable the aad_auth_only parameter until then.
## June 2022
Learn more about [Boost Azure MySQL Business Critical flexible server performanc
This release of Azure Database for MySQL - Flexible Server includes the following updates. - **Migrate from locally redundant backup storage to geo-redundant backup storage for existing flexible server**
- Azure Database for MySQL - Flexible Server now provides the added flexibility to migrate to geo-redundant backup storage from locally redundant backup storage post server-create to provide higher data resiliency. Enabling geo-redundancy via the server's Compute + Storage blade empowers customers to recover their existing flexible servers from a geographic disaster or regional failure when they canΓÇÖt access the server in the primary region. With this feature enabled for their existing servers, customers can perform geo-restore and deploy a new server to the geo-paired Azure region using the original serverΓÇÖs latest available geo-redundant backup. [Learn more](concepts-backup-restore.md)
+ Azure Database for MySQL - Flexible Server now provides the added flexibility to migrate to geo-redundant backup storage from locally redundant backup storage post server-create to provide higher data resiliency. Enabling geo-redundancy via the server's Compute + Storage page empowers customers to recover their existing flexible servers from a geographic disaster or regional failure when they can't access the server in the primary region. With this feature enabled for their existing servers, customers can perform geo-restore and deploy a new server to the geo-paired Azure region using the original server's latest available geo-redundant backup. [Learn more](concepts-backup-restore.md)
- **Simulate disaster recovery drills for your stopped servers** Azure Database for MySQL - Flexible Server now provides the ability to perform geo-restore on stopped servers, helping users simulate disaster recovery drills for their workloads to estimate impact and recovery time. This will help users plan better to meet their disaster recovery and business continuity objectives by using the geo-redundancy feature offered by Azure Database for MySQL Flexible Server. [Learn more](how-to-restore-server-cli.md)
This release of Azure Database for MySQL - Flexible Server includes the followin
- When you're using ARM templates for provisioning or configuration changes for HA enabled servers, if a single deployment is made to enable/disable HA along with other server properties like backup redundancy, storage etc., then the deployment would fail. You can mitigate it by submitting separate deployment requests for enabling/disabling HA and for the configuration changes. You wouldn't have an issue with the portal or Azure CLI, as these requests are already separated.
- - When you're viewing automated backups for a HA enabled server in Backup and Restore blade, if at some point in time a forced or automatic failover is performed, you may lose viewing rights to the server's backups on the Backup and Restore blade. Despite the invisibility of information regarding backups on the portal, the flexible server is successfully taking daily automated backups for the server in the backend. The server can be restored to any point within the retention period.
+ - When you're viewing automated backups for a HA enabled server on the Backup and Restore page, if at some point in time a forced or automatic failover is performed, you may lose viewing rights to the server's backups on the Backup and Restore page. Despite the invisibility of information regarding backups on the portal, the flexible server is successfully taking daily automated backups for the server in the backend. The server can be restored to any point within the retention period.
## November 2021
This release of Azure Database for MySQL - Flexible Server includes the followin
- **View available full backups in Azure portal**
- A dedicated Backup and Restore blade is now available in the Azure portal. This blade lists the backups available within the serverΓÇÖs retention period, effectively providing you with single pane view for managing a serverΓÇÖs backups and consequent restores. You can use this blade to
+ A dedicated Backup and Restore option is now available in the Azure portal. This page lists the backups available within the server's retention period, effectively providing you with a single-pane view for managing a server's backups and consequent restores. You can use this option to:
1) View the completion timestamps for all available full backups within the serverΓÇÖs retention period 2) Perform restore operations using these full backups
This release of Azure Database for MySQL - Flexible Server includes the followin
With the fastest restore point option, you can restore a Flexible Server instance in the fastest time possible on a given day within the server's retention period. This restore operation restores the full snapshot backup without requiring restore or recovery of logs. With fastest restore point, customers will see three options while performing point in time restores from the Azure portal: latest restore point, custom restore point, and fastest restore point. [Learn more](concepts-backup-restore.md#point-in-time-restore) -- **FAQ blade in Azure portal**
+- **FAQ in the Azure portal**
- The Backup and Restore blade will also include section dedicated to listing your most frequently asked questions, together with answers. This should provide you with answers to most questions about backup directly within the Azure portal. In addition, selecting the question mark icon for FAQs on the top menu provides access to even more related detail.
+ The Backup and Restore page includes a section dedicated to listing your most frequently asked questions, together with answers. This should provide you with answers to most questions about backup directly within the Azure portal. In addition, selecting the question mark icon for FAQs on the top menu provides access to even more related detail.
- **Restore a deleted Flexible server**
This release of Azure Database for MySQL - Flexible Server includes the following
- **Read replicas in Azure Database for MySQL - Flexible servers will no longer be available on Burstable SKUs**
- You won't be able to create new or maintain existing read replicas on the Burstable tier server. In the interest of providing a good query and development experience for Burstable SKU tiers, the support for creating and maintaining read replicas for servers in the Burstable pricing tier will be discontinued.
- If you have an existing Azure Database for MySQL - Flexible Server with read replica enabled, you'll have to scale up your server to either the General Purpose or Business Critical pricing tier or delete the read replica within 60 days. After the 60-day period, while you can continue to use the primary server for your read-write operations, replication to read replica servers will be stopped. For newly created servers, the read replica option will be available only for the General Purpose and Business Critical pricing tiers. - **Monitoring Azure Database for MySQL - Flexible Server with Azure Monitor Workbooks**
This release of Azure Database for MySQL - Flexible Server includes the following
- **validate_password and caching_sha2_password plugin available in private preview**
- Flexible Server now supports enabling validate_password and caching_sha2_password plugins in private preview. Email us at AskAzureDBforMySQL@service.microsoft.com
+ Flexible Server now supports enabling the validate_password and caching_sha2_password plugins in preview. Email us at AskAzureDBforMySQL@service.microsoft.com.
- **Availability in four additional Azure regions**
This release of Azure Database for MySQL - Flexible Server includes the following
- Right after Zone-Redundant high availability server failover, clients fail to connect to the server if using SSL with ssl_mode VERIFY_IDENTITY. This issue can be mitigated by using ssl_mode as VERIFY_CA. - Unable to create Same-Zone High availability server in the following regions: Central India, East Asia, Korea Central, South Africa North, Switzerland North.
- - In a rare scenario and after HA failover, the primary server will be in read_only mode. Resolve the issue by updating "read_only" value from the server parameters blade to OFF.
- - After successfully scaling Compute in the Compute+Storage blade, IOPS are reset to the SKU default. Customers can work around the issue by rescaling IOPs in the Compute+Storage blade to desired value (previously set) post the compute deployment and consequent IOPS reset.
+ - In a rare scenario, after HA failover the primary server might be in read_only mode. Resolve the issue by setting the `read_only` value to OFF on the server parameters page.
+ - After successfully scaling Compute on the Compute + Storage page, IOPS are reset to the SKU default. Customers can work around the issue by rescaling IOPS on the Compute + Storage page back to the desired (previously set) value after the compute deployment and the consequent IOPS reset.
## July 2021
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
Title: Map skill output fields
+ Title: Map enrichments in indexers
description: Export the enriched content created by a skillset by mapping its output fields to fields in a search index. --++ Previously updated : 08/10/2021 Last updated : 09/14/2022
-# Map enrichment output to fields in a search index
+# Map enriched output to fields in a search index in Azure Cognitive Search
![Indexer Stages](./media/cognitive-search-output-field-mapping/indexer-stages-output-field-mapping.png "indexer stages")
-In this article, you learn how to map enriched input fields to output fields in a searchable index. Once you've [defined a skillset](cognitive-search-defining-skillset.md), you must map the output fields of any skill that directly contributes values to a given field in your search index.
+This article explains how to set up *output field mappings* that determine a data path between in-memory data structures created during skill processing, and target fields in a search index. An output field mapping is defined in an [indexer](search-indexer-overview.md) and has the following elements:
-Output Field Mappings are required for moving content from enriched documents into the index. The enriched document is really a tree of information, and even though there is support for complex types in the index, sometimes you may want to transform the information from the enriched tree into a more simple type (for instance, an array of strings). Output field mappings allow you to perform data shape transformations by flattening information. Output field mappings always occur after skillset execution, although it is possible for this stage to run even if no skillset is defined.
+```json
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "document/path-to-a-node-in-an-enriched-document",
+ "targetFieldName": "some-search-field-in-an-index",
+ "mappingFunction": null
+ }
+],
+```
-Examples of output field mappings:
+In contrast with a [`fieldMappings`](search-indexer-field-mappings.md) definition that maps a path between two physical data structures, an `outputFieldMappings` definition maps in-memory data to fields in a search index.
-* As part of your skillset, you extracted the names of organizations mentioned in each of the pages of your document. Now you want to map each of those organization names into a field in your index of type Edm.Collection(Edm.String).
+Output field mappings are required if your indexer has an attached [skillset](cognitive-search-working-with-skillsets.md) that creates new information, such as text translation or key phrase extraction. During indexer execution, AI-generated information exists in memory only. To persist this information in a search index, you'll need to tell the indexer where to send the data.
-* As part of your skillset, you produced a new node called "document/translated_text". You would like to map the information on this node to a specific field in your index.
+Output field mappings can also be used to retrieve specific nodes in a source document's complex type. For example, you might want just "FullName/LastName" in a multi-part "FullName" property. When you don't need the full complex structure, you can [flatten individual nodes in nested data structures](#flattening-information-from-complex-types), and then use an output field mapping to send the output to a string collection in your search index.
-* You don't have a skillset but are indexing a complex type from a Cosmos DB database. You would like to get to a node on that complex type and map it into a field in your index.
+Output field mappings apply to:
-> [!NOTE]
-> Output field mappings apply to search indexes only. For indexers that create [knowledge stores](knowledge-store-concept-intro.md), output field mappings are ignored.
++ In-memory content that's created by skills or extracted by an indexer. The source field is a node in an enriched document tree.
-## Use outputFieldMappings
++ Search indexes. If you're populating a [knowledge store](knowledge-store-concept-intro.md), use [projections](knowledge-store-projections-examples.md) for data path configuration.
-To map fields, add `outputFieldMappings` to your indexer definition as shown below:
+Output field mappings are applied after [skillset execution](cognitive-search-working-with-skillsets.md) or after document cracking if there's no associated skillset.
-```http
-PUT https://[servicename].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
-api-key: [admin key]
-Content-Type: application/json
-```
+## Define an output field mapping
-The body of the request is structured as follows:
+Output field mappings are added to the `outputFieldMappings` array in an indexer definition, typically placed after the `fieldMappings` array. An output field mapping consists of three parts.
```json
+"fieldMappings": [],
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/path-to-a-node-in-an-enriched-document",
+ "targetFieldName": "some-search-field-in-an-index",
+ "mappingFunction": null
+ }
+],
+```
+
+| Property | Description |
+|-|-|
+| sourceFieldName | Required. Specifies a path to enriched content. An example might be `/document/content`. See [Reference annotations in an Azure Cognitive Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. |
+| targetFieldName | Optional. Specifies the search field that receives the enriched content. Target fields must be top-level simple fields or collections. A target field can't be a path to a subfield in a complex type. If you want to retrieve specific nodes in a complex structure, you can [flatten individual nodes](#flattening-information-from-complex-types) in memory, and then send the output to a string collection in your index. |
+| mappingFunction | Optional. Adds extra processing provided by [mapping functions](search-indexer-field-mappings.md#mappingFunctions) supported by indexers. In the case of enrichment nodes, encoding and decoding are the most commonly used functions. |
+
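For example, a mapping that applies a function to an enriched node might look like the following sketch. The node path and field name here are hypothetical; `base64Encode` is one of the mapping functions linked above.

```json
"outputFieldMappings": [
  {
    "sourceFieldName": "/document/normalized_images/*/imagePath",
    "targetFieldName": "imagePaths",
    "mappingFunction": {
      "name": "base64Encode"
    }
  }
]
```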
+You can use the REST API or an Azure SDK to define output field mappings.
+
+> [!TIP]
+> Indexers created by the [Import data wizard](search-import-data-portal.md) include output field mappings generated by the wizard. If you need examples, run the wizard over your data source to see the rendered definition.
+
+### [**REST APIs**](#tab/rest)
+
+Use [Create Indexer (REST)](/rest/api/searchservice/create-Indexer) or [Update Indexer (REST)](/rest/api/searchservice/update-indexer), any API version.
+
+This example adds entities and sentiment labels extracted from a blob's content property to fields in a search index.
+
+```JSON
+PUT https://[service name].search.windows.net/indexers/myindexer?api-version=[api-version]
+Content-Type: application/json
+api-key: [admin key]
{ "name": "myIndexer", "dataSourceName": "myDataSource", "targetIndexName": "myIndex", "skillsetName": "myFirstSkillSet",
- "fieldMappings": [
- {
- "sourceFieldName": "metadata_storage_path",
- "targetFieldName": "id",
- "mappingFunction": {
- "name": "base64Encode"
- }
- }
- ],
+ "fieldMappings": [],
"outputFieldMappings": [ { "sourceFieldName": "/document/content/organizations/*/description",
The body of the request is structured as follows:
} ```
-For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#field-mapping-functions-and-examples) that you require to transform the content of a field before it's stored in the index.
+For each output field mapping, set the location of the data in the enriched document tree (sourceFieldName), and the name of the field as referenced in the index (targetFieldName). Assign any [mapping functions](search-indexer-field-mappings.md#mappingFunctions) that you require to transform the content of a field before it's stored in the index.
-## Flattening Information from Complex Types
+### [**.NET SDK (C#)**](#tab/csharp)
-The path in a sourceFieldName can represent one element or multiple elements. In the example above, ```/document/content/sentiment``` represents a single numeric value, while ```/document/content/organizations/*/description``` represents several organization descriptions.
+In the Azure SDK for .NET, use the [OutputFieldMappingEntry](/dotnet/api/azure.search.documents.indexes.models.outputfieldmappingentry) class that provides "Name" and "TargetFieldName" properties and an optional "MappingFunction" reference.
-In cases where there are several elements, they are "flattened" into an array that contains each of the elements.
+Specify output field mappings when constructing the indexer, or later by directly setting [SearchIndexer.OutputFieldMappings](/dotnet/api/azure.search.documents.indexes.models.searchindexer.outputfieldmappings). The following C# example sets the output field mappings when constructing an indexer.
-More concretely, for the ```/document/content/organizations/*/description``` example, the data in the *descriptions* field would look like a flat array of descriptions before it gets indexed:
+```csharp
+string indexerName = "cog-search-demo";
+SearchIndexer indexer = new SearchIndexer(
+ indexerName,
+ dataSourceConnectionName,
+ indexName)
+{
+ // Field mappings omitted for this example (assume default mappings)
+ OutputFieldMappings =
+ {
+ new FieldMapping("/document/content/organizations") { TargetFieldName = "orgNames" },
+ new FieldMapping("/document/content/sentiment") { TargetFieldName = "sentiment" }
+ },
+ SkillsetName = skillsetName
+};
+await indexerClient.CreateIndexerAsync(indexer);
```
- ["Microsoft is a company in Seattle","LinkedIn's office is in San Francisco"]
+ -->
+++
+<a name="flattening-information-from-complex-types"></a>
+
+## Flatten complex structures into a string collection
+
+If your source data is composed of nested or hierarchical JSON, you can't use field mappings to set up the data paths. Instead, your search index must mirror the source data structure at each level for a full import.
+
+This section walks you through an import process that produces a one-to-one reflection of a complex document on both the source and target sides. Next, it uses the same source document to illustrate the retrieval and flattening of individual nodes into string collections.
+
+Here's an example of a document in Cosmos DB with nested JSON:
+
+```json
+{
+ "palette":"primary colors",
+ "colors":[
+ {
+ "name":"blue",
+ "medium":[
+ "acrylic",
+ "oil",
+ "pastel"
+ ]
+ },
+ {
+ "name":"red",
+ "medium":[
+ "acrylic",
+ "pastel",
+ "watercolor"
+ ]
+ },
+ {
+ "name":"yellow",
+ "medium":[
+ "acrylic",
+ "watercolor"
+ ]
+ }
+ ]
+}
```
-This is an important principle, so we will provide another example. Imagine that you have an array of complex types as part of the enrichment tree. Let's say there is a member called customEntities that has an array of complex types like the one described below.
+If you wanted to fully index the above source document, you'd create an index definition where the field names, levels, and types are reflected as a complex type. Because field mappings aren't supported for complex types in the search index, your index definition must mirror the source document.
```json
-"document/customEntities":
-[
- {
- "name": "heart failure",
- "matches": [
- {
- "text": "heart failure",
- "offset": 10,
- "length": 12,
- "matchDistance": 0.0
- }
- ]
- },
- {
- "name": "morquio",
- "matches": [
- {
- "text": "morquio",
- "offset": 25,
- "length": 7,
- "matchDistance": 0.0
- }
- ]
+{
+ "name": "my-test-index",
+ "defaultScoringProfile": "",
+ "fields": [
+ { "name": "id", "type": "Edm.String", "searchable": false, "retrievable": true, "key": true},
+ { "name": "palette", "type": "Edm.String", "searchable": true, "retrievable": true },
+ { "name": "colors", "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "name",
+ "type": "Edm.String",
+ "searchable": true,
+ "retrievable": true
+ },
+ {
+ "name": "medium",
+ "type": "Collection(Edm.String)",
+ "searchable": true,
+        "retrievable": true
+ }
+ ]
}
- //...
-]
+ ]
+}
```
-Let's assume that your index has a field called 'diseases' of type Collection(Edm.String), where you would like to store each of the names of the entities.
+Here's a sample indexer definition that executes the import (notice there are no field mappings and no skillset).
-This can be done easily by using the "\*" symbol, as follows:
+```json
+{
+ "name": "my-test-indexer",
+ "dataSourceName": "my-test-ds",
+ "skillsetName": null,
+ "targetIndexName": "my-test-index",
+
+ "fieldMappings": [],
+ "outputFieldMappings": []
+}
+```
+
+The result is the following sample search document, similar to the original in Cosmos DB.
```json
- "outputFieldMappings": [
+{
+ "value": [
+ {
+ "@search.score": 1,
+ "id": "240a98f5-90c9-406b-a8c8-f50ff86f116c",
+ "palette": "primary colors",
+ "colors": [
{
- "sourceFieldName": "/document/customEntities/*/name",
- "targetFieldName": "diseases"
+ "name": "blue",
+ "medium": [
+ "acrylic",
+ "oil",
+ "pastel"
+ ]
+ },
+ {
+ "name": "red",
+ "medium": [
+ "acrylic",
+ "pastel",
+ "watercolor"
+ ]
+ },
+ {
+ "name": "yellow",
+ "medium": [
+ "acrylic",
+ "watercolor"
+ ]
}
- ]
+ ]
+ }
+ ]
+}
```
-This operation will simply "flatten" each of the names of the customEntities elements into a single array of strings like this:
+An alternative rendering is to flatten individual nodes in the source's nested structure into string collections in the search index.
+
+To accomplish this task, you'll need an `outputFieldMapping` that maps an in-memory node to a string collection in the index. Although output field mappings primarily apply to skill outputs, you can also use them to address nodes after ["document cracking"](search-indexer-overview.md#stage-1-document-cracking) where the indexer opens a source document and reads it into memory.
+
+Below is a sample index definition in Cognitive Search, using string collections to receive flattened output:
```json
- "diseases" : ["heart failure","morquio"]
+{
+ "name": "my-new-flattened-index",
+ "defaultScoringProfile": "",
+ "fields": [
+ { "name": "id", "type": "Edm.String", "searchable": false, "retrievable": true, "key": true },
+ { "name": "palette", "type": "Edm.String", "searchable": true, "retrievable": true },
+ { "name": "color_names", "type": "Collection(Edm.String)", "searchable": true, "retrievable": true },
+ { "name": "color_mediums", "type": "Collection(Edm.String)", "searchable": true, "retrievable": true}
+ ]
+}
```
-## See also
+Here's the sample indexer definition, using `outputFieldMappings` to associate the nested JSON with the string collection fields. Notice that the source field uses the path syntax for enrichment nodes, even though there's no skillset. Enriched documents are created in the system during document cracking, which means you can access nodes in each document tree as long as those nodes exist when the document is cracked.
+
+```json
+{
+ "name": "my-test-indexer",
+ "dataSourceName": "my-test-ds",
+ "skillsetName": null,
+ "targetIndexName": "my-new-flattened-index",
+ "parameters": { },
+ "fieldMappings": [ ],
+ "outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/colors/*/name",
+ "targetFieldName": "color_names"
+ },
+ {
+ "sourceFieldName": "/document/colors/*/medium",
+ "targetFieldName": "color_mediums"
+ }
+ ]
+}
+```
+
+Results from the above definition are as follows. Simplifying the structure loses context in this case. There are no longer any associations between a given color and the mediums it's available in. However, depending on your scenario, a result similar to the one shown below might be exactly what you need.
+
+```json
+{
+ "value": [
+ {
+ "@search.score": 1,
+ "id": "240a98f5-90c9-406b-a8c8-f50ff86f116c",
+ "palette": "primary colors",
+ "color_names": [
+ "blue",
+ "red",
+ "yellow"
+ ],
+ "color_mediums": [
+ "[\"acrylic\",\"oil\",\"pastel\"]",
+ "[\"acrylic\",\"pastel\",\"watercolor\"]",
+ "[\"acrylic\",\"watercolor\"]"
+ ]
+ }
+ ]
+}
+```
-* [Search indexes in Azure Cognitive Search](search-what-is-an-index.md).
+## See also
-* [Define field mappings in a search indexer](search-indexer-field-mappings.md).
++ [Define field mappings in a search indexer](search-indexer-field-mappings.md)++ [AI enrichment overview](cognitive-search-concept-intro.md)++ [Skillset overview](cognitive-search-working-with-skillsets.md)
search Cognitive Search Predefined Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-predefined-skills.md
Title: Built-in text and image processing during indexing
+ Title: Built-in skills
description: Data extraction, natural language, and image processing skills add semantics and structure to raw content in an Azure Cognitive Search enrichment pipeline.
search Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-rest.md
Previously updated : 01/27/2021 Last updated : 09/15/2022 # REST code samples for Azure Cognitive Search
REST is the definitive programming interface for Azure Cognitive Search, and all
REST samples are usually developed and tested on Postman, but you can use any client that supports HTTP calls:
-+ Start with [Quickstart: Create an Azure Cognitive Search index using REST APIs](search-get-started-rest.md) for help in formulating HTTP calls.
-+ Try [Visual Studio Code extension for Azure Cognitive Search](search-get-started-vs-code.md), currently in preview, if you work in Visual Studio Code.
++ [Use Postman](search-get-started-rest.md). This quickstart explains how to formulate the HTTP request from end-to-end.++ [Use the Visual Studio Code extension for Azure Cognitive Search](search-get-started-vs-code.md), currently in preview. This quickstart uses Azure integration and builds the requests internally, which means you can complete tasks more quickly. ## Doc samples
Code samples from the Cognitive Search team demonstrate features and workflows.
| [projections](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/projections) | Source code for [Define projections in a knowledge store](knowledge-store-projections-examples.md). This article explains how to specify the physical data structures in a knowledge store.| | [index-encrypted-blobs](https://github.com/Azure-Samples/azure-search-postman-samples/commit/f5ebb141f1ff98f571ab84ac59dcd6fd06a46718) | Source code for [How to index encrypted blobs using blob indexers and skillsets](search-howto-index-encrypted-blobs.md). This article shows how to index documents in Azure Blob Storage that have been previously encrypted using Azure Key Vault. |
-> [!Tip]
+> [!TIP]
> Try the [Samples browser](/samples/browse/?expanded=azure&languages=http&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language. ## Other samples
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Previously updated : 06/24/2022 Last updated : 09/15/2022 # Create an index in Azure Cognitive Search
The following screenshot highlights where **Add index** and **Import data** appe
### [**REST**](#tab/index-rest)
-[**Create Index (REST)**](/rest/api/searchservice/create-index) is used to create an index. Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as a search index client. Using either tool, you can connect to your search service and send requests:
+[**Create Index (REST API)**](/rest/api/searchservice/create-index) is used to create an index. Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as a search index client. Using either tool, you can connect to your search service and send requests.
+
+The following links show you how to set up the request:
+ [Create a search index using REST and Postman](search-get-started-rest.md) + [Get started with Visual Studio Code and Azure Cognitive Search](search-get-started-vs-code.md)
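If you just want to see the shape of the request, here's a minimal sketch. The service name, admin key, index name, and fields are placeholders, not a prescribed schema.

```http
POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotels-quickstart",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "hotelName", "type": "Edm.String", "searchable": true, "sortable": true },
    { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" }
  ]
}
```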
search Search Howto Index Plaintext Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-plaintext-blobs.md
Previously updated : 02/01/2021 Last updated : 09/13/2022 # How to index plain text blobs and files in Azure Cognitive Search
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-monitor-indexers.md
Previously updated : 01/28/2021 Last updated : 09/15/2022 # Monitor indexer status and results in Azure Cognitive Search
You can monitor indexer processing in the Azure portal, or programmatically thro
## Monitor using Azure portal
-You can see the current status of all of your indexers in your search service Overview page. Portal pages refresh every few minutes, so you won't see evidence of a new indexer run right away.
+You can see the current status of all of your indexers in your search service Overview page. Portal pages refresh every few minutes, so you won't see evidence of a new indexer run right away. Select **Refresh** at the top of the page to immediately retrieve the most recent view.
![Indexers list](media/search-monitor-indexers/indexers-list.png "Indexers list") | Status | Description | |--|-|
-| **In Progress** | Indicates active execution. The portal will report on partial information. As indexing progresses, you can watch the **Docs Succeeded** value grow in response. Indexers that process large volumes of data can take a long time to run. For example, indexers that handle millions of source documents can run for 24 hours, and then restart almost immediately. The status for high-volume indexers might always say **In Progress** in the portal. Even when an indexer is running, details are available about ongoing progress and previous runs. |
+| **In Progress** | Indicates active execution. The portal will report on partial information. As indexing progresses, you can watch the **Docs Succeeded** value grow in response. Indexers that process large volumes of data can take a long time to run. For example, indexers that handle millions of source documents can run for 24 hours, and then restart almost immediately to pick up where they left off. As such, the status for high-volume indexers might always say **In Progress** in the portal. Even when an indexer is running, details are available about ongoing progress and previous runs. |
| **Success** | Indicates the run was successful. An indexer run can be successful even if individual documents have errors, if the number of errors is less than the indexer's **Max failed items** setting. | | **Failed** | The number of errors exceeded **Max failed items** and indexing has stopped. | | **Reset** | The indexer's internal change tracking state was reset. The indexer will run in full, refreshing all documents, and not just those with newer timestamps. |
-You can click on an indexer in the list to see more details about the indexer's current and recent runs.
+You can select an indexer in the list to see more details about the indexer's current and recent runs.
![Indexer summary and execution history](media/search-monitor-indexers/indexer-summary.png "Indexer summary and execution history") The **Indexer summary** chart displays a graph of the number of documents processed in its most recent runs.
-The **Execution details** list shows up to 50 of the most recent execution results. Click on an execution result in the list to see specifics about that run. This includes its start and end times, and any errors and warnings that occurred.
+The **Execution details** list shows up to 50 of the most recent execution results. Select an execution result in the list to see specifics about that run. This includes its start and end times, and any errors and warnings that occurred.
![Indexer execution details](media/search-monitor-indexers/indexer-execution.png "Indexer execution details")
-If there were document-specific problems during the run, they will be listed in the Errors and Warnings fields.
+If there were document-specific problems during the run, they'll be listed in the Errors and Warnings fields.
![Indexer details with errors](media/search-monitor-indexers/indexer-execution-error.png "Indexer details with errors")
-Warnings are common with some types of indexers, and do not always indicate a problem. For example indexers that use Cognitive Services can report warnings when image or PDF files don't contain any text to process.
+Warnings are common with some types of indexers, and don't always indicate a problem. For example, indexers that use Cognitive Services can report warnings when image or PDF files don't contain any text to process.
For more information about investigating indexer errors and warnings, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md).
Metric views can be filtered or split up by a set of predefined dimensions.
| Metric Name | Description | Dimensions | Sample use cases | ||||| | Document processed count | Shows the number of indexer processed documents. | Data source name, failed, index name, indexer name, skillset name | <br> - Can be referenced as a rough measure of throughput (number of documents processed by indexer over time) <br> - Set up to alert on failed documents |
-| Skill execution invocation count | Shows the number of skill invocations. | Data source name, failed, index name, indexer name, skill name, skill type, skillset name | <br> - Reference to ensure skills are invoked as expected by comparing relative invocation numbers between skills and number of skill invocation to the number of documents. <br> - Set up to alert on failed skill invocations |
+| Skill execution invocation count | Shows the number of skill invocations. | Data source name, failed, index name, indexer name, skill name, skill type, skillset name | <br> - Reference to ensure skills are invoked as expected by comparing relative invocation numbers between skills and number of skill invocations to the number of documents. <br> - Set up to alert on failed skill invocations |
The screenshot below shows the number of documents processed by indexers within a service over an hour, split up by indexer name. ![Indexer documents processed metric](media/search-monitor-indexers/indexers-documents-processed-metric.png "Indexer documents processed metric")
-You can also configure the graph to see the number of skill invocation over the same hour interval.
+You can also configure the graph to see the number of skill invocations over the same hour interval.
![Indexer skills invoked metric](media/search-monitor-indexers/indexers-skill-invocation-metric.png "Indexer skill invocation metric")
The response contains overall indexer status, the last (or in-progress) indexer
Execution history contains up to the 50 most recent runs, which are sorted in reverse chronological order (most recent first).
-Note there are two different status values. The top level status is for the indexer itself. A indexer status of **running** means the indexer is set up correctly and available to run, but not that it's currently running.
+Note there are two different status values. The top level status is for the indexer itself. An indexer status of **running** means the indexer is set up correctly and available to run, but not that it's currently running.
Each run of the indexer also has its own status that indicates whether that specific execution is ongoing (**running**), or already completed with a **success**, **transientFailure**, or **persistentFailure** status. When an indexer is reset to refresh its change tracking state, a separate execution history entry is added with a **Reset** status.
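As a sketch, a status request and an abridged response might look like the following. The indexer name and result values are illustrative, and the execution history is omitted.

```http
GET https://[service name].search.windows.net/indexers/myindexer/status?api-version=2020-06-30
api-key: [admin key]
```

```json
{
  "status": "running",
  "lastResult": {
    "status": "success",
    "errorMessage": null,
    "startTime": "2022-09-15T01:00:00Z",
    "endTime": "2022-09-15T01:02:17Z",
    "errors": [],
    "warnings": [],
    "itemsProcessed": 100,
    "itemsFailed": 0
  },
  "executionHistory": []
}
```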
-For more details about status codes and indexer monitoring data, see [Get Indexer Status](/rest/api/searchservice/get-indexer-status).
+For more information about status codes and indexer monitoring data, see [Get Indexer Status](/rest/api/searchservice/get-indexer-status).
## Monitor using .NET
Latest run
Document Errors: 0, Warnings: 0 ```
-Note there are two different status values. The top-level status is the status of the indexer itself. A indexer status of **Running** means that the indexer is set up correctly and available for execution, but not that it is currently executing.
+Note there are two different status values. The top-level status is the status of the indexer itself. An indexer status of **Running** means that the indexer is set up correctly and available for execution, but not that it's currently executing.
Each run of the indexer also has its own status for whether that specific execution is ongoing (**Running**), or was already completed with a **Success** or **TransientError** status.
When an indexer is reset to refresh its change tracking state, a separate histor
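To retrieve the same information in code, a minimal sketch using the Azure.Search.Documents library might look like the following. The endpoint, key, and indexer name are placeholders.

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Connect with an admin API key and request the status of a named indexer.
var indexerClient = new SearchIndexerClient(
    new Uri("https://[service name].search.windows.net"),
    new AzureKeyCredential("[admin key]"));

SearchIndexerStatus status = indexerClient.GetIndexerStatus("myindexer").Value;

// The top-level status describes the indexer itself; LastResult describes the most recent run.
Console.WriteLine($"Indexer status: {status.Status}");
Console.WriteLine($"Latest run: {status.LastResult?.Status}");
Console.WriteLine($"Document Errors: {status.LastResult?.Errors.Count}, Warnings: {status.LastResult?.Warnings.Count}");
```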
## Next steps
-For more details about status codes and indexer monitoring information, refer to the following API reference:
+For more information about status codes and indexer monitoring information, see the following API reference:
* [GetIndexerStatus (REST API)](/rest/api/searchservice/get-indexer-status) * [IndexerStatus](/dotnet/api/azure.search.documents.indexes.models.indexerstatus)
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-field-mappings.md
Title: Field mappings in indexers
+ Title: Map fields in indexers
description: Configure field mappings in an indexer to account for differences in field names and data representations.
Previously updated : 06/17/2022 Last updated : 09/14/2022 # Field mappings and transformations using Azure Cognitive Search indexers ![Indexer Stages](./media/search-indexer-field-mappings/indexer-stages-field-mappings.png "indexer stages")
-When using an Azure Cognitive Search indexer to push content into a search index, the indexer automatically assigns the source-to-destination field mappings. Implicit field mappings occur when field names and data types are compatible. If inputs and outputs don't match, you can define explicit *field mappings* to set up the data path, as described in this article.
+When an [Azure Cognitive Search indexer](search-indexer-overview.md) loads a search index, it determines the data path through source-to-destination field mappings. Implicit field mappings are internal and occur when field names and data types are compatible between the source and destination.
-Field mappings also provide light-weight data conversion through mapping functions. If more processing is required, consider [Azure Data Factory](../data-factory/index.yml) to bridge the gap.
+If inputs and outputs don't match, you can define explicit *field mappings* to set up the data path, as described in this article. Field mappings can also be used to introduce light-weight data conversion, such as encoding or decoding, through [mapping functions](#mappingFunctions). If more processing is required, consider [Azure Data Factory](../data-factory/index.yml) to bridge the gap.
-## Scenarios and limitations
+Field mappings apply to:
-Field mappings enable the following scenarios:
++ Physical data structures on both sides of the data stream (between a [supported data source](search-indexer-overview.md#supported-data-sources) and a [search index](search-what-is-an-index.md)). If you're importing skill-enriched content that resides in memory, use [outputFieldMappings](cognitive-search-output-field-mapping.md) instead.
-+ Rename fields or handle name discrepancies. Suppose your data source has a field named `_city`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.
++ Search indexes only. If you're populating a [knowledge store](knowledge-store-concept-intro.md), use [projections](knowledge-store-projections-examples.md) for data path configuration.
-+ Data type discrepancies. Cognitive Search has a smaller set of [supported data types](/rest/api/searchservice/supported-data-types) than many data sources. If you're importing SQL data, a field mapping allows you to [map the SQL data type](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types) you want in a search index.
++ Top-level search fields only, where the "targetFieldName" is either a simple field or a collection. A target field can't be a complex type.
-+ One-to-many data paths. You can populate multiple fields in the index with content from the same field. For example, you might want to apply different analyzers to each field.
-
-+ Multiple data sources with different field names where you want to populate a search field with documents from more than one data source. If the field names vary between the data sources, you can use a field mapping to clarify the path.
-
-+ Base64 encoding or decoding of data. Field mappings support several [**mapping functions**](#mappingFunctions), including functions for Base64 encoding and decoding.
-
-+ Splitting strings or recasting a JSON array into a string collection. [Field mapping functions](#mappingFunctions) provide this capability.
-
-### Limitations
+> [!NOTE]
+> If you're working with complex data (nested or hierarchical structures), and you'd like to mirror that data structure in your search index, your search index must match the source structure exactly (same field names, levels, and types) so that the default mappings will work. Optionally, you might want just a few nodes in the complex structure. To get individual nodes, you can flatten incoming data into a string collection (see [outputFieldMappings](cognitive-search-output-field-mapping.md#flatten-complex-structures-into-a-string-collection) for this workaround).
-Before you start mapping fields, make sure the following limitations won't block you:
+## Supported scenarios
-+ The "targetFieldName" must be set to a single field name, either a simple field or a collection. You can't define a field path to a subfield in a complex field (such as `address/city`) at this time. A workaround is to add a skillset and use a [Shaper skill](cognitive-search-skill-shaper.md).
+| Use-case | Description |
+|-|-|
+| Name discrepancy | Suppose your data source has a field named `_city`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively map "_city" to "city". </p>If your indexing requirements include retrieving content from multiple data sources, where field names vary among the sources, you could use a field mapping to clarify the path.|
+| Type discrepancy | Suppose you want a source integer field to be of type `Edm.String` so that it's searchable in the search index. Because the types are different, you'll need to define a field mapping in order for the data path to succeed. Note that Cognitive Search has a smaller set of [supported data types](/rest/api/searchservice/supported-data-types) than many data sources. If you're importing SQL data, a field mapping allows you to [map the SQL data type](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types) you want in a search index.|
+| One-to-many data paths | You can populate multiple fields in the index with content from the same source field. For example, you might want to apply different analyzers to each field to support different use cases in your client app.|
+| Encoding and decoding | You can apply [mapping functions](#mappingFunctions) to support Base64 encoding or decoding of data during indexing. |
+| Split strings or recast arrays into collections | You can apply [mapping functions](#mappingFunctions) to split a string that includes a delimiter, or to send a JSON array to a search field of type `Collection(Edm.String)`.
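
For instance, the last scenario might use a field mapping like the following sketch, where a source column that holds a JSON array (the `tags` name is illustrative) is sent to a `Collection(Edm.String)` field through the `jsonArrayToStringCollection` mapping function:

```json
"fieldMappings": [
  {
    "sourceFieldName": "tags",
    "targetFieldName": "tags",
    "mappingFunction": {
      "name": "jsonArrayToStringCollection"
    }
  }
]
```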
-+ Field mappings only work for search indexes. For indexers that also create [knowledge stores](knowledge-store-concept-intro.md), [data shapes](knowledge-store-projection-shape.md) and [projections](knowledge-store-projections-examples.md) determine field associations, and any field mappings and output field mappings in the indexer are ignored.
+## Define a field mapping
-## Set up field mappings
+Field mappings are added to the "fieldMappings" array of an indexer definition. A field mapping consists of three parts.
-Field mappings are added to the "fieldMappings" array of the indexer definition. A field mapping consists of three parts.
+```json
+"fieldMappings": [
+ {
+ "sourceFieldName": "_city",
+ "targetFieldName": "city",
+ "mappingFunction": null
+ }
+],
+```
| Property | Description | |-|-|
-| "sourceFieldName" | Required. Represents a field in your data source. |
-| "targetFieldName" | Optional. Represents a field in your search index. If omitted, the value of "sourceFieldName" is assumed for the target. |
-| "mappingFunction" | Optional. Consists of [predefined functions](#mappingFunctions) that transform data. You can apply functions to both source and target field mappings. |
+| sourceFieldName | Required. Represents a field in your data source. |
+| targetFieldName | Optional. Represents a field in your search index. If omitted, the value of "sourceFieldName" is assumed for the target. Target fields must be top-level simple fields or collections. A target field can't be a complex type or a collection of complex types. If you're handling a data type issue, a field's data type is specified in the index definition. The field mapping just needs to have the field's name.|
+| mappingFunction | Optional. Consists of [predefined functions](#mappingFunctions) that transform data. |
Azure Cognitive Search uses case-insensitive comparison to resolve the field and function names in field mappings. This is convenient (you don't have to get all the casing right), but it means that your data source or index can't have fields that differ only by case. > [!NOTE]
-> If no field mappings are present, indexers assume data source fields should be mapped to index fields with the same name. Adding a field mapping overrides these default field mappings for the source and target field. Some indexers, such as the [blob storage indexer](search-howto-indexing-azure-blob-storage.md), add default field mappings for the index key field.
+> If no field mappings are present, indexers assume data source fields should be mapped to index fields with the same name. Adding a field mapping overrides the default field mappings for the source and target field. Some indexers, such as the [blob storage indexer](search-howto-indexing-azure-blob-storage.md), add default field mappings for the index key field.
-You can use the portal, REST API, or an Azure SDK to define field mappings.
-
-### [**Azure portal**](#tab/portal)
-
-If you're using the [Import data wizard](search-import-data-portal.md), field mappings aren't supported because the wizard creates target search fields that mirror the origin source fields.
-
-In the portal, you can set field mappings in an indexer after the indexer already exists:
-
-1. Open the JSON definition of an existing indexer.
-
-1. Under the "fieldMappings" section, add the source and destination fields. Destination fields must exist in the search index and conform to [field naming conventions](/rest/api/searchservice/naming-rules). Refer to the REST API tab for more JSON syntax details.
-
-1. Save your changes.
-
-1. If the search field is empty, run the indexer to import data from the source field to the newly mapped search field. If the search field was previously populated, reset the indexer before running it to drop and add the content.
+You can use the REST API or an Azure SDK to define field mappings.
### [**REST APIs**](#tab/rest)
-Add field mappings when creating a new indexer using the [Create Indexer](/rest/api/searchservice/create-Indexer) API request. Manage the field mappings of an existing indexer using the [Update Indexer](/rest/api/searchservice/update-indexer) API request.
+Use [Create Indexer (REST)](/rest/api/searchservice/create-Indexer) or [Update Indexer (REST)](/rest/api/searchservice/update-indexer), any API version.
-This example maps a source field to a target field with a different name:
+This example handles a field name discrepancy.
```JSON PUT https://[service name].search.windows.net/indexers/myindexer?api-version=[api-version]
api-key: [admin key]
} ```
-A source field can be referenced in multiple field mappings. The following example shows how to "fork" a field, copying the same source field to two different index fields:
+This example maps a single source field to multiple target fields ("one-to-many" mappings). You can "fork" a field, copying the same source field content to two different index fields that will be analyzed or attributed differently in the index.
```JSON
await indexerClient.CreateOrUpdateIndexerAsync(indexer);
``` + <a name="mappingFunctions"></a>
-## Field mapping functions and examples
+## Mapping functions and examples
A field mapping function transforms the contents of a field before it's stored in the index. The following mapping functions are currently supported:
A document key (both before and after conversion) can't be longer than 1,024 characters.
#### Example: Make a base-encoded field "searchable"
-There are times when you need to use an encoded version of a field like "metadata_storage_path" as the key, but also need an un-encoded version for full text search. To support both scenarios, you can map "metadata_storage_path" to two fields: one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema.
+There are times when you need to use an encoded version of a field like "metadata_storage_path" as the key, but also need an unencoded version for full text search. To support both scenarios, you can map "metadata_storage_path" to two fields: one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema.
```http PUT /indexers/blob-indexer?api-version=2020-06-30
Your source data might contain Base64-encoded strings, such as blob metadata str
If you don't include a parameters property, it defaults to the value `{"useHttpServerUtilityUrlTokenEncode" : true}`.
-Azure Cognitive Search supports two different Base64 encodings. You should use the same parameters when encoding and decoding the same field. For more details, see [base64 encoding options](#base64details) to decide which parameters to use.
+Azure Cognitive Search supports two different Base64 encodings. You should use the same parameters when encoding and decoding the same field. For more information, see [base64 encoding options](#base64details) to decide which parameters to use.
<a name="base64details"></a>
If the `useHttpServerUtilityUrlTokenEncode` or `useHttpServerUtilityUrlTokenDeco
> [!WARNING] > If `base64Encode` is used to produce key values, `useHttpServerUtilityUrlTokenEncode` must be set to true. Only URL-safe base64 encoding can be used for key values. See [Naming rules](/rest/api/searchservice/naming-rules) for the full set of restrictions on characters in key values.
-The .NET libraries in Azure Cognitive Search assume the full .NET Framework, which provides built-in encoding. The `useHttpServerUtilityUrlTokenEncode` and `useHttpServerUtilityUrlTokenDecode` options leverage this built-in functionality. If you're using .NET Core or another framework, we recommend setting those options to `false` and calling your framework's encoding and decoding functions directly.
+The .NET libraries in Azure Cognitive Search assume the full .NET Framework, which provides built-in encoding. The `useHttpServerUtilityUrlTokenEncode` and `useHttpServerUtilityUrlTokenDecode` options apply this built-in functionality. If you're using .NET Core or another framework, we recommend setting those options to `false` and calling your framework's encoding and decoding functions directly.
The following table compares different base64 encodings of the string `00>00?00`. To determine the required processing (if any) for your base64 functions, apply your library encode function on the string `00>00?00` and compare the output with the expected output `MDA-MDA_MDA`.
-| Encoding | Base64 encode output | Additional processing after library encoding | Additional processing before library decoding |
+| Encoding | Base64 encode output | Extra processing after library encoding | Extra processing before library decoding |
| | | | | | Base64 with padding | `MDA+MDA/MDA=` | Use URL-safe characters and remove padding | Use standard base64 characters and add padding | | Base64 without padding | `MDA+MDA/MDA` | Use URL-safe characters | Use standard base64 characters |
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-modeling-multitenant-saas-applications.md
Previously updated : 04/06/2021 Last updated : 09/15/2022 # Design patterns for multitenant SaaS applications and Azure Cognitive Search
-A multitenant application is one that provides the same services and capabilities to any number of tenants who cannot see or share the data of any other tenant. This document discusses tenant isolation strategies for multitenant applications built with Azure Cognitive Search.
+A multitenant application is one that provides the same services and capabilities to any number of tenants who can't see or share the data of any other tenant. This article discusses tenant isolation strategies for multitenant applications built with Azure Cognitive Search.
## Azure Cognitive Search concepts
As a search-as-a-service solution, [Azure Cognitive Search](search-what-is-azure
### Search services, indexes, fields, and documents
-Before discussing design patterns, it is important to understand a few basic concepts.
+Before discussing design patterns, it's important to understand a few basic concepts.
-When using Azure Cognitive Search, one subscribes to a *search service*. As data is uploaded to Azure Cognitive Search, it is stored in an *index* within the search service. There can be a number of indexes within a single service. To use the familiar concepts of databases, the search service can be likened to a database while the indexes within a service can be likened to tables within a database.
+When using Azure Cognitive Search, one subscribes to a *search service*. As data is uploaded to Azure Cognitive Search, it's stored in an *index* within the search service. There can be a number of indexes within a single service. To use the familiar concepts of databases, the search service can be likened to a database while the indexes within a service can be likened to tables within a database.
Each index within a search service has its own schema, which is defined by a number of customizable *fields*. Data is added to an Azure Cognitive Search index in the form of individual *documents*. Each document must be uploaded to a particular index and must fit that index's schema. When searching data using Azure Cognitive Search, the full-text search queries are issued against a particular index. To compare these concepts to those of a database, fields can be likened to columns in a table and documents can be likened to rows.
There are a few different [pricing tiers](https://azure.microsoft.com/pricing/de
#### S3 High Density
-In Azure Cognitive SearchΓÇÖs S3 pricing tier, there is an option for the High Density (HD) mode designed specifically for multitenant scenarios. In many cases, it is necessary to support a large number of smaller tenants under a single service to achieve the benefits of simplicity and cost efficiency.
+In Azure Cognitive Search's S3 pricing tier, there's an option for the High Density (HD) mode designed specifically for multitenant scenarios. In many cases, it's necessary to support a large number of smaller tenants under a single service to achieve the benefits of simplicity and cost efficiency.
S3 HD allows for the many small indexes to be packed under the management of a single search service by trading the ability to scale out indexes using partitions for the ability to host more indexes in a single service.
-An S3 service is designed to host a fixed number of indexes (maximum 200) and allow each index to scale in size horizontally as new partitions are added to the service. Adding partitions to S3 HD services increases the maximum number of indexes that the service can host. The ideal maximum size for an individual S3HD index is around 50 - 80 GB, although there is no hard size limit on each index imposed by the system.
+An S3 service is designed to host a fixed number of indexes (maximum 200) and allow each index to scale in size horizontally as new partitions are added to the service. Adding partitions to S3 HD services increases the maximum number of indexes that the service can host. The ideal maximum size for an individual S3HD index is around 50 - 80 GB, although there's no hard size limit on each index imposed by the system.
## Considerations for multitenant applications
Multitenant applications must effectively distribute resources among the tenants
+ *Ease of Operations:* When developing a multitenant architecture, the impact on the application's operations and complexity is an important consideration. Azure Cognitive Search has a [99.9% SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
++ *Global footprint:* Multitenant applications may need to effectively serve tenants that are distributed across the globe. + *Scalability:* Application developers need to consider how they reconcile between maintaining a sufficiently low level of application complexity and designing the application to scale with the number of tenants and the size of tenants' data and workload.
++ *Global footprint:* Multitenant applications may need to effectively serve tenants, which are distributed across the globe. + *Scalability:* Application developers need to consider how they reconcile between maintaining a sufficiently low level of application complexity and designing the application to scale with number of tenants and the size of tenants' data and workload.
Azure Cognitive Search offers a few boundaries that can be used to isolate tenan
## Modeling multitenancy with Azure Cognitive Search
-In the case of a multitenant scenario, the application developer consumes one or more search services and divide their tenants among services, indexes, or both. Azure Cognitive Search has a few common patterns when modeling a multitenant scenario:
+In the case of a multitenant scenario, the application developer consumes one or more search services and divides their tenants among services, indexes, or both. Azure Cognitive Search has a few common patterns when modeling a multitenant scenario:
+ *One index per tenant:* Each tenant has its own index within a search service that is shared with other tenants.
In the case of a multitenant scenario, the application developer consumes one or
In an index-per-tenant model, multiple tenants occupy a single Azure Cognitive Search service where each tenant has their own index.
-Tenants achieve data isolation because all search requests and document operations are issued at an index level in Azure Cognitive Search. In the application layer, there is the need awareness to direct the various tenants' traffic to the proper indexes while also managing resources at the service level across all tenants.
+Tenants achieve data isolation because all search requests and document operations are issued at an index level in Azure Cognitive Search. In the application layer, you need awareness to direct each tenant's traffic to the proper index, while also managing resources at the service level across all tenants.
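As a purely hypothetical sketch of that routing logic, the application might derive an index name from a tenant identifier and query it through the Azure.Search.Documents SDK. The naming convention, endpoint, and key below are illustrative only.

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Map the tenant to its dedicated index within the shared search service.
string tenantId = "contoso";
string indexName = $"tenant-{tenantId}-index";

var searchClient = new SearchClient(
    new Uri("https://[service name].search.windows.net"),
    indexName,
    new AzureKeyCredential("[query key]"));

// Queries issued through this client only ever touch the tenant's own index.
var options = new SearchOptions { IncludeTotalCount = true };
Response<SearchResults<SearchDocument>> response =
    searchClient.Search<SearchDocument>("*", options);

Console.WriteLine($"Documents visible to '{tenantId}': {response.Value.TotalCount}");
```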
A key attribute of the index-per-tenant model is the ability for the application developer to oversubscribe the capacity of a search service among the application's tenants. If the tenants have an uneven distribution of workload, the optimal combination of tenants can be distributed across a search service's indexes to accommodate a number of highly active, resource-intensive tenants while simultaneously serving a long tail of less active tenants. The trade-off is the inability of the model to handle situations where each tenant is concurrently highly active. The index-per-tenant model provides the basis for a variable cost model, where an entire Azure Cognitive Search service is bought up-front and then subsequently filled with tenants. This allows for unused capacity to be designated for trials and free accounts.
-For applications with a global footprint, the index-per-tenant model may not be the most efficient. If an application's tenants are distributed across the globe, a separate service may be necessary for each region which may duplicate costs across each of them.
+For applications with a global footprint, the index-per-tenant model may not be the most efficient. If an application's tenants are distributed across the globe, a separate service may be necessary for each region, which may duplicate costs across each of them.
Azure Cognitive Search allows for the scale of both the individual indexes and the total number of indexes to grow. If an appropriate pricing tier is chosen, partitions and replicas can be added to the entire search service when an individual index within the service grows too large in terms of storage or traffic.
-If the total number of indexes grows too large for a single service, another service has to be provisioned to accommodate the new tenants. If indexes have to be moved between search services as new services are added, the data from the index has to be manually copied from one index to the other as Azure Cognitive Search does not allow for an index to be moved.
+If the total number of indexes grows too large for a single service, another service has to be provisioned to accommodate the new tenants. If indexes have to be moved between search services as new services are added, the data from the index has to be manually copied from one index to the other as Azure Cognitive Search doesn't allow for an index to be moved.
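To make the pattern concrete, here's a minimal sketch of provisioning a per-tenant index with the `azure-search-documents` Python SDK; the endpoint, key, field definitions, and the `tenant-<id>` naming convention are illustrative assumptions rather than part of the original guidance.

```python
# Hypothetical sketch: create a dedicated index for a new tenant in a shared
# Azure Cognitive Search service using the azure-search-documents Python SDK.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex,
    SimpleField,
    SearchableField,
    SearchFieldDataType,
)

endpoint = "https://<your-search-service>.search.windows.net"  # assumed placeholder
admin_key = "<admin-api-key>"                                   # assumed placeholder

index_client = SearchIndexClient(endpoint, AzureKeyCredential(admin_key))

def create_tenant_index(tenant_id: str) -> None:
    """Provision an index named after the tenant; its documents stay isolated there."""
    index = SearchIndex(
        name=f"tenant-{tenant_id}",
        fields=[
            SimpleField(name="id", type=SearchFieldDataType.String, key=True),
            SearchableField(name="content", type=SearchFieldDataType.String),
        ],
    )
    index_client.create_index(index)

create_tenant_index("contoso")
```

Because the index name carries the tenant identity in this sketch, the application layer can route each tenant's traffic by name alone.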
## Model 2: One service per tenant
If the total number of indexes grows too large for a single service, another ser
In a service-per-tenant architecture, each tenant has its own search service.
-In this model, the application achieves the maximum level of isolation for its tenants. Each service has dedicated storage and throughput for handling search request as well as separate API keys.
+In this model, the application achieves the maximum level of isolation for its tenants. Each service has dedicated storage and throughput for handling search requests. Each tenant has individual ownership of API keys.
-For applications where each tenant has a large footprint or the workload has little variability from tenant to tenant, the service-per-tenant model is an effective choice as resources are not shared across various tenantsΓÇÖ workloads.
+For applications where each tenant has a large footprint or the workload has little variability from tenant to tenant, the service-per-tenant model is an effective choice as resources aren't shared across various tenants’ workloads.
-A service per tenant model also offers the benefit of a predictable, fixed cost model. There is no up-front investment in an entire search service until there is a tenant to fill it, however the cost-per-tenant is higher than an index-per-tenant model.
+A service-per-tenant model also offers the benefit of a predictable, fixed cost model. There's no up-front investment in an entire search service until there's a tenant to fill it; however, the cost per tenant is higher than in an index-per-tenant model.
-The service-per-tenant model is an efficient choice for applications with a global footprint. With geographically-distributed tenants, it is easy to have each tenant's service in the appropriate region.
+The service-per-tenant model is an efficient choice for applications with a global footprint. With geographically distributed tenants, it's easy to have each tenant's service in the appropriate region.
-The challenges in scaling this pattern arise when individual tenants outgrow their service. Azure Cognitive Search does not currently support upgrading the pricing tier of a search service, so all data would have to be manually copied to a new service.
+The challenges in scaling this pattern arise when individual tenants outgrow their service. Azure Cognitive Search doesn't currently support upgrading the pricing tier of a search service, so all data would have to be manually copied to a new service.
## Model 3: Hybrid
Another pattern for modeling multitenancy is mixing both index-per-tenant and se
By mixing the two patterns, an application's largest tenants can occupy dedicated services while the long tail of less active, smaller tenants can occupy indexes in a shared service. This model ensures that the largest tenants have consistently high performance from the service while helping to protect the smaller tenants from any noisy neighbors.
-However, implementing this strategy relies foresight in predicting which tenants will require a dedicated service versus an index in a shared service. Application complexity increases with the need to manage both of these multitenancy models.
+However, implementing this strategy relies on foresight in predicting which tenants will require a dedicated service versus an index in a shared service. Application complexity increases with the need to manage both of these multitenancy models.
## Achieving even finer granularity
The above design patterns to model multitenant scenarios in Azure Cognitive Search assume a uniform scope where each tenant is a whole instance of an application. However, applications can sometimes handle many smaller scopes.
-If service-per-tenant and index-per-tenant models are not sufficiently small scopes, it is possible to model an index to achieve an even finer degree of granularity.
+If service-per-tenant and index-per-tenant models aren't sufficiently small scopes, it's possible to model an index to achieve an even finer degree of granularity.
-To have a single index behave differently for different client endpoints, a field can be added to an index which designates a certain value for each possible client. Each time a client calls Azure Cognitive Search to query or modify an index, the code from the client application specifies the appropriate value for that field using Azure Cognitive Search's [filter](./query-odata-filter-orderby-syntax.md) capability at query time.
+To have a single index behave differently for different client endpoints, a field that designates a certain value for each possible client can be added to the index. Each time a client calls Azure Cognitive Search to query or modify an index, the code from the client application specifies the appropriate value for that field using Azure Cognitive Search's [filter](./query-odata-filter-orderby-syntax.md) capability at query time.
This method can be used to achieve functionality of separate user accounts, separate permission levels, and even completely separate applications.
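As a hedged sketch of that filter-based approach, the following Python example (using the `azure-search-documents` SDK) scopes queries in a shared index to a single client; the `tenant_id` field name, index name, and credentials are assumptions, and the field must be marked filterable in the index definition.

```python
# Hypothetical sketch: restrict queries over a shared index to one tenant by
# filtering on an assumed filterable "tenant_id" field.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://<your-search-service>.search.windows.net"  # assumed placeholder
query_key = "<query-api-key>"                                   # assumed placeholder

search_client = SearchClient(endpoint, "shared-index", AzureKeyCredential(query_key))

def search_for_tenant(tenant_id: str, text: str):
    # The OData filter limits results to documents tagged with this tenant's value.
    return search_client.search(
        search_text=text,
        filter=f"tenant_id eq '{tenant_id}'",
    )

for result in search_for_tenant("contoso", "annual report"):
    print(result["id"])
```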
search Search Monitor Logs Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-monitor-logs-powerbi.md
Previously updated : 04/07/2021 Last updated : 09/15/2022 # Visualize Azure Cognitive Search Logs and Metrics with Power BI
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Previously updated : 06/17/2021 Last updated : 09/13/2022 # Semantic ranking in Azure Cognitive Search
Each document is now represented by a single long string.
> [!NOTE] > In the 2020-06-30-preview, the "searchFields" parameter is used rather than the semantic configuration to determine which fields to use. We recommend upgrading to the 2021-04-30-preview API version for best results.
-The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens is roughly equivalent to a string that is 128 words in length.
+The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length.
> [!NOTE]
->Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from searchFields. For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
+> Tokenization is determined in part by the analyzer assignment on searchable fields. If you're using a specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from searchFields. For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
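If it helps to see what an analyzer actually emits before excluding a field, the following sketch uses the Python SDK's `analyze_text` method (the SDK counterpart of the Test Analyzer REST API); the index name, sample text, and analyzer are illustrative assumptions.

```python
# Hypothetical sketch: inspect how an analyzer tokenizes a string before
# deciding whether a field should be excluded from searchFields.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import AnalyzeTextOptions

endpoint = "https://<your-search-service>.search.windows.net"  # assumed placeholder
admin_key = "<admin-api-key>"                                   # assumed placeholder

index_client = SearchIndexClient(endpoint, AzureKeyCredential(admin_key))

result = index_client.analyze_text(
    "hotels-sample-index",  # assumed index name
    AnalyzeTextOptions(
        text="Historic downtown hotel near the waterfront",
        analyzer_name="en.lucene",
    ),
)

# Print each token the analyzer produced for the sample text.
for token_info in result.tokens:
    print(token_info.token)
```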
## Extraction
A [semantic answer](semantic-answers.md) will also be returned if you specified
1. The @search.rerankerScore is assigned to each document based on the semantic relevance of the caption.
-1. After all documents are scored, they are listed in descending order by score and included in the query response payload. The payload includes answers, plain text and highlighted captions, and any fields that you marked as retrievable or specified in a select clause.
+1. After all documents are scored, they're listed in descending order by score and included in the query response payload. The payload includes answers, plain text and highlighted captions, and any fields that you marked as retrievable or specified in a select clause.
## Next steps
search Service Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-create-private-endpoint.md
In this section, you'll create a new Azure Cognitive Search service with a Priva
| VM architecture | Accept the default **x64**. | | Size | Accept the default **Standard D2S v3**. | | **ADMINISTRATOR ACCOUNT** | |
- | Username | Enter the user name of the administrator. Use an account that's valid for your Azure subscription so that you can sign in to Azure portal from the VM. |
+ | Username | Enter the user name of the administrator. Use an account that's valid for your Azure subscription. You'll want to sign in to the Azure portal from the VM so that you can manage your search service. |
| Password | Enter the account password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| | Confirm Password | Reenter password. | | **INBOUND PORT RULES** | |
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
The following built-in network session related content is supported for ASIM nor
### Analytics rules -- [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Log4J_IPIOC_Dec112021.yaml)
+- [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Apache%20Log4j%20Vulnerability%20Detection/Analytic%20Rules/Log4J_IPIOC_Dec112021.yaml)
- [Excessive number of failed connections from a single source (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/ExcessiveHTTPFailuresFromSource.yaml) - [Potential beaconing activity (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PossibleBeaconingActivity.yaml) - [(Preview) TI map IP entity to Network Session Events (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/IPEntity_imNetworkSession.yaml)
The following built-in web session related content is supported for ASIM normali
- [NOBELIUM - Domain and IP IOCs - March 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_DomainIOCsMarch2021.yaml) - [NOBELIUM - Domain, Hash, and IP IOCs - May 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_IOCsMay2021.yaml) - [Known Phosphorus group domains/IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PHOSPHORUSMarch2019IOCs.yaml)-- [User agent search for log4j exploitation attempt](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/UserAgentSearch_log4j.yaml)
+- [User agent search for log4j exploitation attempt](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Apache%20Log4j%20Vulnerability%20Detection/Analytic%20Rules/UserAgentSearch_log4j.yaml)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## September 2022
+- [Heads up: Name fields being removed from UEBA UserPeerAnalytics table](#heads-up-name-fields-being-removed-from-ueba-userpeeranalytics-table)
+- [Windows DNS Events via AMA connector (Preview)](#windows-dns-events-via-ama-connector-preview)
- [Create and delete incidents manually (Preview)](#create-and-delete-incidents-manually-preview) - [Add entities to threat intelligence (Preview)](#add-entities-to-threat-intelligence-preview)-- [Windows DNS Events via AMA connector (Preview)](#windows-dns-events-via-ama-connector-preview)+
+### Heads up: Name fields being removed from UEBA UserPeerAnalytics table
+
+As of **September 30, 2022**, the UEBA engine will no longer perform automatic lookups of user IDs and resolve them into names. This change will result in the removal of four name fields from the *UserPeerAnalytics* table:
+
+- UserName
+- UserPrincipalName
+- PeerUserName
+- PeerUserPrincipalName
+
+The corresponding ID fields remain part of the table, and any built-in queries and other operations will execute the appropriate name lookups in other ways (using the IdentityInfo table), so in nearly all circumstances you shouldn't be affected by this change.
+
+The only exception is if you've built custom queries or rules that directly reference any of these name fields. In this scenario, you can incorporate the following lookup query into your own, so you can access the values that would have been in these name fields.
+
+The following query resolves **user** and **peer identifier fields**:
+
+```kusto
+UserPeerAnalytics
+| where TimeGenerated > ago(24h)
+// join to resolve user identifier fields
+| join kind=inner (
+ IdentityInfo
+ | where TimeGenerated > ago(14d)
+ | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
+ | extend UserPrincipalNameIdentityInfo = AccountUPN
+ | extend UserNameIdentityInfo = AccountDisplayName
+ | project AccountTenantId, AccountObjectId, UserPrincipalNameIdentityInfo, UserNameIdentityInfo
+) on $left.AADTenantId == $right.AccountTenantId, $left.UserId == $right.AccountObjectId
+// join to resolve peer identifier fields
+| join kind=inner (
+ IdentityInfo
+ | where TimeGenerated > ago(14d)
+ | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
+ | extend PeerUserPrincipalNameIdentityInfo = AccountUPN
+ | extend PeerUserNameIdentityInfo = AccountDisplayName
+ | project AccountTenantId, AccountObjectId, PeerUserPrincipalNameIdentityInfo, PeerUserNameIdentityInfo
+) on $left.AADTenantId == $right.AccountTenantId, $left.PeerUserId == $right.AccountObjectId
+```
+If your original query referenced the user or peer names (not just their IDs), substitute this query in its entirety for the table name (“UserPeerAnalytics”) in your original query.
+
+### Windows DNS Events via AMA connector (Preview)
+
+You can now use the new [Windows DNS Events via AMA connector](connect-dns-ama.md) to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLog` normalized schema table. You can then dive into your data to protect your DNS servers from threats and attacks.
+
+The Azure Monitor Agent (AMA) and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace.
+
+Here are some benefits of using AMA for DNS log collection:
+
+- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS). AMA handles up to 5000 events per second (EPS) compared to 2000 EPS with the existing agent.
+- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.
+- AMA supports transformation from the incoming stream into other data tables.
+- AMA supports basic and advanced filtering of the data. The data is filtered on the DNS server and before the data is uploaded, which saves time and resources.
### Create and delete incidents manually (Preview)
Microsoft Sentinel allows you to flag the entity as malicious, right from within
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-intelligence.md).
-### Windows DNS Events via AMA connector (Preview)
-
-You can now use the new [Windows DNS Events via AMA connector](connect-dns-ama.md) to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLog` normalized schema table. You can then dive into your data to protect your DNS servers from threats and attacks.
-
-The Azure Monitor Agent (AMA) and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace.
-
-Here are some benefits of using AMA for DNS log collection:
--- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS). AMA handles up to 5000 events per second (EPS) compared to 2000 EPS with the existing agent.-- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.-- AMA supports transformation from the incoming stream into other data tables.-- AMA supports basic and advanced filtering of the data. The data is filtered on the DNS server and before the data is uploaded, which saves time and resources.- ## August 2022
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md
Title: Data protection overview
-description: The data protection options available for your for Blob Storage and Azure Data Lake Storage Gen2 data enable you to protect your data from being deleted or overwritten. If you should need to recover data that has been deleted or overwritten, this guide can help you to choose the recovery option that's best for your scenario.
+description: The data protection options available for your Blob Storage and Azure Data Lake Storage Gen2 data enable you to protect your data from being deleted or overwritten. If you should need to recover data that has been deleted or overwritten, this guide can help you to choose the recovery option that's best for your scenario.
Previously updated : 10/26/2021 Last updated : 09/14/2022
In the Azure Storage documentation, *data protection* refers to strategies for p
## Recommendations for basic data protection
-If you are looking for basic data protection coverage for your storage account and the data that it contains, then Microsoft recommends taking the following steps to begin with:
+If you're looking for basic data protection coverage for your storage account and the data that it contains, then Microsoft recommends taking the following steps to begin with:
- Configure an Azure Resource Manager lock on the storage account to protect the account from deletion or configuration changes. [Learn more...](../common/lock-account-resource.md) - Enable container soft delete for the storage account to recover a deleted container and its contents. [Learn more...](soft-delete-container-enable.md)
If you are looking for basic data protection coverage for your storage account a
- For Blob Storage workloads, enable blob versioning to automatically save the state of your data each time a blob is overwritten. [Learn more...](versioning-enable.md) - For Azure Data Lake Storage workloads, take manual snapshots to save the state of your data at a particular point in time. [Learn more...](snapshots-overview.md)
-These options, as well as additional data protection options for other scenarios, are described in more detail in the following section.
+These options, along with data protection options for other scenarios, are described in more detail in the following section.
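As a small illustration of one of these recommendations, the sketch below enables blob soft delete with a seven-day retention window using the `azure-storage-blob` Python SDK; the connection string is a placeholder, and the other protections (account lock, container soft delete, versioning) are configured through the portal or management APIs rather than this data-plane call.

```python
# Hypothetical sketch: enable blob soft delete with a 7-day retention window,
# one of the basic data protection recommendations listed above.
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service_client = BlobServiceClient.from_connection_string("<connection-string>")  # assumed placeholder

# Soft-deleted blobs remain recoverable for the retention period.
service_client.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=7)
)
```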
For an overview of the costs involved with these features, see [Summary of cost considerations](#summary-of-cost-considerations).
## Overview of data protection options
-The following table summarizes the options available in Azure Storage for common data protection scenarios. Choose the scenarios that are applicable to your situation to learn more about the options available to you. Note that not all features are available at this time for storage accounts with a hierarchical namespace enabled.
+The following table summarizes the options available in Azure Storage for common data protection scenarios. Choose the scenarios that are applicable to your situation to learn more about the options available to you. Not all features are available at this time for storage accounts with a hierarchical namespace enabled.
| Scenario | Data protection option | Recommendations | Protection benefit | Available for Data Lake Storage | |--|--|--|--|--|
-| Prevent a storage account from being deleted or modified. | Azure Resource Manager lock<br />[Learn more...](../common/lock-account-resource.md) | Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account. | Protects the storage account against deletion or configuration changes.<br /><br />Does not protect containers or blobs in the account from being deleted or overwritten. | Yes |
+| Prevent a storage account from being deleted or modified. | Azure Resource Manager lock<br />[Learn more...](../common/lock-account-resource.md) | Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account. | Protects the storage account against deletion or configuration changes.<br /><br />Doesn't protect containers or blobs in the account from being deleted or overwritten. | Yes |
| Prevent a blob version from being deleted for an interval that you control. | Immutability policy on a blob version<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on an individual blob version to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a blob version from being deleted and its metadata from being overwritten. An overwrite operation creates a new version.<br /><br />If at least one container has version-level immutability enabled, the storage account is also protected from deletion. Container deletion fails if at least one blob exists in the container. | No |
-| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion. | Yes, in preview |
-| Restore a deleted container within a specified interval. | Container soft delete<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (e.g., [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted. | Yes |
-| Automatically save the state of a blob in a previous version when it is overwritten. | Blob versioning<br />[Learn more...](versioning-overview.md) | Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data.<br /><br />Store blob data that does not require versioning in a separate account to limit costs. | Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten. | No |
-| Restore a deleted blob or blob version within a specified interval. | Blob soft delete<br />[Learn more...](soft-delete-blob-overview.md) | Enable blob soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.<br /><br />Store blobs that require different retention periods in separate storage accounts. | A deleted blob or blob version may be restored within the retention period. | Yes |
-| Restore a set of block blobs to a previous point in time. | Point-in-time restore<br />[Learn more...](point-in-time-restore-overview.md) | To use point-in-time restore to revert to an earlier state, design your application to delete individual block blobs rather than deleting containers. | A set of block blobs may be reverted to their state at a specific point in the past.<br /><br />Only operations performed on block blobs are reverted. Any operations performed on containers, page blobs, or append blobs are not reverted. | No |
-| Manually save the state of a blob at a given point in time. | Blob snapshot<br />[Learn more...](snapshots-overview.md) | Recommended as an alternative to blob versioning when versioning is not appropriate for your scenario, due to cost or other considerations, or when the storage account has a hierarchical namespace enabled. | A blob may be restored from a snapshot if the blob is overwritten. If the blob is deleted, snapshots are also deleted. | Yes, in preview |
-| A blob can be deleted or overwritten, but the data is regularly copied to a second storage account. | Roll-your-own solution for copying data to a second account by using Azure Storage object replication or a tool like AzCopy or Azure Data Factory. | Recommended for peace-of-mind protection against unexpected intentional actions or unpredictable scenarios.<br /><br />Create the second storage account in the same region as the primary account to avoid incurring egress charges. | Data can be restored from the second storage account if the primary account is compromised in any way. | AzCopy and Azure Data Factory are supported.<br /><br />Object replication is not supported. |
+| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](immutable-storage-overview.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set aren't protected from deletion. | Yes, in preview |
+| Restore a deleted container within a specified interval. | Container soft delete<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of seven days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (for example, [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete doesn't enable you to restore an individual blob in the container if that blob is deleted. | Yes |
+| Automatically save the state of a blob in a previous version when it's overwritten. | Blob versioning<br />[Learn more...](versioning-overview.md) | Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data.<br /><br />Store blob data that doesn't require versioning in a separate account to limit costs. | Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten. | No |
+| Restore a deleted blob or blob version within a specified interval. | Blob soft delete<br />[Learn more...](soft-delete-blob-overview.md) | Enable blob soft delete for all storage accounts, with a minimum retention interval of seven days.<br /><br />Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.<br /><br />Store blobs that require different retention periods in separate storage accounts. | A deleted blob or blob version may be restored within the retention period. | Yes |
+| Restore a set of block blobs to a previous point in time. | Point-in-time restore<br />[Learn more...](point-in-time-restore-overview.md) | To use point-in-time restore to revert to an earlier state, design your application to delete individual block blobs rather than deleting containers. | A set of block blobs may be reverted to their state at a specific point in the past.<br /><br />Only operations performed on block blobs are reverted. Any operations performed on containers, page blobs, or append blobs aren't reverted. | No |
+| Manually save the state of a blob at a given point in time. | Blob snapshot<br />[Learn more...](snapshots-overview.md) | Recommended as an alternative to blob versioning when versioning isn't appropriate for your scenario, due to cost or other considerations, or when the storage account has a hierarchical namespace enabled. | A blob may be restored from a snapshot if the blob is overwritten. If the blob is deleted, snapshots are also deleted. | Yes, in preview |
+| A blob can be deleted or overwritten, but the data is regularly copied to a second storage account. | Roll-your-own solution for copying data to a second account by using Azure Storage object replication or a tool like AzCopy or Azure Data Factory. | Recommended for peace-of-mind protection against unexpected intentional actions or unpredictable scenarios.<br /><br />Create the second storage account in the same region as the primary account to avoid incurring egress charges. | Data can be restored from the second storage account if the primary account is compromised in any way. | AzCopy and Azure Data Factory are supported.<br /><br />Object replication isn't supported. |
## Data protection by resource type
The following table summarizes the Azure Storage data protection options accordi
| Blob snapshot | No | No | No | Yes | | Roll-your-own solution for copying data to a second account<sup>7</sup> | No | Yes | Yes | Yes |
-<sup>1</sup> An Azure Resource Manager lock does not protect a container from deletion.<br />
+<sup>1</sup> An Azure Resource Manager lock doesn't protect a container from deletion.<br />
<sup>2</sup> Storage account deletion fails if there is at least one container with version-level immutable storage enabled.<br /> <sup>3</sup> Container deletion fails if at least one blob exists in the container, regardless of whether policy is locked or unlocked.<br /> <sup>4</sup> Overwriting the contents of the current version of the blob creates a new version. An immutability policy protects a version's metadata from being overwritten.<br />
The following table summarizes the Azure Storage data protection options accordi
## Recover deleted or overwritten data
-If you should need to recover data that has been overwritten or deleted, how you proceed depends on which data protection options you have enabled and which resource was affected. The following table describes the actions that you can take to recover data.
+If you should need to recover data that has been overwritten or deleted, how you proceed depends on which data protection options you've enabled and which resource was affected. The following table describes the actions that you can take to recover data.
| Deleted or overwritten resource | Possible recovery actions | Requirements for recovery | |--|--|--|
-| Storage account | Attempt to recover the deleted storage account<br />[Learn more...](../common/storage-account-recover.md) | The storage account was originally created with the Azure Resource Manager deployment model and was deleted within the past 14 days. A new storage account with the same name has not been created since the original account was deleted. |
-| Container | Recover the soft-deleted container and its contents<br />[Learn more...](soft-delete-container-enable.md) | Container soft delete is enabled and the container soft delete retention period has not yet expired. |
+| Storage account | Attempt to recover the deleted storage account<br />[Learn more...](../common/storage-account-recover.md) | The storage account was originally created with the Azure Resource Manager deployment model and was deleted within the past 14 days. A new storage account with the same name hasn't been created since the original account was deleted. |
+| Container | Recover the soft-deleted container and its contents<br />[Learn more...](soft-delete-container-enable.md) | Container soft delete is enabled and the container soft delete retention period hasn't yet expired. |
| Containers and blobs | Restore data from a second storage account | All container and blob operations have been effectively replicated to a second storage account. | | Blob (any type) | Restore a blob from a previous version<sup>1</sup><br />[Learn more...](versioning-enable.md) | Blob versioning is enabled and the blob has one or more previous versions. |
-| Blob (any type) | Recover a soft-deleted blob<br />[Learn more...](soft-delete-blob-enable.md) | Blob soft delete is enabled and the soft delete retention interval has not expired. |
+| Blob (any type) | Recover a soft-deleted blob<br />[Learn more...](soft-delete-blob-enable.md) | Blob soft delete is enabled and the soft delete retention interval hasn't expired. |
| Blob (any type) | Restore a blob from a snapshot<br />[Learn more...](snapshots-manage-dotnet.md) | The blob has one or more snapshots. |
-| Set of block blobs | Recover a set of block blobs to their state at an earlier point in time<sup>1</sup><br />[Learn more...](point-in-time-restore-manage.md) | Point-in-time restore is enabled and the restore point is within the retention interval. The storage account has not been compromised or corrupted. |
-| Blob version | Recover a soft-deleted version<sup>1</sup><br />[Learn more...](soft-delete-blob-enable.md) | Blob soft delete is enabled and the soft delete retention interval has not expired. |
+| Set of block blobs | Recover a set of block blobs to their state at an earlier point in time<sup>1</sup><br />[Learn more...](point-in-time-restore-manage.md) | Point-in-time restore is enabled and the restore point is within the retention interval. The storage account hasn't been compromised or corrupted. |
+| Blob version | Recover a soft-deleted version<sup>1</sup><br />[Learn more...](soft-delete-blob-enable.md) | Blob soft delete is enabled and the soft delete retention interval hasn't expired. |
<sup>1</sup> Not currently supported for Data Lake Storage workloads.
The following table summarizes the cost considerations for the various data prot
| Container soft delete | No charge to enable container soft delete for a storage account. Data in a soft-deleted container is billed at same rate as active data until the soft-deleted container is permanently deleted. | | Blob versioning | No charge to enable blob versioning for a storage account. After blob versioning is enabled, every write or delete operation on a blob in the account creates a new version, which may lead to increased capacity costs.<br /><br />A blob version is billed based on unique blocks or pages. Costs therefore increase as the base blob diverges from a particular version. Changing a blob or blob version's tier may have a billing impact. For more information, see [Pricing and billing](versioning-overview.md#pricing-and-billing).<br /><br />Use lifecycle management to delete older versions as needed to control costs. For more information, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md). | | Blob soft delete | No charge to enable blob soft delete for a storage account. Data in a soft-deleted blob is billed at same rate as active data until the soft-deleted blob is permanently deleted. |
-| Point-in-time restore | No charge to enable point-in-time restore for a storage account; however, enabling point-in-time restore also enables blob versioning, soft delete, and change feed, each of which may result in additional charges.<br /><br />You are billed for point-in-time restore when you perform a restore operation. The cost of a restore operation depends on the amount of data being restored. For more information, see [Pricing and billing](point-in-time-restore-overview.md#pricing-and-billing). |
+| Point-in-time restore | No charge to enable point-in-time restore for a storage account; however, enabling point-in-time restore also enables blob versioning, soft delete, and change feed, each of which may result in other charges.<br /><br />You're billed for point-in-time restore when you perform a restore operation. The cost of a restore operation depends on the amount of data being restored. For more information, see [Pricing and billing](point-in-time-restore-overview.md#pricing-and-billing). |
| Blob snapshots | Data in a snapshot is billed based on unique blocks or pages. Costs therefore increase as the base blob diverges from the snapshot. Changing a blob or snapshot's tier may have a billing impact. For more information, see [Pricing and billing](snapshots-overview.md#pricing-and-billing).<br /><br />Use lifecycle management to delete older snapshots as needed to control costs. For more information, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md). | | Copy data to a second storage account | Maintaining data in a second storage account will incur capacity and transaction costs. If the second storage account is located in a different region than the source account, then copying data to that second account will additionally incur egress charges. | ## Disaster recovery
-Azure Storage always maintains multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures. For more information about how to configure your storage account for high availability, see [Azure Storage redundancy](../common/storage-redundancy.md).
+Azure Storage always maintains multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures. For more information about how to configure your storage account for high availability, see [Azure Storage redundancy](../common/storage-redundancy.md).
-In the event that a failure occurs in a data center, if your storage account is redundant across two geographical regions (geo-redundant), then you have the option to fail over your account from the primary region to the secondary region. For more information, see [Disaster recovery and storage account failover](../common/storage-disaster-recovery-guidance.md).
+If a failure occurs in a data center and your storage account is redundant across two geographical regions (geo-redundant), you have the option to fail over your account from the primary region to the secondary region. For more information, see [Disaster recovery and storage account failover](../common/storage-disaster-recovery-guidance.md).
-Customer-managed failover is not currently supported for storage accounts with a hierarchical namespace enabled. For more information, see [Blob storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md).
+Customer-managed failover isn't currently supported for storage accounts with a hierarchical namespace enabled. For more information, see [Blob storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md).
## Next steps
storage Immutable Legal Hold Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-legal-hold-overview.md
Title: Legal holds for immutable blob data
-description: A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
+description: A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it's explicitly cleared. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
Previously updated : 12/01/2021 Last updated : 09/14/2022 # Legal holds for immutable blob data
-A legal hold is a temporary immutability policy that can be applied for legal investigation purposes or general protection policies. A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it is explicitly cleared. When a legal hold is in effect, blobs can be created and read, but not modified or deleted. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
+A legal hold is a temporary immutability policy that can be applied for legal investigation purposes or general protection policies. A legal hold stores blob data in a Write-Once, Read-Many (WORM) format until it's explicitly cleared. When a legal hold is in effect, blobs can be created and read, but not modified or deleted. Use a legal hold when the period of time that the data must be kept in a WORM state is unknown.
For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
For more information about immutability policies for Blob Storage, see [Store bu
A legal hold policy can be configured at either of the following scopes: - Version-level policy: A legal hold can be configured on an individual blob version level for granular management of sensitive data.-- Container-level policy: A legal hold that is configured at the container level applies to all blobs in that container. Individual blobs cannot be configured with their own immutability policies.
+- Container-level policy: A legal hold that is configured at the container level applies to all blobs in that container. Individual blobs can't be configured with their own immutability policies.
### Version-level policy scope
-To configure a legal hold on a blob version, you must first enable version-level immutability on the storage account or the parent container. Version-level immutability cannot be disabled after it is enabled. For more information, [Enable support for version-level immutability](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability).
+To configure a legal hold on a blob version, you must first enable version-level immutability on the storage account or the parent container. Version-level immutability can't be disabled after it's enabled. For more information, [Enable support for version-level immutability](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability).
After version-level immutability is enabled for a storage account or a container, a legal hold can no longer be set at the container level. Legal holds must be applied to individual blob versions. A legal hold may be configured for the current version or a previous version of a blob.
To learn more about enabling a version-level legal hold, see [Configure or clear
### Container-level scope
-When you configure a legal hold for a container, that hold applies to all objects in the container. When the legal hold is cleared, clients can once again write and delete objects in the container, unless there is also a time-based retention policy in effect for the container.
+A legal hold for a container applies to all objects in the container. When the legal hold is cleared, clients can once again write and delete objects in the container, unless there's also a time-based retention policy in effect for the container.
-When a legal hold is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container are not allowed. In the case of an account with a hierarchical namespace, blobs cannot be renamed or moved to a different directory.
+When a legal hold is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container aren't allowed. In an account that has a hierarchical namespace, blobs can't be renamed or moved to a different directory.
To learn how to configure a legal hold with container-level scope, see [Configure or clear a legal hold](immutable-policy-configure-container-scope.md#configure-or-clear-a-legal-hold).
A container-level legal hold must be associated with one or more user-defined al
Each container with a legal hold in effect provides a policy audit log. The log contains the user ID, command type, time stamps, and legal hold tags. The audit log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines.
-The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
+The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It's the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
#### Limits The following limits apply to container-level legal holds: - For a storage account, the maximum number of containers with a legal hold setting is 10,000.-- For a container, the maximum number of legal hold tags is ten.
+- For a container, the maximum number of legal hold tags is 10.
- The minimum length of a legal hold tag is three alphanumeric characters. The maximum length is 23 alphanumeric characters.-- For a container, a maximum of ten legal hold policy audit logs are retained for the duration of the policy.
+- For a container, a maximum of 10 legal hold policy audit logs are retained for the duration of the policy.
+
+## Allow protected append blobs writes
+
+Append blobs are composed of blocks of data and optimized for data append operations required by auditing and logging scenarios. By design, append blobs only allow the addition of new blocks to the end of the blob. Regardless of immutability, modification or deletion of existing blocks in an append blob is fundamentally not allowed. To learn more about append blobs, see [About Append Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-append-blobs).
+
+The **AllowProtectedAppendWritesAll** property setting allows for writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container, and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; any existing blocks can't be modified or deleted. Enabling this setting doesn't affect the immutability behavior of block blobs or page blobs.
+
+> [!NOTE]
+> This property is available only for container-level policies. This property is not available for version-level policies.
+
+This setting also adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
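To illustrate the write pattern this setting permits, here's a hedged sketch that appends log records to an append blob in a policy-protected container using the `azure-storage-blob` Python SDK; the container name, blob name, and connection string are assumptions.

```python
# Hypothetical sketch: keep appending records to an append blob in a container
# protected by a legal hold with AllowProtectedAppendWritesAll enabled.
from azure.storage.blob import BlobServiceClient

service_client = BlobServiceClient.from_connection_string("<connection-string>")  # assumed placeholder
blob_client = service_client.get_blob_client(container="audit-logs", blob="app.log")

# Existing blocks stay immutable; only new blocks may be added to the end.
if not blob_client.exists():
    blob_client.create_append_blob()

blob_client.append_block(b"2022-09-14T12:00:00Z user signed in\n")
```

Any attempt to modify or delete existing blocks would still fail while the hold or retention policy is in effect.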
## Next steps
storage Immutable Policy Configure Container Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-container-scope.md
Previously updated : 12/01/2021 Last updated : 09/14/2022
To configure a time-based retention policy on a container, use the Azure portal,
To configure a time-based retention policy on a container with the Azure portal, follow these steps: 1. Navigate to the desired container.
-1. Select the **More** button on the right, then select **Access policy**.
-1. In the **Immutable blob storage** section, select **Add policy**.
-1. In the **Policy type** field, select **Time-based retention**, and specify the retention period in days.
-1. To create a policy with container scope, do not check the box for **Enable version-level immutability**.
-1. If desired, select **Allow additional protected appends** to enable writes to append blobs that are protected by an immutability policy. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+
+2. Select the **More** button on the right, then select **Access policy**.
+
+3. In the **Immutable blob storage** section, select **Add policy**.
+
+4. In the **Policy type** field, select **Time-based retention**, and specify the retention period in days.
+
+5. To create a policy with container scope, do not check the box for **Enable version-level immutability**.
+
+6. Choose whether to allow protected append writes.
+
+ The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
+
+ The **Block and append blobs** option provides you with the same permissions as the **Append blobs** option but adds the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+
+ To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
:::image type="content" source="media/immutable-policy-configure-container-scope/configure-retention-policy-container-scope.png" alt-text="Screenshot showing how to configure immutability policy scoped to container":::
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
-ImmutabilityPeriod 10 ```
+To allow protected append writes, set the `-AllowProtectedAppendWrite` or `-AllowProtectedAppendWriteAll` parameter to `true`.
+
+The **AllowProtectedAppendWrite** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
+
+The **AllowProtectedAppendWriteAll** option provides you with the same permissions as the **AllowProtectedAppendWrite** option but adds the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+
+To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+ ### [Azure CLI](#tab/azure-cli) To configure a time-based retention policy on a container with Azure CLI, call the [az storage container immutability-policy create](/cli/azure/storage/container/immutability-policy#az-storage-container-immutability-policy-create) command, providing the retention interval in days. Remember to replace placeholder values in angle brackets with your own values:
az storage container immutability-policy create \
--period 10 ```
+To allow protected append writes, set the `--allow-protected-append-writes` or `--allow-protected-append-writes-all` parameter to `true`.
+
+The **--allow-protected-append-writes** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
+
+The **--allow-protected-append-writes-all** option provides you with the same permissions as the **--allow-protected-append-writes** option but adds the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+
+To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+ ## Modify an unlocked retention policy
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName <resource-group> `
-StorageAccountName <storage-account> ` -ContainerName <container> ` -ImmutabilityPeriod 21 `
- -AllowProtectedAppendWrite true `
+ -AllowProtectedAppendWriteAll true `
-Etag $policy.Etag ` -ExtendPolicy ```
az storage container immutability-policy extend \
--container-name <container> \ --period 21 \ --if-match $etag \
- --allow-protected-append-writes true
+ --allow-protected-append-writes-all true
``` To delete an unlocked policy, call the [az storage container immutability-policy delete](/cli/azure/storage/container/immutability-policy#az-storage-container-immutability-policy-delete) command.
A legal hold stores immutable data until the legal hold is explicitly cleared. T
To configure a legal hold on a container with the Azure portal, follow these steps: 1. Navigate to the desired container.
-1. Select the **More** button and choose **Access policy**.
-1. Under the **Immutable blob versions** section, select **Add policy**.
-1. Choose **Legal hold** as the policy type, and select **OK** to apply it.
+
+2. Select the **More** button and choose **Access policy**.
+
+3. Under the **Immutable blob versions** section, select **Add policy**.
+
+4. Choose **Legal hold** as the policy type.
+
+5. Add one or more legal hold tags.
+
+6. Choose whether to allow protected append writes, and then select **Save**.
+
+ The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
+
+ This setting also adds the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
+
+ To learn more about these options, see [Allow protected append blobs writes](immutable-legal-hold-overview.md#allow-protected-append-blobs-writes).
+
+ :::image type="content" source="media/immutable-policy-configure-container-scope/configure-retention-policy-container-scope-legal-hold.png" alt-text="Screenshot showing how to configure legal hold policy scoped to container.":::
+
+After you've configured the immutability policy, you'll see that it's scoped to the container:
The following image shows a container with both a time-based retention policy and legal hold configured.
To configure a legal hold on a container with PowerShell, call the [Add-AzRmStor
Add-AzRmStorageContainerLegalHold -ResourceGroupName <resource-group> ` -StorageAccountName <storage-account> ` -Name <container> `
- -Tag <tag1>,<tag2>,...
+ -Tag <tag1>,<tag2>,... `
+ -AllowProtectedAppendWriteAll $true
``` To clear a legal hold, call the [Remove-AzRmStorageContainerLegalHold](/powershell/module/az.storage/remove-azrmstoragecontainerlegalhold) command:
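A hedged PowerShell sketch of clearing a hold by removing its tags might look like this; the resource names and tags are placeholders:

```azurepowershell
# Illustrative sketch only: clear a legal hold by removing the tags that define it.
Remove-AzRmStorageContainerLegalHold -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -Name "<container>" `
    -Tag <tag1>,<tag2>
```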
az storage container legal-hold set \
--tags tag1 tag2 \ --container-name <container> \ --account-name <storage-account> \
- --resource-group <resource-group>
+ --resource-group <resource-group> \
+ --allow-protected-append-writes-all true
``` To clear a legal hold, call the [az storage container legal-hold clear](/cli/azure/storage/container/legal-hold#az-storage-container-legal-hold-clear) command:
az storage container legal-hold clear \
--tags tag1 tag2 \ --container-name <container> \ --account-name <storage-account> \
- --resource-group <resource-group>
+ --resource-group <resource-group>
```
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
Previously updated : 05/17/2022 Last updated : 09/14/2022
To configure a default version-level immutability policy for a storage account i
To configure a default version-level immutability policy for a container in the Azure portal, follow these steps: 1. In the Azure portal, navigate to the **Containers** page, and locate the container to which you want to apply the policy.
-1. Select the **More** button to the right of the container name, and choose **Access policy**.
-1. In the **Access policy** dialog, under the **Immutable blob storage** section, choose **Add policy**.
-1. Select **Time-based retention policy** and specify the retention interval.
-1. If desired, select **Allow additional protected appends** to enable writes to append blobs that are protected by an immutability policy. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
-1. Select **OK** to apply the default policy to the container.
+2. Select the **More** button to the right of the container name, and choose **Access policy**.
+3. In the **Access policy** dialog, under the **Immutable blob storage** section, choose **Add policy**.
+4. Select **Time-based retention policy** and specify the retention interval.
+5. Choose whether to allow protected append writes.
+
+ The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
+
+ The **Block and append blobs** option extends this support by adding the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
- :::image type="content" source="media/immutable-policy-configure-version-scope/configure-default-retention-policy-container.png" alt-text="Screenshot showing how to configure a default version-level retention policy for a container":::
+ To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+
+ :::image type="content" source="media/immutable-policy-configure-version-scope/configure-retention-policy-container-scope.png" alt-text="Screenshot showing how to configure immutability policy scoped to container.":::
#### [PowerShell](#tab/azure-powershell)
To configure a legal hold on a blob version, you must first enable version-level
To configure a legal hold on a blob version with the Azure portal, follow these steps: 1. Locate the target version, which may be the current version or a previous version of a blob. Select the **More** button and choose **Access policy**.
-1. Under the **Immutable blob versions** section, select **Add policy**.
-1. Choose **Legal hold** as the policy type, and select **OK** to apply it.
+
+2. Under the **Immutable blob versions** section, select **Add policy**.
+
+3. Choose **Legal hold** as the policy type, and select **OK** to apply it.
The following image shows a current version of a blob with both a time-based retention policy and legal hold configured.
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Previously updated : 12/01/2021 Last updated : 09/14/2022
The following table provides a summary of protections provided by version-level
| A blob version is protected by an *active* retention policy and/or a legal hold is in effect | [Delete Blob](/rest/api/storageservices/delete-blob), [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), and [Append Block](/rest/api/storageservices/append-block)<sup>1</sup> | The blob version cannot be deleted. User metadata cannot be written. <br /><br /> Overwriting a blob with [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) creates a new version.<sup>2</sup> | Container deletion fails if at least one blob exists in the container, regardless of whether policy is locked or unlocked. | Storage account deletion fails if there is at least one container with version-level immutable storage enabled, or if it is enabled for the account. | | A blob version is protected by an *expired* retention policy and no legal hold is in effect | [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata), [Put Page](/rest/api/storageservices/put-page), and [Append Block](/rest/api/storageservices/append-block)<sup>1</sup> | The blob version can be deleted. User metadata cannot be written. <br /><br /> Overwriting a blob with [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) creates a new version<sup>2</sup>. | Container deletion fails if at least one blob exists in the container, regardless of whether policy is locked or unlocked. | Storage account deletion fails if there is at least one container that contains a blob version with a locked time-based retention policy.<br /><br />Unlocked policies do not provide delete protection. |
-<sup>1</sup> The [Append Block](/rest/api/storageservices/append-block) operation is only permitted for time-based retention policies with the **allowProtectedAppendWrites** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+<sup>1</sup> The [Append Block](/rest/api/storageservices/append-block) operation is permitted only for policies with the **allowProtectedAppendWrites** or **allowProtectedAppendWritesAll** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
<sup>2</sup> Blob versions are always immutable for content. If versioning is enabled for the storage account, then a write operation to a block blob creates a new version, with the exception of the [Put Block](/rest/api/storageservices/put-block) operation. ### Scenarios with container-level scope
The following table provides a summary of protections provided by container-leve
<sup>1</sup> Azure Storage permits the [Put Blob](/rest/api/storageservices/put-blob) operation to create a new blob. Subsequent overwrite operations on an existing blob path in an immutable container are not allowed.
-<sup>2</sup> The [Append Block](/rest/api/storageservices/append-block) operation is only permitted for time-based retention policies with the **allowProtectedAppendWrites** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
+<sup>2</sup> The [Append Block](/rest/api/storageservices/append-block) operation is permitted only for policies with the **allowProtectedAppendWrites** or **allowProtectedAppendWritesAll** property enabled. For more information, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
> [!NOTE] > Some workloads, such as [SQL Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url), create a blob and then add to it. If a container has an active time-based retention policy or legal hold in place, this pattern will not succeed.
Azure Storage blob inventory provides an overview of the containers in your stor
When you enable blob inventory, Azure Storage generates an inventory report on a daily basis. The report provides an overview of your data for business and compliance requirements.
-For more information about blob inventory, see [Azure Storage blob inventory (preview)](blob-inventory.md).
+For more information about blob inventory, see [Azure Storage blob inventory](blob-inventory.md).
## Pricing
storage Immutable Time Based Retention Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-time-based-retention-policy-overview.md
Previously updated : 05/02/2022 Last updated : 09/14/2022 # Time-based retention policies for immutable blob data
-A time-based retention policy stores blob data in a Write-Once, Read-Many (WORM) format for a specified interval. When a time-based retention policy is set, clients can create and read blobs, but cannot modify or delete them. After the retention interval has expired, blobs can be deleted but not overwritten.
+A time-based retention policy stores blob data in a Write-Once, Read-Many (WORM) format for a specified interval. When a time-based retention policy is set, clients can create and read blobs, but can't modify or delete them. After the retention interval has expired, blobs can be deleted but not overwritten.
For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
For more information about immutability policies for Blob Storage, see [Store bu
The minimum retention interval for a time-based retention policy is one day, and the maximum is 146,000 days (400 years).
-When you configure a time-based retention policy, the affected objects will stay in the immutable state for the duration of the *effective* retention period. The effective retention period for objects is equal to the difference between the blob's creation time and the user-specified retention interval. Because a policy's retention interval can be extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
+When you configure a time-based retention policy, the affected objects will stay in the immutable state during the *effective* retention period. For an existing blob, the effective retention period is the user-specified retention interval minus the time that has elapsed since the blob was created. Because a policy's retention interval can be extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
For example, suppose that a user creates a time-based retention policy with a retention interval of five years. An existing blob in that container, *testblob1*, was created one year ago, so the effective retention period for *testblob1* is four years. When a new blob, *testblob2*, is uploaded to the container, the effective retention period for *testblob2* is five years from the time of its creation. ## Locked versus unlocked policies
-When you first configure a time-based retention policy, the policy is unlocked for testing purposes. When you have finished testing, you can lock the policy so that it is fully compliant with SEC 17a-4(f) and other regulatory compliance.
+When you first configure a time-based retention policy, the policy is unlocked for testing purposes. When you have finished testing, you can lock the policy so that it's fully compliant with SEC 17a-4(f) and other regulatory compliance.
Both locked and unlocked policies protect against deletes and overwrites. However, you can modify an unlocked policy by shortening or extending the retention period. You can also delete an unlocked policy.
-You cannot delete a locked time-based retention policy. You can extend the retention period, but you cannot decrease it. A maximum of five increases to the effective retention period is allowed over the lifetime of a locked policy that is defined at the container level. For a policy configured for a blob version, there is no limit to the number of increase to the effective period.
+You can't delete a locked time-based retention policy. You can extend the retention period, but you can't decrease it. A maximum of five increases to the effective retention period is allowed over the lifetime of a locked policy that is defined at the container level. For a policy configured for a blob version, there's no limit to the number of increases to the effective period.
> [!IMPORTANT] > A time-based retention policy must be locked for the blob to be in a compliant immutable (write and delete protected) state for SEC 17a-4(f) and other regulatory compliance. Microsoft recommends that you lock the policy in a reasonable amount of time, typically less than 24 hours. While the unlocked state provides immutability protection, using the unlocked state for any purpose other than short-term testing is not recommended.
You cannot delete a locked time-based retention policy. You can extend the reten
A time-based retention policy can be configured at either of the following scopes: - Version-level policy: A time-based retention policy can be configured to apply to a blob version for granular management of sensitive data. You can apply the policy to an individual version, or configure a default policy for a storage account or individual container that will apply by default to all blobs uploaded to that account or container.-- Container-level policy: A time-based retention policy that is configured at the container level applies to all objects in that container. Individual objects cannot be configured with their own immutability policies.
+- Container-level policy: A time-based retention policy that is configured at the container level applies to all objects in that container. Individual objects can't be configured with their own immutability policies.
-Audit logs are available on the container for both version-level and container-level time-based retention policies. Audit logs are not available for a policy that is scoped to a blob version.
+Audit logs are available on the container for both version-level and container-level time-based retention policies. Audit logs aren't available for a policy that is scoped to a blob version.
### Version-level policy scope
-To configure version-level retention policies, you must first enable version-level immutability on the storage account or parent container. Version-level immutability cannot be disabled after it is enabled, although unlocked policies can be deleted. For more information, see [Enable support for version-level immutability](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability).
+To configure version-level retention policies, you must first enable version-level immutability on the storage account or parent container. Version-level immutability can't be disabled after it's enabled, although unlocked policies can be deleted. For more information, see [Enable support for version-level immutability](immutable-policy-configure-version-scope.md#enable-support-for-version-level-immutability).
-Version-level immutability on the storage account must be enabled when you create the account. When you enable version-level immutability for a new storage account, all containers subsequently created in that account automatically support version-level immutability. It's not possible to disable support for version-level immutability on a storage account after you've enabled it, nor is it possible to create a container without version-level immutability support when it's enabled for the account.
+Version-level immutability on the storage account must be enabled when you create the account. When you enable version-level immutability for a new storage account, all containers later created in that account automatically support version-level immutability. It's not possible to disable support for version-level immutability on a storage account after you've enabled it, nor is it possible to create a container without version-level immutability support when it's enabled for the account.
-If you have not enabled support for version-level immutability on the storage account, then you can enable support for version-level immutability on an individual container at the time that you create the container. Existing containers can also support version-level immutability, but must undergo a migration process first. This process may take some time and is not reversible. You can migrate ten containers at a time per storage account. For more information about migrating a container to support version-level immutability, see [Migrate an existing container to support version-level immutability](immutable-policy-configure-version-scope.md#migrate-an-existing-container-to-support-version-level-immutability).
+If you haven't enabled support for version-level immutability on the storage account, then you can enable support for version-level immutability on an individual container at the time that you create the container. Existing containers can also support version-level immutability, but must undergo a migration process first. This process may take some time and isn't reversible. You can migrate 10 containers at a time per storage account. For more information about migrating a container to support version-level immutability, see [Migrate an existing container to support version-level immutability](immutable-policy-configure-version-scope.md#migrate-an-existing-container-to-support-version-level-immutability).
Version-level time-based retention policies require that [blob versioning](versioning-overview.md) is enabled for the storage account. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md). Keep in mind that enabling versioning may have a billing impact. For more information, see the **Pricing and billing** section in [Blob versioning](versioning-overview.md#pricing-and-billing).
After versioning is enabled, when a blob is first uploaded, that version of the
If a default policy is in effect for the storage account or container, then when an overwrite operation creates a previous version, the new current version inherits the default policy for the account or container.
-Each version may have only one time-based retention policy configured. A version may also have one legal hold configured. For more details about supported immutability policy configurations based on scope, see [Immutability policy scope](immutable-storage-overview.md#immutability-policy-scope).
+Each version may have only one time-based retention policy configured. A version may also have one legal hold configured. For more information about supported immutability policy configurations based on scope, see [Immutability policy scope](immutable-storage-overview.md#immutability-policy-scope).
To learn how to configure version-level time-based retention policies, see [Configure immutability policies for blob versions](immutable-policy-configure-version-scope.md).
After you enable support for version-level immutability for a storage account or
If the default time-based retention policy for the account or container is unlocked, then the current version of a blob that inherits the default policy will also have an unlocked policy. After an individual blob is uploaded, you can shorten or extend the retention period for the policy on the current version of the blob, or delete the current version. You can also lock the policy for the current version, even if the default policy on the account or container remains unlocked.
-If the default time-based retention policy for the account or container is locked, then the current version of a blob that inherits the default policy will also have an locked policy. However, if you override the default policy when you upload a blob by setting a policy only for that blob, then that blob's policy will remain unlocked until you explicitly lock it. When the policy on the current version is locked, you can extend the retention interval, but you cannot delete the policy or shorten the retention interval.
+If the default time-based retention policy for the account or container is locked, then the current version of a blob that inherits the default policy will also have a locked policy. However, if you override the default policy when you upload a blob by setting a policy only for that blob, then that blob's policy will remain unlocked until you explicitly lock it. When the policy on the current version is locked, you can extend the retention interval, but you can't delete the policy or shorten the retention interval.
-If there is no default policy configured for either the storage account or the container, then you can upload a blob either with a custom policy or with no policy.
+If there's no default policy configured for either the storage account or the container, then you can upload a blob either with a custom policy or with no policy.
If the default policy on a storage account or container is modified, policies on objects within that container remain unchanged, even if those policies were inherited from the default policy.
The following table shows the various options available for setting a time-based
#### Configure a policy on a previous version
-When versioning is enabled, a write or delete operation to a blob creates a new previous version of that blob that saves the blob's state before the operation. By default, a previous version possesses the time-based retention policy that was in effect for the current version, if any, when the current version became a previous version. The new current version inherits the policy on the container, if there is one.
+When versioning is enabled, a write or delete operation to a blob creates a new previous version of that blob that saves the blob's state before the operation. By default, a previous version possesses the time-based retention policy that was in effect for the current version, if any, when the current version became a previous version. The new current version inherits the policy on the container, if there's one.
If the policy inherited by a previous version is unlocked, then the retention interval can be shortened or lengthened, or the policy can be deleted. The policy on a previous version can also be locked for that version, even if the policy on the current version is unlocked.
-If the policy inherited by a previous version is locked, then the retention interval can be lengthened. The policy cannot be deleted, nor can the retention interval be shortened.
+If the policy inherited by a previous version is locked, then the retention interval can be lengthened. The policy can't be deleted, nor can the retention interval be shortened.
-If there is no policy configured on the current version, then the previous version does not inherit any policy. You can configure a custom policy for the version.
+If there's no policy configured on the current version, then the previous version doesn't inherit any policy. You can configure a custom policy for the version.
If the policy on a current version is modified, the policies on existing previous versions remain unchanged, even if the policy was inherited from a current version.
If the policy on a current version is modified, the policies on existing previou
A container-level time-based retention policy applies to all objects in a container, both new and existing. For an account with a hierarchical namespace, a container-level policy also applies to all directories in the container.
-When a time-based retention policy is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container are not allowed. In the case of an account with a hierarchical namespace, blobs cannot be renamed or moved to a different directory.
+When a time-based retention policy is applied to a container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs that are uploaded to that policy-protected container will also move into an immutable state. Once all blobs are in an immutable state, overwrite or delete operations in the immutable container aren't allowed. In the case of an account with a hierarchical namespace, blobs can't be renamed or moved to a different directory.
The following limits apply to container-level retention policies:
To learn how to configure a time-based retention policy on a container, see [Con
## Allow protected append blobs writes
-Append blobs are comprised of blocks of data and optimized for data append operations required by auditing and logging scenarios. By design, append blobs only allow the addition of new blocks to the end of the blob. Regardless of immutability, modification or deletion of existing blocks in an append blob is fundamentally not allowed. To learn more about append blobs, see [About Append Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-append-blobs).
+Append blobs are composed of blocks of data and optimized for data append operations required by auditing and logging scenarios. By design, append blobs only allow the addition of new blocks to the end of the blob. Regardless of immutability, modification or deletion of existing blocks in an append blob is fundamentally not allowed. To learn more about append blobs, see [About Append Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-append-blobs).
-Only time-based retention policies have an the **AllowProtectedAppendWrites** property setting that allows for writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; any existing blocks cannot be modified or deleted. Time-retention immutability protection still applies, preventing deletion of the append blob until the effective retention period has elapsed. Enabling this setting does not affect the immutability behavior of block blobs or page blobs.
+The **AllowProtectedAppendWrites** property setting allows for writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container, and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; any existing blocks can't be modified or deleted. Enabling this setting doesn't affect the immutability behavior of block blobs or page blobs.
-As this setting is part of a time-based retention policy, the append blobs remain in the immutable state for the duration of the *effective* retention period. Since new data can be appended beyond the initial creation of the append blob, there is a slight difference in how the retention period is determined. The effective retention is the difference between append blob's last modification time and the user-specified retention interval. Similarly, when the retention interval is extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
+The **AllowProtectedAppendWritesAll** property setting provides the same permissions as the **AllowProtectedAppendWrites** property and adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
+
+> [!NOTE]
+> The **AllowProtectedAppendWritesAll** property is available only for container-level policies; it isn't available for version-level policies.
+
+Append blobs remain in the immutable state during the *effective* retention period. Since new data can be appended beyond the initial creation of the append blob, there's a slight difference in how the retention period is determined. The effective retention is the difference between append blob's last modification time and the user-specified retention interval. Similarly, when the retention interval is extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
For example, suppose that a user creates a time-based retention policy with the **AllowProtectedAppendWrites** property enabled and a retention interval of 90 days. An append blob, *logblob1*, is created in the container today. New logs continue to be added to the append blob for the next 10 days, so the effective retention period for *logblob1* is 100 days from today (the time of its last append + 90 days).
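A hedged CLI sketch of the policy in that example (placeholder names, 90-day interval, protected append writes enabled) could look like this:

```azurecli
# Illustrative sketch only: a 90-day policy that lets new blocks be appended to
# append blobs, such as logblob1, while they remain protected.
az storage container immutability-policy create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --container-name <container> \
    --period 90 \
    --allow-protected-append-writes true
```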
-Unlocked time-based retention policies allow the **AllowProtectedAppendWrites** property setting to be enabled and disabled at any time. Once the time-based retention policy is locked, the **AllowProtectedAppendWrites** property setting cannot be changed.
+Unlocked time-based retention policies allow the **AllowProtectedAppendWrites** and the **AllowProtectedAppendWritesAll** property settings to be enabled and disabled at any time. Once the time-based retention policy is locked, the **AllowProtectedAppendWrites** and the **AllowProtectedAppendWritesAll** property settings can't be changed.
## Audit logging Each container with a time-based retention policy enabled provides a policy audit log. The audit log includes up to seven time-based retention commands for locked time-based retention policies. Log entries include the user ID, command type, time stamps, and retention interval. The audit log is retained for the lifetime of the policy, in accordance with the SEC 17a-4(f) regulatory guidelines.
-The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It is the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
+The [Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) provides a more comprehensive log of all management service activities. [Azure resource logs](../../azure-monitor/essentials/platform-logs-overview.md) retain information about data operations. It's the user's responsibility to store those logs persistently, as might be required for regulatory or other purposes.
-Changes to time-based retention policies at the version level are not audited.
+Changes to time-based retention policies at the version level aren't audited.
## Next steps
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Previously updated : 09/13/2022 Last updated : 09/15/2022
$storageAccountName = "<storage-account>"
Set-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -EnableSftp $true ```
- > [!NOTE]
- > The `-EnableSftp` parameter is currently only available in preview versions of Azure Powershell. Use the command below to install the preview version:
- > ```
- > Install-Module -Name Az.Storage -RequiredVersion 4.1.2-preview -AllowPrerelease
- > ```
### [Azure CLI](#tab/azure-cli)
-First, install the preview extension for the Azure CLI if it's not already installed:
-
-```azurecli
-az extension add --name storage-preview
-```
-
-Then, to enable SFTP support, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and set the `--enable-sftp` parameter to true. Remember to replace the values in angle brackets with your own values:
+To enable SFTP support, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and set the `--enable-sftp` parameter to true. Remember to replace the values in angle brackets with your own values:
```azurecli az storage account update -g <resource-group> -n <storage-account> --enable-sftp=true
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
+ Title: Actions and attributes for Azure role assignment conditions in Azure Storage
description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) in Azure Storage.
Previously updated : 09/01/2022 Last updated : 09/14/2022
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview.
+>
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
This section lists the supported Azure Blob storage actions and suboperations yo
> | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) | > | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
-## Azure Queue storage actions
-
-This section lists the supported Azure Queue storage actions you can target for conditions.
-
-### Peek messages
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Peek messages |
-> | **Description** | DataAction for peeking messages. |
-> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/read` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
-> | **Request attributes** | |
-> | **Principal attributes support** | True |
-
-### Put a message
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Put a message |
-> | **Description** | DataAction for putting a message. |
-> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/add/action` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
-> | **Request attributes** | |
-> | **Principal attributes support** | True |
-
-### Put or update a message
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Put or update a message |
-> | **Description** | DataAction for putting or updating a message. |
-> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/write` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
-> | **Request attributes** | |
-> | **Principal attributes support** | True |
-
-### Clear messages
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Clear messages |
-> | **Description** | DataAction for clearing messages. |
-> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/delete` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
-> | **Request attributes** | |
-> | **Principal attributes support** | True |
-
-### Get or delete messages
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Get or delete messages |
-> | **Description** | DataAction for getting or deleting messages. |
-> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/process/action` |
-> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
-> | **Request attributes** | |
-> | **Principal attributes support** | True |
- ## Azure Blob storage attributes This section lists the Azure Blob storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
This section lists the Azure Blob storage attributes you can use in your conditi
> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) | > | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
-## Azure Queue storage attributes
-
-This section lists the Azure Queue storage attributes you can use in your condition expressions depending on the action you target.
-
-### Queue name
-
-> [!div class="mx-tdCol2BreakAll"]
-> | Property | Value |
-> | | |
-> | **Display name** | Queue name |
-> | **Description** | Name of a storage queue. |
-> | **Attribute** | `Microsoft.Storage/storageAccounts/queueServices/queues:name` |
-> | **Attribute source** | Resource |
-> | **Attribute type** | String |
- ## See also - [Example Azure role assignment conditions (preview)](storage-auth-abac-examples.md)
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Title: Authorize access to blobs using Azure role assignment conditions (preview)
-description: Authorize access to Azure blobs using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
+description: Authorize access to Azure blobs and Azure Data Lake Storage Gen2 (ADLS Gen2) using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
Previously updated : 09/01/2022 Last updated : 09/14/2022
> [!IMPORTANT] > Azure ABAC and Azure role assignment conditions are currently in preview.
+>
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [con
## Overview of conditions in Azure Storage
-Azure Storage enables the [use of Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to blob, queue, and table resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access blob, queue and table data. You can also define custom roles with select set of permissions. Azure Storage supports role assignments for storage accounts or blob containers.
+You can [use Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure Storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure Storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers.
-Azure ABAC builds on Azure RBAC by adding role assignment conditions in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:
+Azure ABAC builds on Azure RBAC by adding [role assignment conditions](../../role-based-access-control/conditions-overview.md) in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:
- Security principal that is requesting authorization - Resource to which access is being requested - Parameters of the request
Azure ABAC builds on Azure RBAC by adding role assignment conditions in the cont
The benefits of using role assignment conditions are: - **Enable finer-grained access to resources** - For example, if you want to grant a user read access to blobs in your storage accounts only if the blobs are tagged as Project=Sierra, you can use conditions on the read action using tags as an attribute.-- **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as, a blob or a container).
+- **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as a blob or a container).
- **Express access control rules in terms of attributes with business meaning** - For example, you can express your conditions using attributes that represent a project name, business application, organization function, or classification level. The tradeoff of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
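To make the tag-based example above concrete, here's a minimal, hypothetical sketch (not taken from this article) of assigning **Storage Blob Data Reader** with a condition that limits reads to blobs tagged `Project=Sierra`. The principal, scope, and exact tag attribute path are assumptions you would adapt to your environment:

```azurecli
# Hypothetical sketch: allow blob reads only when the blob carries the tag Project=Sierra.
condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Sierra'))"

az role assignment create \
    --role "Storage Blob Data Reader" \
    --assignee "<object-id-or-user-principal-name>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
    --condition "$condition" \
    --condition-version "2.0"
```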
-Role-assignment conditions in Azure Storage are supported for blobs. You can use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them. Conditions are currently not supported for queue, table, or file resources in Azure Storage.
+Role-assignment conditions in Azure Storage are supported for Azure Blob storage. You can also use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them (ADLS Gen2).
## Supported attributes and operations
In this preview, you can add conditions to built-in roles or custom roles. The b
- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) - [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner).-- [Storage Queue Data Contributor](../../role-based-access-control/built-in-roles.md#storage-queue-data-contributor)-- [Storage Queue Data Message Processor](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor)-- [Storage Queue Data Message Sender](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender)-- [Storage Queue Data Reader](../../role-based-access-control/built-in-roles.md#storage-queue-data-reader) You can use conditions with custom roles so long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Previously updated : 08/31/2022 Last updated : 09/14/2022
To learn how to configure customer-managed keys for a new storage account, see [
## About the preview
-To use the preview, you must register for the Azure Active Directory federated client identity feature. Follow these instructions to register with PowerShell or Azure CLI:
+To use the preview, you must register for the Azure Active Directory federated client identity feature in the ISV's tenant. Follow these instructions to register with PowerShell or Azure CLI:
### [PowerShell](#tab/powershell-preview)
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Previously updated : 08/31/2022 Last updated : 09/14/2022
To learn how to configure customer-managed keys for an existing storage account,
## About the preview
-To use the preview, you must register for the Azure Active Directory federated client identity feature. Follow these instructions to register with PowerShell or Azure CLI:
+To use the preview, you must register for the Azure Active Directory federated client identity feature in the ISV's tenant. Follow these instructions to register with PowerShell or Azure CLI:
### [PowerShell](#tab/powershell-preview)
storage File Sync Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-introduction.md
description: An overview of Azure File Sync, a service that enables you to creat
Previously updated : 04/19/2021 Last updated : 09/14/2022
Azure File Sync is backed by Azure Files, which offers several redundancy option
### Cloud-side backup
-Reduce your on-premises backup spending by taking centralized backups in the cloud using Azure Backup. Azure Files SMB shares have native snapshot capabilities, and the process can be automated using Azure Backup to schedule your backups and manage their retention. Azure Backup also integrates with your on-premises servers, so when you restore to the cloud, these changes are automatically downloaded on your Windows Servers.
+Reduce your on-premises backup spending by taking centralized backups in the cloud using Azure Backup. SMB Azure file shares have native snapshot capabilities, and the process can be automated using Azure Backup to schedule your backups and manage their retention. Azure Backup also integrates with your on-premises servers, so when you restore to the cloud, these changes are automatically downloaded on your Windows Servers.
+
+## Training
+
+For self-paced training, see the following modules:
+
+- [Implement a hybrid file server infrastructure](/training/modules/implement-hybrid-file-server-infrastructure/)
+- [Extend your on-premises file share capacity using Azure File Sync](/training/modules/extend-share-capacity-with-azure-file-sync/)
+
+## Architecture
+
+For guidance on architecting solutions with Azure Files and Azure File Sync using established patterns and practices, see the following:
+
+- [Azure enterprise cloud file share](/azure/architecture/hybrid/azure-files-private)
+- [Hybrid file services](/azure/architecture/hybrid/hybrid-file-services)
+- [Hybrid file share with disaster recovery for remote and local branch workers](/azure/architecture/example-scenario/hybrid/hybrid-file-share-dr-remote-local-branch-workers)
## Next Steps
storage File Sync Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md
Many organizations use a proxy server as an intermediary between resources insid
For more information on how to configure Azure File Sync with a proxy server, see [Configuring Azure File Sync with a proxy server](file-sync-firewall-and-proxy.md). ### Configuring firewalls and service tags
-Many organizations isolate their file servers from most internet locations for security purposes. To use Azure File Sync in such an environment, you need to adjust your firewall to open it up to select Azure services. You can do this by retrieving the IP address ranges for the services you required through a mechanism called [service tags](../../virtual-network/service-tags-overview.md).
+Many organizations isolate their file servers from most internet locations for security purposes. To use Azure File Sync in such an environment, you need to configure your firewall to allow outbound access to select Azure services. You can do this by allowing outbound access over port 443 to the [required cloud endpoints](file-sync-firewall-and-proxy.md#firewall) that host those Azure services, if your firewall supports URL/domain-based rules. If it doesn't, you can retrieve the IP address ranges for these Azure services through [service tags](../../virtual-network/service-tags-overview.md).
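If your firewall works from IP ranges rather than domains, one hedged way to pull the current ranges for a given tag is the `az network list-service-tags` command; the region is a placeholder, and `StorageSyncService` is used here only as an example tag:

```azurecli
# Illustrative sketch only: list the current IP address prefixes published for the
# StorageSyncService tag in a given region.
az network list-service-tags --location <region> \
    --query "values[?name=='StorageSyncService'].properties.addressPrefixes" \
    --output tsv
```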
Azure File Sync requires the IP address ranges for the following services, as identified by their service tags:
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 09/09/2022 Last updated : 09/15/2022
After enabling Azure AD Kerberos authentication, you'll need to explicitly grant
4. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**. 5. Select **API permissions** in the left pane.
-6. Select **Add permissions** at the bottom of the page.
-7. Select **Grant admin consent for "DirectoryName"**.
+6. Select **Grant admin consent for "DirectoryName"**.
+7. Select **Yes** to confirm.
## Disable multi-factor authentication on the storage account
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
description: An overview of Azure Files, a service that enables you to create an
Previously updated : 07/23/2021 Last updated : 09/14/2022
Here are some videos on the common use cases of Azure Files:
* [Replace your file server with a serverless Azure file share](https://sec.ch9.ms/ch9/3358/0addac01-3606-4e30-ad7b-f195f3ab3358/ITOpsTalkAzureFiles_high.mp4) * [Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop leveraging AD authentication](https://www.youtube.com/embed/9S5A1IJqfOQ)
+To get started using Azure Files, see [Quickstart: Create and use an Azure file share](storage-how-to-use-files-portal.md).
+ ## Why Azure Files is useful Azure file shares can be used to:
Azure file shares can be used to:
* **Resiliency**. Azure Files has been built from the ground up to be always available. Replacing on-premises file shares with Azure Files means you no longer have to wake up to deal with local power outages or network issues. * **Familiar programmability**. Applications running in Azure can access data in the share via file [system I/O APIs](/dotnet/api/system.io.file). Developers can therefore leverage their existing code and skills to migrate existing applications. In addition to System IO APIs, you can use [Azure Storage Client Libraries](/previous-versions/azure/dn261237(v=azure.100)) or the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api).
+## Training
+
+For self-paced training, see the following modules:
+
+- [Introduction to Azure Files](/training/modules/introduction-to-azure-files/)
+- [Configure Azure Files and Azure File Sync](/training/modules/configure-azure-files-file-sync/)
+
+## Architecture
+
+For guidance on architecting solutions on Azure Files using established patterns and practices, see the following:
+
+- [Azure enterprise cloud file share](/azure/architecture/hybrid/azure-files-private)
+- [Hybrid file services](/azure/architecture/hybrid/hybrid-file-services)
+- [Use Azure file shares in a hybrid environment](/azure/architecture/hybrid/azure-file-share)
+- [Hybrid file share with disaster recovery for remote and local branch workers](/azure/architecture/example-scenario/hybrid/hybrid-file-share-dr-remote-local-branch-workers)
+- [Azure files accessed on-premises and secured by AD DS](/azure/architecture/example-scenario/hybrid/azure-files-on-premises-authentication)
+ ## Case studies * Organizations across the world are leveraging Azure Files and Azure File Sync to optimize file access and storage. [Check out their case studies here](azure-files-case-study.md).
Azure file shares can be used to:
* [Connect and mount an SMB share on Linux](storage-how-to-use-files-linux.md) * [Connect and mount an SMB share on macOS](storage-how-to-use-files-mac.md) * [Connect and mount an NFS share on Linux](storage-files-how-to-mount-nfs-shares.md)
+* [Azure Files FAQ](storage-files-faq.md)
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-access-azure-active-directory.md
Previously updated : 07/13/2021 Last updated : 09/14/2022 -+ # Authorize access to queues using Azure Active Directory
storage Queues Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-attributes.md
+
+ Title: Actions and attributes for Azure role assignment conditions for Azure queues | Microsoft Docs
+
+description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure queues.
+++++ Last updated : 09/14/2022+++++
+# Actions and attributes for Azure role assignment conditions for Azure queues
+
+> [!IMPORTANT]
+> Azure ABAC and Azure role assignment conditions are currently in preview.
+>
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This article describes the supported attribute dictionaries that can be used in conditions on Azure role assignments for each Azure Storage [DataAction](../../role-based-access-control/role-definitions.md#dataactions). For the list of Queue service operations that are affected by a specific permission or DataAction, see [Permissions for Queue service operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-queue-service-operations).
+
+To understand the role assignment condition format, see [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md).
+
+## Azure Queue storage actions
+
+This section lists the supported Azure Queue storage actions you can target for conditions.
+
+### Peek messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Peek messages |
+> | **Description** | DataAction for peeking messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/read` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put a message |
+> | **Description** | DataAction for putting a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/add/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Put or update a message
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Put or update a message |
+> | **Description** | DataAction for putting or updating a message. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/write` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Clear messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Clear messages |
+> | **Description** | DataAction for clearing messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/delete` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+### Get or delete messages
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Get or delete messages |
+> | **Description** | DataAction for getting or deleting messages. |
+> | **DataAction** | `Microsoft.Storage/storageAccounts/queueServices/queues/messages/process/action` |
+> | **Resource attributes** | [Account name](#account-name)<br/>[Queue name](#queue-name) |
+> | **Request attributes** | |
+> | **Principal attributes support** | True |
+
+## Azure Queue storage attributes
+
+This section lists the Azure Queue storage attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
+
+> [!NOTE]
+> Attributes and values listed are considered case-insensitive, unless stated otherwise.
+
+### Account name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Account name |
+> | **Description** | Name of a storage account. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:name] StringEquals 'sampleaccount'` |
+
+### Queue name
+
+> [!div class="mx-tdCol2BreakAll"]
+> | Property | Value |
+> | | |
+> | **Display name** | Queue name |
+> | **Description** | Name of a storage queue. |
+> | **Attribute** | `Microsoft.Storage/storageAccounts/queueServices/queues:name` |
+> | **Attribute source** | Resource |
+> | **Attribute type** | String |
+
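To show how these attributes combine with a DataAction, here is a minimal sketch (Python is used only to build the string) of a condition that limits the peek-messages action described earlier to a single queue. The queue name `sample-queue` is a placeholder; the resulting expression is what you'd supply as the condition on a role assignment.

```python
# Sketch: compose a condition that restricts the peek-messages DataAction to one queue.
data_action = "Microsoft.Storage/storageAccounts/queueServices/queues/messages/read"
queue_attribute = "Microsoft.Storage/storageAccounts/queueServices/queues:name"

condition = (
    "("
    f" (!(ActionMatches{{'{data_action}'}}))"
    " OR"
    f" (@Resource[{queue_attribute}] StringEquals 'sample-queue')"
    ")"
)
print(condition)
```

The outer `OR` shape means the condition only constrains requests that match the named DataAction; all other actions granted by the role are unaffected.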
+## See also
+
+- [Example Azure role assignment conditions (preview)](../blobs/storage-auth-abac-examples.md)
+- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
storage Queues Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac.md
+
+ Title: Authorize access to queues using Azure role assignment conditions | Microsoft Docs
+
+description: Authorize access to Azure queues using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
+++++ Last updated : 09/14/2022+++++
+# Authorize access to queues using Azure role assignment conditions
+
+> [!IMPORTANT]
+> Azure ABAC is currently in preview and is provided without a service level agreement. It is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with an access request such as the security principal, the resource, the environment and the request itself. With ABAC, you can grant a security principal access to a resource based on [Azure role assignment conditions](../../role-based-access-control/conditions-overview.md).
+
+## Overview of conditions in Azure Storage
+
+You can [use Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers or queues.
+
+Azure ABAC builds on Azure RBAC by adding [role assignment conditions](../../role-based-access-control/conditions-overview.md) in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:
+- Security principal that is requesting authorization
+- Resource to which access is being requested
+- Parameters of the request
+- Environment from which the request originates
+
+The benefits of using role assignment conditions are:
+- **Enable finer-grained access to resources** - For example, if you want to grant a user access to peek messages in a specific queue, you can use peek messages DataAction and the queue name storage attribute.
+- **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as a queue).
+- **Express access control rules in terms of attributes with business meaning** - For example, you can express your conditions using attributes that represent a project name, business application, organization function, or classification level.
+
+The tradeoff of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
+
+## Supported attributes and operations
+You can configure conditions on role assignments for [DataActions](../../role-based-access-control/role-definitions.md#dataactions) to achieve these goals. You can use conditions with a [custom role](../../role-based-access-control/custom-roles.md) or select built-in roles. Note that conditions aren't supported for management [Actions](../../role-based-access-control/role-definitions.md#actions) through the [Storage resource provider](/rest/api/storagerp).
+
+You can add conditions to built-in roles or custom roles. The built-in roles on which you can use role-assignment conditions include:
+- [Storage Queue Data Contributor](../../role-based-access-control/built-in-roles.md#storage-queue-data-contributor)
+- [Storage Queue Data Message Processor](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor)
+- [Storage Queue Data Message Sender](../../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender)
+- [Storage Queue Data Reader](../../role-based-access-control/built-in-roles.md#storage-queue-data-reader)
+
+You can use conditions with custom roles so long as the role includes [actions that support conditions](../blobs/storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
+
+The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows use of `@Principal`, `@Resource` or `@Request` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account or a queue. A `@Request` attribute refers to an attribute or parameter included in a storage operation request.
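As a rough sketch of how a `@Resource` condition is attached to a role assignment, the following uses the `azure-mgmt-authorization` Python package. It assumes a recent package version whose role assignment parameters accept `condition` and `condition_version` (passed here as a plain dict); the subscription, scope, role definition GUID, principal ID, and queue name are placeholders.

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Condition: constrain the read (peek) DataAction to a single queue named 'sample-queue'.
condition = (
    "((!(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/read'})) OR "
    "(@Resource[Microsoft.Storage/storageAccounts/queueServices/queues:name] StringEquals 'sample-queue'))"
)

client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters={
        # Role definition GUID is a placeholder, for example a queue data role.
        "role_definition_id": f"/subscriptions/{subscription_id}"
                              "/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>",
        "principal_id": "<principal-object-id>",
        "principal_type": "User",
        "condition": condition,
        "condition_version": "2.0",
    },
)
```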
+
+Azure RBAC currently supports 2,000 role assignments in a subscription. If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and [Azure AD custom security attributes]() for principals.
+
+## Next steps
+
+- [Prerequisites for Azure role assignment conditions](../../role-based-access-control/conditions-prerequisites.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage](queues-auth-abac-attributes.md)
+- [Example Azure role assignment conditions](../blobs/storage-auth-abac-examples.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
+
+## See also
+
+- [What is Azure attribute-based access control (Azure ABAC)?](../../role-based-access-control/conditions-overview.md)
+- [FAQ for Azure role assignment conditions](../../role-based-access-control/conditions-faq.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
+- [Scale the management of Azure role assignments by using conditions and custom security attributes (preview)](../../role-based-access-control/conditions-custom-security-attributes-example.md)
+- [Security considerations for Azure role assignment conditions in Azure Storage](../blobs/storage-auth-abac-security.md)
stream-analytics Stream Analytics Documentdb Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-documentdb-output.md
Previously updated : 02/2/2020 Last updated : 09/15/2022 # Azure Stream Analytics output to Azure Cosmos DB
If you're unfamiliar with Azure Cosmos DB, see the [Azure Cosmos DB documentatio
## Basics of Azure Cosmos DB as an output target The Azure Cosmos DB output in Stream Analytics enables writing your stream processing results as JSON output into your Azure Cosmos DB containers.
-Stream Analytics doesn't create containers in your database. Instead, it requires you to create them up front. You can then control the billing costs of Azure Cosmos DB containers. You can also tune the performance, consistency, and capacity of your containers directly by using the [Azure Cosmos DB APIs](/rest/api/cosmos-db/).
+Stream Analytics doesn't create containers in your database. Instead, it requires you to create them beforehand. You can then control the billing costs of Azure Cosmos DB containers. You can also tune the performance, consistency, and capacity of your containers directly by using the [Azure Cosmos DB APIs](/rest/api/cosmos-db/).
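Because the job won't create containers for you, the target container has to exist before the output starts writing. Here's a minimal sketch using the `azure-cosmos` Python SDK; the account URI, key, database and container names, partition key path, and throughput are all placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-uri>", credential="<account-key>")

# Create the database and container up front so the Stream Analytics output can write into them.
database = client.create_database_if_not_exists(id="streamoutput")
container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),  # align with the output's partition key column
    offer_throughput=400,
)
print(container.id)
```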
-> [!Note]
-> You must add 0.0.0.0 to the list of allowed IPs from your Azure Cosmos DB firewall.
The following sections detail some of the container options for Azure Cosmos DB.
If a transient failure, service unavailability, or throttling happens while Stre
* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md) * [Azure Stream Analytics output to Azure SQL Database](stream-analytics-sql-output-perf.md)
-* [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md)
+* [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md)
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
If necessary due to outstanding security issues, runtime usage, or other factors
### End of life and retirement As of the applicable EOL date, runtimes are considered retired. * Existing Spark Pools definitions and metadata will still exist in the workspace for a defined period, yet __all pipelines, jobs, and notebooks will be unable to execute__.
-* Spark Pools definitions will be deleted from the Synapse workspace in 90 days after the applicable EOL date.
+* Spark Pools definitions will be deleted from the Synapse workspace in 90 days after the applicable EOL date.
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
The more serverless SQL pool knows about your data, the faster it can execute qu
The serverless SQL pool query optimizer is a cost-based optimizer. It compares the cost of various query plans, and then chooses the plan with the lowest cost. In most cases, it chooses the plan that will execute the fastest.
-For example, if the optimizer estimates that the date your query is filtering on will return one row it will choose one plan. If it estimates that the selected date will return 1 million rows, it will return a different plan.
+For example, if the optimizer estimates that the date your query is filtering on will return one row, it will choose one plan. If it estimates that the selected date will return 1 million rows, it will pick a different plan.
### Automatic creation of statistics
Serverless SQL pool analyzes incoming user queries for missing statistics. If st
The SELECT statement will trigger automatic creation of statistics. > [!NOTE]
-> Automatic creation of statistics is turned on for Parquet files. For CSV files, you need to create statistics manually until automatic creation of CSV files statistics is supported.
+> Automatic creation of statistics is turned on for Parquet files. For CSV files, statistics will be automatically created if you use OPENROWSET. You need to create statistics manually if you use CSV external tables.
Automatic creation of statistics is done synchronously so you may incur slightly degraded query performance if your columns are missing statistics. The time to create statistics for a single column depends on the size of the files targeted. ### Manual creation of statistics
-Serverless SQL pool lets you create statistics manually. For CSV files, you have to create statistics manually because automatic creation of statistics isn't turned on for CSV files.
+Serverless SQL pool lets you create statistics manually. For CSV external tables, you have to create statistics manually because automatic creation of statistics isn't turned on for CSV external tables.
See the following examples for instructions on how to manually create statistics.
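For instance, here is a hedged sketch of creating single-column statistics on a CSV external table from Python over ODBC. The serverless endpoint, database, table `population_csv`, and column `country` are placeholders; serverless SQL pool expects `FULLSCAN` and `NORECOMPUTE` for manually created statistics.

```python
import pyodbc

# Connect to the serverless SQL pool endpoint of the Synapse workspace (placeholders throughout).
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>-ondemand.sql.azuresynapse.net;"
    "Database=<database>;"
    "Authentication=ActiveDirectoryInteractive;UID=<user@domain>;"
)

# Manually create statistics on a column of a hypothetical CSV external table.
conn.execute(
    "CREATE STATISTICS country_stats ON population_csv (country) "
    "WITH FULLSCAN, NORECOMPUTE"
)
conn.commit()
```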
When statistics are stale, new ones will be created. The algorithm goes through
Manual stats are never declared stale. > [!NOTE]
-> Automatic recreation of statistics is turned on for Parquet files. For CSV files, you need to drop and create statistics manually until automatic creation of CSV files statistics is supported. Check the examples below on how to drop and create statistics.
+> Automatic recreation of statistics is turned on for Parquet files. For CSV files, statistics will be recreated if you use OPENROWSET. You need to drop and create statistics manually for CSV external tables. Check the examples below on how to drop and create statistics.
One of the first questions to ask when you're troubleshooting a query is, **"Are the statistics up to date?"**
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
For each table, you'll see the following status:
* **Failed:** the data on the source table can't be replicated to the destination due to a fatal error. If you want to retry after fixing the error, remove the table from the link connection and add it back. * **Suspended:** replication is suspended for this table due to an error. It will be resumed after the error is resolved.
+You can also get the following metrics to enable advanced monitoring of the service:
+
+* **Link connection events:** number of link connection events including start, stop, or failure.
+* **Link table events:** number of link table events including snapshot, removal, or failure.
+* **Link latency in seconds:** data processing latency in seconds.
+* **Link processed data volume (bytes):** volume of data in bytes processed by Synapse Link for SQL.
+* **Link processed rows:** number of changed rows processed by Synapse Link for SQL.
+
+For more information, see [Manage Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed-manage).
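These metrics surface through Azure Monitor, so they can also be pulled programmatically. A rough sketch with the `azure-monitor-query` Python package follows; the workspace resource ID is a placeholder, and the metric names shown are illustrative stand-ins for the link metrics above rather than confirmed metric IDs (use the resource's metric definitions to look up the exact names).

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Synapse/workspaces/<workspace>"
)

# Illustrative metric names; list_metric_definitions(workspace_id) returns the real IDs.
response = client.query_resource(
    workspace_id,
    metric_names=["LinkConnectionEvents", "LinkLatencyInSeconds"],
    timespan=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```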
+ ## Transactional consistency across tables You can enable transactional consistency across tables for each link connection. However, it limits overall replication throughput.
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
For each table, you'll see the following status:
* **Failed:** the data on the source table can't be replicated to the destination. If you want to retry after fixing the error, remove the table from the link connection and add it back. * **Suspended:** replication is suspended for this table due to an error. It will be resumed after the error is resolved.
+You can also get the following metrics to enable advanced monitoring of the service:
+
+* **Link connection events:** number of link connection events including start, stop, or failure.
+* **Link table events:** number of link table events including snapshot, removal, or failure.
+* **Link latency in seconds:** data processing latency in seconds.
+* **Link processed data volume (bytes):** volume of data in bytes processed by Synapse Link for SQL.
+* **Link processed rows:** number of changed rows processed by Synapse Link for SQL.
+ For more information, see [Manage Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed-manage). ## Transactional consistency across tables
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Title: Previous monthly updates in Azure Synapse Analytics
+ Title: Previous monthly updates in Azure Synapse Analytics
description: Archive of the new features and documentation improvements for Azure Synapse Analytics -+ Last updated : 09/09/2022 Previously updated : 05/20/2022 # Previous monthly updates in Azure Synapse Analytics This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## June 2022 update
+
+### General
+
+* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models.
+
+* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
+### SQL
+
+**Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in Serverless SQL pools has been increased from 200 GB to 400 GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints).
+
+### Synapse data explorer
+
+* **Web Explorer new homepage** - The new Synapse Web Explorer homepage makes it even easier to get started with Synapse Web Explorer. The [Web Explorer homepage](https://dataexplorer.azure.com/home) now includes the following sections:
+
+ * Get started - Sample gallery offering example queries and dashboards for popular Synapse Data Explorer use cases.
+ * Recommended - Popular learning modules designed to help you master Synapse Web Explorer and KQL.
+ * Documentation - Synapse Web Explorer basic and advanced documentation.
+
+* **Web Explorer sample gallery** - A great way to learn about a product is to see how it is being used by others. The Web Explorer sample gallery provides end-to-end samples of how customers use Synapse Data Explorer for popular use cases such as logs data, metrics data, IoT data, and basic big data. Each sample includes the dataset, well-documented queries, and a sample dashboard. To learn more about the sample gallery, read [Azure Data Explorer in 60 minutes with the new samples gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552).
+
+* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards. The new drill through capabilities allow you to easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. Defining these contextual drill throughs is done by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters).
+
+* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the query results and the dashboard. When you change the time zone, dashboards are automatically refreshed to present the data in the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone).
+
+### Data integration
+
+* **Fuzzy Join option in Join Transformation** - Fuzzy matching with a sliding similarity score option has been added to the Join transformation in Mapping Data Flows. You can create inner and outer joins on data values that are similar rather than exact matches! Previously, you would have had to use an exact match. The sliding scale value goes from 60% to 100%, making it easy to adjust the similarity threshold of the match. For learn more about fuzzy joins, read [Join transformation in mapping data flow](../data-factory/data-flow-join.md).
+
+* **Map Data [Generally Available]** - We're excited to announce that the Map Data tool is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about Map Data, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).
+
+* **Rerun pipeline with new parameters** - You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. After running a pipeline with new parameters, you can easily monitor the new run against the old ones without having to toggle between pages. To learn more about rerunning pipelines with new parameters, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).
+
+* **User Defined Functions [Generally Available]** - We're excited to announce that user defined functions (UDFs) are now Generally Available. With user-defined functions, you can create customized expressions that can be reused across multiple mapping data flows. You no longer have to use the same string manipulation, math calculations, or other complex logic several times. User-defined functions will be grouped in libraries to help developers group common sets of functions. To learn more about user defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
+
+### Machine learning
+
+**Distributed Deep Neural Network Training with Horovod and Petastorm [Public Preview]** - To simplify the process for creating and managing GPU-accelerated pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes.
+
+Now, Azure Synapse Analytics provides built-in support for deep learning infrastructure. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries like TensorFlow and PyTorch. The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in Public Preview.
+
+To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md).
+ ## May 2022 update The following updates are new to Azure Synapse Analytics this month. ### General
-**Get connected with the new Azure Synapse Influencer program!** [Join a community of Azure Synapse Influencers](https://aka.ms/synapseinfluencers) who are helping each other achieve more with cloud analytics! The Azure Synapse Influencer program recognizes Azure Synapse Analytics users and advocates who actively support the community by sharing Synapse-related content, announcements, and product news via social media.
+**Get connected with the new Azure Synapse Influencer program!** [Join a community of Azure Synapse Influencers](https://aka.ms/synapseinfluencers) who are helping each other achieve more with cloud analytics! The Azure Synapse Influencer program recognizes Azure Synapse Analytics users and advocates who actively support the community by sharing Synapse-related content, announcements, and product news via social media.
### SQL
-* **Data Warehouse Migration guide for Dedicated SQL Pools in Azure Synapse Analytics** - With the benefits that cloud migration offers, we hear that you often look for steps, processes, or guidelines to follow for quick and easy migrations from existing data warehouse environments. We just released a set of [Data Warehouse migration guides](./migration-guides/index.yml) to make your transition to dedicated SQL Pools in Azure Synapse Analytics easier.
+* **Data Warehouse Migration guide for Dedicated SQL Pools in Azure Synapse Analytics** - With the benefits that cloud migration offers, we hear that you often look for steps, processes, or guidelines to follow for quick and easy migrations from existing data warehouse environments. We just released a set of [Data Warehouse migration guides](./migration-guides/index.yml) to make your transition to dedicated SQL Pools in Azure Synapse Analytics easier.
* **Automatic character column length calculation** - It's no longer necessary to define character column lengths! Serverless SQL pools let you query files in the data lake without knowing the schema upfront. The best practice was to specify the lengths of character columns to get optimal performance. Not anymore! With this new feature, you can get optimal query performance without having to define the schema. The serverless SQL pool will calculate the average column length for each inferred character column or character column defined as larger than 100 bytes. The schema will stay the same, while the serverless SQL pool will use the calculated average column lengths internally. It will also automatically calculate the cardinality estimation in case there was no previously created statistic. ### Apache Spark for Synapse
-* **Azure Synapse Dedicated SQL Pool Connector for Apache Spark Now Available in Python** - Previously, the Azure Synapse Dedicated SQL Pool connector was only available using Scala. Now, it can be used with Python on Spark 3. The only difference between the Scala and Python implementations is the optional Scala callback handle, which allows you to receive post-write metrics.
+* **Azure Synapse Dedicated SQL Pool Connector for Apache Spark Now Available in Python** - Previously, the Azure Synapse Dedicated SQL Pool connector was only available using Scala. Now, it can be used with Python on Spark 3. The only difference between the Scala and Python implementations is the optional Scala callback handle, which allows you to receive post-write metrics.
- The following are now supported in Python on Spark 3:
+ The following are now supported in Python on Spark 3:
* Read using Azure Active Directory (AD) Authentication or Basic Authentication * Write to Internal Table using Azure AD Authentication or Basic Authentication
- * Write to External Table using Azure AD Authentication or Basic Authentication
+ * Write to External Table using Azure AD Authentication or Basic Authentication
To learn more about the connector in Python, read [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md).
The following updates are new to Azure Synapse Analytics this month.
* **Synapse Data Explorer live query in Excel** - Using the new Data Explorer web experience Open in Excel feature, you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members.  You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To learn more about Excel live query, read [Open live query in Excel](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500).
-* **Use Managed Identities for External SQL Server Tables** - One of the key benefits of Azure Synapse is the ability to bring together data integration, enterprise data warehousing, and big data analytics. With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now use managed identities instead of entering in your credentials.
-
- An external SQL table is a schema entity that references data stored outside the Synapse Data Explorer database. Using the Create and alter SQL Server external tables command, External SQL tables can easily be added to the Synapse Data Explorer database schema.
+* **Use Managed Identities for External SQL Server Tables** - One of the key benefits of Azure Synapse is the ability to bring together data integration, enterprise data warehousing, and big data analytics. With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now use managed identities instead of entering in your credentials.
- To learn more about managed identities, read [Managed identities overview](/azure/data-explorer/managed-identities-overview).
+ An external SQL table is a schema entity that references data stored outside the Synapse Data Explorer database. Using the Create and alter SQL Server external tables command, External SQL tables can easily be added to the Synapse Data Explorer database schema.
- To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).
+ To learn more about managed identities, read [Managed identities overview](/azure/data-explorer/managed-identities-overview).
-* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/learn/modules/gain-insights-data-kusto-query-language/).
-
- KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
+ To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).
- Check out the newest [KQL Learn Model](/learn/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
+* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/learn/modules/gain-insights-data-kusto-query-language/).
- To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
+ KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
-* **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps [Generally Available]** - The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage).
+ Check out the newest [KQL Learn Model](/learn/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
-* **Dynamic events routing from event hub to multiple databases** - Routing events from Event Hub/IOT Hub/Event Grid is an activity commonly performed by Azure Data Explorer (ADX) users. Previously, you could route events only to a single database per defined connection. If you wanted to route the events to multiple databases, you needed to create multiple ADX cluster connections.
+ To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
- To simplify the experience, we now support routing events data to multiple databases hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing).
+* **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps [Generally Available]** - The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage).
-* **Configure a database using a KQL inline script as part of JSON ARM deployment template** - Previously, Azure Data Explorer supported running a Kusto Query Language (KQL) script to configure your database during Azure Resource Management (ARM) template deployment. Now, this can be done using an inline script provided inline as a parameter to a JSON ARM template. To learn more about using a KQL inline script, read [Configure a database using a Kusto Query Language script](/azure/data-explorer/database-script).
+* **Dynamic events routing from event hub to multiple databases** - Routing events from Event Hub/IOT Hub/Event Grid is an activity commonly performed by Azure Data Explorer (ADX) users. Previously, you could route events only to a single database per defined connection. If you wanted to route the events to multiple databases, you needed to create multiple ADX cluster connections.
+
+ To simplify the experience, we now support routing events data to multiple databases hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing).
+
+* **Configure a database using a KQL inline script as part of JSON ARM deployment template** - Previously, Azure Data Explorer supported running a Kusto Query Language (KQL) script to configure your database during Azure Resource Manager (ARM) template deployment. Now, this can be done using an inline script provided inline as a parameter to a JSON ARM template. To learn more about using a KQL inline script, read [Configure a database using a Kusto Query Language script](/azure/data-explorer/database-script).
### Data Integration
-* **Export pipeline monitoring as a CSV** - The ability to export pipeline monitoring to CSV has been added after receiving many community requests for the feature. Simply filter the Pipeline runs screen to the data you want and click ΓÇÿExport to CSVΓÇÖ. To learn more about exporting pipeline monitoring and other monitoring improvements, read [Azure Data Factory monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531).
+* **Export pipeline monitoring as a CSV** - The ability to export pipeline monitoring to CSV has been added after receiving many community requests for the feature. Simply filter the Pipeline runs screen to the data you want and select **Export to CSV**. To learn more about exporting pipeline monitoring and other monitoring improvements, read [Azure Data Factory monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531).
-* **Incremental data loading made easy for Synapse and Azure Database for PostgreSQL and MySQL** - In a data integration solution, incrementally loading data after an initial full data load is a widely used scenario. Automatic incremental source data loading is now natively available for Synapse SQL and Azure Database for PostgreSQL and MySQL. With a simple click, users can ΓÇ£enable incremental extractΓÇ¥ and only inserted or updated rows will be read by the pipeline. To learn more about incremental data loading, read [Incrementally copy data from a source data store to a destination data store](../data-factory/tutorial-incremental-copy-overview.md).
+* **Incremental data loading made easy for Synapse and Azure Database for PostgreSQL and MySQL** - In a data integration solution, incrementally loading data after an initial full data load is a widely used scenario. Automatic incremental source data loading is now natively available for Synapse SQL and Azure Database for PostgreSQL and MySQL. Users can "enable incremental extract" and only inserted or updated rows will be read by the pipeline. To learn more about incremental data loading, read [Incrementally copy data from a source data store to a destination data store](../data-factory/tutorial-incremental-copy-overview.md).
-* **User-Defined Functions for Mapping Data Flows [Public Preview]** - We hear you that you can find yourself doing the same string manipulation, math calculations, or other complex logic several times. Now, with the new user-defined function feature, you can create customized expressions that can be reused across multiple mapping data flows. User-defined functions will be grouped in libraries to help developers group common sets of functions. Once youΓÇÖve created a data flow library, you can add in your user-defined functions. You can even add in multiple arguments to make your function more reusable. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
+* **User-Defined Functions for Mapping Data Flows [Public Preview]** - We hear you that you can find yourself doing the same string manipulation, math calculations, or other complex logic several times. Now, with the new user-defined function feature, you can create customized expressions that can be reused across multiple mapping data flows. User-defined functions will be grouped in libraries to help developers group common sets of functions. Once you've created a data flow library, you can add in your user-defined functions. You can even add in multiple arguments to make your function more reusable. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
* **Assert Error Handling** - Error handling has now been added to sinks following an assert transformation. Assert transformations enable you to build custom rules for data quality and data validation. You can now choose whether to output the failed rows to the selected sink or to a separate file. To learn more about error handling, read [Assert data transformation in mapping data flow](../data-factory/data-flow-assert.md).
-* **Mapping data flows projection editing** - New UI updates have been made to source projection editing in mapping data flows. You can now update source projection column names and column types with the click of a button. To learn more about source projection editing, read [Source transformation in mapping data flow](../data-factory/data-flow-source.md).
-
-### Synapse Link
+* **Mapping data flows projection editing** - New UI updates have been made to source projection editing in mapping data flows. You can now update source projection column names and column types. To learn more about source projection editing, read [Source transformation in mapping data flow](../data-factory/data-flow-source.md).
-**Azure Synapse Link for SQL [Public Preview]** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and watch our YouTube video.
+### Azure Synapse Link
-> [!VIDEO https://www.youtube.com/embed/pgusZy34-Ek]
+**Azure Synapse Link for SQL [Public Preview]** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek).
## Apr 2022 update
The following updates are new to Azure Synapse Analytics this month.
* Based on popular customer feedback, we've made significant improvements to our exploration experience when creating a lake database using an industry template. To learn more, read [Quickstart: Create a new Lake database leveraging database templates](./database-designer/quick-start-create-lake-database.md).
-* We've added the option to clone a lake database. This unlocks additional opportunities to manage new versions of databases or support schemas that evolve in discrete steps. You can quickly clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md).
+* We've added the option to clone a lake database. This unlocks additional opportunities to manage new versions of databases or support schemas that evolve in discrete steps. You can quickly clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md).
-* You can now use wildcards to specify custom folder hierarchies. Lake databases sit on top of data that is in the lake and this data can live in nested folders that donΓÇÖt fit into clean partition patterns. Previously, querying lake databases required that your data exists in a simple directory structure that you could browse using the folder icon without the ability to manually specify directory structure or use wildcard characters. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md).
+* You can now use wildcards to specify custom folder hierarchies. Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. Previously, querying lake databases required that your data exists in a simple directory structure that you could browse using the folder icon without the ability to manually specify directory structure or use wildcard characters. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md).
### Apache Spark for Synapse
-* We are excited to announce the preview availability of Apache SparkΓäó 3.2 on Synapse Analytics. This new version incorporates user-requested enhancements and resolves 1,700+ Jira tickets. Please review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) for the complete list of fixes and features and review the [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md).
+* We are excited to announce the preview availability of Apache Spark&trade; 3.2 on Synapse Analytics. This new version incorporates user-requested enhancements and resolves 1,700+ Jira tickets. Please review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) for the complete list of fixes and features and review the [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md).
* Assigning parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters has been one of your top feature requests. Now, with the release of parameterization for the Spark job definition activity, you can do just that. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab).
-* We often receive customer requests to access the snapshot of the Notebook when there is a Pipeline Notebook run failure or there is a long-running Notebook job. With the release of the Synapse Notebook snapshot feature, you can now view the snapshot of the Notebook activity run with the original Notebook code, the cell output, and the input parameters. You can also access the snapshot of the referenced Notebook from the referencing Notebook cell output if you refer to other Notebooks through Spark utils. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1).
+* We often receive customer requests to access the snapshot of the Notebook when there is a Pipeline Notebook run failure or there is a long-running Notebook job. With the release of the Synapse Notebook snapshot feature, you can now view the snapshot of the Notebook activity run with the original Notebook code, the cell output, and the input parameters. You can also access the snapshot of the referenced Notebook from the referencing Notebook cell output if you refer to other Notebooks through Spark utils. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1).
### Security * The Synapse Monitoring Operator RBAC role is now generally available. Since the GA of Synapse, customers have asked for a fine-grained RBAC (role-based access control) role that allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. Now, customers can assign the Synapse Monitoring Operator role to such monitoring personas. This allows organizations to stay compliant while having flexibility in the delegation of tasks to individuals or teams. Learn more by reading [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md). ### Data integration
-* Microsoft has added Dataverse as a source and sink connector to Synapse Data Flows so that you can now build low-code data transformation ETL jobs in Synapse directly accessing your Dataverse environment. For more details on how to use this new connector, read [Mapping data flow properties](../data-factory/connector-dynamics-crm-office-365.md#mapping-data-flow-properties).
+* Microsoft has added Dataverse as a source and sink connector to Synapse Data Flows so that you can now build low-code data transformation ETL jobs in Synapse directly accessing your Dataverse environment. For more details on how to use this new connector, read [Mapping data flow properties](../data-factory/connector-dynamics-crm-office-365.md#mapping-data-flow-properties).
* We heard from you that a 1-minute timeout for Web activity was not long enough, especially in cases of synchronous APIs. Now, with the response timeout property 'httpRequestTimeout', you can define timeout for the HTTP request up to 10 minutes. Learn more by reading [Web activity response timeout improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307).
-
+ ### Developer experience
-* Previously, if you wanted to reference a notebook in another notebook, you could only reference published or committed content. Now, when using %run notebooks, you can enable ΓÇÿunpublished notebook referenceΓÇÖ which will allow you to reference unpublished notebooks. When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). To learn more, read [Reference unpublished notebook](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook).
+* Previously, if you wanted to reference a notebook in another notebook, you could only reference published or committed content. Now, when using %run notebooks, you can enable 'unpublished notebook reference' which will allow you to reference unpublished notebooks. When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). To learn more, read [Reference unpublished notebook](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook).
## Mar 2022 update
Improvements to the Synapse Machine Learning library v0.9.5 (previously called MMLSpark).
* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detailed](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud.
-* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using TLS versions lower than 1.2 will fail.
+* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Sign-in attempts to a newly created Synapse workspace from connections using TLS versions lower than 1.2 will fail.
### Data Integration
The following updates are new to Azure Synapse Analytics this month.
### Integrate
-* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
-* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
+* Azure Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
+* Custom partitions for Azure Synapse Link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](./database-designer/overview-map-data.md)
* Quick reuse of spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live)
* External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md)
The following updates are new to Azure Synapse Analytics this month.
* Synapse Data Explorer now available in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1022327194) [article](./data-explorer/data-explorer-overview.md)
-### Working with Databases and Data Lakes
+### Work with Databases and Data Lakes
* Introducing Lake databases (formerly known as Spark databases) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--795630373) [article](./database-designer/concepts-lake-database.md)
* Lake database designer now available in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1691882460) [article](./database-designer/concepts-lake-database.md#database-designer)
The following updates are new to Azure Synapse Analytics this month.
* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md)
* Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754)
-### Synapse Link
+### Azure Synapse Link
-* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
-* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
+* Azure Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
+* Custom partitions for Azure Synapse Link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
## October 2021 update
The following updates are new to Azure Synapse Analytics this month.
### Apache Spark for Synapse
-* Spark performance optimizations [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#spark-performance)
+* Spark performance optimizations [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#spark-performance)
### Security
* All Synapse RBAC roles are now generally available for use in production [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac) [article](./security/synapse-workspace-synapse-rbac-roles.md)
* Apply User-Assigned Managed Identities for Double Encryption [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#user-assigned-managed-identities) [article](./security/workspaces-encryption.md)
* Synapse Administrators now have elevated access to dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#elevated-access) [article](./security/synapse-workspace-access-control-overview.md)
-
-### Governance
+
+### Governance
* Synapse workspaces can now automatically push lineage data to Microsoft Purview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-purview-lineage) [article](../purview/how-to-lineage-azure-synapse-analytics.md)
-
+ ### Integrate
* Use Stringify in data flows to easily transform complex data types to strings [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#stringify-transform) [article](../data-factory/data-flow-stringify.md)
The following updates are new to Azure Synapse Analytics this month.
## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Title: What's new?
+ Title: What's new?
description: Learn about the new features and documentation improvements for Azure Synapse Analytics -++ Last updated : 09/14/2022 Previously updated : 06/29/2022

# What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in June 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
+See below for a recent review of what's new in [Azure Synapse Analytics](overview-what-is.md), and also what features are currently in preview. To follow the latest in Azure Synapse news and features, see the [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) and [companion videos on YouTube](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).
-## General
+For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous monthly updates in Azure Synapse Analytics](whats-new-archive.md).
-* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models.
+## Features currently in preview
-* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
-## SQL
+The following table lists the features of Azure Synapse Analytics that are currently in preview. Preview features are sorted alphabetically.
-**Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in Serverless SQL pools has been increased from 200GB to 400GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints).
+> [!NOTE]
+> Features currently in preview are available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Review these terms for the legal conditions that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Azure Synapse Analytics provides previews to give you a chance to evaluate and [share feedback with the product group](https://feedback.azure.com/d365community/forum/9b9ba8e4-0825-ec11-b6e6-000d3a4f07b8) on features before they become generally available (GA).
-## Synapse data explorer
+| **Feature** | **Learn more**|
+|:-- |:-- |
+| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. |
+| **Azure Synapse Link to SQL** | Azure Synapse Link is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek). |
+| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md).|
+| **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
+| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
+| **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).|
+| **Distributed Deep Neural Network Training** | Learn more about new distributed training libraries like Horovod, Petastorm, TensorFlow, and PyTorch in [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
+| **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | You can now use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information on this preview feature, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). |
+| **Multi-column distribution in dedicated SQL pools** | You can now hash-distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).|
+| **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-data/ba-p/3420904) and [SAP CDC solution in Azure Data Factory](/azure/data-factory/sap-change-data-capture-introduction-architecture).|
+| **Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).|
+| **Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes' temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).|
+| **Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more about the usage scenarios and how to enable this preview feature, read [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md). A short configuration sketch follows this table.|
+| **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) period in a managed virtual network, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).|
+| **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
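
As a quick illustration of the **Spark Optimized Write** entry above, the following sketch shows how the feature might be enabled for a single Synapse Spark session. This is a minimal sketch, not the definitive setup: the configuration key and the storage path are assumptions for illustration only; confirm the supported settings in [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).

```python
# Minimal sketch, run inside a Synapse Spark (3.1/3.2) notebook where `spark` and a
# DataFrame `df` already exist. The configuration key below is an assumption based on
# the Delta Lake Optimize Write docs; verify the exact name for your runtime.
spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")

(df.write
    .format("delta")
    .mode("append")
    .save("abfss://data@contosolake.dfs.core.windows.net/delta/sales"))  # hypothetical path
```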
-* **Web Explorer new homepage** - The new Synapse Web Explorer homepage makes it even easier to get started with Synapse Web Explorer. The [Web Explorer homepage](https://dataexplorer.azure.com/home) now includes the following sections:
+## Generally available features
- * Get started - Sample gallery offering example queries and dashboards for popular Synapse Data Explorer use cases.
- * Recommended - Popular learning modules designed to help you master Synapse Web Explorer and KQL.
- * Documentation - Synapse Web Explorer basic and advanced documentation.
+The following table lists the features of Azure Synapse Analytics that have transitioned from preview to general availability (GA) within the last 12 months.
-* **Web Explorer sample gallery** - A great way to learn about a product is to see how it is being used by others. The Web Explorer sample gallery provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. Each sample includes the dataset, well-documented queries, and a sample dashboard. To learn more about the sample gallery, read [Azure Data Explorer in 60 minutes with the new samples gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552).
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| July 2022 | **Apache Spark&trade; 3.2 on Synapse Analytics** | Apache Spark&trade; 3.2 on Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). Read highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).|
+| July 2022 | **Apache Spark in Azure Synapse Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md).|
+| June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).|
+| June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). |
+| May 2022 | **Azure Synapse Data Explorer connector for Power Automate, Logic Apps, and Power Apps** | The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, and to send notifications and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). |
+| April 2022 | **Cross-subscription restore for Azure Synapse SQL** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Blog: Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff) |
+| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).|
+| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).|
+| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator RBAC (role-based access control) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).|
+| March 2022 | **Flowlets** | Flowlets help you design portions of new data flow logic, or extract portions of an existing data flow, and save them as separate artifacts inside your Synapse workspace. Then, you can reuse these Flowlets inside other data flows. To learn more, review the [Flowlets GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md). |
+| March 2022 | **Change Feed connectors** | Change data capture (CDC) feed data flow source transformations for Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. To learn more, review the [Change Feed connectors GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).|
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. |
+| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). |
+| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
+| October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).|
-* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards. The new drill through capabilities allow you to easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. Defining these contextual drill throughs is done by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters).
+## Community
-* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone).
+This section summarizes new Azure Synapse Analytics community opportunities and the [Azure Synapse Influencers program](https://aka.ms/synapseinfluencers) from Microsoft. Follow us on our [@Azure_Synapse](https://twitter.com/Azure_Synapse/) Twitter account or [#AzureSynapse](https://twitter.com/hashtag/AzureSynapse?src=hashtag_click) for announcements about upcoming Azure Synapse Influencer events.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| May 2022 | **Azure Synapse influencer program** | Sign up for our free [Azure Synapse Influencers program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse-users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencers Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). |
+| March 2022 | **Azure Synapse Analytics and Microsoft MVP YouTube video series** | A joint activity with the Azure Synapse product team and the Microsoft MVP community, a new [YouTube MVP Video Series about the Azure Synapse features](https://www.youtube.com/playlist?list=PLzUAjXZBFU9MEK2trKw_PGk4o4XrOzw4H) has launched. See more at the [Azure Synapse Analytics YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).|
+
+## Apache Spark for Azure Synapse Analytics
+
+This section summarizes recent new features and capabilities of [Apache Spark for Azure Synapse Analytics](spark/apache-spark-overview.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| September 2022 | **New query optimization techniques in Apache Spark for Azure Synapse Analytics** | Read the [findings from Microsoft's work](https://vldb.org/pvldb/vol15/p936-rajan.pdf) to gain considerable performance benefits across the board on the reference TPC-DS workload as well as a significant reduction in query plan generation time. |
+| August 2022 | **Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes' temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).|
+| August 2022 | **Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse preview feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more, see [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).|
+| July 2022 | **Apache Spark 2.4 enters retirement lifecycle** | With the general availability of the Apache Spark 3.2 runtime, the Azure Synapse runtime for Apache Spark 2.4 enters a 12-month retirement cycle. You should relocate your workloads to the newer Apache Spark 3.2 runtime within this period. Read more at [Apache Spark runtimes in Azure Synapse](spark/apache-spark-version-support.md).|
+| May 2022 | **Azure Synapse dedicated SQL pool connector for Apache Spark now available in Python** | Previously, the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md) was only available using Scala. Now, [the dedicated SQL pool connector for Apache Spark can be used with Python on Spark 3](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_6). A short Python sketch follows this table. |
+| May 2022 | **Manage Azure Synapse Apache Spark configuration** | With the new [Apache Spark configurations](./spark/apache-spark-azure-create-spark-configuration.md) feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. |
+| April 2022 | **Apache Spark 3.2 on Synapse Analytics** | Apache Spark 3.2 on Synapse Analytics with preview availability. Review the [official Spark 3.2 release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). |
+| April 2022 | **Parameterization for Spark job definition** | You can now assign parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters for the Spark job definition activity. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab). |
+| April 2022 | **Spark notebook snapshot** | You can access a snapshot of the Notebook when there is a Pipeline Notebook run failure or when there is a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). |
+| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). |
+| March 2022 | **Performance optimization for Synapse Spark dedicated SQL pool connector** | New improvements to the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) reduce data movement and leverage `COPY INTO`. Performance tests indicated at least ~5x improvement over the previous version. No action is required from the user to leverage these enhancements. For more information, see [Blog: Synapse Spark Dedicated SQL Pool (DW) Connector: Performance Improvements](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_10).|
+| March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
+| March 2022 | **Apache Spark in Azure Synapse Analytics Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more on this preview feature, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12).|
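
The following sketch illustrates the Python surface of the dedicated SQL pool connector mentioned above, together with a Spark `SaveMode`. It is a sketch under stated assumptions rather than a definitive implementation: the `synapsesql()` method name follows the linked connector article, while the pool, schema, table, and storage path names are hypothetical placeholders.

```python
# Minimal sketch for a Synapse Spark 3 notebook: load staged data and write it to a
# dedicated SQL pool table through the dedicated SQL pool connector. All names below
# are hypothetical placeholders.
df = spark.read.parquet("abfss://data@contosolake.dfs.core.windows.net/staged/orders")

(df.write
    .mode("overwrite")                    # any Spark SaveMode: append, overwrite, errorifexists, ignore
    .synapsesql("sqlpool01.dbo.Orders"))  # <database>.<schema>.<table> in the dedicated SQL pool
```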
## Data integration
-* **Fuzzy Join option in Join Transformation** - Fuzzy matching with a sliding similarity score option has been added to the Join transformation in Mapping Data Flows. You can create inner and outer joins on data values that are similar rather than exact matches! Previously, you would have had to use an exact match. The sliding scale value goes from 60% to 100%, making it easy to adjust the similarity threshold of the match. For learn more about fuzzy joins, read [Join transformation in mapping data flow](../data-factory/data-flow-join.md).
+This section summarizes recent new features and capabilities of Azure Synapse Analytics data integration. Learn how to [Load data into Azure Synapse Analytics using Azure Data Factory (ADF) or a Synapse pipeline](../data-factory/load-azure-sql-data-warehouse.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| August 2022 | **Mapping data flows now support visual Cast transformation** | You can [use the cast transformation](/azure/data-factory/data-flow-cast) to easily modify the data types of individual columns in a data flow. |
+| August 2022 | **Default activity timeout changed to 12 hours** | The [default activity timeout is now 12 hours](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729). |
+| August 2022 | **Pipeline expression builder ease-of-use enhancements** | We've [updated our expression builder UI to make pipeline designing easier](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196). |
+| August 2022 | **New UI for mapping dataflow inline dataset types**| We've updated our data flow source UI to [make it easier to find your inline dataset type](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_21).|
+| July 2022 | **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) period in a managed virtual network, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).|
+| June 2022 | **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-data/ba-p/3420904) and [SAP CDC solution in Azure Data Factory](/azure/data-factory/sap-change-data-capture-introduction-architecture).|
+| June 2022 | **Fuzzy join option in Join Transformation** | A fuzzy matching option with a similarity threshold score slider has been added to the [Join transformation in Mapping Data Flows](../data-factory/data-flow-join.md). |
+| June 2022 | **Map Data tool GA** | We're excited to announce that the [Map Data tool](./database-designer/overview-map-data.md) is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. |
+| June 2022 | **Rerun pipeline with new parameters** | You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. To learn more, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).|
+| June 2022 | **User Defined Functions GA** | [User defined functions (UDFs) in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628) are now generally available (GA). |
+| May 2022 | **Export pipeline monitoring as a CSV** | The ability to [export pipeline monitoring to CSV and other monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531) have been introduced to ADF. |
+| May 2022 | **Automatic incremental source data loading from PostgreSQL and MySQL** | Automatic [incremental source data loading from PostgreSQL and MySQL](/azure/data-factory/tutorial-incremental-copy-overview) to Synapse SQL and Azure Database is now natively available in ADF. |
+| May 2022 | **Assert transformation error handling** | Error handling has now been added to sinks following an [assert transformation in mapping data flow](/azure/data-factory/data-flow-assert). You can now choose whether to output the failed rows to the selected sink or to a separate file. |
+| May 2022 | **Mapping data flows projection editing** | In mapping data flows, you can now [update source projection column names and column types](/azure/data-factory/data-flow-source). |
+| April 2022 | **Dataverse connector for Synapse Data Flows** | Dataverse is now a source and sink connector to Synapse Data Flows. You can [Copy and transform data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics](/azure/data-factory/connector-dynamics-crm-office-365?tabs=data-factory).|
+| April 2022 | **Configurable Synapse Pipelines Web activity response timeout** | With the response timeout property `httpRequestTimeout`, you can [define a timeout for the HTTP request up to 10 minutes](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307). Web activities work exceptionally well with APIs that follow [the asynchronous request-reply pattern](/azure/architecture/patterns/async-request-reply), a suggested approach for building scalable web APIs/services. See the sketch after this table. |
+| March 2022 | **SFTP connector for Synapse data flows** | A native SFTP connector in Synapse data flows lets you read and write data from SFTP servers using the visual low-code data flows interface in Synapse. To learn more, see [Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-sftp.md).|
+| March 2022 | **Data flow improvements to Data Preview** | Review features added to the [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
+| March 2022 | **Pipeline script activity** | You can now [Transform data by using the Script activity](/azure/data-factory/transform-data-using-script) to invoke SQL commands to perform both DDL and DML. |
+| December 2021 | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
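
To make the `httpRequestTimeout` announcement above concrete, here is a sketch of the relevant fragment of a Web activity definition, written as a Python dictionary. Only the `httpRequestTimeout` property name comes from the announcement; the surrounding structure, the timespan format, and the endpoint are assumptions, so check the Web activity reference for the exact schema.

```python
# Illustrative sketch only: the rough shape of a pipeline Web activity using the new
# response-timeout property. The structure, timespan format, and URL are assumptions.
web_activity = {
    "name": "CallLongRunningApi",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://contoso.example.com/api/report",  # hypothetical endpoint
        "method": "GET",
        "httpRequestTimeout": "00:10:00",  # up to 10 minutes, per the announcement
    },
}
```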
++
+## Database Templates & Database Designer
+
+This section summarizes recent new features and capabilities of [database templates](./database-designer/overview-database-templates.md) and [the database designer](database-designer/quick-start-create-lake-database.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| July 2022 | **Browse industry templates** | Browse industry templates and add tables to create your own lake database. Learn more about [ways you can browse industry templates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/ways-you-can-browse-industry-templates-and-add-tables-to-create/ba-p/3495011) and get started with [Quickstart: Create a new lake database leveraging database templates](database-designer/quick-start-create-lake-database.md).|
+| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).|
+| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).|
+| April 2022 | **Clone lake database** | In Synapse Studio, you can now clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md). |
+| April 2022 | **Use wildcards to specify custom folder hierarchies** | Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. You can now use wildcards to specify custom folder hierarchies. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md). |
+| January 2022 | **New database templates** | Learn more about new industry-specific [Automotive, Genomics, Manufacturing, and Pharmaceuticals templates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) and get started with [database templates](./database-designer/overview-database-templates.md) in the Synapse Studio gallery. |
+
+## Developer experience
+
+This section summarizes recent new quality of life and feature improvements for [developers in Azure Synapse Analytics](sql/develop-overview.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython and it is now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).|
+| July 2022 | **Mssparkutils now has spark.stop() method** | A new API `mssparkutils.session.stop()` has been added to the mssparkutils package. This feature comes in handy when there are multiple sessions running against the same Spark pool. The new API is available for Scala and Python. To learn more, see [Stop an interactive session](spark/microsoft-spark-utilities.md#stop-an-interactive-session). A short notebook sketch follows this table.|
+| May 2022 | **Updated Azure Synapse Analyzer Report** | Learn about the new features in [version 2.0 of the Synapse Analyzer report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/updated-synapse-analyzer-report-workload-management-and-ability/ba-p/3580269).|
+| April 2022 | **Azure Synapse Analyzer Report** | The [Azure Synapse Analyzer Report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analyzer-report-to-monitor-and-improve-azure/ba-p/3276960) helps you identify common issues that may be present in your database that can lead to performance issues.|
+| April 2022 | **Reference unpublished notebooks** | Now, when using %run notebooks, you can [enable 'unpublished notebook reference'](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook), which will allow you to reference unpublished notebooks. When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). |
+| March 2022 | **Code cells with exception to show standard output**| Now in Synapse notebooks, both standard output and exception messages are shown when a code statement fails for Python and Scala languages. For examples, see [Synapse notebooks: Code cells with exception to show standard output](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).|
+| March 2022 | **Partial output is available for running notebook code cells** | Now in Synapse notebooks, you can see anything you write (with `println` commands, for example) as the cell executes, instead of waiting until it ends. For examples, see [Synapse notebooks: Partial output is available for running notebook code cells](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).|
+| March 2022 | **Dynamically control your Spark session configuration with pipeline parameters** | Now in Synapse notebooks, you can use pipeline parameters to configure the session with the notebook %%configure magic. For examples, see [Synapse notebooks: Dynamically control your Spark session configuration with pipeline parameters](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_2).|
+| March 2022 | **Reuse and manage notebook sessions** | Now in Synapse notebooks, you can easily reuse an active session without having to start a new one, and you can see and manage your active sessions in the **Active sessions** list. To view your sessions, select the 3 dots in the notebook and select **Manage sessions.** For examples, see [Synapse notebooks: Reuse and manage notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3).|
+| March 2022 | **Support for Python logging** | Now in Synapse notebooks, anything written through the Python logging module is captured, in addition to the driver logs. For examples, see [Synapse notebooks: Support for Python logging](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4).|
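
The notebook items above on `mssparkutils.session.stop()` and Python logging capture can be illustrated with a short notebook cell. This is a minimal sketch only: the `notebookutils` import path is an assumption, since `mssparkutils` is typically preloaded in Synapse notebooks.

```python
import logging

from notebookutils import mssparkutils  # assumption: usually preloaded in Synapse notebooks

# Messages written through the Python logging module are captured alongside driver logs.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("my_job")
logger.info("Finished processing; releasing the interactive session.")

# Release this notebook's interactive session so shared Spark pool capacity frees up.
mssparkutils.session.stop()
```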
+
+## Machine Learning
+
+This section summarizes recent new features and improvements to using machine learning models in Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| August 2022 | **SynapseML v0.10.0** | New [release of SynapseML v0.10.0](https://github.com/microsoft/SynapseML/releases/tag/v0.10.0) (previously MMLSpark), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. Learn more about the [latest additions to SynapseML](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/exciting-new-release-of-synapseml/ba-p/3589606) and get started with [SynapseML](https://aka.ms/spark). A short training sketch follows this table.|
+| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/getting_started/dotnet_example/).|
+| August 2022 | **Azure OpenAI Service support** | SynapseML now allows users to tap into 175-billion-parameter language models (GPT-3) from OpenAI that can generate and complete text and code at near-human parity. For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20OpenAI/).|
+| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/mlflow/introduction/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/mlflow/autologging/).|
+| August 2022 | **SynapseML in Binder** | Spark can be intimidating for first-time users, but with Binder you can [explore and experiment with SynapseML in Binder](https://mybinder.org/v2/gh/microsoft/SynapseML/93d7ccf?labpath=notebooks%2Ffeatures) with zero setup, installation, infrastructure, or Azure account required.|
+| June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries like TensorFlow and PyTorch, as well as supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). |
+| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).|
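
As a quick sketch of the SynapseML items above, the snippet below trains and scores a LightGBM classifier on a Spark DataFrame. The `synapse.ml` namespace reflects the post-rename package (v0.10.x); the column names and `raw_df` are hypothetical placeholders, and the library is assumed to already be available on the Spark pool.

```python
# Minimal sketch: train and score a LightGBM model with SynapseML on a Synapse Spark pool.
# `raw_df` is assumed to be an existing Spark DataFrame with numeric feature columns and
# a binary 'label' column; all names are placeholders.
from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMClassifier

assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
train_df = assembler.transform(raw_df)

model = LightGBMClassifier(labelCol="label", featuresCol="features").fit(train_df)
scored = model.transform(train_df)  # adds prediction and probability columns
```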
+
+## Samples and guidance
+
+This section summarizes new guidance and sample project resources for Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).|
+| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models. |
+| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). |
+| June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. |
+| June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). |
+| June 2022 | **Migration guides for IBM Netezza** | A new Microsoft-authored migration guide for IBM Netezza to Azure Synapse Analytics is now available. [Design and performance for IBM Netezza migrations](migration-guides/netezz). |
+
+## Security
+
+This section summarizes recent new security features and settings in Azure Synapse Analytics.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| August 2022 | **Execute Azure Synapse Spark Notebooks with system-assigned managed identity** | You can [now execute Spark Notebooks with the system-assigned managed identity (or workspace managed identity)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_30) by enabling *Run as managed identity* from the **Configure** session menu. With this feature, you can validate that your notebook works as expected with the system-assigned managed identity before using it in a pipeline. For more information, see [Managed identity for Azure Synapse](synapse-service-identity.md).|
+| July 2022 | **Changes to permissions needed for publishing to Git** | Now, only Git permissions and the Synapse Artifact Publisher (Synapse RBAC) role are needed to commit changes in Git-mode. For more information, see [Access control enforcement in Synapse Studio](security/synapse-workspace-access-control-overview.md#access-control-enforcement-in-synapse-studio).|
+| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel them (see the sketch after this table). For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).|
+| March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/sql/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.|
+| March 2022 | **Azure Synapse Analytics now supports Azure Active Directory (Azure AD) only authentication** | You can now use Azure Active Directory authentication to centrally manage access to all Azure Synapse resources, including SQL pools. You can [disable local authentication](sql/active-directory-authentication.md#disable-local-authentication) upon creation or after a workspace is created through the Azure portal.|
+| December 2021 | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).|
+| December 2021 | **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now [browse and secure an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder](how-to-access-container-with-access-control-lists.md) in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio.|
+| December 2021 | **TLS 1.2 enforced for new Synapse Workspaces** | Starting in December 2021, [a requirement for TLS 1.2](security/connectivity-settings.md#minimal-tls-version) has been implemented for new Synapse Workspaces only. |
+
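The following is a minimal, hedged sketch of assigning the Synapse Monitoring Operator role called out in the table above by using Azure PowerShell. The workspace name and object ID are placeholders, and it assumes the `Az.Synapse` module's `New-AzSynapseRoleAssignment` cmdlet is available in your environment.

```azurepowershell-interactive
# Placeholder workspace name and object ID; requires the Az.Synapse module.
# Assign the Synapse Monitoring Operator role to a user or group at workspace scope.
New-AzSynapseRoleAssignment `
  -WorkspaceName "myworkspace" `
  -RoleDefinitionName "Synapse Monitoring Operator" `
  -ObjectId "00000000-0000-0000-0000-000000000000"
```

The assignee can then monitor pipeline and Spark application runs in Synapse Studio without being able to run or cancel them.
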
+## Azure Synapse Data Explorer (preview)
+
+Azure Data Explorer (ADX) is a fast and highly scalable data exploration service for log and telemetry data. It offers ingestion from Event Hubs, IoT Hubs, blobs written to blob containers, and Azure Stream Analytics jobs. This section summarizes recent new features and capabilities of [the Azure Synapse Data Explorer](data-explorer/data-explorer-overview.md) and [the Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). To learn more, see [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer? (Preview)](data-explorer/data-explorer-compare.md).
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| August 2022 | **Free cluster upgrade option** | You can now [upgrade your Azure Data Explorer free cluster to a full cluster](/azure/data-explorer/start-for-free-upgrade), which removes the storage limitation and gives you more capacity to grow your data. |
+| August 2022 | **Analyze fresh ADX data from Excel pivot table** | Now you can [Use fresh and unlimited volume of ADX data (Kusto) from your favorite analytic tool, Excel pivot tables](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/use-fresh-and-unlimited-volume-of-adx-data-kusto-from-your/ba-p/3588894). MDX queries generated by the pivot table are sent to the Kusto backend as KQL statements that aggregate the data as needed by the pivot and return the results to Excel.|
+| August 2022 | **Query results - color by value** | Highlight unique data at-a-glance in query results to visually group rows that share identical values for a specific column. Use **Explore results** and **Color by value** to [apply color to rows based on the selected column](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_14).|
+| August 2022 | **Web explorer - crosshair support for charts** | The `ysplit` property now supports [the crosshair visual](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_15) (vertical lines that move along the mouse pointer) for many charts. |
+| July 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer (Preview)** | You can now use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). |
+| July 2022 | **Render charts for each y column** | Synapse Web Data Explorer now supports rendering charts for each y column. For an example, see the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_6).|
+| June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. |
+| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers use Synapse Data Explorer for popular use cases such as logs data, metrics data, IoT data, and basic big data examples. |
+| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). |
+| June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the query results and the dashboard. When you change the time zone, dashboards automatically refresh to present the data in the selected time zone. |
+| May 2022 | **Synapse Data Explorer live query in Excel** | Using the [new Data Explorer web experience Open in Excel feature](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500), you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members. You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To create an Excel Workbook connected to Synapse Data Explorer, [start by running a query in the Web experience](https://aka.ms/adx.help.livequery). |
+| May 2022 | **Use Managed Identities for external SQL Server tables** | With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now [use managed identities](/azure/data-explorer/managed-identities-overview) instead of entering in your credentials. To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).|
+| May 2022 | **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps** | New Azure Data Explorer connectors for Power Automate are generally available (GA). To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow), the [Microsoft Logic App and Azure Data Explorer](/azure/data-explorer/kusto/tools/logicapps), and the ability to [Create Power Apps application to query data in Azure Data Explorer](/azure/data-explorer/power-apps-connector). |
+| May 2022 | **Dynamic events routing from event hub to multiple databases** | We now support [routing events data from Azure Event Hub/Azure IoT Hub/Azure Event Grid to multiple databases](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_15) hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing). |
+| May 2022 | **Configure a database using a KQL inline script as part of JSON ARM deployment template** | Running a [Kusto Query Language (KQL) script to configure your database](/azure/data-explorer/database-script) can now be done using an inline script provided inline as a parameter to a JSON ARM template. |
+
+## Azure Synapse Link
+
+Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Cosmos DB](/azure/cosmos-db/synapse-link?context=/azure/synapse-analytics/context/context), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section summarizes recent news about the Azure Synapse Link feature.
+
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| July 2022 | **Batch mode** | Decide between cost and latency in Azure Synapse Link for SQL by selecting *continuous* or *batch* mode to replicate your data. Batch mode saves costs because you pay for the ingestion service only during batch loads instead of keeping it continuously on. You can select a batch processing interval between 20 and 60 minutes.|
+| May 2022 | **Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).|
+
+## Synapse SQL
-* **Map Data [Generally Available]** - We're excited to announce that the Map Data tool is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about Map Data, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).
+This section summarizes recent improvements and features in SQL pools in Azure Synapse Analytics.
-* **Rerun pipeline with new parameters** - You can now change pipeline parameters when re-running a pipeline from the Monitoring page without having to return to the pipeline editor. After running a pipeline with new parameters, you can easily monitor the new run against the old ones without having to toggle between pages. To learn more about rerunning pipelines with new parameters, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).
+|**Month** | **Feature** | **Learn more**|
+|:-- |:-- | :-- |
+| August 2022 | **Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).|
+| August 2022| **Multi-column distribution in dedicated SQL pools** | You can now Hash Distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting-in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).|
+| August 2022| **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).|
+| August 2022 | **Add SQL objects and users in Lake databases** | New capabilities announced for lake databases in serverless SQL pools: create schemas, views, procedures, and inline table-valued functions. You can also create database users from your Azure Active Directory domain and assign them to the db_datareader role. For more information, see [Access lake databases using serverless SQL pool in Azure Synapse Analytics](metadat).|
+| June 2022 | **Result set size limit increase** | The [maximum size of query result sets](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints) in serverless SQL pools has been increased from 200 GB to 400 GB. |
+| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It is no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. |
+| April 2022 | **Cross-subscription restore for Azure Synapse SQL GA** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools (see the sketch after this table). To learn more, see [Restore a dedicated SQL pool to a different subscription](sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff)|
+| April 2022 | **Recover SQL pool from dropped server or workspace** | With the PowerShell Restore cmdlets in `Az.Sql` and `Az.Synapse` modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, see [Restore a dedicated SQL pool from a deleted Azure Synapse workspace](backuprestore/restore-sql-pool-from-deleted-workspace.md) or [Restore a standalone dedicated SQL pools (formerly SQL DW) from a deleted server](backuprestore/restore-sql-pool-from-deleted-workspace.md), depending on your scenario. |
+| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools, as well as the dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.|
+| March 2022 | **Parallel execution for CETAS** | Better performance for [CREATE TABLE AS SELECT](sql/develop-tables-cetas.md) (CETAS) and subsequent SELECT statements now made possible by use of parallel execution plans. For examples, see [Better performance for CETAS and subsequent SELECTs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).|
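A rough sketch of the cross-subscription restore called out above, using `Restore-AzSqlDatabase`. All names, IDs, and the restore point are placeholders; this assumes `Az.Sql` 3.8 or later and that the source pool's resource ID can be passed from another subscription while you run the command in the target subscription's context.

```azurepowershell-interactive
# Placeholder names and IDs throughout; requires Az.Sql 3.8 or later.
# Resource ID of the source dedicated SQL pool (formerly SQL DW) in the source subscription.
$sourceResourceId = "/subscriptions/<source-subscription-id>/resourceGroups/sourceRG/providers/Microsoft.Sql/servers/source-server/databases/sourceDW"

# Run the restore in the context of the target subscription.
Set-AzContext -SubscriptionId "<target-subscription-id>"
Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2022-04-01T00:00:00Z" `
    -ResourceId $sourceResourceId `
    -ResourceGroupName "targetRG" `
    -ServerName "target-server" `
    -TargetDatabaseName "restoredDW"
```
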
-* **User Defined Functions [Generally Available]** - We're excited to announce that user defined functions (UDFs) are now Generally Available. With user-defined functions, you can create customized expressions that can be reused across multiple mapping data flows. You no longer have to use the same string manipulation, math calculations, or other complex logic several times. User-defined functions will be grouped in libraries to help developers group common sets of functions. To learn more about user defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
-## Machine learning
-**Distributed Deep Neural Network Training with Horovod and Petastorm [Public Preview]** - To simplify the process for creating and managing GPU-accelerated pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes.
+## Learn more
-Now, Azure Synapse Analytics provides built-in support for deep learning infrastructure. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries like TensorFlow and PyTorch. The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in Public Preview.
+- [Get started with Azure Synapse Analytics](get-started.md)
+- [Introduction to Azure Synapse Analytics](/training/modules/introduction-azure-synapse-analytics/)
+- [Realize Integrated Analytical Solutions with Azure Synapse Analytics](/training/paths/realize-integrated-analytical-solutions-with-azure-synapse-analytics/)
+- [Data integration at scale with Azure Data Factory or Azure Synapse Pipeline](/training/paths/data-integration-scale-azure-data-factory/)
+- [Microsoft Training Learning Paths for Azure Synapse](/training/browse/?terms=synapse&resource_type=learning%20path)
+- [Azure Synapse Analytics in Microsoft Q&A](/answers/topics/azure-synapse-analytics.html)
-To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md).
## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+- [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate)
+- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers)
+- [Azure Synapse Analytics terminology](overview-terminology.md)
+- [Azure Synapse Analytics migration guides](migration-guides/index.yml)
+- [Azure Synapse Analytics frequently asked questions](overview-faq.yml)
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
Host pools are a collection of one or more identical virtual machines (VMs), als
This article will walk you through the setup process for creating a host pool for an Azure Virtual Desktop environment through the Azure portal. This method provides a browser-based user interface to create a host pool in Azure Virtual Desktop, create a resource group with VMs in an Azure subscription, join those VMs to either an Active Directory (AD) domain or Azure Active Directory (Azure AD) tenant, and register the VMs with Azure Virtual Desktop.
+You can create host pools in the following Azure regions:
+
+- Australia East
+- Canada Central
+- Canada East
+- Central US
+- East US
+- East US 2
+- Japan East
+- North Central US
+- North Europe
+- South Central US
+- UK South
+- UK West
+- West Central US
+- West Europe
+- West US
+- West US 2
+ ## Prerequisites Before you can create a host pool, make sure you've completed the prerequisites. For more information, see [Prerequisites for Azure Virtual Desktop](prerequisites.md).
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
Host pools are a collection of one or more identical virtual machines within Azure Virtual Desktop. Each host pool can be associated with multiple RemoteApp groups, one desktop app group, and multiple session hosts.
+You can create host pools in the following Azure regions:
+
+- Australia East
+- Canada Central
+- Canada East
+- Central US
+- East US
+- East US 2
+- Japan East
+- North Central US
+- North Europe
+- South Central US
+- UK South
+- UK West
+- West Central US
+- West Europe
+- West US
+- West US 2
+ ## Create a host pool ### [Azure PowerShell](#tab/azure-powershell)
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You have a choice of operating systems that you can use for session hosts to pro
|<ul><li>Windows Server 2022</li><li>Windows Server 2019</li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.| > [!IMPORTANT]
-> Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table. In addition, Windows 7 doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
+> - Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table. In addition, Windows 7 doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
>
-> Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
+> - Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](prerequisites.md#operating-systems-and-licenses).
+>
+> - [Ephemeral OS disks for Azure VMs](../virtual-machines/ephemeral-os-disks.md) are not supported.
You can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or your own custom images stored in an Azure Compute Gallery, as a managed image, or storage blob. To learn more about how to create custom images, see:
virtual-desktop Troubleshoot Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client.md
Title: Troubleshoot Remote Desktop client Azure Virtual Desktop - Azure
-description: How to resolve issues when you set up client connections in a Azure Virtual Desktop tenant environment.
+description: How to resolve issues with the Remote Desktop client when connecting to Azure Virtual Desktop.
Previously updated : 08/11/2020 Last updated : 09/15/2022
If the Web client keeps prompting for credentials, follow these instructions:
4. Clear browser cache. For more information, see [Clear browser cache for your browser](https://binged.it/2RKyfdU). 5. Open your browser in Private mode.
+## Web client
+
+### Web client out of memory
+
+When using the web client, if you see the error message "Oops, we couldn't connect to 'SessionDesktop,'" (where *SessionDesktop* is the name of the resource you're connecting to), then the web client has run out of memory.
+
+To resolve this issue, you'll need to either reduce the size of the browser window or disconnect all existing connections and try connecting again. If you still encounter this issue after doing these things, ask your local admin or tech support for help.
+
+#### Authentication issues while using an N SKU
+
+This issue may also happen if you're using an N SKU without a media feature pack. To resolve this issue, [install the media feature pack](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a).
+
+#### Authentication issues when TLS 1.2 not enabled
+
+Authentication issues can also happen when your client doesn't have TLS 1.2 enabled. To learn how to enable TLS 1.2 on a compatible client, see [Enable TLS 1.2 on client or server operating systems](/troubleshoot/azure/active-directory/enable-support-tls-environment?tabs=azure-monitor#enable-tls-12-on-client-or-server-operating-systems).
+ ## Windows client blocks Azure Virtual Desktop (classic) feed If the Windows client feed won't show Azure Virtual Desktop (classic) apps, follow these instructions:
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same availability set | | Disk Types | Managed disks only, all storage types | Managed and unmanaged disks, all storage types | Managed and unmanaged disks, Ultradisk not supported | | Disk Server Side Encryption with Customer Managed Keys | Yes | Yes | Yes |
-| Write Accelerator  | No | Yes | Yes |
+| Write Accelerator  | Yes | Yes | Yes |
| Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes | | Azure Dedicated Hosts  | No | Yes | Yes | | Managed Identity | [User Assigned Identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) only<sup>1</sup> | System Assigned or User Assigned | N/A (can specify Managed Identity on individual instances) |
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group
### Execute a script with the VM This command will deliver the script to the VM, execute it, and return the captured output. + ```azurepowershell-interactive Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Location "EastUS" -RunCommandName "RunCommandName" ΓÇôSourceScript "echo Hello World!" ```
+### Execute a script on the VM using SourceScriptUri parameter
+`OutputBlobUri` and `ErrorBlobUri` are optional parameters.
+
+```azurepowershell-interactive
+Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -SourceScriptUri "<SAS URI of a storage blob with read access or public URI>" -OutputBlobUri "<SAS URI of a storage append blob with read, add, create, write access>" -ErrorBlobUri "<SAS URI of a storage append blob with read, add, create, write access>"
+```
++ ### List all deployed RunCommand resources on a VM This command will return a full list of previously deployed Run Commands along with their properties.
Remove-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "
To deploy a new Run Command, execute a PUT on the VM directly and specify a unique name for the Run Command instance. ```rest
-PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?api-version=2019-12-01
+GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands?api-version=2019-12-01
``` ```json
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers
"location": "<location>", "properties": { "source": {
- "script": "echo Hello World",
- "scriptUri": "<URI>",
+ "script": "Write-Host Hello World!",
+ "scriptUri": "<SAS URI of a storage blob with read access or public URI>",
"commandId": "<Id>" }, "parameters": [
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers
"runAsUser": "userName", "runAsPassword": "userPassword", "timeoutInSeconds": 3600,
- "outputBlobUri": "<URI>",
- "errorBlobUri": "<URI>"
+ "outputBlobUri": "< SAS URI of a storage append blob with read, add, create, write access>",
+ "errorBlobUri": "< SAS URI of a storage append blob with read, add, create, write access >"
} } ``` ### Notes -- You can provide an inline script, a script URI, or a built-in script [command ID](run-command.md#available-commands) as the input source-- Only one type of source input is supported for one command execution -- Run Command supports output to Storage blobs, which can be used to store large script outputs-- Run Command supports error output to Storage blobs
+- You can provide an inline script, a script URI, or a built-in script [command ID](run-command.md#available-commands) as the input source. The script URI is either a storage blob SAS URI with read access or a public URI.
+- Only one type of source input is supported for one command execution.
+- Run Command supports writing output and errors to Storage blobs through the outputBlobUri and errorBlobUri parameters, which can be used to store large script outputs. Use a SAS URI of a storage append blob with read, add, create, and write access. The blob must be of type AppendBlob; writing the script output or error blob fails otherwise. The blob is overwritten if it already exists and created if it doesn't. A minimal sketch of generating these SAS URIs follows these notes.
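The sketch below illustrates the note about output and error blobs. It generates SAS URIs for two existing append blobs and passes them to `Set-AzVMRunCommand`. The storage account, container, and blob names are placeholders, and it assumes the blobs already exist as append blobs and that your account can issue a SAS for them.

```azurepowershell-interactive
# Placeholder names; "stdout.txt" and "stderr.txt" must already exist as AppendBlob blobs.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
$expiry = (Get-Date).AddHours(2)

$outputUri = New-AzStorageBlobSASToken -Container "runcommand" -Blob "stdout.txt" `
    -Permission racw -ExpiryTime $expiry -Context $ctx -FullUri
$errorUri  = New-AzStorageBlobSASToken -Container "runcommand" -Blob "stderr.txt" `
    -Permission racw -ExpiryTime $expiry -Context $ctx -FullUri

Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Location "EastUS" `
    -RunCommandName "RunCommandName" -SourceScript "echo Hello World!" `
    -OutputBlobUri $outputUri -ErrorBlobUri $errorUri
```
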
+ ### List running instances of Run Command on a VM
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands?api-version=2019-12-01 ``` ++ ### Get output details for a specific Run Command deployment ```rest
In this example, **secondRunCommand** will execute after **firstRunCommand**.
], "properties":{ "source":{
- "scriptUrl":"http://github.com/myscript.ps1"
+ "scriptUri":"http://github.com/myscript.ps1"
}, "timeoutInSeconds":60 }
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/
## Next steps - Create an [image definition and an image version](image-version.md).-- Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-a-community-gallery-image) or [specialized](vm-specialized-image-version.md#create-a-vm-from-a-community-gallery-image) image in a direct shared gallery.
+- Create a VM from a [generalized](vm-generalized-image-version.md#create-a-vm-from-a-gallery-shared-with-your-subscription-or-tenant) or [specialized](vm-specialized-image-version.md#create-a-vm-from-a-gallery-shared-with-your-subscription-or-tenant) image from a direct shared image in the target subscription or tenant.
virtual-machines Updates Maintenance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/updates-maintenance-overview.md
Enabling [automatic VM guest patching](automatic-vm-guest-patching.md) for your
You can use [Update Management in Azure Automation](../automation/update-management/overview.md?context=/azure/virtual-machines/context/context) to manage operating system updates for your Windows and Linux virtual machines in Azure, in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers.
+## Update management center (preview)
+
+[Update management center (preview)](../update-center/overview.md) is a unified service in Azure to manage and govern updates (Windows and Linux), on-premises and on other cloud platforms, across hybrid environments from a single dashboard. It provides a native, out-of-the-box experience, granular access controls, the flexibility to create schedules or take action now, the ability to check for updates automatically, and much more. The enhanced functionality gives administrators visibility into the health of all systems in the environment. For more information, see [key benefits](../update-center/overview.md#key-benefits).
+ ## Maintenance control Manage platform updates, that don't require a reboot, using [maintenance control](maintenance-configurations.md). Azure frequently updates its infrastructure to improve reliability, performance, security or launch new features. Most updates are transparent to users. Some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even few seconds of a VM freezing or disconnecting for maintenance. Maintenance control gives you the option to wait on platform updates and apply them within a 35-day rolling window.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
$resultSummary | convertto-json -depth 5
## Next steps -- Learn how to [create and deploy VM application packages](vm-applications-how-to.md).
+- Learn how to [create and deploy VM application packages](vm-applications-how-to.md).
virtual-machines Proximity Placement Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/proximity-placement-groups-portal.md
A proximity placement group is a logical grouping used to make sure that Azure c
> [!NOTE] > Proximity placement groups cannot be used with dedicated hosts. >
+> Intent for proximity placement groups is not supported in the Azure portal. Use ARM templates or other client tools like PowerShell or the Azure CLI to provide intent for proximity placement groups (see the sketch after this note).
+>
> If you want to use availability zones together with placement groups, you need to make sure that the VMs in the placement group are also all in the same availability zone. >
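As a rough sketch of providing intent outside the portal, the following Azure PowerShell example creates a proximity placement group with an intended VM size list. The resource names and VM sizes are placeholders, and it assumes a recent `Az.Compute` version in which `New-AzProximityPlacementGroup` supports the `-IntentVMSizeList` parameter.

```azurepowershell-interactive
# Placeholder names and sizes; -IntentVMSizeList requires a recent Az.Compute version.
New-AzProximityPlacementGroup `
  -ResourceGroupName "myRG" `
  -Name "myPPG" `
  -Location "eastus" `
  -ProximityPlacementGroupType Standard `
  -IntentVMSizeList "Standard_D4s_v5", "Standard_E8s_v5"
```
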
If the VM is part of the Availability set, you need to add the availability set
## Next steps
-You can also use the [Azure PowerShell](proximity-placement-groups.md) to create proximity placement groups.
+You can also use the [Azure PowerShell](proximity-placement-groups.md) to create proximity placement groups.
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group
This command will deliver the script to the VM, execute it, and return the captured output. ```powershell-interactive
-Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -SourceScript "Write-Host Hello World!"
+Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Location "EastUS" -RunCommandName "RunCommandName" -SourceScript "echo Hello World!"
```
+### Execute a script on the VM using SourceScriptUri parameter
+`OutputBlobUri` and `ErrorBlobUri` are optional parameters.
+
+```powershell-interactive
+Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -SourceScriptUri "<SAS URI of a storage blob with read access or public URI>" -OutputBlobUri "<SAS URI of a storage append blob with read, add, create, write access>" -ErrorBlobUri "<SAS URI of a storage append blob with read, add, create, write access>"
+```
+ ### List all deployed RunCommand resources on a VM This command will return a full list of previously deployed Run Commands along with their properties.
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers
"properties": { "source": { "script": "Write-Host Hello World!",
- "scriptUri": "<URI>",
+ "scriptUri": "<SAS URI of a storage blob with read access or public URI>",
"commandId": "<Id>" }, "parameters": [
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers
"runAsUser": "userName", "runAsPassword": "userPassword", "timeoutInSeconds": 3600,
- "outputBlobUri": "<URI>",
- "errorBlobUri": "<URI>"
+ "outputBlobUri": "< SAS URI of a storage append blob with read, add, create, write access>",
+ "errorBlobUri": "< SAS URI of a storage append blob with read, add, create, write access >"
} } ``` ### Notes -- You can provide an inline script, a script URI, or a built-in script [command ID](run-command.md#available-commands) as the input source-- Only one type of source input is supported for one command execution -- Run Command supports output to Storage blobs, which can be used to store large script outputs-- Run Command supports error output to Storage blobs
+- You can provide an inline script, a script URI, or a built-in script [command ID](run-command.md#available-commands) as the input source. The script URI is either a storage blob SAS URI with read access or a public URI.
+- Only one type of source input is supported for one command execution.
+- Run Command supports writing output and errors to Storage blobs through the outputBlobUri and errorBlobUri parameters, which can be used to store large script outputs. Use a SAS URI of a storage append blob with read, add, create, and write access. The blob must be of type AppendBlob; writing the script output or error blob fails otherwise. The blob is overwritten if it already exists and created if it doesn't.
### List running instances of Run Command on a VM
In this example, **secondRunCommand** will execute after **firstRunCommand**.
], "properties":{ "source":{
- "scriptUrl":"http://github.com/myscript.ps1"
+ "scriptUri":"http://github.com/myscript.ps1"
}, "timeoutInSeconds":60 }
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
The table below contains the Terraform parameters, these parameters need to be
The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags.
-High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device)
+High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device)
```azurecli-interactive az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In the SAP workload documentation space, you can find the following areas:
- December 08, 2021: Release of scenario [HA of SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md) - December 07, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to clarify that the instructions are applicable for both RHEL 7 and RHEL 8 - December 07, 2021: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to adjust the instructions for configuring SWAP file. -- December 02, 2021: Introduction of new STONITH fencing method in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) using Azure shared disk SBD device
+- December 02, 2021: Introduction of new fencing method in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) using Azure shared disk SBD device
- December 01, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md), [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) and [HA for SAP NetWeaver on Azure VMs on Windows with Azure Files(SMB)](./high-availability-guide-windows-azure-files-smb.md) to update the SAP kernel version, required to support clustering SAP on Windows with file share - November 30, 2021: Added [Using Windows DFS-N to support flexible SAPMNT share creation for SMB-based file share](./high-availability-guide-windows-dfs.md) - November 22, 2021: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md) and [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md) to clarify the guidelines for J2EE SAP systems and share consolidations per storage account.
In the SAP workload documentation space, you can find the following areas:
- July 22, 2021: Change in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to remove `failure-timeout` for the ASCS cluster resource (ENSA2 only) - July 16, 2021: Restructuring of the SAP on Azure documentation Table of contents(TOC) for more streamlined navigation - July 2, 2021: Change in [Backup and restore of SAP HANA on HANA Large Instances](./hana-backup-restore.md) to remove duplicate content for azacsnap tool and backup and restore of HANA Large Instances-- July 2, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add information how to avoid fence race in two node Pacemaker cluster and a link to KB, explaining how to reduce failover delays when using optional STONITH configuration with `fence_kdump`
+- July 2, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add information how to avoid fence race in two node Pacemaker cluster and a link to KB, explaining how to reduce failover delays when using optional fencing configuration with `fence_kdump`
- July 1, 2021: Adding new certified HANA Large Instances SKUs in [Available SKUs for HLI](./hana-available-skus.md) - June 30, 2021: Change in [HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to add a section for recommended SAP profile parameters -- June 29, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add optional stonith configuration with fence_kdump
+- June 29, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add optional fencing configuration with fence_kdump
- June 28, 2021: Change in [HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to add a statement that the SMB Server (Computer Account) Prefix should be no longer than 8 characters to avoid running into SAP hostname length limitation - June 17, 2020: Change in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to remove meta keyword from HANA resource creation command (RHEL 8.x) - June 09, 2021: Correct VM SKU names for M192_v2 in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
In the SAP workload documentation space, you can find the following areas:
- March 15, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md),[Install SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-installation-wsfc-file-share.md) and [SAP ASCS/SCS multi-SID with WSFC and file share](./sap-ascs-ha-multi-sid-wsfc-file-share.md) to clarify that the SAP ASCS/SCS instances and the SOFS share must be deployed in separate clusters - March 03, 2021: Change in [HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to add a cautionary statement that elevated privileges are required for the user running SWPM, during the installation of the SAP system - February 11, 2021: Changes in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) to amend pacemaker cluster commands for RHEL 8.x-- February 03, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to update pcmk_host_map in the stonith create command-- February 03, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to add pcmk_host_map in the stonith create command
+- February 03, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to update pcmk_host_map in the `stonith create` command
+- February 03, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to add pcmk_host_map in the `stonith create` command
- February 03, 2021: More details on I/O scheduler settings for SUSE in article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - February 01, 2021: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add a link to [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - January 23, 2021: Introduce the functionality of HANA data volume partitioning as functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager in articles [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
In the SAP workload documentation space, you can find the following areas:
- September 28, 2020: Adding a new storage operation guide for SAP HANA using Azure NetApp Files with the document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - September 23, 2020: Add new certified SKUs for HLI in [Available SKUs for HLI](./hana-available-skus.md) - September 20, 2020: Changes in documents [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_general.md), [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md), [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md), [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md) to adapt to new configuration suggestion that recommends separation of DBMS binaries and SAP binaries into different Azure disks. Also adding Ultra disk recommendations to the different guides.-- September 08, 2020: Change in [High availability of SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) to clarify stonith definitions
+- September 08, 2020: Change in [High availability of SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) to clarify fencing definitions
- September 03, 2020: Change in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) to adapt to minimal 2 IOPS per 1 GB capacity with Ultra disk - September 02, 2020: Change in [Available SKUs for HLI](./hana-available-skus.md) to get more transparent in what SKUs are HANA certified - August 25, 2020: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) to fix typo
In the SAP workload documentation space, you can find the following areas:
- July 16, 2020: Describe how to use Azure PowerShell to install new VM Extension for SAP in the [Deployment Guide](deployment-guide.md) - July 04,2020: Release of [Azure Monitor for SAP solutions (preview)](./monitor-sap-on-azure.md) - July 01, 2020: Suggesting less expensive storage configuration based on Azure premium storage burst functionality in document [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) -- June 24, 2020: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to release new improved Azure Fence Agent and more resilient STONITH configuration for devices, based on Azure Fence Agent -- June 24, 2020: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to release more resilient STONITH configuration
+- June 24, 2020: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to release new improved Azure Fence Agent and more resilient fencing configuration for devices, based on Azure Fence Agent
+- June 24, 2020: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to release more resilient fencing configuration
- June 23, 2020: Changes to [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) guide and introduction of [Azure Storage types for SAP workload](./planning-guide-storage.md) guide - June 22, 2020: Add installation steps for new VM Extension for SAP to the [Deployment Guide](deployment-guide.md) - June 16, 2020: Change in [Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md) to add a link to SUSE Public Cloud Infrastructure 101 documentation
virtual-machines Ha Setup With Fencing Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/ha-setup-with-fencing-device.md
+
+ Title: High availability setup with fencing device for SAP HANA on Azure (Large Instances)| Microsoft Docs
+description: Learn to establish high availability for SAP HANA on Azure (Large Instances) in SUSE by using the fencing device.
+
+documentationcenter:
++
+editor:
+++
+ vm-linux
+ Last updated : 9/01/2021++++
+# High availability setup in SUSE using the fencing device
+
+In this article, we'll go through the steps to set up high availability (HA) in HANA Large Instances on the SUSE operating system by using the fencing device.
+
+> [!NOTE]
+> This guide is derived from successfully testing the setup in the Microsoft HANA Large Instances environment. The Microsoft Service Management team for HANA Large Instances doesn't support the operating system. For troubleshooting or clarification on the operating system layer, contact SUSE.
+>
+> The Microsoft Service Management team does set up and fully support the fencing device. It can help troubleshoot fencing device problems.
+
+## Prerequisites
+
+To set up high availability by using SUSE clustering, you need to:
+
+- Provision HANA Large Instances.
+- Install and register the operating system with the latest patches.
+- Connect HANA Large Instance servers to the SMT server to get patches and packages.
+- Set up a Network Time Protocol (NTP) time server.
+- Read and understand the latest SUSE documentation on HA setup.
+
+## Setup details
+
+This guide uses the following setup:
+
+- Operating system: SLES 12 SP1 for SAP
+- HANA Large Instances: 2xS192 (four sockets, 2 TB)
+- HANA version: HANA 2.0 SP1
+- Server names: sapprdhdb95 (node1) and sapprdhdb96 (node2)
+- Fencing device: iSCSI based
+- NTP on one of the HANA Large Instance nodes
+
+When you set up HANA Large Instances with HANA system replication, you can request that the Microsoft Service Management team set up the fencing device. Do this at the time of provisioning.
+
+If you're an existing customer with HANA Large Instances already provisioned, you can still get the fencing device set up. Provide the following information to the Microsoft Service Management team in the service request form (SRF). You can get the SRF through the Technical Account Manager or your Microsoft contact for HANA Large Instance onboarding.
+
+- Server name and server IP address (for example, myhanaserver1 and 10.35.0.1)
+- Location (for example, US East)
+- Customer name (for example, Microsoft)
+- HANA system identifier (SID) (for example, H11)
+
+After the fencing device is configured, the Microsoft Service Management team will provide you with the SBD name and IP address of the iSCSI storage. You can use this information to configure fencing setup.
+
+Follow the steps in the following sections to set up HA by using the fencing device.
+
+## Identify the SBD device
+
+> [!NOTE]
+> This section applies only to existing customers. If you're a new customer, the Microsoft Service Management team will give you the SBD device name, so skip this section.
+
+1. Modify */etc/iscsi/initiatorname.iscsi* to:
+
+ ```
+ iqn.1996-04.de.suse:01:<Tenant><Location><SID><NodeNumber>
+ ```
+
+ Microsoft Service Management provides this string. Modify the file on *both* nodes. However, the node number is different on each node.
+
+ ![Screenshot that shows an initiatorname file with InitiatorName values for a node.](media/HowToHLI/HASetupWithFencing/initiatorname.png)
+
+2. Modify */etc/iscsi/iscsid.conf* by setting `node.session.timeo.replacement_timeout=5` and `node.startup = automatic`. Modify the file on *both* nodes.
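+
+ A minimal sketch of the two resulting settings as they appear in *iscsid.conf*:
+
+ ```
+ node.session.timeo.replacement_timeout=5
+ node.startup = automatic
+ ```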
+
+3. Run the following discovery command on *both* nodes.
+
+ ```
+ iscsiadm -m discovery -t st -p <IP address provided by Service Management>:3260
+ ```
+
+ The results show four sessions.
+
+ ![Screenshot that shows a console window with results of the discovery command.](mediiscovery.png)
+
+4. Run the following command on *both* nodes to sign in to the iSCSI device.
+
+ ```
+ iscsiadm -m node -l
+ ```
+
+ The results show four sessions.
+
+ ![Screenshot that shows a console window with results of the node command.](media/HowToHLI/HASetupWithFencing/iSCSIadmLogin.png)
+
+5. Use the following command to run the *rescan-scsi-bus.sh* rescan script. This script shows the new disks created for you. Run it on *both* nodes.
+
+ ```
+ rescan-scsi-bus.sh
+ ```
+
+ The results should show a LUN number greater than zero (for example: 1, 2, and so on).
+
+ ![Screenshot that shows a console window with results of the script.](media/HowToHLI/HASetupWithFencing/rescanscsibus.png)
+
+6. To get the device name, run the following command on *both* nodes.
+
+ ```
+ fdisk -l
+ ```
+
+ In the results, choose the device with the size of 178 MiB.
+
+ ![Screenshot that shows a console window with results of the f disk command.](media/HowToHLI/HASetupWithFencing/fdisk-l.png)
+
+## Initialize the SBD device
+
+1. Use the following command to initialize the SBD device on *both* nodes.
+
+ ```
+ sbd -d <SBD Device Name> create
+ ```
+ ![Screenshot that shows a console window with the result of the s b d create command.](media/HowToHLI/HASetupWithFencing/sbdcreate.png)
+
+2. Use the following command on *both* nodes to check what has been written to the device.
+
+ ```
+ sbd -d <SBD Device Name> dump
+ ```
+
+## Configure the SUSE HA cluster
+
+1. Check whether the ha_sles pattern and the SAPHanaSR and SAPHanaSR-doc packages are installed on *both* nodes. If they're not installed, use the following commands to install them.
+
+ ```
+ zypper in -t pattern ha_sles
+ zypper in SAPHanaSR SAPHanaSR-doc
+ ```
+ ![Screenshot that shows a console window with the result of the pattern command.](media/HowToHLI/HASetupWithFencing/zypperpatternha_sles.png)
+
+ ![Screenshot that shows a console window with the result of the SAPHanaSR-doc command.](media/HowToHLI/HASetupWithFencing/zypperpatternSAPHANASR-doc.png)
+
+2. Set up the cluster by using either the `ha-cluster-init` command or the yast2 wizard. In this example, we're using the yast2 wizard. Do this step only on the *primary node*.
+
+ 1. Go to **yast2** > **High Availability** > **Cluster**.
+
+ ![Screenshot that shows the YaST Control Center with High Availability and Cluster selected.](media/HowToHLI/HASetupWithFencing/yast-control-center.png)
+
+ 1. In the dialog that appears about the hawk package installation, select **Cancel** because the hawk2 package is already installed.
+
+ ![Screenshot that shows a dialog with Install and Cancel options.](media/HowToHLI/HASetupWithFencing/yast-hawk-install.png)
+
+ 1. In the dialog that appears about continuing, select **Continue**.
+
+ ![Screenshot that shows a message about continuing without installing required packages.](media/HowToHLI/HASetupWithFencing/yast-hawk-continue.png)
+
+ 1. The expected value is the number of nodes deployed (in this case, 2). Select **Next**.
+
+
+
+ 1. Add node names, and then select **Add suggested files**.
+
+ ![Screenshot that shows the Cluster Configure window with Sync Host and Sync File lists.](media/HowToHLI/HASetupWithFencing/yast-cluster-configure-csync2.png)
+
+ 1. Select **Turn csync2 ON**.
+
+ 1. Select **Generate Pre-Shared-Keys**.
+
+ 1. In the pop-up message that appears, select **OK**.
+
+ ![Screenshot that shows a message that your key has been generated.](media/HowToHLI/HASetupWithFencing/yast-key-file.png)
+
+ 1. The authentication is performed using the IP addresses and preshared keys in Csync2. The key file is generated with `csync2 -k /etc/csync2/key_hagroup`.
+
+ Manually copy the file *key_hagroup* to all members of the cluster after it's created. Be sure to copy the file from node1 to node2. Then select **Next**.
+
+ ![Screenshot that shows a Cluster Configure dialog box with options necessary to copy the key to all members of the cluster.](media/HowToHLI/HASetupWithFencing/yast-cluster-conntrackd.png)
+
+ 1. By default, **Booting** is set to **Off**. Change it to **On** so that the pacemaker service starts at boot. You can make this choice based on your setup requirements.
+
+ ![Screenshot that shows the Cluster Service window with Booting turned on.](media/HowToHLI/HASetupWithFencing/yast-cluster-service.png)
+
+ 1. Select **Next**, and the cluster configuration is complete.
+
+## Set up the softdog watchdog
+
+1. Add the following line to */etc/init.d/boot.local* on *both* nodes.
+
+ ```
+ modprobe softdog
+ ```
+ ![Screenshot that shows a boot file with the softdog line added.](media/HowToHLI/HASetupWithFencing/modprobe-softdog.png)
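+
+ As an alternative on systemd-based SLES releases (an assumption; it isn't part of the original procedure), you can load softdog at boot through a modules-load.d drop-in instead:
+
+ ```
+ # Alternative to boot.local: have systemd load the softdog module at boot
+ echo softdog > /etc/modules-load.d/softdog.conf
+ ```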
+
+2. Update the file */etc/sysconfig/sbd* on *both* nodes by adding the following line.
+
+ ```
+ SBD_DEVICE="<SBD Device Name>"
+ ```
+ ![Screenshot that shows the s b d file with the S B D_DEVICE value added.](media/HowToHLI/HASetupWithFencing/sbd-device.png)
+
+3. Load the kernel module on *both* nodes by running the following command.
+
+ ```
+ modprobe softdog
+ ```
+ ![Screenshot that shows part of a console window with the command modprobe softdog.](media/HowToHLI/HASetupWithFencing/modprobe-softdog-command.png)
+
+4. Use the following command to ensure that softdog is running on *both* nodes.
+
+ ```
+ lsmod | grep dog
+ ```
+ ![Screenshot that shows part of a console window with the result of running the l s mod command.](media/HowToHLI/HASetupWithFencing/lsmod-grep-dog.png)
+
+5. Use the following command to start the SBD device on *both* nodes.
+
+ ```
+ /usr/share/sbd/sbd.sh start
+ ```
+ ![Screenshot that shows part of a console window with the start command.](media/HowToHLI/HASetupWithFencing/sbd-sh-start.png)
+
+6. Use the following command to test the SBD daemon on *both* nodes.
+
+ ```
+ sbd -d <SBD Device Name> list
+ ```
+ The results show two entries after configuration on both nodes.
+
+ ![Screenshot that shows part of a console window displaying two entries.](media/HowToHLI/HASetupWithFencing/sbd-list.png)
+
+7. Send the following test message to *one* of your nodes.
+
+ ```
+ sbd -d <SBD Device Name> message <node2> <message>
+ ```
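+
+ For example, using the server names from this setup and the `test` message type (the SBD device name is still the one provided by Microsoft Service Management):
+
+ ```
+ sbd -d <SBD Device Name> message sapprdhdb96 test
+ ```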
+
+8. On the *second* node (node2), use the following command to check the message status.
+
+ ```
+ sbd -d <SBD Device Name> list
+ ```
+ ![Screenshot that shows part of a console window with one of the members displaying a test value for the other member.](media/HowToHLI/HASetupWithFencing/sbd-list-message.png)
+
+9. To adopt the SBD configuration, update the file */etc/sysconfig/sbd* as follows on *both* nodes.
+
+ ```
+ SBD_DEVICE="<SBD Device Name>"
+ SBD_WATCHDOG="yes"
+ SBD_PACEMAKER="yes"
+ SBD_STARTMODE="clean"
+ SBD_OPTS=""
+ ```
+10. Use the following command to start the pacemaker service on the *primary node* (node1).
+
+ ```
+ systemctl start pacemaker
+ ```
+ ![Screenshot that shows a console window displaying the status after starting pacemaker.](media/HowToHLI/HASetupWithFencing/start-pacemaker.png)
+
+ If the pacemaker service fails, see the section [Scenario 5: Pacemaker service fails](#scenario-5-pacemaker-service-fails) later in this article.
+
+## Join the node to the cluster
+
+Run the following command on *node2* to let that node join the cluster.
+
+```
+ha-cluster-join
+```
+
+If you receive an error during joining of the cluster, see the section [Scenario 6: Node2 can't join the cluster](#scenario-6-node2-cant-join-the-cluster) later in this article.
+
+## Validate the cluster
+
+1. Use the following commands to check and optionally start the cluster for the first time on *both* nodes.
+
+ ```
+ systemctl status pacemaker
+ systemctl start pacemaker
+ ```
+ ![Screenshot that shows a console window with the status of pacemaker.](media/HowToHLI/HASetupWithFencing/systemctl-status-pacemaker.png)
+
+2. Run the following command to ensure that *both* nodes are online. You can run it on *any of the nodes* of the cluster.
+
+ ```
+ crm_mon
+ ```
+ ![Screenshot that shows a console window with the results of the c r m_mon command.](media/HowToHLI/HASetupWithFencing/crm-mon.png)
+
+ You can also sign in to hawk to check the cluster status: `https://\<node IP>:7630`. The default user is **hacluster**, and the password is **linux**. If needed, you can change the password by using the `passwd` command.
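+
+ For example, to change the password for the **hacluster** user on a node:
+
+ ```
+ passwd hacluster
+ ```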
+
+## Configure cluster properties and resources
+
+This section describes the steps to configure the cluster resources.
+In this example, you set up the following resources. You can configure the rest (if needed) by referencing the SUSE HA guide.
+
+- Cluster bootstrap
+- Fencing device
+- Virtual IP address
+
+Do the configuration on the *primary node* only.
+
+1. Create the cluster bootstrap file and configure it by adding the following text.
+
+ ```
+ sapprdhdb95:~ # vi crm-bs.txt
+ # enter the following to crm-bs.txt
+ property $id="cib-bootstrap-options" \
+ no-quorum-policy="ignore" \
+ stonith-enabled="true" \
+ stonith-action="reboot" \
+ stonith-timeout="150s"
+ rsc_defaults $id="rsc-options" \
+ resource-stickiness="1000" \
+ migration-threshold="5000"
+ op_defaults $id="op-options" \
+ timeout="600"
+ ```
+
+2. Use the following command to add the configuration to the cluster.
+
+ ```
+ crm configure load update crm-bs.txt
+ ```
+ ![Screenshot that shows part of a console window running the c r m command.](media/HowToHLI/HASetupWithFencing/crm-configure-crmbs.png)
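+
+ As an optional check that isn't part of the original steps, you can display the configuration that the cluster now holds:
+
+ ```
+ crm configure show
+ ```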
+
+3. Configure the fencing device by adding the resource, creating the file, and adding text as follows.
+
+ ```
+ # vi crm-sbd.txt
+ # enter the following to crm-sbd.txt
+ primitive stonith-sbd stonith:external/sbd \
+ params pcmk_delay_max="15"
+ ```
+ Use the following command to add the configuration to the cluster.
+
+ ```
+ crm configure load update crm-sbd.txt
+ ```
+
+4. Add the virtual IP address for the resource by creating the file and adding the following text.
+
+ ```
+ # vi crm-vip.txt
+ primitive rsc_ip_HA1_HDB10 ocf:heartbeat:IPaddr2 \
+ operations $id="rsc_ip_HA1_HDB10-operations" \
+ op monitor interval="10s" timeout="20s" \
+ params ip="10.35.0.197"
+ ```
+
+ Use the following command to add the configuration to the cluster.
+
+ ```
+ crm configure load update crm-vip.txt
+ ```
+
+5. Use the `crm_mon` command to validate the resources.
+
+ The results show the two resources.
+
+ ![Screenshot that shows a console window with two resources.](media/HowToHLI/HASetupWithFencing/crm_mon_command.png)
+
+ You can also check the status at *https://\<node IP address>:7630/cib/live/state*.
+
+ ![Screenshot that shows the status of the two resources.](media/HowToHLI/HASetupWithFencing/hawlk-status-page.png)
+
+## Test the failover process
+
+1. To test the failover process, use the following command to stop the pacemaker service on node1.
+
+ ```
+ service pacemaker stop
+ ```
+
+ The resources fail over to node2.
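+
+ Because this guide manages the pacemaker service through systemd elsewhere, you can equivalently stop it with systemctl:
+
+ ```
+ systemctl stop pacemaker
+ ```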
+
+2. Stop the pacemaker service on node2, and resources fail over to node1.
+
+ Here's the status before failover:
+ ![Screenshot that shows the status of the two resources before failover.](media/HowToHLI/HASetupWithFencing/Before-failover.png)
+
+ Here's the status after failover:
+ ![Screenshot that shows the status of the two resources after failover.](media/HowToHLI/HASetupWithFencing/after-failover.png)
+
+ ![Screenshot that shows a console window with the status of resources after failover.](media/HowToHLI/HASetupWithFencing/crm-mon-after-failover.png)
+
+
+## Troubleshooting
+
+This section describes failure scenarios that you might encounter during setup.
+
+### Scenario 1: Cluster node not online
+
+If a node doesn't show as online in Cluster Manager, you can try this procedure to bring it online.
+
+1. Use the following command to start the iSCSI service.
+
+ ```
+ service iscsid start
+ ```
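+
+ Optionally (this isn't part of the original procedure), you can also enable the service so that it starts automatically after a reboot:
+
+ ```
+ systemctl enable iscsid
+ ```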
+
+2. Use the following command to sign in to that iSCSI node.
+
+ ```
+ iscsiadm -m node -l
+ ```
+
+ The expected output looks like:
+
+ ```
+ sapprdhdb45:~ # iscsiadm -m node -l
+ Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260] (multiple)
+ Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260] (multiple)
+ Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260] (multiple)
+ Logging in to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260] (multiple)
+ Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.11,3260] successful.
+ Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.12,3260] successful.
+ Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.22,3260] successful.
+ Login to [iface: default, target: iqn.1992-08.com.netapp:hanadc11:1:t020, portal: 10.250.22.21,3260] successful.
+ ```
+### Scenario 2: Yast2 doesn't show graphical view
+
+The yast2 graphical screen is used to set up the high-availability cluster in this article. If yast2 doesn't open the graphical window as shown and instead throws a Qt error, take the following steps to install the required packages. If it opens the graphical window, you can skip these steps.
+
+Here's an example of the Qt error:
+
+![Screenshot that shows part of a console window with an error message.](media/HowToHLI/HASetupWithFencing/yast2-qt-gui-error.png)
+
+Here's an example of the expected output:
+
+![Screenshot that shows the YaST Control Center with High Availability and Cluster highlighted.](media/HowToHLI/HASetupWithFencing/yast-control-center.png)
+
+1. Make sure that you're logged in as user "root" and have SMT set up to download and install the packages.
+
+2. Go to **yast** > **Software** > **Software Management** > **Dependencies**, and then select **Install recommended packages**.
+
+ >[!NOTE]
+ >Perform the steps on *both* nodes, so that you can access the yast2 graphical view from both nodes.
+
+ The following screenshot shows the expected screen.
+
+ ![Screenshot that shows a console window displaying the YaST Control Center.](media/HowToHLI/HASetupWithFencing/yast-sofwaremanagement.png)
+
+3. Under **Dependencies**, select **Install Recommended Packages**.
+
+ ![Screenshot that shows a console window with Install Recommended Packages selected.](media/HowToHLI/HASetupWithFencing/yast-dependencies.png)
+
+4. Review the changes and select **OK**.
+
+ ![Screenshot that shows a console window with a list of packages that have been selected for installation.](media/HowToHLI/HASetupWithFencing/yast-automatic-changes.png)
+
+ The package installation proceeds.
+
+ ![Screenshot that shows a console window displaying progress of the installation.](media/HowToHLI/HASetupWithFencing/yast-performing-installation.png)
+
+5. Select **Next**.
+
+6. When the **Installation Successfully Finished** screen appears, select **Finish**.
+
+ ![Screenshot that shows a console window with a success message.](media/HowToHLI/HASetupWithFencing/yast-installation-report.png)
+
+7. Use the following commands to install the libqt4 and libyui-qt packages.
+
+ ```
+ zypper -n install libqt4
+ ```
+ ![Screenshot that shows a console window installing the first package.](media/HowToHLI/HASetupWithFencing/zypper-install-libqt4.png)
+
+ ```
+ zypper -n install libyui-qt
+ ```
+ ![Screenshot that shows a console window installing the second package.](media/HowToHLI/HASetupWithFencing/zypper-install-ligyui.png)
+
+ ![Screenshot that shows a console window installing the second package, continued.](media/HowToHLI/HASetupWithFencing/zypper-install-ligyui_part2.png)
+
+ Yast2 can now open the graphical view.
+
+ ![Screenshot that shows the YaST Control Center with Software and Online Update selected.](media/HowToHLI/HASetupWithFencing/yast2-control-center.png)
+
+### Scenario 3: Yast2 doesn't show the high-availability option
+
+For the high-availability option to be visible in the yast2 control center, you need to install additional packages.
+
+1. Go to **Yast2** > **Software** > **Software Management**. Then select **Software** > **Online Update**.
+
+ ![Screenshot that shows the YaST Control Center with Software and Online Update selected.](media/HowToHLI/HASetupWithFencing/yast2-control-center.png)
+
+2. Select patterns for the following items. Then select **Accept**.
+
+ - SAP HANA server base
+ - C/C++ compiler and tools
+ - High availability
+ - SAP application server base
+
+ ![Screenshot that shows selecting the first pattern in the item for compiler and tools.](media/HowToHLI/HASetupWithFencing/yast-pattern1.png)
+
+ ![Screenshot that shows selecting the second pattern in the item for compiler and tools.](media/HowToHLI/HASetupWithFencing/yast-pattern2.png)
+
+3. In the list of packages that have been changed to resolve dependencies, select **Continue**.
+
+ ![Screenshot that shows the Changed Packages dialog with packages changed to resolve dependencies.](media/HowToHLI/HASetupWithFencing/yast-changed-packages.png)
+
+4. On the **Performing Installation** status page, select **Next**.
+
+ ![Screenshot that shows the Performing Installation status page.](media/HowToHLI/HASetupWithFencing/yast2-performing-installation.png)
+
+5. When the installation is complete, an installation report appears. Select **Finish**.
+
+ ![Screenshot that shows the installation report.](media/HowToHLI/HASetupWithFencing/yast2-installation-report.png)
+
+### Scenario 4: HANA installation fails with gcc assemblies error
+
+If the HANA installation fails, you might get the following error.
+
+![Screenshot that shows an error message that the operating system isn't ready to perform g c c 5 assemblies.](media/HowToHLI/HASetupWithFencing/Hana-installation-error.png)
+
+To fix the problem, install the libgcc_s1 and libstdc++6 libraries, as shown in the following screenshot.
+
+![Screenshot that shows a console window installing required libraries.](media/HowToHLI/HASetupWithFencing/zypper-install-lib.png)
+
+### Scenario 5: Pacemaker service fails
+
+The following information appears if the pacemaker service can't start.
+
+```
+sapprdhdb95:/ # systemctl start pacemaker
+A dependency job for pacemaker.service failed. See 'journalctl -xn' for details.
+```
+```
+sapprdhdb95:/ # journalctl -xn
+-- Logs begin at Thu 2017-09-28 09:28:14 EDT, end at Thu 2017-09-28 21:48:27 EDT. --
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration map
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync configuration ser
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster closed pr
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [QB ] withdrawing server sockets
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync cluster quorum se
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [SERV ] Service engine unloaded: corosync profile loading s
+Sep 28 21:48:27 sapprdhdb95 corosync[68812]: [MAIN ] Corosync Cluster Engine exiting normally
+Sep 28 21:48:27 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager
+-- Subject: Unit pacemaker.service has failed
+-- Defined-By: systemd
+-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
+--
+-- Unit pacemaker.service has failed.
+--
+-- The result is dependency.
+```
+```
+sapprdhdb95:/ # tail -f /var/log/messages
+2017-09-28T18:44:29.675814-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
+2017-09-28T18:44:29.676023-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
+2017-09-28T18:44:29.725885-04:00 sapprdhdb95 corosync[57600]: [QB ] withdrawing server sockets
+2017-09-28T18:44:29.726069-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
+2017-09-28T18:44:29.726164-04:00 sapprdhdb95 corosync[57600]: [SERV ] Service engine unloaded: corosync profile loading service
+2017-09-28T18:44:29.776349-04:00 sapprdhdb95 corosync[57600]: [MAIN ] Corosync Cluster Engine exiting normally
+2017-09-28T18:44:29.778177-04:00 sapprdhdb95 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
+2017-09-28T18:44:40.141030-04:00 sapprdhdb95 systemd[1]: [/usr/lib/systemd/system/fstrim.timer:8] Unknown lvalue 'Persistent' in section 'Timer'
+2017-09-28T18:45:01.275038-04:00 sapprdhdb95 cron[57995]: pam_unix(crond:session): session opened for user root by (uid=0)
+2017-09-28T18:45:01.308066-04:00 sapprdhdb95 CRON[57995]: pam_unix(crond:session): session closed for user root
+```
+
+To fix it, delete the following line from the file */usr/lib/systemd/system/fstrim.timer*:
+
+```
+Persistent=true
+```
+
+![Screenshot that shows the f s trim file with the value of Persistent=true to be deleted.](media/HowToHLI/HASetupWithFencing/Persistent.png)
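+
+One way to remove the line non-interactively (a sketch; you can also just edit the file manually):
+
+```
+# Delete the Persistent=true line from the fstrim timer unit
+sed -i '/^Persistent=true/d' /usr/lib/systemd/system/fstrim.timer
+```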
+
+### Scenario 6: Node2 can't join the cluster
+
+The following error appears if there's a problem with joining node2 to the existing cluster through the *ha-cluster-join* command.
+
+```
+ERROR: Can't retrieve SSH keys from <Primary Node>
+```
+
+![Screenshot that shows a console window with an error message that says S S H keys can't be retrieved from a particular I P address.](media/HowToHLI/HASetupWithFencing/ha-cluster-join-error.png)
+
+To fix it:
+
+1. Run the following commands on *both nodes*.
+
+ ```
+ ssh-keygen -q -f /root/.ssh/id_rsa -C 'Cluster Internal' -N ''
+ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
+ ```
+
+ ![Screenshot that shows part of a console window running the command on the first node.](media/HowToHLI/HASetupWithFencing/ssh-keygen-node1.PNG)
+
+ ![Screenshot that shows part of a console window running the command on the second node.](media/HowToHLI/HASetupWithFencing/ssh-keygen-node2.PNG)
+
+2. Confirm that node2 is added to the cluster.
+
+ ![Screenshot that shows a console window with a successful join command.](media/HowToHLI/HASetupWithFencing/ha-cluster-join-fix.png)
+
+## Next steps
+
+You can find more information on SUSE HA setup in the following articles:
+
+- [SAP HANA SR Performance Optimized Scenario](https://www.suse.com/support/kb/doc/?id=000019450) (SUSE website)
+- [Fencing and fencing devices](https://documentation.suse.com/sle-ha/15-SP1/html/SLE-HA-all/cha-ha-fencing.html) (SUSE website)
+- [Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 1: Basics](https://blogs.sap.com/2017/11/19/be-prepared-for-using-pacemaker-cluster-for-sap-hana-part-1-basics/) (SAP blog)
+- [Be Prepared for Using Pacemaker Cluster for SAP HANA – Part 2: Failure of Both Nodes](https://blogs.sap.com/2017/11/19/be-prepared-for-using-pacemaker-cluster-for-sap-hana-part-2-failure-of-both-nodes/) (SAP blog)
+- [OS backup and restore](large-instance-os-backup.md)
virtual-machines Hana Large Instance Virtual Machine Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-large-instance-virtual-machine-migration.md
This article makes the following assumptions:
- You've validated the design and migration plan. - Plan for disaster recovery VM along with the primary site. You can't use the HLI as the DR node for the primary site running on VMs after the migration. - You copied the required backup files to target VMs, based on business recoverability and compliance requirements. With VM accessible backups, it allows for point-in-time recovery during the transition period.-- For SAP HANA system replication (HSR) high availability (HA), you need to set up and configure the STONITH device per SAP HANA HA guides for [SLES](./high-availability-guide-suse-pacemaker.md) and [RHEL](./high-availability-guide-rhel-pacemaker.md). ItΓÇÖs not preconfigured like the HLI case.
+- For SAP HANA system replication (HSR) high availability (HA), you need to set up and configure the fencing device per SAP HANA HA guides for [SLES](./high-availability-guide-suse-pacemaker.md) and [RHEL](./high-availability-guide-rhel-pacemaker.md). It's not preconfigured as it is in the HLI case.
- This migration approach doesn't cover the HLI SKUs with Optane configuration. ## Deployment scenarios
You can migrate to Azure VMs for all HLI scenarios. Common deployment models for
| 2 | [Single node with Multiple Components in One System (MCOS)](./hana-supported-scenario.md#single-node-mcos) | Yes | - | | 3 | [Single node with DR using storage replication](./hana-supported-scenario.md#single-node-with-dr-using-storage-replication) | No | Storage replication isn't available with Azure virtual platform; change current DR solution to either HSR or backup/restore. | | 4 | [Single node with DR (multipurpose) using storage replication](./hana-supported-scenario.md#single-node-with-dr-multipurpose-using-storage-replication) | No | Storage replication isn't available with Azure virtual platform; change current DR solution to either HSR or backup/restore. |
-| 5 | [HSR with STONITH for high availability](./hana-supported-scenario.md#hsr-with-stonith-for-high-availability) | Yes | No preconfigured SBD for target VMs. Select and deploy a STONITH solution. Possible options: Azure Fencing Agent (supported for both [RHEL](./high-availability-guide-rhel-pacemaker.md), [SLES](./high-availability-guide-suse-pacemaker.md)), and STONITH block device (SBD). |
+| 5 | [HSR with fencing for high availability](./hana-supported-scenario.md#hsr-with-fencing-for-high-availability) | Yes | No preconfigured SBD for target VMs. Select and deploy a fencing solution. Possible options: Azure Fencing Agent (supported for both [RHEL](./high-availability-guide-rhel-pacemaker.md), [SLES](./high-availability-guide-suse-pacemaker.md)), and SBD. |
| 6 | [HA with HSR, DR with storage replication](./hana-supported-scenario.md#high-availability-with-hsr-and-dr-with-storage-replication) | No | Replace storage replication for DR needs with either HSR or backup/restore. | | 7 | [Host auto failover (1+1)](./hana-supported-scenario.md#host-auto-failover-11) | Yes | Use Azure NetApp Files (ANF) for shared storage with Azure VMs. | | 8 | [Scale-out with standby](./hana-supported-scenario.md#scale-out-with-standby) | Yes | BW/4HANA with M128s, M416s, M416ms VMs using ANF for storage only. |
virtual-machines Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-monitor-troubleshoot.md
Do an SAP HANA Health Check through HANA\_Configuration\_Minichecks. This tool r
## Next steps
-Learn how to set up high availability on the SUSE operating system using the STONITH device.
+Learn how to set up high availability on the SUSE operating system using the fencing device.
> [!div class="nextstepaction"]
-> [High availability set up in SUSE using the STONITH](ha-setup-with-stonith.md)
+> [High availability set up in SUSE using a fencing device](ha-setup-with-fencing-device.md)
virtual-machines Hana Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-architecture.md
The different documents of HANA Large Instance guidance cover the following area
- [Install and configure SAP HANA (Large Instances) on Azure](hana-installation.md) - [SAP HANA (Large Instances) high availability and disaster recovery on Azure](hana-overview-high-availability-disaster-recovery.md) - [SAP HANA (Large Instances) troubleshooting and monitoring on Azure](troubleshooting-monitoring.md)-- [High availability set up in SUSE by using the STONITH](./ha-setup-with-stonith.md)
+- [High availability set up in SUSE by using a fencing device](./ha-setup-with-fencing-device.md)
- [OS Backup](./large-instance-os-backup.md) - [Save on SAP HANA Large Instances with an Azure reservation](../../../cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md)
virtual-machines Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-supported-scenario.md
Each provisioned server comes preconfigured with sets of Ethernet interfaces. Th
- **A**: Used for or by client access. - **B**: Used for node-to-node communication. This interface is configured on all servers no matter what topology you request. However, it's used only for scale-out scenarios. - **C**: Used for node-to-storage connectivity.-- **D**: Used for node-to-iSCSI device connection for STONITH setup. This interface is configured only when an HSR setup is requested.
+- **D**: Used for node-to-iSCSI device connection for fencing setup. This interface is configured only when an HSR setup is requested.
| NIC logical interface | SKU type | Name with SUSE OS | Name with RHEL OS | Use case| | | | | | | | A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI | | B | TYPE I | eth2.tenant | eno3.tenant | Node-to-node| | C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage |
-| D | TYPE I | eth4.tenant | eno4.tenant | STONITH |
+| D | TYPE I | eth4.tenant | eno4.tenant | Fencing |
| A | TYPE II | vlan\<tenantNo> | team0.tenant | Client-to-HLI | | B | TYPE II | vlan\<tenantNo+2> | team0.tenant+2 | Node-to-node| | C | TYPE II | vlan\<tenantNo+1> | team0.tenant+1 | Node-to-storage |
-| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | STONITH |
+| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Fencing |
You choose the interface based on the topology that's configured on the HLI unit. For example, interface "B" is set up for node-to-node communication, which is useful when you have a scale-out topology configured. This interface isn't used for single node scale-up configurations. For more information about interface usage, review your required scenarios (later in this article).
For HANA system replication or HANA scale-out deployment, a blade configuration
- Ethernet "C" should have an assigned IP address that's used for communication to NFS storage. This type of address shouldn't be maintained in the *etc/hosts* directory. -- Ethernet "D" should be used exclusively for access to STONITH devices for Pacemaker. This interface is required when you configure HANA system replication and want to achieve auto failover of the operating system by using an SBD-based device.
+- Ethernet "D" should be used exclusively for access to fencing devices for Pacemaker. This interface is required when you configure HANA system replication and want to achieve auto failover of the operating system by using an SBD-based device.
### Storage
Here are the supported scenarios:
* Single node MCOS * Single node with DR (normal) * Single node with DR (multipurpose)
-* HSR with STONITH
+* HSR with fencing
* HSR with DR (normal/multipurpose) * Host auto failover (1+1) * Scale-out with standby
The following mount points are preconfigured:
- At the DR site: The data, log backups, log, and shared volumes for QA (marked as "QA instance installation") are configured for the QA instance installation. - The boot volume for *SKU Type I class* is replicated to the DR node.
-## HSR with STONITH for high availability
+## HSR with fencing for high availability
This topology supports two nodes for the HANA system replication configuration. This configuration is supported only for single HANA instances on a node. MCOS scenarios *aren't* supported.
This topology supports two nodes for the HANA system replication configuration.
### Architecture diagram
-![HSR with STONITH for high availability](media/hana-supported-scenario/HSR-with-STONITH.png)
+![HSR with fencing for high availability](media/hana-supported-scenario/hsr-with-fencing.png)
The following network interfaces are preconfigured:
| A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI | | B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use | | C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage |
-| D | TYPE I | eth4.tenant | eno4.tenant | Used for STONITH |
+| D | TYPE I | eth4.tenant | eno4.tenant | Used for fencing |
| A | TYPE II | vlan\<tenantNo> | team0.tenant | Client-to-HLI | | B | TYPE II | vlan\<tenantNo+2> | team0.tenant+2 | Configured but not in use | | C | TYPE II | vlan\<tenantNo+1> | team0.tenant+1 | Node-to-storage |
-| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Used for STONITH |
+| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Used for fencing |
### Storage The following mount points are preconfigured:
The following mount points are preconfigured:
### Key considerations - /usr/sap/SID is a symbolic link to /hana/shared/SID. - For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in memory are supported in a multi-SID environment, see [Overview and architecture](./hana-overview-architecture.md).-- STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.
+- Fencing: An SBD is configured for the fencing device setup. However, the use of fencing is optional.
## High availability with HSR and DR with storage replication
The following network interfaces are preconfigured:
| A | TYPE I | eth0.tenant | eno1.tenant | Client-to-HLI | | B | TYPE I | eth2.tenant | eno3.tenant | Configured but not in use | | C | TYPE I | eth1.tenant | eno2.tenant | Node-to-storage |
-| D | TYPE I | eth4.tenant | eno4.tenant | Used for STONITH |
+| D | TYPE I | eth4.tenant | eno4.tenant | Used for fencing |
| A | TYPE II | vlan\<tenantNo> | team0.tenant | Client-to-HLI | | B | TYPE II | vlan\<tenantNo+2> | team0.tenant+2 | Configured but not in use | | C | TYPE II | vlan\<tenantNo+1> | team0.tenant+1 | Node-to-storage |
-| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Used for STONITH |
+| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Used for fencing |
### Storage The following mount points are preconfigured:
The following mount points are preconfigured:
### Key considerations - /usr/sap/SID is a symbolic link to /hana/shared/SID. - For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in memory are supported in a multi-SID environment, see [Overview and architecture](./hana-overview-architecture.md).-- STONITH: An SBD is configured for the STONITH setup. However, the use of STONITH is optional.
+- Fencing: An SBD is configured for the fencing setup. However, the use of fencing is optional.
- At the DR site: *Two sets of storage volumes are required* for primary and secondary node replication. - At the DR site: The volumes and mount points are configured (marked as "Required for HANA installation") for the production HANA instance installation at the DR HLI unit. - At the DR site: The data, log backups, and shared volumes (marked as "Storage Replication") are replicated via snapshot from the production site. These volumes are mounted during failover only. For more information, see [Disaster recovery failover procedure](./hana-overview-high-availability-disaster-recovery.md).
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
Be sure to install SAProuter in a separate VM and not in your Jumpbox VM. The se
For more information on how to set up and maintain remote support connections through SAProuter, see the [SAP documentation](https://support.sap.com/en/tools/connectivity-tools/remote-support.html). ### High-availability with SAP HANA on Azure native VMs
-If you're running SUSE Linux Enterprise Server or Red Hat, you can establish a Pacemaker cluster with STONITH devices. You can use the devices to set up an SAP HANA configuration that uses synchronous replication with HANA System Replication and automatic failover. For more information listed in the 'next steps' section.
+If you're running SUSE Linux Enterprise Server or Red Hat, you can establish a Pacemaker cluster with fencing devices. You can use the devices to set up an SAP HANA configuration that uses synchronous replication with HANA System Replication and automatic failover. For more information, see the articles listed in the 'Next steps' section.
## Next Steps Get familiar with the articles as listed
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
The following items are prefixed with either **[A]** - applicable to all nodes,
<pre><code>sudo pcs property set concurrent-fencing=true </code></pre>
-## Create STONITH device
+## Create fencing device
-The STONITH device uses either a managed identity for Azure resource or service principal to authorize against Microsoft Azure.
+The fencing device uses either a managed identity for Azure resources or a service principal to authorize against Microsoft Azure.
### Using Managed Identity To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time.
Assign the custom role "Linux Fence Agent Role" that was created in the last cha
Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md). Make sure to assign the role for both cluster nodes.
-### **[1]** Create the STONITH devices
+### **[1]** Create the fencing devices
-After you edited the permissions for the virtual machines, you can configure the STONITH devices in the cluster.
+After you've edited the permissions for the virtual machines, you can configure the fencing devices in the cluster.
<pre><code> sudo pcs property set stonith-timeout=900
sudo pcs property set stonith-timeout=900
> [!NOTE] > Option 'pcmk_host_map' is ONLY required in the command, if the RHEL host names and the Azure VM names are NOT identical. Specify the mapping in the format **hostname:vm-name**.
-> Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to stonith devices in pcmk_host_map](https://access.redhat.com/solutions/2619961)
+> Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map](https://access.redhat.com/solutions/2619961)
#### [Managed Identity](#tab/msi)
op monitor interval=3600
> [!IMPORTANT] > The monitoring and fencing operations are de-serialized. As a result, if there is a longer running monitoring operation and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring operation.
-### **[1]** Enable the use of a STONITH device
+### **[1]** Enable the use of a fencing device
<pre><code>sudo pcs property set stonith-enabled=true </code></pre>
op monitor interval=3600
>Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-## Optional STONITH configuration
+## Optional fencing configuration
> [!TIP] > This section is only applicable, if it is desired to configure special fencing device `fence_kdump`.
-If there is a need to collect diagnostic information within the VM , it may be useful to configure additional STONITH device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` is not a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs.
+If there's a need to collect diagnostic information within the VM, it may be useful to configure an additional fencing device, based on the fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent, when using Azure VMs.
> [!IMPORTANT]
-> Be aware that when `fence_kdump` is configured as a first level stonith, it will introduce delays in the fencing operations and respectively delays in the application resources failover.
+> Be aware that when `fence_kdump` is configured as a first-level fencing device, it introduces delays in the fencing operations and, correspondingly, delays in the application resource failover.
> > If a crash dump is successfully detected, the fencing will be delayed until the crash recovery service completes. If the failed node is unreachable or if it doesn't respond, the fencing will be delayed by time determined by the configured number of iterations and the `fence_kdump` timeout. For more details, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). > The proposed fence_kdump timeout may need to be adapted to the specific environment. >
-> We recommend to configure `fence_kdump` stonith only when necessary to collect diagnostics within the VM and always in combination with traditional fence method as Azure Fence Agent.
+> We recommend configuring `fence_kdump` fencing only when it's necessary to collect diagnostics within the VM, and always in combination with a traditional fence method such as Azure Fence Agent.
-The following Red Hat KBs contain important information about configuring `fence_kdump` stonith:
+The following Red Hat KBs contain important information about configuring `fence_kdump` fencing:
* [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971)
-* [How to configure/manage STONITH levels in RHEL cluster with Pacemaker](https://access.redhat.com/solutions/891323)
+* [How to configure/manage fencing levels in RHEL cluster with Pacemaker](https://access.redhat.com/solutions/891323)
* [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 HA cluster with kexec-tools older than 2.0.14](https://access.redhat.com/solutions/2388711) * For information on how to change the default timeout, see [How do I configure kdump for use with the RHEL 6,7,8 HA Add-On](https://access.redhat.com/articles/67570) * For information on how to reduce failover delay when using `fence_kdump`, see [Can I reduce the expected delay of failover when adding fence_kdump configuration](https://access.redhat.com/solutions/5512331)
-Execute the following optional steps to add `fence_kdump` as a first level STONITH configuration, in addition to the Azure Fence Agent configuration.
+Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration.
1. **[A]** Verify that kdump is active and configured.
Execute the following optional steps to add `fence_kdump` as a first level STONI
``` yum install fence-agents-kdump ```
-3. **[1]** Create `fence_kdump` stonith device in the cluster.
+3. **[1]** Create `fence_kdump` fencing device in the cluster.
<pre><code> pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>" timeout=30 </code></pre>
-4. **[1]** Configure stonith levels, so that `fence_kdump` fencing mechanism is engaged first.
+4. **[1]** Configure fencing levels, so that `fence_kdump` fencing mechanism is engaged first.
<pre><code> pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>" pcs stonith level add 1 <b>prod-cl1-0</b> rsc_st_kdump pcs stonith level add 1 <b>prod-cl1-1</b> rsc_st_kdump pcs stonith level add 2 <b>prod-cl1-0</b> rsc_st_azure pcs stonith level add 2 <b>prod-cl1-1</b> rsc_st_azure
- # Check the stonith level configuration
+ # Check the fencing level configuration
pcs stonith level # Example output # Target: <b>prod-cl1-0</b>
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server (
[sles-nfs-guide]:high-availability-guide-suse-nfs.md [sles-guide]:high-availability-guide-suse.md
-In Azure, you have two options for setting up STONITH in the Pacemaker cluster for SLES. You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you can use a STONITH block device (SBD device).
+In Azure, you have two options for setting up fencing in the Pacemaker cluster for SLES. You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you can use an SBD device.
+ ### Use an SBD device
You can configure the SBD device by using either of two options:
- For more information about limitations for Azure shared disks, carefully review the "Limitations" section of [Azure shared disk documentation](../../disks-shared.md#limitations). ### Use an Azure fence agent
-You can set up STONITH by using an Azure fence agent. Azure fence agent require managed identities for the cluster VMs or a service principal, that manages restarting failed nodes via Azure APIs. Azure fence agent doesn't require the deployment of additional virtual machines.
+You can set up fencing by using an Azure fence agent. The Azure fence agent requires managed identities for the cluster VMs or a service principal that manages restarting failed nodes via Azure APIs. The Azure fence agent doesn't require the deployment of additional virtual machines.
## SBD with an iSCSI target server
If you want to deploy resources by using the Azure CLI or the Azure portal, you
## Use an Azure fence agent
-This section applies only if you want to use a STONITH device with an Azure fence agent.
+This section applies only if you want to use a fencing device with an Azure fence agent.
-### Create an Azure fence agent STONITH device
+### Create an Azure fence agent device
-This section applies only if you're using a STONITH device that's based on an Azure fence agent. The STONITH device uses either a managed identity or a service principal to authorize against Microsoft Azure.
+This section applies only if you're using a fencing device that's based on an Azure fence agent. The fencing device uses either a managed identity or a service principal to authorize against Microsoft Azure.
#### Using managed identity To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time.
Make sure to assign the custom role to the service principal at all VM (cluster
sudo vi /root/.ssh/authorized_keys </code></pre>
-1. **[A]** Install the *fence-agents* package if you're using a STONITH device, based on the Azure fence agent.
+1. **[A]** Install the *fence-agents* package if you're using a fencing device, based on the Azure fence agent.
<pre><code>sudo zypper install fence-agents </code></pre>
Make sure to assign the custom role to the service principal at all VM (cluster
# Port for ring0 [5405] <b>Select Enter</b> # Do you wish to use SBD (y/n)? <b>n</b> #WARNING: Not configuring SBD - STONITH will be disabled.+ # Do you wish to configure an administration IP (y/n)? <b>n</b> </code></pre>
Make sure to assign the custom role to the service principal at all VM (cluster
<pre><code>sudo service corosync restart </code></pre>
-### Create a STONITH device on the Pacemaker cluster
+### Create a fencing device on the Pacemaker cluster
-1. **[1]** If you're using an SDB device (iSCSI target server or Azure shared disk) as STONITH, run the following commands. Enable the use of a STONITH device, and set the fence delay.
+1. **[1]** If you're using an SBD device (iSCSI target server or Azure shared disk) as a fencing device, run the following commands. Enable the use of a fencing device, and set the fence delay.
<pre><code>sudo crm configure property stonith-timeout=144 sudo crm configure property stonith-enabled=true
Make sure to assign the custom role to the service principal at all VM (cluster
op monitor interval="600" timeout="15" </code></pre>
-1. **[1]** If you're using an Azure fence agent as STONITH, run the following commands. After you've assigned roles to both cluster nodes, you can configure the STONITH devices in the cluster.
+1. **[1]** If you're using an Azure fence agent for fencing, run the following commands. After you've assigned roles to both cluster nodes, you can configure the fencing devices in the cluster.
<pre><code>sudo crm configure property stonith-enabled=true crm configure property concurrent-fencing=true
virtual-machines Large Instance High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-high-availability-rhel.md
In this section, you initialize the cluster. This section uses the same two host
WARNINGS:
- No stonith devices and stonith-enabled is not false
+ No stonith devices and `stonith-enabled` is not false
Stack: corosync
In this section, you initialize the cluster. This section uses the same two host
```
-14. Enable Stonith settings.
+14. Enable fencing device settings.
``` pcs stonith enable SBD --device=/dev/mapper/3600a098038304179392b4d6c6e2f4d65 pcs property set stonith-watchdog-timeout=20
In this section, you initialize the cluster. This section uses the same two host
```
-19. For the rest of the SAP HANA clustering you can disable STONITH by setting:
+19. For the rest of the SAP HANA clustering you can disable fencing by setting:
* pcs property set `stonith-enabled=false`
- * It is sometimes easier to keep STONITH deactivated during setup of the cluster, because you will avoid unexpected reboots of the system.
+ * It is sometimes easier to keep fencing deactivated during setup of the cluster, because you will avoid unexpected reboots of the system.
 * This parameter must be set to true for productive usage. If this parameter is not set to true, the cluster will not be supported. * pcs property set `stonith-enabled=true`
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
Data collection in AMS depends on the providers that you configure. During publi
High availability (HA) Pacemaker cluster data includes: -- Node, resource, and STONITH block device (SBD) status
+- Node, resource, and SBD status
- Pacemaker location constraints - Quorum votes and ring status
virtual-machines Sap Ha Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-ha-availability-zones.md
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
- To achieve run time consistency for critical business processes, you can try to direct certain batch jobs and users to application instances that are in-zone with the active DBMS instance by using SAP batch server groups, SAP logon groups, or RFC groups. However, in the case of a zonal failover, you would need to manually move these groups to instances running on VMs that are in-zone with the active DB VM. - You might want to deploy dormant dialog instances in each of the zones.
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
- You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start application resources for the case of a zone failure. - [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) is currently unable to replicate active VMs to dormant VMs between zones. - You should invest in automation that allows you to automatically start the SAP application layer in the second zone if a zonal outage occurs.
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent.
virtual-machines Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-one-region.md
In this scenario, data that's replicated to the HANA instance in the second VM i
### SAP HANA system replication with automatic failover
-In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the [Pacemaker](./high-availability-guide-suse-pacemaker.md) framework, in conjunction with a [STONITH](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) device.
+In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the [Pacemaker](./high-availability-guide-suse-pacemaker.md) framework, in conjunction with a [fencing device](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device).
From an SAP HANA perspective, the replication mode that's used is synced and an automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous replication modes. For details and for a description of differences between these two synchronous replication modes, see the SAP article [Replication modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html).
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
For Azure VMs, the following high availability configurations are supported on D
- [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_sapase.md) - [SAP MaxDB, liveCache, and Content Server deployment on Azure VMs](./dbms_guide_maxdb.md) - HANA Large Instances high availability scenarios are detailed in:
- - [Supported scenarios for HANA Large Instances- HSR with STONITH for high availability](./hana-supported-scenario.md#hsr-with-stonith-for-high-availability)
+ - [Supported scenarios for HANA Large Instances- HSR with fencing for high availability](./hana-supported-scenario.md#hsr-with-fencing-for-high-availability)
- [Supported scenarios for HANA Large Instances - Host auto failover (1+1)](./hana-supported-scenario.md#host-auto-failover-11) > [!IMPORTANT]
virtual-network-manager Concept Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-enforcement.md
+
+ Title: 'Virtual network enforcement with security admin rules in Azure Virtual Network Manager (Preview)'
+description: This article covers using security admin rules in Azure Virtual Network Manager to enforce security policies across virtual networks.
++++ Last updated : 09/15/2022+
+# Virtual network enforcement with security admin rules in Azure Virtual Network Manager (Preview)
+
+In this article, you'll learn how [security admin rules](concept-security-admins.md) provide flexible and scalable enforcement of security policies over tools like [network security groups](../virtual-network/network-security-groups-overview.md). First, you'll learn the different models of virtual network enforcement. Then, you'll learn the general steps for enforcing security with security admin rules.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Virtual network enforcement
+
+With [network security groups (NSGs)](../virtual-network/network-security-group-how-it-works.md) alone, widespread enforcement on VNets across several applications, teams, or even entire organizations can be tricky. Often there's a balancing act between attempts at centralized enforcement across an organization and handing over granular, flexible control to teams.
+
+[Security admin rules](concept-security-admins.md) aim to eliminate this sliding scale between enforcement and flexibility altogether by consolidating the pros of each of these models while reducing the cons of each. Central governance teams establish guard rails through security admin rules, while still leaving room for individual teams to flexibly pinpoint security as needed through NSG rules. Security admin rules aren't meant to override NSG rules. Instead, they work with NSG rules to provide enforcement and flexibility across your organization.
+
+## Enforcement Models
+
+Let's look at a few common models of security management without security admin rules, and their pros and cons:
+
+### Model 1 - Central governance
+In this model, NSGs are managed by a central governance team within an organization.
+
+| Pros | Cons |
+| - | - |
+| The central governance team can enforce important security rules. | Operational overhead is high because admins need to manage each NSG; as the number of NSGs increases, the burden increases. |
+
+### Model 2 - NSGs are managed by individual teams.
+In this model, NSGs are managed by individual teams within an organization without a centralized governance team.
+
+| Pros | Cons |
+| - | - |
+| The individual team has flexible control in tailoring security rules based on their service requirements. | The central governance team can't enforce critical security rules, such as blocking risky ports. </br> </br> Individual teams might also misconfigure or forget to attach NSGs, leading to vulnerability exposures.|
+
+### Model 3 - NSGs are created through Azure Policy and managed by individual teams.
+In this model, NSGs are still managed by individual teams. The difference is the NSGs are created using Azure Policy to set standard rules. Modifying these rules would trigger audit notifications.
+
+| Pros | Cons |
+| - | - |
+| The individual team has flexible control in tailoring security rules. </br></br> The central governance team can create standard security rules and receive notifications if rules are modified. | The central governance team still can't enforce the standard security rules, since NSG owners in teams can still modify them. </br></br> Notifications would also be overwhelming to manage. |
+
+## Enforcement and flexibility in practice
+
+Let's apply the concepts discussed so far to an example scenario. A company network administrator wants to enforce a security rule to block inbound SSH traffic for the whole company. As mentioned above, having such enforcement was difficult without AVNM's security admin rule. If the administrator manages all the NSGs, then management overhead is high, and the administrator can't rapidly respond to product teams' needs to modify NSG rules. On the other hand, if the product teams manage their own NSGs without security admin rules, then the administrator can't enforce critical security rules, leaving potential security risks open. Using both security admin rules and NSGs can solve this dilemma.
+
+In this case, the administrator wants to make an exception to the application virtual networks as the application team needs management access to the application with SSH. The diagram below shows how the administrator can achieve the following goals:
+- Enforcement with security admin rules across the organization.
+- Creating exceptions for the application team to handle SSH traffic through NSGs.
++
+#### Step 1: Create a network manager instance
+
+The company administrator can create a network manager with the root management group of the firm as the scope of this network manager instance.
+
+#### Step 2: Create network groups for VNets
+
+The administrator creates two network groups: *ALL network group*, consisting of all the VNets in the organization, and *App network group*, consisting of the VNets for the application that needs an exception. In the above diagram, ALL network group consists of *VNet 1* to *VNet 5*, and App network group has *VNet 4* and *VNet 5*. Users can easily define both network groups using dynamic membership.
+
+#### Step 3: Create a security admin configuration
+
+In this step, a security admin configuration is defined with two security admin rules:
+- A security admin rule to block inbound SSH traffic for the ALL network group.
+- A security admin rule to allow inbound SSH traffic for the App network group, with a higher priority.
+
+#### Step 4: Deploy the security admin configuration
+
+After the deployment of the security admin configuration, all VNets in the company will have the deny inbound SSH traffic rule enforced by the security admin rule. No individual team can modify this rule, only the defined company administrator. The App VNets will have both an allow inbound SSH traffic rule and a deny inbound SSH traffic rule (inherited from All network group rule). The priority number of the allow inbound SSH traffic rule for App network group should be smaller so that it's evaluated first. When inbound SSH traffic comes to an App VNet, it will be allowed by this higher priority security admin rule. Assuming there are NSGs on the subnets of the App VNets, this inbound SSH traffic will be further evaluated by NSGs set by the application team. The security admin rule methodology described here allows the company administrator to effectively enforce company policies and create flexible security guard rails across an organization that work with NSGs.
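+As a rough sketch of how steps 3 and 4 could be scripted, the following commands use the preview `virtual-network-manager` Azure CLI extension. The names (`myAVNM`, `mySecurityConfig`, `myResourceGroup`) and the configuration ID placeholder are illustrative, and command and parameter names may differ between preview releases, so verify them against the current CLI reference first. The rule collections for the ALL and App network groups are added to the configuration in the same way, through the portal or the CLI.
+
+```azurecli
+# Assumes the preview extension is installed:
+#   az extension add --name virtual-network-manager
+
+# Step 3 (sketch): create a security admin configuration on an existing
+# network manager named myAVNM. Rule collections and rules are then added to it.
+az network manager security-admin-config create \
+  --resource-group myResourceGroup \
+  --network-manager-name myAVNM \
+  --configuration-name mySecurityConfig
+
+# Step 4 (sketch): commit (deploy) the configuration to the regions that
+# contain the virtual networks in the targeted network groups.
+az network manager post-commit \
+  --resource-group myResourceGroup \
+  --network-manager-name myAVNM \
+  --commit-type SecurityAdmin \
+  --configuration-ids <security-admin-configuration-resource-id> \
+  --target-locations eastus westus
+```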
++
+## Next steps
+
+- Learn how to [block high risk ports with security admin rules](how-to-block-high-risk-ports.md)
+
+- Check out the [Azure Virtual Network Manager FAQ](faq.md)
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
### What are common use cases for using Azure Virtual Network Manager?
-* As an IT security manager you can create different network groups to meet the requirements of your environment and its functions. For example, you can create network groups for Production and Test network environments, Dev teams, Finance department, etc. to manage their connectivity and security rules at scale.
+* You can create different network groups to meet the security requirements of your environment and its functions. For example, you can create network groups for your Production and Test environments to manage their connectivity and security rules at scale. For security rules, you'd create a security admin configuration with two security admin rule collections, each targeted on your Production and Test network groups, respectively. Once deployed, this configuration would enforce one set of security rules for network resources in your Production environment, and one set for your Test environment.
* You can apply connectivity configurations to create a mesh or a hub-and-spoke network topology for a large number of virtual networks across your organization's subscriptions.
Azure SQL Managed Instance has some network requirements. If your security admin
| 443, 12000 | TCP | **VirtualNetwork** | AzureCloud | Allow | | Any | Any | **VirtualNetwork** | **VirtualNetwork** | Allow |
-## Can an Azure Virtual WAN hub be part of a network group?
+### Can an Azure Virtual WAN hub be part of a network group?
No, an Azure Virtual WAN hub can't be in a network group at this time.
-## Can an Azure Virtual WAN be used as the hub in AVNM's hub and spoke topology configuration?
+### Can an Azure Virtual WAN be used as the hub in AVNM's hub and spoke topology configuration?
No, an Azure Virtual WAN hub isn't supported as the hub in a hub and spoke topology at this time.
-## My Virtual Network isn't getting the configurations I'm expecting. How do I troubleshoot?
+### My Virtual Network isn't getting the configurations I'm expecting. How do I troubleshoot?
-### Have you deployed your configuration to the VNet's region?
+#### Have you deployed your configuration to the VNet's region?
Configurations in Azure Virtual Network Manager don't take effect until they're deployed. Make a deployment to the virtual networks region with the appropriate configurations.
-### Is your virtual network in scope?
+#### Is your virtual network in scope?
A network manager is only delegated enough access to apply configurations to virtual networks within your scope. Even if a resource is in your network group but out of scope, it will not receive any configurations.
-### Are you applying security rules to a VNet containing Azure SQL Managed Instances?
+#### Are you applying security rules to a VNet containing Azure SQL Managed Instances?
Azure SQL Managed Instance has some network requirements. These are enforced through high priority Network Intent Policies, whose purpose conflicts with Security Admin Rules. By default, the application of Admin rules will be skipped on VNets containing any of these Intent Policies. Since allow rules pose no risk of conflict, if you only wish to use *Allow* rules, you can set `AllowOnlyRules` on `securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices`. ## Limits
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Network Policies provides micro-segmentation for pods just like Network Security
![Kubernetes network policies overview](./media/kubernetes-network-policies/kubernetes-network-policies-overview.png)
-Azure NPM implementation works in conjunction with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables based on the defined policies. These rules are grouped together using Linux IPSets.
+Azure NPM implementation works with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables based on the defined policies. These rules are grouped together using Linux IPSets.
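+For context, Azure NPM is typically enabled when the cluster is created with Azure CNI and the Azure network policy option. A minimal sketch with the Azure CLI (the resource group and cluster names are placeholders):
+
+```azurecli
+# Create an AKS cluster that uses Azure CNI and enables Azure Network Policy Manager.
+az aks create \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --network-plugin azure \
+  --network-policy azure
+```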
## Planning security for your Kubernetes cluster When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure NPM for traffic between pods in your cluster (East-West traffic).
See a [configuration for these alerts](#set-up-alerts-for-alertmanager) below.
##### Visualizations and Debugging via our Grafana Dashboard or Azure Monitor Workbook 1. See how many IPTables rules your policies create (having a massive amount of IPTables rules may increase latency slightly).
-2. Correlate cluster counts (e.g. ACLs) to execution times.
-3. Get the human-friendly name of an ipset in a given IPTables rule (e.g. "azure-npm-487392" represents "podlabel-role:database").
+2. Correlate cluster counts (for example, ACLs) to execution times.
+3. Get the human-friendly name of an ipset in a given IPTables rule (for example, "azure-npm-487392" represents "podlabel-role:database").
### All supported metrics The following is the list of supported metrics. Any `quantile` label has possible values `0.5`, `0.9`, and `0.99`. Any `had_error` label has possible values `false` and `true`, representing whether the operation succeeded or failed.
The dashboard has visuals similar to the Azure Workbook. You can add panels to c
### Set up for Prometheus Server Some users may choose to collect metrics with a Prometheus Server instead of Azure Monitor for containers. You merely need to add two jobs to your scrape config to collect NPM metrics.
-To install a simple Prometheus Server, add this helm repo on your cluster
+To install a Prometheus Server, add this helm repo on your cluster
``` helm repo add stable https://kubernetes-charts.storage.googleapis.com helm repo update
where `prometheus-server-scrape-config.yaml` consists of
action: drop ``` - You can also replace the `azure-npm-node-metrics` job with the content below or incorporate it into a pre-existing job for Kubernetes pods: ``` - job_name: "azure-npm-node-metrics-from-pod-config"
You can also replace the `azure-npm-node-metrics` job with the content below or
``` #### Set up Alerts for AlertManager
-If you use a Prometheus Server, you can set up an AlertManager like so. Here is an example config for [the two alerting rules described above](#alerts-via-a-prometheus-alertmanager):
+If you use a Prometheus Server, you can set up an AlertManager like so. Here's an example config for [the two alerting rules described above](#alerts-via-a-prometheus-alertmanager):
``` groups: - name: npm.rules
Following are some sample dashboard for NPM metrics in Container Insights (CI) a
[![Grafana Dashboard runtime quantiles](media/kubernetes-network-policies/grafana-runtime-quantiles.png)](media/kubernetes-network-policies/grafana-runtime-quantiles.png#lightbox) - ## Next steps - Learn about [Azure Kubernetes Service](../aks/intro-kubernetes.md). - Learn about [container networking](container-networking-overview.md).
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
The list contains any rules you've created and the network security group's [def
## Work with application security groups
-An application security group contains zero or more network interfaces. To learn more, see [application security groups](./network-security-groups-overview.md#application-security-groups). All network interfaces in an application security group must exist in the same virtual network. To learn how to add a network interface to an application security group, see [Add a network interface to an application security group](virtual-network-network-interface.md#add-to-or-remove-from-application-security-groups).
+An application security group contains zero or more network interfaces. To learn more, see [application security groups](./network-security-groups-overview.md#application-security-groups). All network interfaces in an application security group must exist in the same virtual network. To learn how to add a network interface to an application security group, see [Add a network interface to an application security group](virtual-network-network-interface.md#add-or-remove-from-application-security-groups).
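+As a brief sketch of adding a network interface to an application security group from the command line, assuming an existing group named `myAsg` and a network interface `myNIC` with an IP configuration named `ipconfig1` (all names are illustrative):
+
+```azurecli
+# Associate the NIC's IP configuration with the application security group.
+az network nic ip-config update \
+  --resource-group myResourceGroup \
+  --nic-name myNIC \
+  --name ipconfig1 \
+  --application-security-groups myAsg
+```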
### Create an application security group
Go to the [Azure portal](https://portal.azure.com) to view your application secu
### Delete an application security group
-You can't delete an application security group if it contains any network interfaces. To remove all network interfaces from the application security group, either change the network interface settings or delete the network interfaces. To learn more, see [Add to or remove from application security groups](virtual-network-network-interface.md#add-to-or-remove-from-application-security-groups) or [Delete a network interface](virtual-network-network-interface.md#delete-a-network-interface).
+You can't delete an application security group if it contains any network interfaces. To remove all network interfaces from the application security group, either change the network interface settings or delete the network interfaces. To learn more, see [Add or remove from application security groups](virtual-network-network-interface.md#add-or-remove-from-application-security-groups) or [Delete a network interface](virtual-network-network-interface.md#delete-a-network-interface).
1. Go to the [Azure portal](https://portal.azure.com) to manage your application security groups. Search for and select **Application security groups**.
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
Title: Create, change, or delete an Azure network interface description: Learn what a network interface is and how to create, change settings for, and delete one.---+ - Previously updated : 1/22/2020- Last updated : 09/15/2022+ # Create, change, or delete a network interface
-Learn how to create, change settings for, and delete a network interface. A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you. You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface. This article explains how to create a network interface with custom settings, change existing settings, such as network filter (network security group) assignment, subnet assignment, DNS server settings, and IP forwarding, and delete a network interface.
+Learn how to create, change settings for, and delete a network interface. A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. A virtual machine created with the Azure portal has one network interface with default settings. You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.
+
+This article explains how to create a network interface with custom settings and change the following existing settings:
+
+* [DNS server settings](#change-dns-servers)
+
+* [IP Forwarding](#enable-or-disable-ip-forwarding)
+
+* [Subnet assignment](#change-subnet-assignment)
+
+* [Application security group](#add-or-remove-from-application-security-groups)
+
+* [Network security group](#associate-or-dissociate-a-network-security-group)
+
+* [Network interface deletion](#delete-a-network-interface)
If you need to add, change, or remove IP addresses for a network interface, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md). If you need to add network interfaces to, or remove network interfaces from virtual machines, see [Add or remove network interfaces](virtual-network-network-interface-vm.md).
-## Before you begin
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Virtual Network. For information about creating an Azure Virtual Network, see [Quickstart: Create a virtual network using the Azure portal](/azure/virtual-network/quick-create-portal).
+
+ - The example virtual network used in this article is named **myVNet**. Replace the example value with the name of your virtual network.
+
+ - The example subnet used in this article is named **myBackendSubnet**. Replace the example value with the name of your subnet.
+
+ - The example network interface name used in this article is **myNIC**. Replace the example value with the name of your network interface.
+
+
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- Azure PowerShell installed locally or Azure Cloud Shell.
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-Complete the following tasks before completing steps in any section of this article:
+- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
-- If you don't already have an Azure account, sign up for a [free trial account](https://azure.microsoft.com/free).-- If using the portal, open https://portal.azure.com, and log in with your Azure account.-- If using PowerShell commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.-- If using Azure CLI commands to complete tasks in this article, either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or by running the Azure CLI from your computer. This tutorial requires the Azure CLI version 2.0.28 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you are running the Azure CLI locally, you also need to run `az login` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in [Permissions](#permissions).
+Your account must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in [Permissions](#permissions).
## Create a network interface
-When creating a virtual machine using the Azure portal, the portal creates a network interface with default settings for you. If you'd rather specify all your network interface settings, you can create a network interface with custom settings and attach the network interface to a virtual machine when creating the virtual machine (using PowerShell or the Azure CLI). You can also create a network interface and add it to an existing virtual machine (using PowerShell or the Azure CLI). To learn how to create a virtual machine with an existing network interface or to add to, or remove network interfaces from existing virtual machines, see [Add or remove network interfaces](virtual-network-network-interface-vm.md). Before creating a network interface, you must have an existing [virtual network](manage-virtual-network.md) in the same location and subscription you create a network interface in.
+A virtual machine created with the Azure portal has a network interface with default settings. To create a network interface with custom settings and attach it to a virtual machine, use PowerShell or the Azure CLI. You can also create a network interface and add it to an existing virtual machine with PowerShell or the Azure CLI.
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select **+ Add** under **Network interfaces**.
-3. Enter, or select values for the following settings, then select **Create**:
+For more information on how to create a virtual machine with an existing network interface, or how to add network interfaces to or remove them from an existing virtual machine, see [Add or remove network interfaces](virtual-network-network-interface-vm.md).
- |Setting|Required?|Details|
- ||||
- |Name|Yes|The name must be unique within the resource group you select. Over time, you'll likely have several network interfaces in your Azure subscription. For suggestions when creating a naming convention to make managing several network interfaces easier, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). The name cannot be changed after the network interface is created.|
- |Virtual network|Yes|Select the virtual network for the network interface. You can only assign a network interface to a virtual network that exists in the same subscription and location as the network interface. Once a network interface is created, you cannot change the virtual network it is assigned to. The virtual machine you add the network interface to must also exist in the same location and subscription as the network interface.|
- |Subnet|Yes|Select a subnet within the virtual network you selected. You can change the subnet the network interface is assigned to after it's created.|
- |Private IP address assignment|Yes| In this setting, you're choosing the assignment method for the IPv4 address. Choose from the following assignment methods: **Dynamic:** When selecting this option, Azure automatically assigns the next available address from the address space of the subnet you selected. **Static:** When selecting this option, you must manually assign an available IP address from within the address space of the subnet you selected. Static and dynamic addresses do not change until you change them or the network interface is deleted. You can change the assignment method after the network interface is created. The Azure DHCP server assigns this address to the network interface within the operating system of the virtual machine.|
- |Network security group|No| Leave set to **None**, select an existing [network security group](./network-security-groups-overview.md), or [create a network security group](tutorial-filter-network-traffic.md). Network security groups enable you to filter network traffic in and out of a network interface. You can apply zero or one network security group to a network interface. Zero or one network security group can also be applied to the subnet the network interface is assigned to. When a network security group is applied to a network interface and the subnet the network interface is assigned to, sometimes unexpected results occur. To troubleshoot network security groups applied to network interfaces and subnets, see [Troubleshoot network security groups](diagnose-network-traffic-filter-problem.md).|
- |Subscription|Yes|Select one of your Azure [subscriptions](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). The virtual machine you attach a network interface to and the virtual network you connect it to must exist in the same subscription.|
- |Private IP address (IPv6)|No| If you select this checkbox, an IPv6 address is assigned to the network interface, in addition to the IPv4 address assigned to the network interface. See the IPv6 section of this article for important information about use of IPv6 with network interfaces. You cannot select an assignment method for the IPv6 address. If you choose to assign an IPv6 address, it is assigned with the dynamic method.
- |IPv6 name (only appears when the **Private IP address (IPv6)** checkbox is checked) |Yes, if the **Private IP address (IPv6)** checkbox is checked.| This name is assigned to a secondary IP configuration for the network interface. To learn more about IP configurations, see [View network interface settings](#view-network-interface-settings).|
- |Resource group|Yes|Select an existing [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) or create one. A network interface can exist in the same, or different resource group, than the virtual machine you attach it to, or the virtual network you connect it to.|
- |Location|Yes|The virtual machine you attach a network interface to and the virtual network you connect it to must exist in the same [location](https://azure.microsoft.com/regions), also referred to as a region.|
+# [**Portal**](#tab/network-interface-portal)
-The portal doesn't provide the option to assign a public IP address to the network interface when you create it, though the portal does create a public IP address and assign it to a network interface when you create a virtual machine using the portal. To learn how to add a public IP address to the network interface after creating it, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md). If you want to create a network interface with a public IP address, you must use the CLI or PowerShell to create the network interface.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-The portal doesn't provide the option to assign the network interface to application security groups when creating a network interface, but the Azure CLI and PowerShell do. You can assign an existing network interface to an application security group using the portal however, as long as the network interface is attached to a virtual machine. To learn how to assign a network interface to an application security group, see [Add to or remove from application security groups](#add-to-or-remove-from-application-security-groups).
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
->[!Note]
+3. Select **+ Create**.
+
+4. Enter or select the following information in **Create network interface**.
+
+| Setting | Value | Details |
+| - | | - |
+| **Project details** | | |
+| Subscription | Select your subscription. | You can only assign a network interface to a virtual network that exists in the same subscription and location as the network interface. |
+| Resource group | Select your resource group or create a new one. The example used in this article is **myResourceGroup**. | A resource group is a logical container for grouping Azure resources. A network interface can exist in the same or a different resource group than the virtual machine you attach it to, or the virtual network you connect it to.|
+| **Instance details** | | |
+| Name | Enter **myNIC**. | The name must be unique within the resource group you select. Over time, you'll likely have several network interfaces in your Azure subscription. For suggestions when creating a naming convention to make managing several network interfaces easier, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). The name can't be changed after the network interface is created. |
+| Region | Select your region. The example used in this article is **East US 2**. | The Azure region where the network interface is created. |
+| Virtual network | Select **myVNet** or your virtual network. | You can only assign a network interface to a virtual network that exists in the same subscription and location as the network interface. Once a network interface is created, you can't change the virtual network it's assigned to. The virtual machine you add the network interface to must also exist in the same location and subscription as the network interface. |
+| Subnet | Select **myBackendSubnet**. | A subnet within the virtual network you selected. You can change the subnet the network interface is assigned to after it's created. |
+| IP Version | Select **IPv4** or **IPv4 and IPv6**. | You can choose to create the network interface with an IPv4 address or with both an IPv4 and an IPv6 address. The virtual network and subnet must also have an IPv6 address space and an IPv6 subnet for an IPv6 address to be assigned. An IPv6 configuration is assigned to a secondary IP configuration for the network interface. To learn more about IP configurations, see [View network interface settings](#view-network-interface-settings).|
+| Private IP address assignment | Select **Dynamic** or **Static**. | **Dynamic:** If dynamic is selected, Azure automatically assigns the next available address from the address space of the subnet you selected. </br> **Static:** When selecting this option, you must manually assign an available IP address from within the address space of the subnet you selected. Static and dynamic addresses don't change until you change them or the network interface is deleted. You can change the assignment method after the network interface is created. The Azure DHCP server assigns this address to the network interface within the operating system of the virtual machine. |
++
+5. Select **Review + create**.
+
+6. Select **Create**.
+
+The portal doesn't provide the option to assign a public IP address to the network interface when you create it. The portal does create a public IP address and assign it to a network interface when you create a virtual machine in the portal. To learn how to add a public IP address to the network interface after creating it, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md). If you want to create a network interface with a public IP address, you must use the Azure CLI or PowerShell to create the network interface.
+
+The portal doesn't provide the option to assign the network interface to application security groups when creating a network interface, but the Azure CLI and PowerShell do. You can assign an existing network interface to an application security group using the portal, however, as long as the network interface is attached to a virtual machine. To learn how to assign a network interface to an application security group, see [Add or remove from application security groups](#add-or-remove-from-application-security-groups).
+
+# [**PowerShell**](#tab/network-interface-powershell)
+
+In this example, you'll create an Azure Public IP address and associate it with the network interface.
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a primary public IP address.
+
+```azurepowershell-interactive
+$ip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv4'
+ Zone = 1,2,3
+}
+New-AzPublicIpAddress @ip
+```
+
+Use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the network interface for the virtual machine. To create a network interface without the public IP address, omit the **`-PublicIpAddress`** parameter for **`New-AzNetworkInterfaceIPConfig`**.
+
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the primary public IP address into a variable. ##
+$pub = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+}
+$pubIP = Get-AzPublicIPAddress @pub
+
+## Create primary configuration for NIC. ##
+$IP1 = @{
+ Name = 'ipconfig1'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PublicIPAddress = $pubIP
+}
+$IP1Config = New-AzNetworkInterfaceIpConfig @IP1 -Primary
+
+## Command to create network interface for VM ##
+$nic = @{
+ Name = 'myNIC'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ IpConfiguration = $IP1Config
+}
+New-AzNetworkInterface @nic
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+In this example, you'll create an Azure Public IP address and associate it with the network interface.
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a primary public IP address.
+
+```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP \
+ --sku Standard \
+ --version IPv4 \
+ --zone 1 2 3
+```
+
+Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface. To create a network interface without the public IP address, omit the **`--public-ip-address`** parameter for **`az network nic create`**.
+
+```azurecli-interactive
+ az network nic create \
+ --resource-group myResourceGroup \
+ --name myNIC \
+ --private-ip-address-version IPv4 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --public-ip-address myPublicIP
+```
+++
+>[!NOTE]
> Azure assigns a MAC address to the network interface only after the network interface is attached to a virtual machine and the virtual machine is started the first time. You cannot specify the MAC address that Azure assigns to the network interface. The MAC address remains assigned to the network interface until the network interface is deleted or the private IP address assigned to the primary IP configuration of the primary network interface is changed. To learn more about IP addresses and IP configurations, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md) [!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
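+For example, once the virtual machine has been started, you can confirm the assigned MAC address by querying the network interface (a sketch that reuses the example names from this article):
+
+```azurecli
+# Returns the MAC address Azure assigned to the network interface.
+az network nic show \
+  --resource-group myResourceGroup \
+  --name myNIC \
+  --query macAddress \
+  --output tsv
+```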
-**Commands**
+## View network interface settings
+
+You can view and change most settings for a network interface after it's created. The portal doesn't display the DNS suffix or application security group membership for the network interface. You can use Azure PowerShell or Azure CLI to view the DNS suffix and application security group membership.
-|Tool|Command|
-|||
-|CLI|[az network nic create](/cli/azure/network/nic)|
-|PowerShell|[New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)|
+# [**Portal**](#tab/network-interface-portal)
-## View network interface settings
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. The following items are listed for the network interface you selected:
+
+ - **Overview:** The overview provides essential information about the network interface. IP addresses for IPv4 and IPv6 and network security group membership are displayed. The accelerated networking feature for network interfaces can be set in the overview. For more information about accelerated networking, see [What is Accelerated Networking?](accelerated-networking-overview.md)
+
+ The following screenshot displays the overview settings for a network interface named **myNIC**:
-You can view and change most settings for a network interface after it's created. The portal does not display the DNS suffix or application security group membership for the network interface. You can use the PowerShell or Azure CLI [commands](#view-settings-commands) to view the DNS suffix and application security group membership.
+ :::image type="content" source="./media/virtual-network-network-interface/nic-overview.png" alt-text="Screenshot of network interface overview.":::
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select the network interface you want to view or change settings for from the list.
-3. The following items are listed for the network interface you selected:
- - **Overview:** Provides information about the network interface, such as the IP addresses assigned to it, the virtual network/subnet the network interface is assigned to, and the virtual machine the network interface is attached to (if it's attached to one). The following picture shows the overview settings for a network interface named **mywebserver256**:
- ![Network interface overview](./media/virtual-network-network-interface/nic-overview.png)
+ - **IP configurations:** Public and private IPv4 and IPv6 addresses assigned to IP configurations are listed. To learn more about IP configurations and how to add and remove IP addresses, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). IP forwarding and subnet assignment are also configured in this section. To learn more about these settings, see [Enable or disable IP forwarding](#enable-or-disable-ip-forwarding) and [Change subnet assignment](#change-subnet-assignment).
- You can move a network interface to a different resource group or subscription by selecting (**change**) next to the **Resource group** or **Subscription name**. If you move the network interface to a new subscription, you must move all resources related to the network interface with it. If the network interface is attached to a virtual machine, for example, you must also move the virtual machine, and other virtual machine-related resources. To move a network interface, see [Move resource to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md?toc=%2fazure%2fvirtual-network%2ftoc.json#use-the-portal). The article lists prerequisites, and how to move resources using the Azure portal, PowerShell, and the Azure CLI.
- - **IP configurations:** Public and private IPv4 and IPv6 addresses assigned to IP configurations are listed here. If an IPv6 address is assigned to an IP configuration, the address is not displayed. To learn more about IP configurations and how to add and remove IP addresses, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). IP forwarding and subnet assignment are also configured in this section. To learn more about these settings, see [Enable or disable IP forwarding](#enable-or-disable-ip-forwarding) and [Change subnet assignment](#change-subnet-assignment).
- - **DNS servers:** You can specify which DNS server a network interface is assigned by the Azure DHCP servers. The network interface can inherit the setting from the virtual network the network interface is assigned to, or have a custom setting that overrides the setting for the virtual network it's assigned to. To modify what's displayed, see [Change DNS servers](#change-dns-servers).
- - **Network security group (NSG):** Displays which NSG is associated to the network interface (if any). An NSG contains inbound and outbound rules to filter network traffic for the network interface. If an NSG is associated to the network interface, the name of the associated NSG is displayed. To modify what's displayed, see [Associate or dissociate a network security group](#associate-or-dissociate-a-network-security-group).
- - **Properties:** Displays key settings about the network interface, including its MAC address (blank if the network interface isn't attached to a virtual machine), and the subscription it exists in.
- - **Effective security rules:** Security rules are listed if the network interface is attached to a running virtual machine, and an NSG is associated to the network interface, the subnet it's assigned to, or both. To learn more about what's displayed, see [View effective security rules](#view-effective-security-rules). To learn more about NSGs, see [Network security groups](./network-security-groups-overview.md).
- - **Effective routes:** Routes are listed if the network interface is attached to a running virtual machine. The routes are a combination of the Azure default routes, any user-defined routes, and any BGP routes that may exist for the subnet the network interface is assigned to. To learn more about what's displayed, see [View effective routes](#view-effective-routes). To learn more about Azure default routes and user-defined routes, see [Routing overview](virtual-networks-udr-overview.md).
-Common Azure Resource Manager settings: To learn more about common Azure Resource Manager settings, see [Activity log](../azure-monitor/essentials/platform-logs-overview.md), [Access control (IAM)](../role-based-access-control/overview.md), [Tags](../azure-resource-manager/management/tag-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [Locks](../azure-resource-manager/management/lock-resources.md?toc=%2fazure%2fvirtual-network%2ftoc.json), and [Automation script](../azure-resource-manager/templates/export-template-portal.md).
+ :::image type="content" source="./media/virtual-network-network-interface/ip-configurations.png" alt-text="Screenshot of network interface IP configurations.":::
-<a name="view-settings-commands"></a>**Commands**
+ - **DNS servers:** You can specify which DNS server the Azure DHCP servers assign to the network interface. The network interface can inherit the setting from the virtual network or have a custom setting that overrides the setting for the virtual network it's assigned to. To modify what's displayed, see [Change DNS servers](#change-dns-servers).
+
+ :::image type="content" source="./media/virtual-network-network-interface/dns-servers.png" alt-text="Screenshot of DNS server configuration.":::
-If an IPv6 address is assigned to a network interface, the PowerShell output returns the fact that the address is assigned, but it doesn't return the assigned address. Similarly, the CLI returns the fact that the address is assigned, but returns *null* in its output for the address.
+ - **Network security group (NSG):** Displays which NSG is associated to the network interface. An NSG contains inbound and outbound rules to filter network traffic for the network interface. If an NSG is associated to the network interface, the name of the associated NSG is displayed. To modify what's displayed, see [Associate or dissociate a network security group](#associate-or-dissociate-a-network-security-group).
+
+ :::image type="content" source="./media/virtual-network-network-interface/network-security-group.png" alt-text="Screenshot of network security group configuration.":::
-|Tool|Command|
-|||
-|CLI|[az network nic list](/cli/azure/network/nic) to view network interfaces in the subscription; [az network nic show](/cli/azure/network/nic) to view settings for a network interface|
-|PowerShell|[Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) to view network interfaces in the subscription or view settings for a network interface|
+ - **Properties:** Displays settings for the network interface, its MAC address, and the subscription it exists in. The MAC address is blank if the network interface isn't attached to a virtual machine.
+
+ :::image type="content" source="./media/virtual-network-network-interface/nic-properties.png" alt-text="Screenshot of network interface properties.":::
+
+ - **Effective security rules:** Security rules are listed if the network interface is attached to a running virtual machine and associated with a network security group. The network security group can be associated to the network interface, to the subnet the network interface is assigned to, or to both. To learn more about what's displayed, see [View effective security rules](#view-effective-security-rules). To learn more about NSGs, see [Network security groups](./network-security-groups-overview.md).
+
+ :::image type="content" source="./media/virtual-network-network-interface/effective-security-rules.png" alt-text="Screenshot of effective security rules.":::
+
+ - **Effective routes:** Routes are listed if the network interface is attached to a running virtual machine. The routes are a combination of the Azure default routes, any user-defined routes, and any BGP routes that may exist for the subnet the network interface is assigned to. To learn more about what's displayed, see [View effective routes](#view-effective-routes). To learn more about Azure default routes and user-defined routes, see [Routing overview](virtual-networks-udr-overview.md).
+
+ :::image type="content" source="./media/virtual-network-network-interface/effective-routes.png" alt-text="Screenshot of effective routes.":::
+
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) to view network interfaces in the subscription or view settings for a network interface.
+
+>[!NOTE]
+> Omitting the **`-Name`** and **`-ResourceGroupName`** parameters returns all of the network interfaces in the subscription.
+
+```azurepowershell
+Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic list](/cli/azure/network/nic#az-network-nic-list) to view network interfaces in the subscription.
+
+```azurecli
+az network nic list
+```
+
+Use [az network nic show](/cli/azure/network/nic#az-network-nic-show) to view the settings for a network interface.
+
+```azurecli
+az network nic show --name myNIC --resource-group myResourceGroup
+```
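+
+If the network interface is attached to a running virtual machine, you can also inspect its effective security rules and effective routes from the Azure CLI:
+
+```azurecli
+# Combined security rules from the NSGs applied to the NIC and its subnet.
+az network nic list-effective-nsg --name myNIC --resource-group myResourceGroup
+
+# Effective route table (default, user-defined, and BGP routes) for the NIC.
+az network nic show-effective-route-table --name myNIC --resource-group myResourceGroup
+```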
++ ## Change DNS servers
-The DNS server is assigned by the Azure DHCP server to the network interface within the virtual machine operating system. The DNS server assigned is whatever the DNS server setting is for a network interface. To learn more about name resolution settings for a network interface, see [Name resolution for virtual machines](virtual-networks-name-resolution-for-vms-and-role-instances.md). The network interface can inherit the settings from the virtual network, or use its own unique settings that override the setting for the virtual network.
+The DNS server is assigned by the Azure DHCP server to the network interface within the virtual machine operating system. To learn more about name resolution settings for a network interface, see [Name resolution for virtual machines](virtual-networks-name-resolution-for-vms-and-role-instances.md). The network interface can inherit the settings from the virtual network, or use its own unique settings that override the setting for the virtual network.
+
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Settings**, select **DNS servers**.
+
+5. Select either:
+
+ - **Inherit from virtual network**: Choose this option to inherit the DNS server setting defined for the virtual network the network interface is assigned to. At the virtual network level, either a custom DNS server or the Azure-provided DNS server is defined. The Azure-provided DNS server can resolve hostnames for resources assigned to the same virtual network. An FQDN must be used for resources assigned to different virtual networks.
+
+ - **Custom**: You can configure your own DNS server to resolve names across multiple virtual networks. Enter the IP address of the server you want to use as a DNS server. The DNS server address you specify is assigned only to this network interface and overrides any DNS setting for the virtual network the network interface is assigned to.
+
+ >[!NOTE]
+ >If the VM uses a NIC that's part of an availability set, all the DNS servers that are specified for each of the VMs from all NICs that are part of the availability set are inherited.
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select the network interface that you want to change a DNS server for from the list.
-3. Select **DNS servers** under **SETTINGS**.
-4. Select either:
- - **Inherit from virtual network**: Choose this option to inherit the DNS server setting defined for the virtual network the network interface is assigned to. At the virtual network level, either a custom DNS server or the Azure-provided DNS server is defined. The Azure-provided DNS server can resolve hostnames for resources assigned to the same virtual network. FQDN must be used to resolve for resources assigned to different virtual networks.
- - **Custom**: You can configure your own DNS server to resolve names across multiple virtual networks. Enter the IP address of the server you want to use as a DNS server. The DNS server address you specify is assigned only to this network interface and overrides any DNS setting for the virtual network the network interface is assigned to.
- >[!Note]
- >If the VM uses a NIC that's part of an availability set, all the DNS servers that are specified for each of the VMs from all NICs that are part of the availability set will be inherited.
5. Select **Save**.
-**Commands**
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to change the DNS server setting from inherited to a custom setting. Replace the DNS server IP addresses with your custom IP addresses.
+
+```azurepowershell
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Add the DNS servers to the configuration. ##
+$nic.DnsSettings.DnsServers.Add("192.168.1.100")
+
+## Add a secondary DNS server if needed. ##
+$nic.DnsSettings.DnsServers.Add("192.168.1.101")
-|Tool|Command|
-|||
-|CLI|[az network nic update](/cli/azure/network/nic)|
-|PowerShell|[Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface)|
+## Apply the new configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
+
+To remove the DNS servers and change the setting to inherit from the virtual network, use the following command. Replace the IP addresses with the DNS server addresses currently configured on the network interface.
+
+```azurepowershell
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Remove the DNS servers from the configuration. ##
+$nic.DnsSettings.DnsServers.Remove("192.168.1.100")
+
+## Remove the secondary DNS server if one was configured. ##
+$nic.DnsSettings.DnsServers.Remove("192.168.1.101")
+
+## Apply the new configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to change the DNS server setting from inherited to a custom setting. Replace the DNS server IP addresses with your custom IP addresses.
+
+```azurecli
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --dns-servers 192.168.1.100 192.168.1.101
+```
+
+To remove the DNS servers and change the setting to inherit from the virtual network, use the following command.
+
+```azurecli
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --dns-servers ""
+```
++ ## Enable or disable IP forwarding
-IP forwarding enables the virtual machine a network interface is attached to:
+IP forwarding enables the virtual machine network interface to:
+ - Receive network traffic that isn't destined for any of the IP addresses assigned to the network interface's IP configurations.
+
+ - Send network traffic with a different source IP address than the one assigned to any of the network interface's IP configurations.
-The setting must be enabled for every network interface that is attached to the virtual machine that receives traffic that the virtual machine needs to forward. A virtual machine can forward traffic whether it has multiple network interfaces or a single network interface attached to it. While IP forwarding is an Azure setting, the virtual machine must also run an application able to forward the traffic, such as firewall, WAN optimization, and load balancing applications. When a virtual machine is running network applications, the virtual machine is often referred to as a network virtual appliance. You can view a list of ready to deploy network virtual appliances in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). IP forwarding is typically used with user-defined routes. To learn more about user-defined routes, see [User-defined routes](virtual-networks-udr-overview.md).
+The setting must be enabled for every network interface attached to the virtual machine that receives the traffic to be forwarded. A virtual machine can forward traffic whether it has a single network interface or multiple network interfaces attached. While IP forwarding is an Azure setting, the virtual machine must also run an application that's able to forward the traffic, such as a firewall, WAN optimization, or load balancing application.
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select the network interface that you want to enable or disable IP forwarding for.
-3. Select **IP configurations** in the **SETTINGS** section.
-4. Select **Enabled** or **Disabled** (default setting) to change the setting.
-5. Select **Save**.
+When a virtual machine is running network applications, the virtual machine is often referred to as a network virtual appliance. You can view a list of ready-to-deploy network virtual appliances in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). IP forwarding is typically used with user-defined routes. To learn more about user-defined routes, see [User-defined routes](virtual-networks-udr-overview.md).
+
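Before enabling or disabling the setting, you may want to check the current value. A minimal PowerShell sketch, assuming the example names `myNIC` and `myResourceGroup`:

```azurepowershell
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup

## Display the current Azure-level IP forwarding setting ($true or $false). ##
$nic.EnableIPForwarding
```

Keep in mind this checks only the Azure-level setting; the guest operating system must be configured to forward traffic separately.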
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Settings**, select **IP configurations**.
+
+5. Select **Enabled** or **Disabled** (default setting) to change the setting.
+
+6. Select **Save**.
-**Commands**
+# [**PowerShell**](#tab/network-interface-powershell)
-|Tool|Command|
-|||
-|CLI|[az network nic update](/cli/azure/network/nic)|
-|PowerShell|[Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface)|
+Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to enable or disable the IP forwarding setting.
+
+To enable IP forwarding, use the following command:
+
+```azurepowershell
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Set the IP forwarding setting to enabled. ##
+$nic.EnableIPForwarding = $true
+
+## Apply the new configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+
+```
+
+To disable IP forwarding, use the following command:
+
+```azurepowershell
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Set the IP forwarding setting to disabled. ##
+$nic.EnableIPForwarding = $false
+
+## Apply the new configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to enable or disable the IP forwarding setting.
+
+To enable IP forwarding, use the following command:
+
+```azurecli
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --ip-forwarding true
+```
+
+To disable IP forwarding, use the following command:
+
+```azurecli
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --ip-forwarding false
+```
++ ## Change subnet assignment

You can change the subnet, but not the virtual network, that a network interface is assigned to.
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select the network interface that you want to change subnet assignment for.
-3. Select **IP configurations** under **SETTINGS**. If any private IP addresses for any IP configurations listed have **(Static)** next to them, you must change the IP address assignment method to dynamic by completing the steps that follow. All private IP addresses must be assigned with the dynamic assignment method to change the subnet assignment for the network interface. If the addresses are assigned with the dynamic method, continue to step five. If any IPv4 addresses are assigned with the static assignment method, complete the following steps to change the assignment method to dynamic:
- - Select the IP configuration you want to change the IPv4 address assignment method for from the list of IP configurations.
- - Select **Dynamic** for the private IP address **Assignment** method. You cannot assign an IPv6 address with the static assignment method.
- - Select **Save**.
-4. Select the subnet you want to move the network interface to from the **Subnet** drop-down list.
-5. Select **Save**. New dynamic addresses are assigned from the subnet address range for the new subnet. After assigning the network interface to a new subnet, you can assign a static IPv4 address from the new subnet address range if you choose. To learn more about adding, changing, and removing IP addresses for a network interface, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md).
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Settings**, select **IP configurations**.
+
+5. If any private IP addresses for any IP configurations listed have **(Static)** next to them, you must change the IP address assignment method to dynamic. All private IP addresses must be assigned with the dynamic assignment method to change the subnet assignment for the network interface. Skip to step 6 if your private IPs are set to dynamic.
-**Commands**
+ Complete the following steps to change the assignment method to dynamic:
+
+ - Select the IP configuration you want to change the IPv4 address assignment method for from the list of IP configurations.
+
+ - Select **Dynamic** for the private IP address in **Assignment**.
+
+ - Select **Save**.
-|Tool|Command|
-|||
-|CLI|[az network nic ip-config update](/cli/azure/network/nic/ip-config)|
-|PowerShell|[Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig)|
+6. Select the subnet you want to move the network interface to from the **Subnet** drop-down list.
-## Add to or remove from application security groups
+7. Select **Save**.
-You can only add a network interface to, or remove a network interface from an application security group using the portal if the network interface is attached to a virtual machine. You can use PowerShell or the Azure CLI to add a network interface to, or remove a network interface from an application security group, whether the network interface is attached to a virtual machine or not. Learn more about [Application security groups](./network-security-groups-overview.md#application-security-groups) and how to [create an application security group](manage-network-security-group.md).
+New dynamic addresses are assigned from the subnet address range for the new subnet. After assigning the network interface to a new subnet, you can assign a static IPv4 address from the new subnet address range if you choose. To learn more about adding, changing, and removing IP addresses for a network interface, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md).
-1. In the *Search resources, services, and docs* box at the top of the portal, begin typing the name of a virtual machine that has a network interface that you want to add to, or remove from, an application security group. When the name of your VM appears in the search results, select it.
-2. Under **SETTINGS**, select **Networking**. Select **Application Security Groups** then **Configure the application security groups**elect the application security groups that you want to add the network interface to, or unselect the application security groups that you want to remove the network interface from, and then select **Save**. Only network interfaces that exist in the same virtual network can be added to the same application security group. The application security group must exist in the same location as the network interface.
+# [**PowerShell**](#tab/network-interface-powershell)
-**Commands**
+Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to change the subnet of the network interface.
-|Tool|Command|
-|||
-|CLI|[az network nic update](/cli/azure/network/nic)|
-|PowerShell|[Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface)|
+```azurepowershell
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Change the subnet in the IP configuration. Replace the index with the position of your subnet in the VNet. The first subnet listed in your VNet is index 0, the next is 1, and so on. ##
+$IP = @{
+ Name = 'ipv4config'
+ Subnet = $vnet.Subnets[1]
+}
+$nic | Set-AzNetworkInterfaceIpConfig @IP
+
+## Apply the new configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic ip-config update](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-update) to change the subnet of the network interface.
+
+```azurecli
+az network nic ip-config update \
+ --name ipv4config \
+ --nic-name myNIC \
+ --resource-group myResourceGroup \
+ --subnet mySubnet \
+ --vnet-name myVNet
+```
+++
+## Add to or remove from application security groups
+
+You can only use the portal to add a network interface to, or remove it from, an application security group if the network interface is attached to a virtual machine.
+
+You can use PowerShell or the Azure CLI to add a network interface to, or remove it from, an application security group, whether or not the network interface is attached to a virtual machine. Learn more about [Application security groups](./network-security-groups-overview.md#application-security-groups) and how to [create an application security group](manage-network-security-group.md).
+
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. Select the virtual machine you want to view or change settings for from the list.
+
+4. In **Settings**, select **Networking**.
+
+5. Select the **Application security groups** tab.
+
+6. Select **Configure the application security groups**.
+
+ :::image type="content" source="./media/virtual-network-network-interface/application-security-group.png" alt-text="Screenshot of application security group configuration.":::
+
+7. Select the application security groups that you want to add the network interface to, or unselect the application security groups that you want to remove the network interface from.
+
+8. Select **Save**.
+
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to set the application security group.
+
+```azurepowershell
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the subnet configuration into a variable. ##
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name mySubnet -VirtualNetwork $vnet
+
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Place the application security group configuration into a variable. ##
+$asg = Get-AzApplicationSecurityGroup -Name myASG -ResourceGroupName myResourceGroup
+
+## Add the application security group to the IP configuration. ##
+$IP = @{
+ Name = 'ipv4config'
+ Subnet = $subnet
+ ApplicationSecurityGroup = $asg
+}
+$nic | Set-AzNetworkInterfaceIpConfig @IP
+
+## Save the configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
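The previous example adds the network interface to the application security group. One possible way to remove it again is to clear the `ApplicationSecurityGroups` collection on the IP configuration and save the change. A hedged sketch, assuming the same example names and a single IP configuration:

```azurepowershell
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup

## Clear the application security groups on the first IP configuration. ##
$nic.IpConfigurations[0].ApplicationSecurityGroups = $null

## Save the configuration to the network interface. ##
$nic | Set-AzNetworkInterface
```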
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic ip-config update](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-update) to set the application security group.
+
+```azurecli
+az network nic ip-config update \
+ --name ipv4config \
+ --nic-name myNIC \
+ --resource-group myResourceGroup \
+ --application-security-groups myASG
+```
+++
+Only network interfaces that exist in the same virtual network can be added to the same application security group. The application security group must exist in the same location as the network interface.
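If adding a network interface to an application security group fails, a quick sanity check is to compare the locations of the two resources. A minimal sketch, assuming the example names `myNIC` and `myASG`:

```azurepowershell
## Get the network interface and the application security group. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$asg = Get-AzApplicationSecurityGroup -Name myASG -ResourceGroupName myResourceGroup

## Both values must match for the network interface to join the application security group. ##
$nic.Location
$asg.Location
```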
## Associate or dissociate a network security group
-1. In the search box at the top of the portal, enter *network interfaces* in the search box. When **network interfaces** appear in the search results, select it.
-2. Select the network interface in the list that you want to associate a network security group to, or dissociate a network security group from.
-3. Select **Network security group** under **SETTINGS**.
-4. Select **Edit**.
-5. Select **Network security group** and then select the network security group you want to associate to the network interface, or select **None**, to dissociate a network security group.
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Settings**, select **Network security group**.
+
+5. Select the network security group you want to associate with the network interface from the drop-down list, or select **None** to dissociate a network security group.
+ 6. Select **Save**.
-**Commands**
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to set the network security group for the network interface.
+
+```azurepowershell
+## Place the network interface configuration into a variable. ##
+$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+
+## Place the network security group configuration into a variable. ##
+$nsg = Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
+
+## Add the NSG to the NIC configuration. ##
+$nic.NetworkSecurityGroup = $nsg
+
+## Save the configuration to the network interface. ##
+$nic | Set-AzNetworkInterface
+```
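To dissociate the network security group instead, one approach is to clear the `NetworkSecurityGroup` property and save the change. A minimal sketch, assuming the same example names:

```azurepowershell
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup

## Clear the network security group from the configuration. ##
$nic.NetworkSecurityGroup = $null

## Save the configuration to the network interface. ##
$nic | Set-AzNetworkInterface
```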
-- Azure CLI: [az network nic update](/cli/azure/network/nic#az-network-nic-update)-- PowerShell: [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface)
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to set the network security group for the network interface.
+
+```azurecli
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --network-security-group myNSG
+```
++ ## Delete a network interface
-You can delete a network interface as long as it's not attached to a virtual machine. If a network interface is attached to a virtual machine, you must first place the virtual machine in the stopped (deallocated) state, then detach the network interface from the virtual machine. To detach a network interface from a virtual machine, complete the steps in [Detach a network interface from a virtual machine](virtual-network-network-interface-vm.md#remove-a-network-interface-from-a-vm). You cannot detach a network interface from a virtual machine if it's the only network interface attached to the virtual machine however. A virtual machine must always have at least one network interface attached to it. Deleting a virtual machine detaches all network interfaces attached to it, but does not delete the network interfaces.
+You can delete a network interface if it's not attached to a virtual machine. If a network interface is attached to a virtual machine, you must first place the virtual machine in the stopped (deallocated) state, then detach the network interface from the virtual machine.
+
+To detach a network interface from a virtual machine, complete the steps in [Detach a network interface from a virtual machine](virtual-network-network-interface-vm.md#remove-a-network-interface-from-a-vm). However, you can't detach a network interface from a virtual machine if it's the only network interface attached to the virtual machine. A virtual machine must always have at least one network interface attached to it.
+
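As an illustration of these prerequisite steps, the following sketch deallocates a virtual machine and detaches a network interface before deletion. The virtual machine name `myVM` is a placeholder, and the sketch assumes the virtual machine has more than one network interface attached:

```azurepowershell
## Deallocate the virtual machine. ##
Stop-AzVM -Name myVM -ResourceGroupName myResourceGroup -Force

## Get the virtual machine and the network interface to detach. ##
$vm = Get-AzVM -Name myVM -ResourceGroupName myResourceGroup
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup

## Remove the network interface from the virtual machine configuration, then apply the change. ##
Remove-AzVMNetworkInterface -VM $vm -NetworkInterfaceIDs $nic.Id
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm
```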
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Overview**, select **Delete**.
+
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Remove-AzNetworkInterface](/powershell/module/az.network/remove-aznetworkinterface) to delete the network interface.
-1. In the box that contains the text *Search resources* at the top of the Azure portal, type *network interfaces*. When **network interfaces** appear in the search results, select it.
-2. Select the network interface in the list that you want to delete.
-3. Under **Overview** Select **Delete**.
-4. Select **Yes** to confirm deletion of the network interface.
+```azurepowershell
+Remove-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+```
-When you delete a network interface, any MAC or IP addresses assigned to it are released.
+# [**Azure CLI**](#tab/network-interface-cli)
-**Commands**
+Use [az network nic delete](/cli/azure/network/nic#az-network-nic-delete) to delete the network interface.
-|Tool|Command|
-|||
-|CLI|[az network nic delete](/cli/azure/network/nic)|
-|PowerShell|[Remove-AzNetworkInterface](/powershell/module/az.network/remove-aznetworkinterface)|
+```azurecli
+az network nic delete --name myNIC --resource-group myResourceGroup
+```
++ ## Resolve connectivity issues
-If you are unable to communicate to or from a virtual machine, network security group security rules or routes effective for a network interface, may be causing the problem. You have the following options to help resolve the issue:
+If you're experiencing communication problems with a virtual machine, network security group rules or effective routes may be causing the problem. You have the following options to help resolve the issue:
### View effective security rules

The effective security rules for each network interface attached to a virtual machine are a combination of the rules you've created in a network security group and [default security rules](./network-security-groups-overview.md#default-security-rules). Understanding the effective security rules for a network interface may help you determine why you're unable to communicate to or from a virtual machine. You can view the effective rules for any network interface that is attached to a running virtual machine.
-1. In the search box at the top of the portal, enter the name of a virtual machine you want to view effective security rules for. If you don't know the name of a virtual machine, enter *virtual machines* in the search box. When **Virtual machines** appear in the search results, select it, and then select a virtual machine from the list.
-2. Select **Networking** under **SETTINGS**.
-3. Select the name of a network interface.
-4. Select **Effective security rules** under **SUPPORT + TROUBLESHOOTING**.
-5. Review the list of effective security rules to determine if the correct rules exist for your required inbound and outbound communication. Learn more about what you see in the list in [Network security group overview](./network-security-groups-overview.md).
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+3. Select the virtual machine you want to view or change settings for from the list.
+
+4. In **Settings**, select **Networking**.
+
+5. Select the name of the network interface.
+
+6. Select **Effective security rules**.
+
+7. Review the list of effective security rules to determine if the correct rules exist for your required inbound and outbound communication. For more information about security rules, see [Network security group overview](./network-security-groups-overview.md).
+
+# [**PowerShell**](#tab/network-interface-powershell)
+
+Use [Get-AzEffectiveNetworkSecurityGroup](/powershell/module/az.network/get-azeffectivenetworksecuritygroup) to view the list of effective security rules.
-The IP flow verify feature of Azure Network Watcher can also help you determine if security rules are preventing communication between a virtual machine and an endpoint. To learn more, see [IP flow verify](../network-watcher/diagnose-vm-network-traffic-filtering-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+```azurepowershell
+Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName myNIC -ResourceGroupName myResourceGroup
+```
-**Commands**
+# [**Azure CLI**](#tab/network-interface-cli)
-- Azure CLI: [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg)-- PowerShell: [Get-AzEffectiveNetworkSecurityGroup](/powershell/module/az.network/get-azeffectivenetworksecuritygroup)
+Use [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) to view the list of effective security rules.
+
+```azurecli
+az network nic list-effective-nsg --name myNIC --resource-group myResourceGroup
+```
++ ### View effective routes
-The effective routes for the network interfaces attached to a virtual machine are a combination of default routes, any routes you've created, and any routes propagated from on-premises networks via BGP through an Azure virtual network gateway. Understanding the effective routes for a network interface may help you determine why you're unable to communicate to or from a virtual machine. You can view the effective routes for any network interface that is attached to a running virtual machine.
+The effective routes for the network interface or interfaces attached to a virtual machine are a combination of:
-1. In the search box at the top of the portal, enter the name of a virtual machine you want to view effective security rules for. If you don't know the name of a virtual machine, enter *virtual machines* in the search box. When **Virtual machines** appear in the search results, select it, and then select a virtual machine from the list.
-2. Select **Networking** under **SETTINGS**.
-3. Select the name of a network interface.
-4. Select **Effective routes** under **SUPPORT + TROUBLESHOOTING**.
-5. Review the list of effective routes to determine if the correct routes exist for your required inbound and outbound communication. Learn more about what you see in the list in [Routing overview](virtual-networks-udr-overview.md).
+- Default routes
-The next hop feature of Azure Network Watcher can also help you determine if routes are preventing communication between a virtual machine and an endpoint. To learn more, see [Next hop](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+- User-created routes
+
+- Routes propagated from on-premises networks via BGP through an Azure virtual network gateway.
+
+Understanding the effective routes for a network interface may help you determine why you're unable to communicate to or from a virtual machine. You can view the effective routes for any network interface that is attached to a running virtual machine.
+
+# [**Portal**](#tab/network-interface-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+3. Select the network interface you want to view or change settings for from the list.
+
+4. In **Help**, select **Effective routes**.
+
+5. Review the list of effective routes to determine if the correct routes exist for your required inbound and outbound communication. For more information about routing, see [Routing overview](virtual-networks-udr-overview.md).
-**Commands**
+# [**PowerShell**](#tab/network-interface-powershell)
-- Azure CLI: [az network nic show-effective-route-table](/cli/azure/network/nic#az-network-nic-show-effective-route-table)-- PowerShell: [Get-AzEffectiveRouteTable](/powershell/module/az.network/get-azeffectiveroutetable)
+Use [Get-AzEffectiveRouteTable](/powershell/module/az.network/get-azeffectiveroutetable) to view a list of the effective routes.
+
+```azurepowershell
+Get-AzEffectiveRouteTable -NetworkInterfaceName myNIC -ResourceGroupName myResourceGroup
+```
+
+# [**Azure CLI**](#tab/network-interface-cli)
+
+Use [az network nic show-effective-route-table](/cli/azure/network/nic#az-network-nic-show-effective-route-table) to view a list of the effective routes.
+
+```azurecli
+az network nic show-effective-route-table --name myNIC --resource-group myResourceGroup
+```
+++
+The next hop feature of Azure Network Watcher can also help you determine if routes are preventing communication between a virtual machine and an endpoint. To learn more, see [Next hop](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
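As an illustration, the following sketch queries the next hop for a virtual machine. The virtual machine name, region, and the source and destination IP addresses are placeholders, and the sketch assumes a Network Watcher instance exists in the virtual machine's region:

```azurepowershell
## Get the Network Watcher instance for the virtual machine's region. ##
$nw = Get-AzNetworkWatcher -Location eastus

## Get the virtual machine to test from. ##
$vm = Get-AzVM -Name myVM -ResourceGroupName myResourceGroup

## Return the next hop type and IP address for traffic to the destination. ##
Get-AzNetworkWatcherNextHop -NetworkWatcher $nw `
    -TargetVirtualMachineId $vm.Id `
    -SourceIPAddress 10.0.0.4 `
    -DestinationIPAddress 10.1.0.4
```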
## Permissions
To perform tasks on network interfaces, your account must be assigned to the [ne
| Microsoft.Network/networkInterfaces/write | Create or update network interface | | Microsoft.Network/networkInterfaces/join/action | Attach a network interface to a virtual machine | | Microsoft.Network/networkInterfaces/delete | Delete network interface |
-| Microsoft.Network/networkInterfaces/joinViaPrivateIp/action | Join a resource to a network interface via a servi... |
+| Microsoft.Network/networkInterfaces/joinViaPrivateIp/action | Join a resource to a network interface via private IP |
| Microsoft.Network/networkInterfaces/effectiveRouteTable/action | Get network interface effective route table | | Microsoft.Network/networkInterfaces/effectiveNetworkSecurityGroups/action | Get network interface effective security groups | | Microsoft.Network/networkInterfaces/loadBalancers/read | Get network interface load balancers |
To perform tasks on network interfaces, your account must be assigned to the [ne
## Next steps - Create a VM with multiple NICs using the [Azure CLI](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [PowerShell](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json)+ - Create a single NIC VM with multiple IPv4 addresses using the [Azure CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md) or [PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md)+ - Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer) using the [Azure CLI](../load-balancer/load-balancer-ipv6-internet-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- Create a network interface using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager template](template-samples.md)-- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks+
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
You'll only be able to update your virtual hub router if all the resources (ga
If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup. >[!NOTE]
-> The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **Reader** role to the Azure subscription, then Azure portal will display to the user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
+> The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 05/26/2022 Last updated : 09/14/2022
Last updated 05/26/2022
Azure VPN gateways provide cross-premises connectivity between customer premises and Azure. This tutorial shows you how to use the Azure portal to create a site-to-site VPN gateway connection from your on-premises network to the VNet. You can also create this configuration using [Azure PowerShell](vpn-gateway-create-site-to-site-rm-powershell.md) or [Azure CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md). In this tutorial, you learn how to:
If you're not going to continue to use this application or go to the next tutori
these resources using the following steps: 1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results.- 1. Select **Delete resource group**.- 1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. ## Next steps
vpn-gateway Vpn Gateway Howto Vnet Vnet Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md
Title: 'Configure a VNet-to-VNet VPN gateway connection: Azure portal' description: Learn how to create a VPN gateway connection between VNets.- - Previously updated : 09/23/2021 Last updated : 09/14/2022 # Configure a VNet-to-VNet VPN gateway connection by using the Azure portal
-This article helps you connect virtual networks (VNets) by using the VNet-to-VNet connection type using the Azure portal. The virtual networks can be in different regions and from different subscriptions. When you connect VNets from different subscriptions, the subscriptions don't need to be associated with the same Active Directory tenant. This type of configuration creates a connection between two virtual network gateways. This article does not apply to VNet peering. For VNet peering, see the [Virtual Network peering](../virtual-network/virtual-network-peering-overview.md) article.
+This article helps you connect virtual networks (VNets) by using the VNet-to-VNet connection type using the Azure portal. The virtual networks can be in different regions and from different subscriptions. When you connect VNets from different subscriptions, the subscriptions don't need to be associated with the same Active Directory tenant. This type of configuration creates a connection between two virtual network gateways. This article doesn't apply to VNet peering. For VNet peering, see the [Virtual Network peering](../virtual-network/virtual-network-peering-overview.md) article.
-You can create this configuration using various tools, depending on the deployment model of your VNet. The steps in this article apply to the Azure [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and the Azure portal. To switch to a different deployment model or deployment method article, use the dropdown.
+You can create this configuration using various tools, depending on the deployment model of your VNet. The steps in this article apply to the Azure [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and the Azure portal. To switch to a different deployment model or deployment method article, use the dropdown.
> [!div class="op_single_selector"] > * [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)
The following sections describe the different ways to connect virtual networks.
### VNet-to-VNet
-Configuring a VNet-to-VNet connection is a simple way to connect VNets. When you connect a virtual network to another virtual network with a VNet-to-VNet connection type (VNet2VNet), it's similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connection types use a VPN gateway to provide a secure tunnel with IPsec/IKE and function the same way when communicating. However, they differ in the way the local network gateway is configured.
+Configuring a VNet-to-VNet connection is a simple way to connect VNets. When you connect a virtual network to another virtual network with a VNet-to-VNet connection type (VNet2VNet), it's similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connection types use a VPN gateway to provide a secure tunnel with IPsec/IKE and function the same way when communicating. However, they differ in the way the local network gateway is configured.
+
+When you create a VNet-to-VNet connection, the local network gateway address space is automatically created and populated. If you update the address space for one VNet, the other VNet automatically routes to the updated address space. It's typically faster and easier to create a VNet-to-VNet connection than a Site-to-Site connection. However, the local network gateway isn't visible in this configuration.
-When you create a VNet-to-VNet connection, the local network gateway address space is automatically created and populated. If you update the address space for one VNet, the other VNet automatically routes to the updated address space. It's typically faster and easier to create a VNet-to-VNet connection than a Site-to-Site connection. However, the local network gateway is not visible in this configuration.
* If you know you want to specify additional address spaces for the local network gateway, or plan to add additional connections later and need to adjust the local network gateway, you should create the configuration using the Site-to-Site steps.
-* The VNet-to-VNet connection does not include Point-to-Site client pool address space. If you need transitive routing for Point-to-Site clients, then create a Site-to-Site connection between the virtual network gateways, or use VNet peering.
+* The VNet-to-VNet connection doesn't include Point-to-Site client pool address space. If you need transitive routing for Point-to-Site clients, then create a Site-to-Site connection between the virtual network gateways, or use VNet peering.
### Site-to-Site (IPsec)
If you're working with a complicated network configuration, you may prefer to co
### VNet peering You can also connect your VNets by using VNet peering.+ * VNet peering doesn't use a VPN gateway and has different constraints. * [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). * For more information about VNet peering, see the [Virtual Network peering](../virtual-network/virtual-network-peering-overview.md) article.
After you've configured VNet1, create VNet4 and the VNet4 gateway by repeating t
## Configure the VNet1 gateway connection
-When the virtual network gateways for both VNet1 and VNet4 have completed, you can create your virtual network gateway connections. In this section, you create a connection from VNet1 to VNet4. These steps work only for VNets in the same subscription. If your VNets are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connection. However, if your VNets are in different resource groups in the same subscription, you can connect them by using the portal.
+When the virtual network gateways for both VNet1 and VNet4 have completed, you can create your virtual network gateway connections. In this section, you create a connection from VNet1 to VNet4. VNets in the same subscription can be connected using the portal, even if they are in different resource groups. However, if your VNets are in different subscriptions, you must use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) to make the connections.
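For reference, a minimal PowerShell sketch of creating one direction of a cross-subscription VNet-to-VNet connection might look like the following. The subscription IDs, resource group names, the gateway name for VNet4, the location, and the shared key are placeholders; see the linked PowerShell article for the authoritative steps.

```azurepowershell
## Get the gateway in the first subscription. ##
Select-AzSubscription -SubscriptionId '<subscription-1-id>'
$gw1 = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName RG1

## Get the gateway in the second subscription. ##
Select-AzSubscription -SubscriptionId '<subscription-2-id>'
$gw4 = Get-AzVirtualNetworkGateway -Name VNet4GW -ResourceGroupName RG4

## Create the connection from VNet1 to VNet4 in the first subscription. ##
Select-AzSubscription -SubscriptionId '<subscription-1-id>'
New-AzVirtualNetworkGatewayConnection -Name VNet1toVNet4 -ResourceGroupName RG1 `
    -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw4 `
    -Location eastus -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
```

A matching connection must also be created in the opposite direction (VNet4 to VNet1) with the same shared key.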
-1. In the Azure portal, select **All resources**, enter *virtual network gateway* in the search box, and then navigate to the virtual network gateway for your VNet. For example, **VNet1GW**. Select the gateway to open the **Virtual network gateway** page.
-1. On the gateway page, go to **Settings ->Connections**. Then, select **+Add**.
+1. In the portal, go to your virtual network gateway. For example, **VNet1GW**.
+1. On the virtual network gateway page, go to **Connections**. Select **+Add**.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections.png" alt-text="Screenshot showing the connections page." border="false":::
-1. The **Add connection** page opens.
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png" alt-text="Screenshot showing the connections page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/connections-add.png":::
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet1-vnet4.png" alt-text="Screenshot showing the Add connection page.":::
+1. On the **Add connection** page, fill in the connection values.
- On the **Add connection** page, fill in the values for your connection:
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/add-connection.png" alt-text="Screenshot showing the Add Connection page." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/add-connection.png":::
* **Name**: Enter a name for your connection. For example, *VNet1toVNet4*.
When the virtual network gateways for both VNet1 and VNet4 have completed, you c
* **Second virtual network gateway**: This field is the virtual network gateway of the VNet that you want to create a connection to. Select **Choose another virtual network gateway** to open the **Choose virtual network gateway** page.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/choose.png" alt-text="Screenshot showing Choose a virtual network gateway page with another gateway selected.":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/choose-gateway.png" alt-text="Screenshot showing Choose a virtual network gateway page with another gateway selected." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/choose-gateway.png":::
 * View the virtual network gateways that are listed on this page. Notice that only virtual network gateways that are in your subscription are listed. If you want to connect to a virtual network gateway that isn't in your subscription, use [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md).
Next, create a connection from VNet4 to VNet1. In the portal, locate the virtual
1. Locate the virtual network gateway in the Azure portal. 1. On the **Virtual network gateway** page, select **Connections** to view the **Connections** page for the virtual network gateway. After the connection is established, you'll see the **Status** values change to **Connected**.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/view-connections.png" alt-text="Screenshot showing the Connections page to verify the connections." border="false":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/view-connections.png" alt-text="Screenshot showing the Connections page to verify the connections." border="false" lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/view-connections.png":::
1. Under the **Name** column, select one of the connections to view more information. When data begins flowing, you'll see values for **Data in** and **Data out**.
- :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png" alt-text="Screenshot shows a resource group with values for Data in and Data out" border="false":::
+ :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png" alt-text="Screenshot shows a resource group with values for Data in and Data out." border="false" lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/status.png":::
## Add additional connections
If you want to add additional connections, navigate to the virtual network gatew
## VNet-to-VNet FAQ
-View the FAQ details for additional information about VNet-to-VNet connections.
-
+See the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#V2VMulti) for VNet-to-VNet frequently asked questions.
## Next steps