Updates from: 09/23/2022 01:08:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Now that you've prepared your environment and installed a connector, you're read
| **Name** | The name of the application that will appear on My Apps and in the Azure portal. | | **Internal URL** | The URL for accessing the application from inside your private network. You can provide a specific path on the backend server to publish, while the rest of the server is unpublished. In this way, you can publish different sites on the same server as different apps, and give each one its own name and access rules.<br><br>If you publish a path, make sure that it includes all the necessary images, scripts, and style sheets for your application. For example, if your app is at `https://yourapp/app` and uses images located at `https://yourapp/media`, then you should publish `https://yourapp/` as the path. This internal URL doesn't have to be the landing page your users see. For more information, see [Set a custom home page for published apps](application-proxy-configure-custom-home-page.md). | | **External URL** | The address for users to access the app from outside your network. If you don't want to use the default Application Proxy domain, read about [custom domains in Azure AD Application Proxy](./application-proxy-configure-custom-domain.md). |
- | **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Cloud Application Security.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
+ | **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Defender for Cloud Apps.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
| **Connector Group** | Connectors process the remote access to your application, and connector groups help you organize connectors and apps by region, network, or purpose. If you don't have any connector groups created yet, your app is assigned to **Default**.<br><br>If your application uses WebSockets to connect, all connectors in the group must be version 1.5.612.0 or later. | 6. If necessary, configure **Additional settings**. For most applications, you should keep these settings in their default states.
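The published-path guidance above (publish a parent path that covers all of the app's asset paths) can be sketched as a quick check. This is a minimal illustration, not part of the product; the URLs mirror the example in the table and the function name is made up:

```python
from urllib.parse import urlparse

def path_covers(published_url: str, asset_url: str) -> bool:
    """Return True if the published internal URL's path covers the asset's path."""
    pub, asset = urlparse(published_url), urlparse(asset_url)
    if (pub.scheme, pub.netloc) != (asset.scheme, asset.netloc):
        return False
    pub_path = pub.path if pub.path.endswith("/") else pub.path + "/"
    return asset.path.startswith(pub_path) or asset.path == pub.path.rstrip("/")

# Publishing only /app misses images served from /media:
print(path_covers("https://yourapp/app", "https://yourapp/media/logo.png"))  # False
# Publishing the root path covers both:
print(path_covers("https://yourapp/", "https://yourapp/media/logo.png"))     # True
```

This makes the doc's advice concrete: when assets live outside the published path, publish the common parent instead.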
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in Microsoft Authenticator notifications - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
description: Learn how to use additional context in MFA notifications Previously updated : 09/15/2022 Last updated : 09/22/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Authenticator notifications - Authentication methods policy
+# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication methods policy
This topic covers how to improve the security of user sign-in by adding the application name and geographic location of the sign-in to Microsoft Authenticator passwordless and push notifications.
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Title: Use number matching in multifactor authentication (MFA) notifications - Azure Active Directory
+ Title: Use number matching in multifactor authentication (MFA) notifications (Preview) - Azure Active Directory
description: Learn how to use number matching in MFA notifications Previously updated : 09/15/2022 Last updated : 09/22/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy
+# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication methods policy
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
If there's a firewall between your servers and Azure AD, configure the following
|--|--| |&#42;.msappproxy.us</br>&#42;.servicebus.usgovcloudapi.net|The agent uses these URLs to communicate with the Azure AD cloud service. | |`mscrl.microsoft.us:80` </br>`crl.microsoft.us:80` </br>`ocsp.msocsp.us:80` </br>`www.microsoft.us:80`| The agent uses these URLs to verify certificates.|
- |login.windows.us </br>secure.aadcdn.microsoftonline-p.com </br>&#42;.microsoftonline.us </br>&#42;.microsoftonline-p.us </br>&#42;.msauth.net </br>&#42;.msauthimages.net </br>&#42;.msecnd.net</br>&#42;.msftauth.net </br>&#42;.msftauthimages.net</br>&#42;.phonefactor.net </br>enterpriseregistration.windows.net</br>management.azure.com </br>policykeyservice.dc.ad.msft.net</br>ctldl.windowsupdate.us:80| The agent uses these URLs during the registration process.
+ |login.windows.us </br>secure.aadcdn.microsoftonline-p.com </br>&#42;.microsoftonline.us </br>&#42;.microsoftonline-p.us </br>&#42;.msauth.net </br>&#42;.msauthimages.net </br>&#42;.msecnd.net</br>&#42;.msftauth.net </br>&#42;.msftauthimages.net</br>&#42;.phonefactor.net </br>enterpriseregistration.windows.net</br>management.azure.com </br>policykeyservice.dc.ad.msft.net</br>ctldl.windowsupdate.us:80 </br>aadcdn.msftauthimages.us </br>*.microsoft.us </br>msauthimages.us </br>mfstauthimages.us| The agent uses these URLs during the registration process.
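The wildcard entries in the table above can be matched against outbound hostnames with simple glob matching. A minimal sketch, using a few endpoints from the table; the helper name and sample hostnames are illustrative (note that `fnmatch`'s `*` also crosses dots, which is acceptable for a rough allowlist check):

```python
from fnmatch import fnmatch

# A few of the required endpoints from the table above (wildcards allowed).
ALLOWED = ["*.msappproxy.us", "*.servicebus.usgovcloudapi.net", "login.windows.us"]

def is_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any allowlist pattern."""
    return any(fnmatch(hostname, pattern) for pattern in ALLOWED)

print(is_allowed("contoso.msappproxy.us"))  # True
print(is_allowed("example.com"))            # False
```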
The following are known limitations:
### Scoping filter When using OU scoping filter-- You can only sync up to 59 separate OUs for a given configuration.
+- You can only sync up to 59 separate OUs or Security Groups for a given configuration.
- Nested OUs are supported (that is, you **can** sync an OU that has 130 nested OUs, but you **cannot** sync 60 separate OUs in the same configuration). ### Password Hash Sync
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
Security is an important concept when registering an application in Azure Active Directory (Azure AD) and is a critical part of its business use in the organization. Any misconfiguration of an application can result in downtime or compromise. Depending on the permissions added to an application, there can be organization-wide effects.
-Because secure applications are essential to the organization, any downtime to them because of security issues can affect the business or some critical service that the business depends upon. So, it's important to allocate time and resources to ensure applications stay in a healthy and secure state always. Conduct a periodical security and health assessment of applications much like a Security Threat Model assessment for code. For a broader perspective on security for organizations, see the [security development lifecycle](https://www.microsoft.com/securityengineering/sdl) (SDL).
+Because secure applications are essential to the organization, any downtime to them because of security issues can affect the business or some critical service that the business depends upon. So, it's important to allocate time and resources to ensure applications always stay in a healthy and secure state. Conduct a periodic security and health assessment of applications, much like a Security Threat Model assessment for code. For a broader perspective on security for organizations, see the [security development lifecycle](https://www.microsoft.com/securityengineering/sdl) (SDL).
This article describes security best practices for the following application properties:
It's important to keep Redirect URIs of your application up to date. Under **Aut
Consider the following guidance for redirect URIs: -- Maintain ownership of all URIs. A lapse in the ownership of one of the redirect URIs can lead to an application compromise.-- Make sure that all DNS records are updated and monitored periodically for changes.
+- Maintain ownership of all URIs. A lapse in the ownership of one of the redirect URIs can lead to application compromise.
+- Make sure all DNS records are updated and monitored periodically for changes.
- Don't use wildcard reply URLs or insecure URI schemes such as http or URN. - Keep the list small. Trim any unnecessary URIs. If possible, update URLs from http to https.
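The redirect URI rules above (no wildcards, no http or URN schemes) can be expressed as a small validation helper. This is a sketch with an illustrative function name, not an Azure AD API:

```python
from urllib.parse import urlparse

def redirect_uri_issues(uri: str) -> list:
    """Return a list of problems with a redirect URI, per the guidance above."""
    issues = []
    parsed = urlparse(uri)
    if parsed.scheme in ("http", "urn"):
        issues.append(f"insecure scheme: {parsed.scheme}")
    if "*" in uri:
        issues.append("wildcard reply URLs are not allowed")
    return issues

print(redirect_uri_issues("http://contoso.com/callback"))     # ['insecure scheme: http']
print(redirect_uri_issues("https://*.contoso.com/callback"))  # ['wildcard reply URLs are not allowed']
print(redirect_uri_issues("https://contoso.com/callback"))    # []
```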
Certificates and secrets, also known as credentials, are a vital part of an appl
Consider the following guidance related to certificates and secrets: - Always use [certificate credentials](./active-directory-certificate-credentials.md) whenever possible and don't use password credentials, also known as *secrets*. While it's convenient to use password secrets as a credential, when possible use x509 certificates as the only credential type for getting tokens for an application.-- Use Key Vault with [Managed identities](../managed-identities-azure-resources/overview.md) to manage credentials for an application.
+- Use Key Vault with [managed identities](../managed-identities-azure-resources/overview.md) to manage credentials for an application.
- If an application is used only as a Public Client App (allows users to sign in using a public endpoint), make sure that there are no credentials specified on the application object.-- Review the credentials used in applications for freshness of use and their expiration. An unused credential on an application can result in security breach. Rollover credentials frequently and don't share credentials across applications. Don't have many credentials on one application.
+- Review the credentials used in applications for freshness of use and their expiration. An unused credential on an application can result in a security breach. Rollover credentials frequently and don't share credentials across applications. Don't have many credentials on one application.
- Monitor your production pipelines to prevent credentials of any kind from being committed into code repositories. - [Credential Scanner](../../security/develop/security-code-analysis-overview.md#credential-scanner) is a static analysis tool that can be used to detect credentials (and other sensitive content) in source code and build output. ## Application ID URI
-The **Application ID URI** property of the application specifies the globally unique URI used to identify the web API. It's the prefix for scopes and in access tokens, it's also the value of the audience claim and it must use a verified customer owned domain. For multi-tenant applications, the value must also be globally unique. Also referred to as an identifier URI. Under **Expose an API** for the application in the Azure portal, the **Application ID URI** property can be defined.
+The **Application ID URI** property of the application specifies the globally unique URI used to identify the web API. It's the prefix for scopes; in access tokens, it's also the value of the audience claim, and it must use a verified, customer-owned domain. For multi-tenant applications, the value must also be globally unique. It's also referred to as an identifier URI. Under **Expose an API** for the application in the Azure portal, the **Application ID URI** property can be defined.
:::image type="content" source="./media/active-directory-application-registration-best-practices/app-id-uri.png" alt-text="Screenshot that shows where the Application I D U R I is located.":::
Consider the following guidance related to defining the Application ID URI:
- The api or https URI schemes are recommended. Set the property in the supported formats to avoid URI collisions in your organization. Don't use wildcards. - Use a verified domain in Line of Business (LoB) applications. - Keep an inventory of the URIs in your organization to help maintain security.-- Use the Application ID URI to expose the WebApi in the organization and don't use the Application ID URI to identify the application, instead use the Application (client) ID property.
+- Use the Application ID URI to expose the WebApi in the organization. Don't use the Application ID URI to identify the application; use the Application (client) ID property instead.
[!INCLUDE [active-directory-identifierUri](../../../includes/active-directory-identifier-uri-patterns.md)]
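The identifier URI guidance above (use the api or https schemes, avoid wildcards, use a verified domain) can be sketched as a check. The verified-domain list and function name here are hypothetical examples, not an Azure AD API:

```python
from urllib.parse import urlparse

VERIFIED_DOMAINS = {"contoso.com"}  # hypothetical verified domains for the tenant

def app_id_uri_ok(uri: str) -> bool:
    """Accept only api:// or https:// identifier URIs, no wildcards; https must use a verified domain."""
    parsed = urlparse(uri)
    if parsed.scheme not in ("api", "https") or "*" in uri:
        return False
    host = parsed.netloc.split(":")[0]
    # api://<guid>-style identifiers have no domain to verify; https URIs must use one.
    return parsed.scheme == "api" or host in VERIFIED_DOMAINS

print(app_id_uri_ok("https://contoso.com/myapi"))  # True
print(app_id_uri_ok("http://contoso.com/myapi"))   # False
```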
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 09/21/2022 Last updated : 09/22/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on September 21st, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on September 22nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) | | Microsoft Teams Audio Conferencing select dial-out | Microsoft_Teams_Audio_Conferencing_select_dial_out | 1c27243e-fb4d-42b1-ae8c-fe25c9616588 | MCOMEETBASIC (9974d6cf-cd24-4ba2-921c-e2aa687da846) | Microsoft Teams Audio Conferencing with dial-out to select geographies (9974d6cf-cd24-4ba2-921c-e2aa687da846) | | Microsoft Teams (Free) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
+| Microsoft Teams Essentials | Teams_Ess | fde42873-30b6-436b-b361-21af5a6b84ae | TeamsEss (f4f2f6de-6830-442b-a433-e92249faebe2) | Microsoft Teams Essentials (f4f2f6de-6830-442b-a433-e92249faebe2) |
| Microsoft Teams Exploratory | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE 
MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653 | | Microsoft Teams Phone Standard | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | Microsoft Teams Phone Standard for DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
This policy applies to all users who are accessing Azure Resource Manager servic
Security defaults users are required to register for and use Azure AD Multi-Factor Authentication **using notifications from the Microsoft Authenticator app**. Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
-Starting in July 2022, anyone with the global administrator role assigned to them will be required to register a phone-based method like call or text as a backup method.
- > [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
- * Azure AD Connect supports all versions of SQL Server from 2012 (with the latest service pack) to SQL Server 2019. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
+ * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019. Refer to the [SQL Server lifecycle article](https://learn.microsoft.com/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
* You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.
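The collation rule above (case-insensitive `\_CI_` collations only, never `\_CS_`) can be verified directly from the collation name. A minimal sketch; the function name is illustrative:

```python
def collation_supported(collation: str) -> bool:
    """Azure AD Connect requires a case-insensitive (_CI_) SQL collation; _CS_ is unsupported."""
    return "_CI_" in collation and "_CS_" not in collation

print(collation_supported("SQL_Latin1_General_CP1_CI_AS"))  # True
print(collation_supported("SQL_Latin1_General_CP1_CS_AS"))  # False
```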
active-directory Reference Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adconnectivitytools.md
Confirm-DnsConnectivity [-Forest] <String> [-DCs] <Array> [-ReturnResultAsPSObje
### DESCRIPTION Runs local Dns connectivity tests.
-In order to configure the Active Directory connector, user must have both name resolutionthe
-for the forest they is attempting to connect to as well as in the domain controllers
+In order to configure the Active Directory connector, the Azure AD Connect server must have name resolution
+both for the forest it's attempting to connect to and for the domain controllers
associated with this forest. ### EXAMPLES
active-directory Crayon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crayon-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Crayon'
+description: Learn how to configure single sign-on between Azure Active Directory and Crayon.
+ Last updated : 09/15/2022
+# Tutorial: Azure AD SSO integration with Crayon
+
+In this tutorial, you'll learn how to integrate Crayon with Azure Active Directory (Azure AD). When you integrate Crayon with Azure AD, you can:
+
+* Control in Azure AD who has access to Crayon.
+* Enable your users to be automatically signed-in to Crayon with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Crayon single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Crayon supports **SP** and **IDP** initiated SSO.
+* Crayon supports **Just In Time** user provisioning.
+
+## Add Crayon from the gallery
+
+To configure the integration of Crayon into Azure AD, you need to add Crayon from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Crayon** in the search box.
+1. Select **Crayon** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Crayon
+
+Configure and test Azure AD SSO with Crayon using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Crayon.
+
+To configure and test Azure AD SSO with Crayon, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Crayon SSO](#configure-crayon-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Crayon test user](#create-crayon-test-user)** - to have a counterpart of B.Simon in Crayon that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Crayon** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://app.crayon.co/auth/sso/<CustomerName>/`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://app.crayon.co/auth/sso/<CustomerName>/acs/`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.crayon.co/login/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Crayon support team](mailto:support@crayon.co) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Crayon application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Crayon application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | jobTitle | user.jobtitle |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Crayon** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
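The claim mapping in the attribute table above can be represented as a simple source-attribute map, which is handy for reviewing a configuration against what Crayon expects. This is an illustrative sketch; the helper name is made up:

```python
# Attributes Crayon expects in the SAML response, per the table above.
EXPECTED_CLAIMS = {
    "email": "user.mail",
    "firstName": "user.givenname",
    "lastName": "user.surname",
    "jobTitle": "user.jobtitle",
}

def missing_claims(configured: dict) -> list:
    """Return expected claim names that are absent or mapped to a different source."""
    return [name for name, source in EXPECTED_CLAIMS.items()
            if configured.get(name) != source]

print(missing_claims({"email": "user.mail", "firstName": "user.givenname"}))
# ['lastName', 'jobTitle']
```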
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Crayon.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Crayon**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Crayon SSO
+
+To configure single sign-on on the **Crayon** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Crayon support team](mailto:support@crayon.co). They use them to configure the SAML SSO connection properly on both sides.
+
+### Create Crayon test user
+
+In this section, a user called B.Simon is created in Crayon. Crayon supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Crayon, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. You're redirected to the Crayon Sign-on URL, where you can initiate the login flow.
+
+* Go to the Crayon Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Crayon application for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Crayon tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the Crayon application for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Crayon you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cytric Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cytric-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Cytric'
+description: Learn how to configure single sign-on between Azure Active Directory and Cytric.
+Last updated: 09/15/2022
+# Tutorial: Azure AD SSO integration with Cytric
+
+In this tutorial, you'll learn how to integrate Cytric with Azure Active Directory (Azure AD). When you integrate Cytric with Azure AD, you can:
+
+* Control in Azure AD who has access to Cytric.
+* Enable your users to be automatically signed-in to Cytric with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cytric single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Cytric supports **SP** initiated SSO.
+
+## Add Cytric from the gallery
+
+To configure the integration of Cytric into Azure AD, you need to add Cytric from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cytric** in the search box.
+1. Select **Cytric** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Cytric
+
+Configure and test Azure AD SSO with Cytric using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cytric.
+
+To configure and test Azure AD SSO with Cytric, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cytric SSO](#configure-cytric-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cytric test user](#create-cytric-test-user)** - to have a counterpart of B.Simon in Cytric that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Cytric** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<domain>.cytric.net/saml2`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<domain>.cytric.net/saml2/sp/acs/post`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<domain>.cytric.net/svc/SAML2/cWS/pre/AUTH?clientId=<Customer>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Cytric support team](mailto:ifao.cgs@amadeus.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Cytric** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
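As a quick illustration of how the three Basic SAML Configuration patterns above expand, here's a small Python sketch. The `contoso` domain and `Contoso` client ID are hypothetical placeholders, not real values; use the actual values obtained from the Cytric support team.

```python
# Illustrative sketch only: expand the Cytric SAML URL patterns for a
# hypothetical tenant. "contoso"/"Contoso" are placeholder assumptions.
def cytric_saml_urls(domain: str, client_id: str) -> dict:
    base = f"https://{domain}.cytric.net"
    return {
        "identifier": f"{base}/saml2",
        "reply_url": f"{base}/saml2/sp/acs/post",
        "sign_on_url": f"{base}/svc/SAML2/cWS/pre/AUTH?clientId={client_id}",
    }

print(cytric_saml_urls("contoso", "Contoso")["sign_on_url"])
```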
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cytric.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cytric**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Cytric SSO
+
+To configure single sign-on on the **Cytric** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Cytric support team](mailto:ifao.cgs@amadeus.com). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create Cytric test user
+
+In this section, you create a user called Britta Simon in Cytric. Work with the [Cytric support team](mailto:ifao.cgs@amadeus.com) to add the users to the Cytric platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. You're redirected to the Cytric Sign-on URL, where you can initiate the login flow.
+
+* Go to the Cytric Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Cytric tile in My Apps, you're redirected to the Cytric Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Cytric you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Valence Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/valence-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Valence Security Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and Valence Security Platform.
+Last updated: 09/12/2022
+# Tutorial: Azure AD SSO integration with Valence Security Platform
+
+In this tutorial, you'll learn how to integrate Valence Security Platform with Azure Active Directory (Azure AD). When you integrate Valence Security Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Valence Security Platform.
+* Enable your users to be automatically signed-in to Valence Security Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Valence Security Platform single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Valence Security Platform supports **IDP** initiated SSO.
+
+## Add Valence Security Platform from the gallery
+
+To configure the integration of Valence Security Platform into Azure AD, you need to add Valence Security Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Valence Security Platform** in the search box.
+1. Select **Valence Security Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about Microsoft 365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
+
+## Configure and test Azure AD SSO for Valence Security Platform
+
+Configure and test Azure AD SSO with Valence Security Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Valence Security Platform.
+
+To configure and test Azure AD SSO with Valence Security Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Valence Security Platform SSO](#configure-valence-security-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Valence Security Platform test user](#create-valence-security-platform-test-user)** - to have a counterpart of B.Simon in Valence Security Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Valence Security Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://app.valencesecurity.com/auth/realms/valence/broker/<CustomerName>/endpoint/clients/oktasamlapp`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://app.valencesecurity.com/auth/realms/valence/broker/<CustomerName>/endpoint/clients/oktasamlapp`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Valence Security Platform support team](mailto:support@valencesecurity.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Valence Security Platform** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
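Note that for Valence Security Platform the Identifier and the Reply URL follow the same pattern; only the `<CustomerName>` segment varies. Here's a minimal Python sketch of that substitution (the `contoso` customer name is a placeholder assumption; use the actual value from the Valence Security Platform support team):

```python
# Illustrative sketch: both the Identifier and the Reply URL use this
# single pattern. "contoso" is a placeholder customer name.
def valence_saml_url(customer_name: str) -> str:
    return (
        "https://app.valencesecurity.com/auth/realms/valence/"
        f"broker/{customer_name}/endpoint/clients/oktasamlapp"
    )

# Identifier and Reply URL are identical for a given customer.
identifier = reply_url = valence_saml_url("contoso")
```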
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Valence Security Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Valence Security Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Valence Security Platform SSO
+
+To configure single sign-on on the **Valence Security Platform** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Valence Security Platform support team](mailto:support@valencesecurity.com). The support team uses this information to configure the SAML SSO connection properly on both sides.
+
+### Create Valence Security Platform test user
+
+In this section, you create a user called Britta Simon in Valence Security Platform. Work with the [Valence Security Platform support team](mailto:support@valencesecurity.com) to add the users to the Valence Security Platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Valence Security Platform for which you set up the SSO.
+
+* You can also use Microsoft My Apps. When you click the Valence Security Platform tile in My Apps, you should be automatically signed in to the Valence Security Platform for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Valence Security Platform you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workday Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md
If you want to delay the UserID or Email writeback so that it happens on or afte
The expression above uses the [DateDiff](../app-provisioning/functions-for-customizing-application-data.md#datediff) function to evaluate the difference between *employeeHireDate* and today's date in UTC obtained using [Now](../app-provisioning/functions-for-customizing-application-data.md#now) function. If *employeeHireDate* is greater than or equal to today's date, then it updates the UserID. Else it returns an empty value and the [IgnoreFlowIfNullOrEmpty](../app-provisioning/functions-for-customizing-application-data.md#ignoreflowifnullorempty) function excludes this attribute from Writeback. > [!IMPORTANT]
-> For the delayed Writeback to work as expected, an operation in on-premises AD or Azure AD must trigger a change to the user just a day before the arrival or on the hire date, so that this user's profile is updated and is considered for Writeback.
+> For the delayed Writeback to work as expected, an operation in on-premises AD or Azure AD must trigger a change to the user just a day before the arrival or on the hire date, so that this user's profile is updated and is considered for Writeback. It must be a change that updates an attribute value on the user profile, where the new attribute value differs from the old attribute value.
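The logic of that expression can be modeled in plain Python. This is only a sketch of the DateDiff/Now/IgnoreFlowIfNullOrEmpty behavior described above, not code that runs in the provisioning service; skipping the attribute flow is represented here by returning `None`.

```python
from datetime import date

def delayed_writeback_value(employee_hire_date, user_id, today):
    # Model of the mapping expression: if employeeHireDate is greater
    # than or equal to today's date, flow the UserID; otherwise return
    # None, modeling IgnoreFlowIfNullOrEmpty excluding the attribute.
    if employee_hire_date >= today:
        return user_id
    return None

# Example: on (or before) the hire date the value flows; after it, it's skipped.
today = date(2022, 9, 15)
print(delayed_writeback_value(date(2022, 9, 15), "jdoe", today))  # jdoe
```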
### Handling phone number with country code and phone number
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
example:
| Property | Type | Description | | -- | -- | -- |
-|`url`| string (url) | url of the logo (optional if image is specified) |
+|`uri`| string (uri) | uri of the logo (optional if image is specified) |
|`description` | string | the description of the logo |
-|`image` | string | the base-64 encoded image (optional if url is specified) |
+|`image` | string | the base-64 encoded image (optional if uri is specified) |
#### displayConsent type
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
If the trust system is ION, once the domain changes are published to ION, the do
Congratulations, you now have bootstrapped the web of trust with your DID!
+## How can I verify that the verification is working?
+
+The portal verifies that the `did-configuration.json` is reachable and correct when you click the **Refresh verification status** button. You should also verify that you can request that URL in a browser, to avoid errors like not using HTTPS, a bad SSL certificate, or the URL not being public. If the `did-configuration.json` file can't be requested anonymously in a browser or via tools such as `curl` without warnings or errors, the portal won't be able to complete the **Refresh verification status** step either.
+
+>[!NOTE]
+> If you are experiencing problems refreshing your verification status, you can troubleshoot by running `curl -Iv https://yourdomain.com/.well-known/did-configuration.json` on a machine running Ubuntu. Windows Subsystem for Linux with Ubuntu works too. If curl fails, refreshing the verification status will not work.
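If curl isn't available, an equivalent anonymous check can be sketched with Python's standard library. The `yourdomain.com` host is a placeholder, and this is a convenience sketch, not the portal's actual validation logic:

```python
import json
import urllib.request

def did_configuration_url(domain: str) -> str:
    # Well-known location the portal requests during verification.
    return f"https://{domain}/.well-known/did-configuration.json"

def check_did_configuration(domain: str, timeout: float = 10.0) -> bool:
    # Anonymous fetch; the file must return 200 and parse as JSON.
    with urllib.request.urlopen(did_configuration_url(domain), timeout=timeout) as resp:
        return resp.status == 200 and bool(json.loads(resp.read().decode("utf-8")))
```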
+## Linked Domain domain made easy for developers
+
+The easiest way for a developer to get a domain to use for linked domain is to use Azure Storage's static website feature. You can't control what the domain name will be, other than that it will contain your storage account name as part of its hostname.
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
The DID document in the `did.json` file needs to be republished if you changed t
## How can I verify that the registration is working?
-The portal verifies that the `did.json` is reachable and correct when you click the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, a bad SSL certificate or the URL not being public. If the `did.json` file cannot be requested anonymously in a browser, without warnings or errors, the portal will not be able to complete the **Refresh registration status** step either.
+The portal verifies that the `did.json` is reachable and correct when you click the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, a bad SSL certificate or the URL not being public. If the `did.json` file cannot be requested anonymously in a browser or via tools such as `curl`, without warnings or errors, the portal will not be able to complete the **Refresh registration status** step either.
+
+>[!NOTE]
+> If you are experiencing problems refreshing your registration status, you can troubleshoot by running `curl -Iv https://yourdomain.com/.well-known/did.json` on a machine running Ubuntu. Windows Subsystem for Linux with Ubuntu works too. If curl fails, refreshing the registration status will not work.
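If curl isn't available, the same anonymous check can be sketched with Python's standard library (the `yourdomain.com` host is a placeholder; this is an illustrative sketch, not the portal's validation logic):

```python
import json
import urllib.request

def did_json_url(domain: str) -> str:
    # Well-known location the portal requests during registration.
    return f"https://{domain}/.well-known/did.json"

def check_did_json(domain: str, timeout: float = 10.0) -> bool:
    # Anonymous fetch; the DID document must return 200 and parse as JSON.
    with urllib.request.urlopen(did_json_url(domain), timeout=timeout) as resp:
        return resp.status == 200 and bool(json.loads(resp.read().decode("utf-8")))
```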
## Next steps
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 08/12/2022 Last updated : 09/08/2022
az k8s-extension create --cluster-type managedClusters \
--configuration-settings "dapr_operator.replicaCount=3" ```
+## Set the outbound proxy for the Dapr extension for Azure Arc on-premises
+
+If you want to use an outbound proxy with the Dapr extension for AKS, you can do so by:
+
+1. Setting the proxy environment variables using the [`dapr.io/env` annotations](https://docs.dapr.io/reference/arguments-annotations-overview/):
+ - `HTTP_PROXY`
+ - `HTTPS_PROXY`
+ - `NO_PROXY`
+1. [Installing the proxy certificate in the sidecar](https://docs.dapr.io/operations/configuration/install-certificates/).
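For instance, the environment variables can be applied through annotations on a deployment's pod template. This abbreviated, hypothetical sketch assumes a proxy at `http://proxy.local:3128`; the app name and proxy address are placeholders:

```yaml
# Abbreviated pod template metadata; app name and proxy address are
# placeholder assumptions, not real values.
metadata:
  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "myapp"
    # Comma-separated environment variables injected into the Dapr sidecar.
    dapr.io/env: "HTTP_PROXY=http://proxy.local:3128,HTTPS_PROXY=http://proxy.local:3128,NO_PROXY=localhost"
```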
+
+## Meet network requirements
+
+The Dapr extension for AKS and Arc for Kubernetes requires outbound URLs on `https://:443` to function. In addition to the `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts, verify you've included the [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
+ ## Troubleshooting extension errors If the extension fails to create or update, you can inspect where the creation of the extension failed by running the `az k8s-extension list` command. For example, if a wrong key is used in the configuration-settings, such as `global.ha=false` instead of `global.ha.enabled=false`:
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
When an Azure VM is in the `Stopped` (deallocated) state, you will not be charge
> [!WARNING] > In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete will remove any deallocate VMs.
+> Once Scale-down Mode is set to Deallocate and a scale-down operation occurs, the deallocated nodes remain registered in the API server and appear in the NotReady state.
This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
[cluster-autoscaler]: cluster-autoscaler.md [ephemeral-os]: cluster-configuration.md#ephemeral-os [state-billing-azure-vm]: ../virtual-machines/states-billing.md
-[spot-node-pool]: spot-node-pool.md
+[spot-node-pool]: spot-node-pool.md
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
By default, any public or private certificates [uploaded to App Service Linux](c
More configuration may be necessary for encrypting your JDBC connection with certificates in the Java Key Store. Refer to the documentation for your chosen JDBC driver. -- [PostgreSQL](https://jdbc.postgresql.org/documentation/head/ssl-client.html)
+- [PostgreSQL](https://jdbc.postgresql.org/documentation/ssl/)
- [SQL Server](/sql/connect/jdbc/connecting-with-ssl-encryption) - [MySQL](https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-using-ssl.html) - [MongoDB](https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/ssl/)
These instructions apply to all database connections. You will need to fill plac
| Database | Driver Class Name | JDBC Driver | ||--||
-| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download.html) |
+| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download/) |
| MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") | | SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
These instructions apply to all database connections. You will need to fill plac
| Database | Driver Class Name | JDBC Driver | ||--||
-| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download.html) |
+| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download/) |
| MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") | | SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
Next, determine if the data source should be available to one application or to
<Resource name="jdbc/dbconnection" type="javax.sql.DataSource"
- url="${dbuser}"
+ url="${connURL}"
driverClassName="<insert your driver class name>"
- username="${dbpassword}"
- password="${connURL}"
+ username="${dbuser}"
+ password="${dbpassword}"
/> </Context> ```
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
With an Azure Firewall, you automatically get everything below configured with t
| Endpoint | |-| |gr-prod-\*.cloudapp.net:443 |
+|gr-prod-\*.azurewebsites.windows.net:443 |
| \*.management.azure.com:443 | | \*.update.microsoft.com:443 | | \*.windowsupdate.microsoft.com:443 |
app-service Overview Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-certificates.md
You can [configure the TLS setting](../configure-ssl-bindings.md#enforce-tls-ver
## Private client certificate
-A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you'll need to upload the client certificate to your app. The following instructions will load certificates to the truststore of the workers that your app is running on. You only need to upload the certificate once to use it with apps that are in the same App Service plan.
+A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you'll need to upload the client certificate to your app. The following instructions will load certificates to the trust store of the workers that your app is running on. You only need to upload the certificate once to use it with apps that are in the same App Service plan.
>[!NOTE]
-> Private client certificates are not supported outside the app. This limits usage in scenarios such as pulling the app container image from a registry using a private certificate and TLS validating through the front-end servers using a private certificate.
+> Private client certificates are only supported from custom code in Windows code apps. Private client certificates are not supported outside the app. This limits usage in scenarios such as pulling the app container image from a registry using a private certificate and TLS validating through the front-end servers using a private certificate.
Follow these steps to upload the certificate (*.cer* file) to your app in your App Service Environment. The *.cer* file can be exported from your certificate. For testing purposes, there's a PowerShell example at the end to generate a temporary self-signed certificate: 1. Go to the app that needs the certificate in the Azure portal 1. Go to **TLS/SSL settings** in the app. Select **Public Key Certificate (.cer)**. Select **Upload Public Key Certificate**. Provide a name. Browse and select your *.cer* file. Select upload. 1. Copy the thumbprint.
-1. Go to **Application Settings**. Create an app setting WEBSITE_LOAD_ROOT_CERTIFICATES with the thumbprint as the value. If you have multiple certificates, you can put them in the same setting separated by commas and no whitespace like
+1. Go to **Configuration** > **Application Settings**. Create an app setting WEBSITE_LOAD_ROOT_CERTIFICATES with the thumbprint as the value. If you have multiple certificates, you can put them in the same setting separated by commas and no whitespace like
84EC242A4EC7957817B8E48913E50953552DAFA6,6A5C65DC9247F762FE17BF8D4906E04FE6B31819
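If you script this setting, the comma-joined value with no whitespace can be assembled from individual thumbprints; a minimal shell sketch using the sample thumbprints above:

```shell
# Join thumbprints with commas and no whitespace, as WEBSITE_LOAD_ROOT_CERTIFICATES expects
thumbprints=(84EC242A4EC7957817B8E48913E50953552DAFA6 6A5C65DC9247F762FE17BF8D4906E04FE6B31819)
value=$(IFS=','; printf '%s' "${thumbprints[*]}")
echo "$value"
```

You could then pass `$value` as the app setting value, for example through `az webapp config appsettings set` (workflow not shown here).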
-The certificate will be available by all the apps in the same app service plan as the app, which configured that setting. If you need it to be available for apps in a different App Service plan, you'll need to repeat the app setting operation in an app in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
+The certificate will be available to all the apps in the same App Service plan as the app that configured the setting. However, all apps that depend on the private CA certificate should have the app setting configured, to avoid timing issues.
+
+If you need it to be available for apps in a different App Service plan, you'll need to repeat the app setting operation for the apps in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
```azurepowershell-interactive
dir Cert:\LocalMachine\Root
```
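The temporary self-signed certificate mentioned above can also be produced with OpenSSL rather than PowerShell (a hedged alternative; the subject name `internal.test` and the file names are placeholders):

```shell
# Create a throwaway key pair and self-signed certificate, then export the DER-encoded .cer
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.pem \
  -subj "/CN=internal.test" -days 1
openssl x509 -in test.pem -outform der -out testcert.cer
```

The resulting *testcert.cer* is the file you upload in the TLS/SSL settings step above.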
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
recommendations: false
# Form Recognizer ID document model
-The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extracts key information from US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state ID, social security card, green card and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
+The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
You can deploy Azure Arc-enabled data services on various types of Kubernetes cl
> [!IMPORTANT]
> * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
> * The minimum supported version of OCP is 4.8.
+> * OCP 4.11 is not supported.
> * If you're using Azure Kubernetes Service, your cluster's worker node virtual machine (VM) size should be at least Standard_D8s_v3 and use Premium Disks.
> * The cluster should not span multiple availability zones.
> * For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
az account set -s <subscription id>
az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
az arcappliance update-infracredentials vmware --kubeconfig kubeconfig
```
-For more details on the commands see [`az arcappliance get-credentials`](/cli/azure/arcappliance/get-credentials#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
+For more details on the commands see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware).
To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. It can be run from anywhere with the `connectedvmware` CLI extension installed.
azure-cache-for-redis Cache Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-network-isolation.md
Azure Private Link provides private connectivity from a virtual network to Azure
### Advantages of Private Link
-* Supported on Basic, Standard, and Premium Azure Cache for Redis instances.
-* By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
-* Once a private endpoint is created, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
-* All external cache dependencies won't affect the VNet's NSG rules.
+- Supported on Basic, Standard, and Premium Azure Cache for Redis instances.
+- By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
+- Once a private endpoint is created, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
+- All external cache dependencies won't affect the VNet's NSG rules.
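The `publicNetworkAccess` flag mentioned above can be set with a PATCH request; a sketch using `az rest`, where the subscription, resource group, and cache name are placeholders and the API version is an assumption:

```azurecli
az rest --method patch \
  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cache/redis/<cache-name>?api-version=2021-06-01" \
  --body '{"properties":{"publicNetworkAccess":"Disabled"}}'
```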
### Limitations of Private Link
-* Network security groups (NSG) are disabled for private endpoints. However, if there are other resources on the subnet, NSG enforcement will apply to those resources.
-* Currently, portal console support, and persistence to firewall storage accounts aren't supported.
-* To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled` and there can only be one private endpoint connection.
+- Network security groups (NSG) are disabled for private endpoints. However, if there are other resources on the subnet, NSG enforcement will apply to those resources.
+- Currently, portal console support and persistence to firewall storage accounts aren't supported.
+- To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled`, and there can only be one private endpoint connection.
> [!NOTE]
-> When adding a private endpoint to a cache instance, all Redis traffic will be moved to the private endpoint because of the DNS.
+> When adding a private endpoint to a cache instance, all Redis traffic is moved to the private endpoint because of the DNS.
> Ensure previous firewall rules are adjusted beforehand.

## Azure Virtual Network injection
VNet is the fundamental building block for your private network in Azure. VNet e
### Advantages of VNet injection
-* When an Azure Cache for Redis instance is configured with a VNet, it's not publicly addressable. It can only be accessed from virtual machines and applications within the VNet.
-* When VNet is combined with restricted NSG policies, it helps reduce the risk of data exfiltration.
-* VNet deployment provides enhanced security and isolation for your Azure Cache for Redis. Subnets, access control policies, and other features further restrict access.
-* Geo-replication is supported.
+- When an Azure Cache for Redis instance is configured with a VNet, it's not publicly addressable. It can only be accessed from virtual machines and applications within the VNet.
+- When VNet is combined with restricted NSG policies, it helps reduce the risk of data exfiltration.
+- VNet deployment provides enhanced security and isolation for your Azure Cache for Redis. Subnets, access control policies, and other features further restrict access.
+- Geo-replication is supported.
### Limitations of VNet injection
-* VNet injected caches are only available for Premium Azure Cache for Redis.
-* When using a VNet injected cache, you must change your VNet to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.
+- VNet injected caches are only available for Premium Azure Cache for Redis.
+- When using a VNet injected cache, you must change your VNet to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.
## Azure Firewall rules
VNet is the fundamental building block for your private network in Azure. VNet e
### Advantages of firewall rules
-* When firewall rules are configured, only client connections from the specified IP address ranges can connect to the cache. Connections from Azure Cache for Redis monitoring systems are always permitted, even if firewall rules are configured. NSG rules that you define are also permitted.
+- When firewall rules are configured, only client connections from the specified IP address ranges can connect to the cache. Connections from Azure Cache for Redis monitoring systems are always permitted, even if firewall rules are configured. NSG rules that you define are also permitted.
### Limitations of firewall rules
-* Firewall rules can be used with VNet injected caches, but not private endpoints currently.
+- Firewall rules can be used with VNet injected caches, but not private endpoints.
+- Firewall rules configuration is available for all Basic, Standard, and Premium tiers.
+- Firewall rules configuration isn't available for the Enterprise or Enterprise Flash tiers.
## Next steps
-* Learn how to configure a [VNet injected cache for a Premium Azure Cache for Redis instance](cache-how-to-premium-vnet.md).
-* Learn how to configure [firewall rules for all Azure Cache for Redis tiers](cache-configure.md#firewall).
-* Learn how to [configure private endpoints for all Azure Cache for Redis tiers](cache-private-link.md).
+- Learn how to configure a [VNet injected cache for a Premium Azure Cache for Redis instance](cache-how-to-premium-vnet.md).
+- Learn how to configure [firewall rules for all Azure Cache for Redis tiers](cache-configure.md#firewall).
+- Learn how to [configure private endpoints for all Azure Cache for Redis tiers](cache-private-link.md).
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
The Azure Monitor Agent supports [Azure virtual network service tags](../../virt
## Firewall requirements
-| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection|
-|||||--|--|
-| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes |
-| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
+| Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection| Example |
+|||||--|--||
+| Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes | - |
+| Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes | westus2.handler.control.monitor.azure.com |
+| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes | 1234a123-aa1a-123a-aaa1-a1a345aa6789.ods.opinsights.azure.com |
+| Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes | - |
+| Azure Commercial | `<virtual-machine-region-name>`.monitoring.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes | westus2.monitoring.azure.com |
| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above | Same as above |
| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above | Same as above |
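The '.com' replacement described in the last two rows can be sketched in shell:

```shell
# Derive Azure Government and Azure China endpoint names from the Azure Commercial name
commercial="global.handler.control.monitor.azure.com"
gov="${commercial%.com}.us"
china="${commercial%.com}.cn"
echo "$gov"
echo "$china"
```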
-If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+>[!NOTE]
+> If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
+> Azure Monitor metrics (custom metrics) preview is not available in Azure Government and Azure China clouds.
## Proxy configuration
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent on Azure virtual machi
Previously updated : 08/18/2022 Last updated : 09/22/2022
Use the following PowerShell commands to install the Azure Monitor agent on Azur
# [Windows](#tab/PowerShellWindows)

```powershell
-Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

# [Linux](#tab/PowerShellLinux)

```powershell
-Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```
Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxA
# [Windows](#tab/PowerShellWindows)

```powershell
-Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
+Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
```

# [Linux](#tab/PowerShellLinux)

```powershell
-Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
+Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
```
Use the following PowerShell commands to install the Azure Monitor agent on Azur
# [Windows](#tab/PowerShellWindowsArc)

```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
```

# [Linux](#tab/PowerShellLinuxArc)

```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
```
Use the following CLI commands to install the Azure Monitor agent on Azure virtu
# [Windows](#tab/CLIWindows)

```azurecli
-az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```

# [Linux](#tab/CLILinux)

```azurecli
-az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```
az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Mo
# [Windows](#tab/CLIWindows)

```azurecli
-az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
```

# [Linux](#tab/CLILinux)

```azurecli
-az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
```
Use the following CLI commands to install the Azure Monitor agent on Azure Arc-e
# [Windows](#tab/CLIWindowsArc)

```azurecli
-az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
```

# [Linux](#tab/CLILinuxArc)

```azurecli
-az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
```
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
Last updated 2/23/2022
# Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API

> [!NOTE]
-> This article is only relevant to Azure public (**not** to Azure Government or Azure China cloud).
+> This article is only relevant to Azure public and government clouds (**not** to Azure China cloud).
> [!NOTE]
> Once a user chooses to switch rules with legacy management to the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) it is not possible to revert back to the older [legacy Log Analytics Alert API](./api-alerts.md).
armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>
You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool:

```bash
-az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body '{"scheduledQueryRulesEnabled": true}'
+az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body "{\"scheduledQueryRulesEnabled\" : true}"
```

If the switch is successful, the response is:
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
# Monitor virtual machines with Azure Monitor
-This scenario describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+
+This scenario describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collection of telemetry critical for monitoring and analysis and visualization of collected data to identify trends. It also shows you how to configure alerting to be proactively notified of critical issues.
> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
+> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
This article introduces the scenario and provides general concepts for monitoring virtual machines in Azure Monitor. If you want to jump right into a specific area, see one of the other articles that are part of this scenario described in the following table.

| Article | Description |
|:---|:---|
-| [Enable monitoring](monitor-virtual-machine-configure.md) | Configure Azure Monitor to monitor virtual machines, which includes enabling VM insights and enabling each virtual machine for monitoring. |
+| [Enable monitoring](monitor-virtual-machine-configure.md) | Configure Azure Monitor to monitor virtual machines, which includes enabling VM insights and enabling each virtual machine for monitoring. |
| [Analyze](monitor-virtual-machine-analyze.md) | Analyze monitoring data collected by Azure Monitor from virtual machines and their guest operating systems and applications to identify trends and critical information. |
-| [Alerts](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
+| [Alerts](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
| [Monitor security](monitor-virtual-machine-security.md) | Discover Azure services for monitoring security of virtual machines. |
| [Monitor workloads](monitor-virtual-machine-workloads.md) | Monitor applications and other workloads running on your virtual machines. |

> [!IMPORTANT]
-> This scenario doesn't include features that aren't generally available. Features in public preview such as [virtual machine guest health](vminsights-health-overview.md) have the potential to significantly modify the recommendations made here. The scenario will be updated as preview features move into general availability.
+> This scenario doesn't include features that aren't generally available. Features in public preview, such as [virtual machine guest health](vminsights-health-overview.md), have the potential to significantly modify the recommendations made here. The scenario will be updated as preview features move into general availability.
## Types of machines
-This scenario includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate. The types of machines include:
+
+This scenario includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate. The types of machines include:
- Azure virtual machines.
- Azure virtual machine scale sets.
- Hybrid machines, which are virtual machines running in other clouds, with a managed service provider, or on-premises. They also include physical machines running on-premises.

## Layers of monitoring

There are fundamentally four layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements.

| Layer | Description |
|:---|:---|
There are fundamentally four layers to a virtual machine that require monitoring
| Virtual machine host | The host virtual machine in Azure. Azure Monitor has no access to the host in other clouds but must rely on information collected from the guest operating system. The host can be useful for tracking activity such as configuration changes, but typically isn't used for significant alerting. |
| Guest operating system | The operating system running on the virtual machine, which is some version of either Windows or Linux. A significant amount of monitoring data is available from the guest operating system, such as performance data and events. VM insights in Azure Monitor provides a significant amount of logic for monitoring the health and performance of the guest operating system. |
| Workloads | Workloads running in the guest operating system that support your business applications. Azure Monitor provides predefined monitoring for some workloads. You typically need to configure data collection and alerting for other workloads by using monitoring data that they generate. |
-| Application | The business application that depends on your virtual machines can be monitored by using [Application Insights](../app/app-insights-overview.md).
+| Application | The business application that depends on your virtual machines can be monitored by using [Application Insights](../app/app-insights-overview.md).
:::image type="content" source="media/monitor-virtual-machines/monitoring-layers.png" alt-text="Diagram that shows monitoring layers." lightbox="media/monitor-virtual-machines/monitoring-layers.png":::

## VM insights

This scenario focuses on [VM insights](../vm/vminsights-overview.md), which is the primary feature in Azure Monitor for monitoring virtual machines. VM insights provides the following features:
-- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads.
+- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads.
- Predefined trending performance charts and workbooks that you can use to analyze core performance metrics from the virtual machine's guest operating system.
- Dependency map that displays processes running on each virtual machine and the interconnected components with other machines and external sources.

## Agents
-Any monitoring tool, such as Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use. Different agents are described in the following table in case you require the particular scenarios that they support. For a detailed description and comparison of the different agents, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
+
+Any monitoring tool, such as Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use.
+
+Different agents are described in the following table in case you require the particular scenarios that they support. For a detailed description and comparison of the different agents, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
> [!NOTE]
-> The Azure Monitor agent will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent once it gains required functionality. These other agents are still required for features such as VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel.
+> The Azure Monitor agent will completely replace the Log Analytics agent, Azure Diagnostics extension, and Telegraf agent after it gains required functionality. These other agents are still required for features such as VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel.
-- [Azure Monitor agent](../agents/agents-overview.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
-- [Log Analytics agent](../agents/log-analytics-agent.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
-- [Dependency agent](vminsights-dependency-agent-maintenance.md): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
-- [Azure Diagnostic extension](../agents/diagnostics-extension-overview.md): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
+| Agent | Description |
+|:|:|
+| [Azure Monitor agent](../agents/agents-overview.md) | Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, it will completely replace the Log Analytics agent and Azure Diagnostics extension. |
+| [Log Analytics agent](../agents/log-analytics-agent.md) | Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager. |
+| [Dependency agent](vminsights-dependency-agent-maintenance.md) | Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions. |
+| [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) | Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage. |
## Next steps
azure-monitor Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md
Title: Using Service Map solution in Azure | Microsoft Docs
-description: Service Map is a solution in Azure that automatically discovers application components on Windows and Linux systems and maps the communication between services. This article provides details for deploying Service Map in your environment and using it in a variety of scenarios.
+ Title: Use the Service Map solution in Azure | Microsoft Docs
+description: Learn how to deploy and use the Service Map solution to automatically discover application components on Windows and Linux systems and map communication between services.
-# Using Service Map solution in Azure
+# Use the Service Map solution in Azure
-Service Map automatically discovers application components on Windows and Linux systems and maps the communication between services. With Service Map, you can view your servers in the way that you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, with no configuration required other than the installation of an agent.
+Service Map automatically discovers application components on Windows and Linux systems and maps the communication between services. With Service Map, you can view your servers as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture. No configuration is required other than the installation of an agent.
> [!IMPORTANT]
-> Service map will be retired on 30 September 2025. To monitor connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, make sure to [migrate to Azure Monitor VM insights](../vm/vminsights-migrate-from-service-map.md) before this date.
+> Service Map will be retired on September 30, 2025. To monitor connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, make sure to [migrate to Azure Monitor VM insights](../vm/vminsights-migrate-from-service-map.md) before this date.
-This article describes the details of onboarding and using Service Map. The prerequisites of the solution are the following:
+This article describes how to deploy and use Service Map. The prerequisites of the solution are:
* A Log Analytics workspace in a [supported region](vminsights-configure-workspace.md#supported-regions).
- * The [Log Analytics agent](vminsights-enable-overview.md#agents) installed on the Windows computer or Linux server connected to the same workspace that you enabled the solution with.
- * The [Dependency agent](vminsights-enable-overview.md#agents) installed on the Windows computer or Linux server.

>[!NOTE]
->If you have already deployed Service Map, you can now also view your maps in VM insights, which includes additional features to monitor VM health and performance. To learn more, see [VM insights overview](../vm/vminsights-overview.md). To learn about the differences between the Service Map solution and VM insights Map feature, see the following [FAQ](../faq.yml).
+>If you've already deployed Service Map, you can now also view your maps in VM insights, which includes more features to monitor VM health and performance. To learn more, see [VM insights overview](../vm/vminsights-overview.md). To learn about the differences between the Service Map solution and the VM insights Map feature, see [this FAQ](../faq.yml).
## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Enable Service Map
-1. Enable the Service Map solution from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) or by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
-1. [Install the Dependency agent on Windows](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-windows) or [Install the Dependency agent on Linux](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-linux) on each computer where you want to get data. The Dependency Agent can monitor connections to immediate neighbors, so you might not need an agent on every computer.
+1. Enable the Service Map solution from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview). Or use the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
+1. [Install the Dependency agent on Windows](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-windows) or [install the Dependency agent on Linux](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-linux) on each computer where you want to get data. The Dependency agent can monitor connections to immediate neighbors, so you might not need an agent on every computer.
+
+1. Access Service Map in the Azure portal from your Log Analytics workspace. Select the **Solutions** option from the left pane.
+
   ![Screenshot that shows selecting the Solutions option in the workspace.](./media/service-map/select-solution-from-workspace.png)
+1. From the list of solutions, select **ServiceMap(workspaceName)**. On the **Service Map** solution overview page, select the **Service Map** summary tile.
-You access Service Map in the Azure portal from your Log Analytics workspace, and select the option **Solutions** from the left pane.<br><br> ![Select Solutions option in workspace](./media/service-map/select-solution-from-workspace.png).<br> From the list of solutions, select **ServiceMap(workspaceName)** and in the Service Map solution overview page click on the Service Map summary tile.<br><br> ![Service Map summary tile](./media/service-map/service-map-summary-tile.png).
+   ![Screenshot that shows the Service Map summary tile.](./media/service-map/service-map-summary-tile.png)
## Use cases: Make your IT processes dependency aware

### Discovery
-Service Map automatically builds a common reference map of dependencies across your servers, processes, and third-party services. It discovers and maps all TCP dependencies, identifying surprise connections, remote third-party systems you depend on, and dependencies to traditional dark areas of your network, such as Active Directory. Service Map discovers failed network connections that your managed systems are attempting to make, helping you identify potential server misconfiguration, service outage, and network issues.
+Service Map automatically builds a common reference map of dependencies across your servers, processes, and third-party services. It discovers and maps all TCP dependencies. It identifies surprise connections, remote third-party systems you depend on, and dependencies to traditional dark areas of your network, such as Active Directory. Service Map discovers failed network connections that your managed systems are attempting to make. This information helps you identify potential server misconfiguration, service outage, and network issues.
### Incident management
-Service Map helps eliminate the guesswork of problem isolation by showing you how systems are connected and affecting each other. In addition to identifying failed connections, it helps identify misconfigured load balancers, surprising or excessive load on critical services, and rogue clients, such as developer machines talking to production systems. By using integrated workflows with Change Tracking, you can also see whether a change event on a back-end machine or service explains the root cause of an incident.
+Service Map helps eliminate the guesswork of problem isolation by showing you how systems are connected and affect each other. Along with identifying failed connections, it helps identify misconfigured load balancers, surprising or excessive load on critical services, and rogue clients, such as developer machines talking to production systems. By using integrated workflows with Change Tracking, you can also see whether a change event on a back-end machine or service explains the root cause of an incident.
### Migration assurance
-By using Service Map, you can effectively plan, accelerate, and validate Azure migrations, which helps ensure that nothing is left behind and surprise outages do not occur. You can discover all interdependent systems that need to migrate together, assess system configuration and capacity, and identify whether a running system is still serving users or is a candidate for decommissioning instead of migration. After the move is complete, you can check on client load and identity to verify that test systems and customers are connecting. If your subnet planning and firewall definitions have issues, failed connections in Service Map maps point you to the systems that need connectivity.
+By using Service Map, you can effectively plan, accelerate, and validate Azure migrations to help ensure that nothing is left behind and surprise outages don't occur. You can:
+
+- Discover all interdependent systems that need to migrate together.
+- Assess system configuration and capacity.
+- Identify whether a running system is still serving users or is a candidate for decommissioning instead of migration.
+
+After the move is complete, you can check the client load and identity to verify that test systems and customers are connecting. If your subnet planning and firewall definitions have issues, failed connections in maps in Service Map point you to the systems that need connectivity.
### Business continuity
-If you are using Azure Site Recovery and need help defining the recovery sequence for your application environment, Service Map can automatically show you how systems rely on each other to ensure that your recovery plan is reliable. By choosing a critical server or group and viewing its clients, you can identify which front-end systems to recover after the server is restored and available. Conversely, by looking at critical servers' back-end dependencies, you can identify which systems to recover before your focus systems are restored.
+If you're using Azure Site Recovery and need help with defining the recovery sequence for your application environment, Service Map can automatically show you how systems rely on each other. This information helps to ensure that your recovery plan is reliable.
+
+By choosing a critical server or group and viewing its clients, you can identify which front-end systems to recover after the server is restored and available. Conversely, by looking at critical servers' back-end dependencies, you can identify which systems to recover before your focus systems are restored.
### Patch management
-Service Map enhances your use of the System Update Assessment by showing you which other teams and servers depend on your service, so you can notify them in advance before you take down your systems for patching. Service Map also enhances patch management by showing you whether your services are available and properly connected after they are patched and restarted.
+Service Map enhances your use of the System Update Assessment by showing you which other teams and servers depend on your service. This way, you can notify them in advance before you take down your systems for patching. Service Map also enhances patch management by showing you whether your services are available and properly connected after they're patched and restarted.
## Mapping overview
-Service Map agents gather information about all TCP-connected processes on the server where they're installed and details about the inbound and outbound connections for each process.
+Service Map agents gather information about all TCP-connected processes on the server where they're installed. They also collect details about the inbound and outbound connections for each process.
+
+From the list in the left pane, you can select machines or groups that have Service Map agents to visualize their dependencies over a specified time range. Machine dependency maps focus on a specific machine. They show all the machines that are direct TCP clients or servers of that machine. Machine group maps show sets of servers and their dependencies.
-From the list in the left pane, you can select machines or groups that have Service Map agents to visualize their dependencies over a specified time range. Machine dependency maps focus on a specific machine, and they show all the machines that are direct TCP clients or servers of that machine. Machine Group maps show sets of servers and their dependencies.
+![Screenshot that shows a Service Map overview.](media/service-map/service-map-overview.png)
-![Service Map overview](media/service-map/service-map-overview.png)
+Machines can be expanded in the map to show the running process groups and processes with active network connections during the selected time range. When a remote machine with a Service Map agent is expanded to show process details, only those processes that communicate with the focus machine are shown.
-Machines can be expanded in the map to show the running process groups and processes with active network connections during the selected time range. When a remote machine with a Service Map agent is expanded to show process details, only those processes that communicate with the focus machine are shown. The count of agentless front-end machines that connect into the focus machine is indicated on the left side of the processes they connect to. If the focus machine is making a connection to a back-end machine that has no agent, the back-end server is included in a Server Port Group, along with other connections to the same port number.
+The count of agentless front-end machines that connect into the focus machine is indicated on the left side of the processes they connect to. If the focus machine is making a connection to a back-end machine that has no agent, the back-end server is included in a server port group. This group also includes other connections to the same port number.
-By default, Service Map maps show the last 30 minutes of dependency information. By using the time controls at the upper left, you can query maps for historical time ranges of up to one hour to show how dependencies looked in the past (for example, during an incident or before a change occurred). Service Map data is stored for 30 days in paid workspaces, and for 7 days in free workspaces.
+By default, maps in Service Map show the last 30 minutes of dependency information. You can use the time controls at the upper left to query maps for historical time ranges of up to one hour to see how dependencies looked in the past. For example, you might want to see how they looked during an incident or before a change occurred. Service Map data is stored for 30 days in paid workspaces and for 7 days in free workspaces.
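To make the mapping idea concrete, the following minimal Python sketch (sample data and function names are illustrative, not the Service Map API) derives the direct TCP clients and servers of a focus machine from observed connection tuples — essentially what a machine dependency map shows:

```python
# Observed TCP connections as (client, server, port) tuples — sample data,
# standing in for what the Service Map agents would report.
connections = [
    ("web-01", "app-01", 8080),
    ("web-02", "app-01", 8080),
    ("app-01", "sql-01", 1433),
    ("app-01", "ad-01", 389),
]

def machine_map(focus, conns):
    """Return the direct TCP clients and servers of the focus machine."""
    clients = sorted({c for c, s, _ in conns if s == focus})
    servers = sorted({s for c, s, _ in conns if c == focus})
    return {"clients": clients, "servers": servers}

print(machine_map("app-01", connections))
# {'clients': ['web-01', 'web-02'], 'servers': ['ad-01', 'sql-01']}
```

Expanding a machine in the real map corresponds to breaking these edges down further, per process.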
## Status badges and border coloring
-At the bottom of each server in the map can be a list of status badges conveying status information about the server. The badges indicate that there is some relevant information for the server from one of the solution integrations. Clicking a badge takes you directly to the details of the status in the right pane. The currently available status badges include Alerts, Service Desk, Changes, Security, and Updates.
+At the bottom of each server in the map, a list of status badges that convey status information about the server might appear. The badges indicate there's relevant information for the server from one of the solution integrations.
+
+Selecting a badge takes you directly to the details of the status in the right pane. The currently available status badges include **Alerts**, **Service Desk**, **Changes**, **Security**, and **Updates**.
Depending on the severity of the status badges, machine node borders can be colored red (critical), yellow (warning), or blue (informational). The color represents the most severe status of any of the status badges. A gray border indicates a node that has no status indicators.
-![Status badges](media/service-map/status-badges.png)
+![Screenshot that shows status badges.](media/service-map/status-badges.png)
+
+## Process groups
-## Process Groups
+A process group combines processes that are associated with a common product or service. When a machine node is expanded, it displays standalone processes along with process groups. If an inbound or outbound connection to a process within a process group has failed, the connection is shown as failed for the entire process group.
-Process Groups combine processes that are associated with a common product or service into a process group. When a machine node is expanded it will display standalone processes along with process groups. If any inbound and outbound connections to a process within a process group has failed then the connection is shown as failed for the entire process group.
+## Machine groups
-## Machine Groups
+Machine groups allow you to see maps centered around a set of servers, not just one. In this way, you can see all the members of a multi-tier application or server cluster in one map.
-Machine Groups allow you to see maps centered around a set of servers, not just one so you can see all the members of a multi-tier application or server cluster in one map.
+Users select which servers belong in a group together and choose a name for the group. You can then choose to view the group with all its processes and connections. You can also view it with only the processes and connections that directly relate to the other members of the group.
-Users select which servers belong in a group together and choose a name for the group. You can then choose to view the group with all of its processes and connections, or view it with only the processes and connections that directly relate to the other members of the group.
+![Screenshot that shows machine groups.](media/service-map/machine-group.png)
-![Machine Group](media/service-map/machine-group.png)
+### Create a machine group
-### Creating a Machine Group
+To create a group:
-To create a group, select the machine or machines you want in the Machines list and click **Add to group**.
+1. Select the machine or machines you want in the **Machines** list and select **Add to group**.
-![Create Group](media/service-map/machine-groups-create.png)
+ ![Screenshot that shows creating a group.](media/service-map/machine-groups-create.png)
-There, you can choose **Create new** and give the group a name.
+1. Select **Create new** and give the group a name.
-![Name Group](media/service-map/machine-groups-name.png)
+ ![Screenshot that shows naming a group.](media/service-map/machine-groups-name.png)
>[!NOTE]
>Machine groups are limited to 10 servers.
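As a hypothetical illustration of this limit, a group abstraction might enforce the 10-server cap like this (a sketch only; `MachineGroup` is not a real Service Map API):

```python
class MachineGroup:
    """A named set of servers, capped at 10 members as in Service Map."""

    MAX_MEMBERS = 10

    def __init__(self, name):
        self.name = name
        self.members = set()

    def add(self, machine):
        # Reject additions beyond the documented 10-server limit.
        if len(self.members) >= self.MAX_MEMBERS:
            raise ValueError(
                f"group '{self.name}' is limited to {self.MAX_MEMBERS} servers"
            )
        self.members.add(machine)

group = MachineGroup("web-tier")
for i in range(10):
    group.add(f"web-{i:02d}")
print(len(group.members))  # 10
```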
-### Viewing a Group
+### View a group
+
+After you've created some groups, you can view them.
+
+1. Select the **Groups** tab.
-Once you've created some groups, you can view them by choosing the Groups tab.
+ ![Screenshot that shows the Groups tab.](media/service-map/machine-groups-tab.png)
-![Groups tab](media/service-map/machine-groups-tab.png)
+1. Select the group name to view the map for that machine group.
-Then select the Group name to view the map for that Machine Group.
-![Machine Group](media/service-map/machine-group.png)
-The machines that belong to the group are outlined in white in the map.
+ ![Screenshot that shows a machine group map.](media/service-map/machine-group.png)
-Expanding the Group will list the machines that make up the Machine Group.
+ The machines that belong to the group are outlined in white in the map.
-![Machine Group machines](media/service-map/machine-groups-machines.png)
+1. Expand the group to list the machines that make up the machine group.
+
+ ![Screenshot that shows machine group machines.](media/service-map/machine-groups-machines.png)
### Filter by processes
-You can toggle the map view between showing all processes and connections in the Group and only the ones that directly relate to the Machine Group. The default view is to show all processes. You can change the view by clicking the filter icon above the map.
+You can toggle the map view to show all processes and connections in the group or only the ones that directly relate to the machine group. The default view shows all processes.
+
+1. Select the filter icon above the map to change the view.
-![Filter Group](media/service-map/machine-groups-filter.png)
+ ![Screenshot that shows filtering a group.](media/service-map/machine-groups-filter.png)
-When **All processes** is selected, the map will include all processes and connections on each of the machines in the Group.
+1. Select **All processes** to see the map with all processes and connections on each of the machines in the group.
-![Machine Group all processes](media/service-map/machine-groups-all.png)
+ ![Screenshot that shows the machine group All processes option.](media/service-map/machine-groups-all.png)
-If you change the view to show only **group-connected processes**, the map will be narrowed down to only those processes and connections that are directly connected to other machines in the group, creating a simplified view.
+1. To create a simplified view, change the view to show only **group-connected processes**. The map is then narrowed down to show only those processes and connections directly connected to other machines in the group.
-![Machine Group filtered processes](media/service-map/machine-groups-filtered.png)
-
-### Adding machines to a group
+ ![Screenshot that shows the machine group filtered processes.](media/service-map/machine-groups-filtered.png)
-To add machines to an existing group, check the boxes next to the machines you want and then click **Add to group**. Then, choose the group you want to add the machines to.
-
-### Removing machines from a group
+### Add machines to a group
-In the Groups List, expand the group name to list the machines in the Machine Group. Then, click on the ellipsis menu next to the machine you want to remove and choose **Remove**.
+To add machines to an existing group, select the checkboxes next to the machines you want and select **Add to group**. Then choose the group you want to add the machines to.
-![Remove machine from group](media/service-map/machine-groups-remove.png)
+### Remove machines from a group
-### Removing or renaming a group
+In the **Groups** list, expand the group name to list the machines in the machine group. Select the ellipsis menu next to the machine you want to remove and select **Remove**.
-Click on the ellipsis menu next to the group name in the Group List.
+![Screenshot that shows removing a machine from a group.](media/service-map/machine-groups-remove.png)
-![Machine group menu](media/service-map/machine-groups-menu.png)
+### Remove or rename a group
+Select the ellipsis menu next to the group name in the **Groups** list.
+
+![Screenshot that shows the machine group menu.](media/service-map/machine-groups-menu.png)
## Role icons
-Certain processes serve particular roles on machines: web servers, application servers, database, and so on. Service Map annotates process and machine boxes with role icons to help identify at a glance the role a process or server plays.
+Certain processes serve particular roles on machines, such as web servers, application servers, and databases. Service Map annotates process and machine boxes with role icons to help identify at a glance the role a process or server plays.
| Role icon | Description |
|:--|:--|
Certain processes serve particular roles on machines: web servers, application s
| ![LDAP server](media/service-map/role-ldap.png) | LDAP server |
| ![SMB server](media/service-map/role-smb.png) | SMB server |
-![Role icons](media/service-map/role-icons.png)
-
+![Screenshot that shows role icons.](media/service-map/role-icons.png)
## Failed connections
-Failed connections are shown in Service Map maps for processes and computers, with a dashed red line indicating that a client system is failing to reach a process or port. Failed connections are reported from any system with a deployed Service Map agent if that system is the one attempting the failed connection. Service Map measures this process by observing TCP sockets that fail to establish a connection. This failure could result from a firewall, a misconfiguration in the client or server, or a remote service being unavailable.
+In Service Map, failed connections are shown in maps for processes and computers. A dashed red line indicates that a client system is failing to reach a process or port.
-![Screenshot of one part of a Service Map highlighting a dashed red line that indicates a failed connection between the backup.pl process and Port 4475.](media/service-map/failed-connections.png)
+Failed connections are reported from any system with a deployed Service Map agent if that system is the one attempting the failed connection. Service Map measures this process by observing TCP sockets that fail to establish a connection. This failure could result from a firewall, a misconfiguration in the client or server, or a remote service being unavailable.
-Understanding failed connections can help with troubleshooting, migration validation, security analysis, and overall architectural understanding. Failed connections are sometimes harmless, but they often point directly to a problem, such as a failover environment suddenly becoming unreachable, or two application tiers being unable to talk after a cloud migration.
+![Screenshot that shows one part of a Service Map highlighting a dashed red line that indicates a failed connection between the backup.pl process and Port 4475.](media/service-map/failed-connections.png)
-## Client Groups
+Understanding failed connections can help with troubleshooting, migration validation, security analysis, and overall architectural understanding. Failed connections are sometimes harmless, but they often point directly to a problem. A failover environment might suddenly become unreachable or two application tiers might be unable to talk after a cloud migration.
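The underlying signal — a TCP socket that never completes its handshake — can be shown with a short Python sketch (an illustration of the concept, not the agent's actual implementation):

```python
import socket

def try_connect(host, port, timeout=1.0):
    """Attempt a TCP connection and return an OS error code (0 on success)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            return s.connect_ex((host, port))
        except OSError:
            return -1  # e.g. timeout or name-resolution failure

# Port 1 on localhost is almost certainly closed, so the handshake fails
# (typically ECONNREFUSED) and a nonzero code is returned.
code = try_connect("127.0.0.1", 1)
print("failed connection" if code != 0 else "connected")
```

A firewall drop, a misconfigured listener, or a stopped remote service all surface to the client as this kind of failed socket, which is what the dashed red line represents.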
-Client Groups are boxes on the map that represent client machines that do not have Dependency Agents. A single Client Group represents the clients for an individual process or machine.
+## Client groups
-![Client Groups](media/service-map/client-groups.png)
+Client groups are boxes on the map that represent client machines that don't have Dependency agents. A single client group represents the clients for an individual process or machine.
-To see the IP addresses of the servers in a Client Group, select the group. The contents of the group are listed in the **Client Group Properties** pane.
+![Screenshot that shows client groups.](media/service-map/client-groups.png)
-![Client Group properties](media/service-map/client-group-properties.png)
+To see the IP addresses of the servers in a client group, select the group. The contents of the group are listed in the **Client Group Properties** pane.
-## Server Port Groups
+![Screenshot that shows client group properties.](media/service-map/client-group-properties.png)
-Server Port Groups are boxes that represent server ports on servers that do not have Dependency Agents. The box contains the server port and a count of the number of servers with connections to that port. Expand the box to see the individual servers and connections. If there is only one server in the box, the name or IP address is listed.
+## Server port groups
-![Server Port Groups](media/service-map/server-port-groups.png)
+Server port groups are boxes that represent server ports on servers that don't have Dependency agents. The box contains the server port and a count of the number of servers with connections to that port. Expand the box to see the individual servers and connections. If there's only one server in the box, the name or IP address is listed.
+
+![Screenshot that shows server port groups.](media/service-map/server-port-groups.png)
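The grouping rule described above — one box per destination port, covering every agentless server connecting on that port — can be sketched in a few lines of Python (sample data; illustrative only):

```python
from collections import defaultdict

# Outbound connections from the focus machine to agentless back ends,
# as (server, port) pairs — sample data for illustration.
agentless_conns = [
    ("10.0.0.5", 443),
    ("10.0.0.6", 443),
    ("10.0.0.7", 1433),
]

def server_port_groups(conns):
    """Group agentless servers by destination port, one group per port."""
    groups = defaultdict(set)
    for server, port in conns:
        groups[port].add(server)
    return {port: sorted(servers) for port, servers in groups.items()}

print(server_port_groups(agentless_conns))
# {443: ['10.0.0.5', '10.0.0.6'], 1433: ['10.0.0.7']}
```

Expanding a box in the UI corresponds to listing the servers inside one port's group; a group with a single server shows its name or IP address directly.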
## Context menu
-Clicking the ellipsis (...) at the top right of any server displays the context menu for that server.
+Select the ellipsis (...) at the top right of any server to display the context menu for that server.
-![Screenshot showing the opened context menu for a server in Service Map. The menu has the options Load Server Map and Show Self-Links.](media/service-map/context-menu.png)
+![Screenshot that shows the Load Server Map and Show Self-Links options for a server in Service Map.](media/service-map/context-menu.png)
### Load server map
-Clicking **Load Server Map** takes you to a new map with the selected server as the new focus machine.
+Select **Load Server Map** to go to a new map with the selected server as the new focus machine.
### Show self-links
-Clicking **Show Self-Links** redraws the server node, including any self-links, which are TCP connections that start and end on processes within the server. If self-links are shown, the menu command changes to **Hide Self-Links**, so that you can turn them off.
+Select **Show Self-Links** to redraw the server node, including any self-links, which are TCP connections that start and end on processes within the server. If self-links are shown, the menu command changes to **Hide Self-Links** so that you can turn them off.
## Computer summary
The **Machine Summary** pane includes an overview of a server's operating system, dependency counts, and data from other solutions. Such data includes performance metrics, service desk tickets, change tracking, security, and updates.
-![Machine Summary pane](media/service-map/machine-summary.png)
+![Screenshot that shows the Machine Summary pane.](media/service-map/machine-summary.png)
## Computer and process properties
-When you navigate a Service Map map, you can select machines and processes to gain additional context about their properties. Machines provide information about DNS name, IPv4 addresses, CPU and memory capacity, VM type, operating system and version, last reboot time, and the IDs of their OMS and Service Map agents.
+When you navigate a map in Service Map, you can select machines and processes to gain more context about their properties. Machines provide information about DNS name, IPv4 addresses, CPU and memory capacity, VM type, operating system and version, last reboot time, and the IDs of their OMS and Service Map agents.
-![Machine Properties pane](media/service-map/machine-properties.png)
+![Screenshot that shows the Machine Properties pane.](media/service-map/machine-properties.png)
-You can gather process details from operating-system metadata about running processes, including process name, process description, user name and domain (on Windows), company name, product name, product version, working directory, command line, and process start time.
+You can gather process details from operating-system metadata about running processes. Details include process name, process description, user name and domain (on Windows), company name, product name, product version, working directory, command line, and process start time.
-![Process Properties pane](media/service-map/process-properties.png)
+![Screenshot that shows the Process Properties pane.](media/service-map/process-properties.png)
-The **Process Summary** pane provides additional information about the process's connectivity, including its bound ports, inbound and outbound connections, and failed connections.
+The **Process Summary** pane provides more information about the process's connectivity, including its bound ports, inbound and outbound connections, and failed connections.
-![Process Summary pane](media/service-map/process-summary.png)
+![Screenshot that shows the Process Summary pane.](media/service-map/process-summary.png)
## Alerts integration
Service Map integrates with Azure Alerts to show fired alerts for the selected server in the selected time range. The server displays an icon if there are current alerts, and the **Machine Alerts** pane lists the alerts.
-![Machine Alerts pane](media/service-map/machine-alerts.png)
+![Screenshot that shows the Machine Alerts pane.](media/service-map/machine-alerts.png)
To enable Service Map to display relevant alerts, create an alert rule that fires for a specific computer. To create proper alerts:
-- Include a clause to group by computer (for example, **by Computer interval 1 minute**).
+- Include a clause to group by computer. An example is **by Computer interval 1 minute**.
- Choose to alert based on metric measurement.
## Log events integration
-Service Map integrates with Log Search to show a count of all available log events for the selected server during the selected time range. You can click any row in the list of event counts to jump to Log Search and see the individual log events.
+Service Map integrates with Log Search to show a count of all available log events for the selected server during the selected time range. You can select any row in the list of event counts to jump to Log Search and see the individual log events.
-![Machine Log Events pane](media/service-map/log-events.png)
+![Screenshot that shows the Machine Log Events pane.](media/service-map/log-events.png)
## Service Desk integration
Service Map integration with the IT Service Management Connector is automatic when both solutions are enabled and configured in your Log Analytics workspace.
The **Machine Service Desk** pane lists all IT Service Management events for the selected server in the selected time range. The server displays an icon if there are current items and the Machine Service Desk pane lists them.
-![Machine Service Desk pane](media/service-map/service-desk.png)
+![Screenshot that shows the Machine Service Desk pane.](media/service-map/service-desk.png)
-To open the item in your connected ITSM solution, click **View Work Item**.
+To open the item in your connected ITSM solution, select **View Work Item**.
-To view the details of the item in Log Search, click **Show in Log Search**.
-Connection metrics are written to two new tables in Log Analytics
+To view the details of the item in Log Search, select **Show in Log Search**.
+Connection metrics are written to two new tables in Log Analytics.
## Change Tracking integration
Service Map integration with Change Tracking is automatic when both solutions are enabled and configured in your Log Analytics workspace.
-The **Machine Change Tracking** pane lists all changes, with the most recent first, along with a link to drill down to Log Search for additional details.
+The **Machine Change Tracking** pane lists all changes, with the most recent first, along with a link to drill down to Log Search for more details.
-![Screenshot of the Machine Change Tracking pane in Service Map.](media/service-map/change-tracking.png)
+![Screenshot that shows the Machine Change Tracking pane.](media/service-map/change-tracking.png)
-The following image is a detailed view of a ConfigurationChange event that you might see after you select **Show in Log Analytics**.
+The following image is a detailed view of a *ConfigurationChange* event that you might see after you select **Show in Log Analytics**.
-![ConfigurationChange event](media/service-map/configuration-change-event-01.png)
+![Screenshot that shows the ConfigurationChange event.](media/service-map/configuration-change-event-01.png)
## Performance integration
The **Machine Performance** pane displays standard performance metrics for the selected server. The metrics include CPU utilization, memory utilization, network bytes sent and received, and a list of the top processes by network bytes sent and received.
-![Machine Performance pane](media/service-map/machine-performance.png)
+![Screenshot that shows the Machine Performance pane.](media/service-map/machine-performance.png)
-To see performance data, you may need to [enable the appropriate Log Analytics performance counters](../agents/data-sources-performance-counters.md). The counters you will want to enable:
+To see performance data, you might need to [enable the appropriate Log Analytics performance counters](../agents/data-sources-performance-counters.md). The counters you'll want to enable:
Windows:
- Processor(*)\\% Processor Time
Linux:
- Memory(*)\\% Used Memory
- Network Adapter(*)\\Bytes Sent/sec
- Network Adapter(*)\\Bytes Received/sec
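Once the counters are enabled, their samples land in the standard Log Analytics *Perf* table, so a quick log search can confirm they're flowing. This is a sketch only; the object and counter names are assumed to match the ones listed above.

```kusto
// Verify that the performance counters used by Service Map are being collected
Perf
| where TimeGenerated > ago(1h)
| where ObjectName in ("Processor", "Memory", "Network Adapter")
| summarize SampleCount = count() by ObjectName, CounterName
```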
-
+## Security integration
Service Map integration with Security and Audit is automatic when both solutions are enabled and configured in your Log Analytics workspace.
-The **Machine Security** pane shows data from the Security and Audit solution for the selected server. The pane lists a summary of any outstanding security issues for the server during the selected time range. Clicking any of the security issues drills down into a Log Search for details about them.
+The **Machine Security** pane shows data from the Security and Audit solution for the selected server. The pane lists a summary of any outstanding security issues for the server during the selected time range. Selecting any of the security issues drills down into a log search for details about them.
-![Machine Security pane](media/service-map/machine-security.png)
+![Screenshot that shows the Machine Security pane.](media/service-map/machine-security.png)
## Updates integration
Service Map integration with Update Management is automatic when both solutions are enabled and configured in your Log Analytics workspace.
The **Machine Updates** pane displays data from the Update Management solution for the selected server. The pane lists a summary of any missing updates for the server during the selected time range.
-![Screenshot of the Machine Updates pane in Service Map.](media/service-map/machine-updates.png)
+![Screenshot that shows the Machine Updates pane.](media/service-map/machine-updates.png)
## Log Analytics records Service Map computer and process inventory data is available for [search](../logs/log-query-overview.md) in Log Analytics. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting.
-One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is on-boarded to Service Map. These records have the properties in the following tables. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
+One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is onboarded to Service Map. These records have the properties in the following tables.
+
+The fields and values in the *ServiceMapComputer_CL* events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the *ServiceMapProcess_CL* events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The *ResourceName_s* field matches the name field in the corresponding Resource Manager resource.
>[!NOTE]
>As Service Map features grow, these fields are subject to change.
-There are internally generated properties you can use to identify unique processes and computers:
+You can use internally generated properties to identify unique processes and computers:
-- Computer: Use *ResourceId* or *ResourceName_s* to uniquely identify a computer within a Log Analytics workspace.-- Process: Use *ResourceId* to uniquely identify a process within a Log Analytics workspace. *ResourceName_s* is unique within the context of the machine on which the process is running (MachineResourceName_s)
+- **Computer**: Use *ResourceId* or *ResourceName_s* to uniquely identify a computer within a Log Analytics workspace.
+- **Process**: Use *ResourceId* to uniquely identify a process within a Log Analytics workspace. *ResourceName_s* is unique within the context of the machine on which the process is running (*MachineResourceName_s*).
-Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add "| dedup ResourceId" to the query.
+Because multiple records can exist for a specified process and computer in a specified time range, queries can return more than one record for the same computer or process. To include only the most recent record, add `| dedup ResourceId` to the query.
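For example, here's a sketch of a query that keeps only the latest record per computer, using the *ServiceMapComputer_CL* table described later in this article. The `summarize arg_max` form shown in the sample log searches is equivalent to `dedup` in the upgraded query language.

```kusto
// Return only the most recent record for each computer
ServiceMapComputer_CL
| summarize arg_max(TimeGenerated, *) by ResourceId
```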
### Connections
-Connection metrics are written to a new table in Log Analytics - VMConnection. This table provides information about the connections for a machine (inbound and outbound). Connection Metrics are also exposed with APIs that provide the means to obtain a specific metric during a time window. TCP connections resulting from accepting on a listening socket are inbound, while those created by connecting to a given IP and port are outbound. The direction of a connection is represented by the Direction property, which can be set to either **inbound** or **outbound**.
+Connection metrics are written to a new table in Log Analytics named *VMConnection*. This table provides information about the inbound and outbound connections for a machine. Connection Metrics are also exposed with APIs that provide the means to obtain a specific metric during a time window.
-Records in these tables are generated from data reported by the Dependency agent. Every record represents an observation over a one minute time interval. The TimeGenerated property indicates the start of the time interval. Each record contains information to identify the respective entity, that is, connection or port, as well as metrics associated with that entity. Currently, only network activity that occurs using TCP over IPv4 is reported.
+TCP connections resulting from accepting on a listening socket are inbound. Those connections created by connecting to a given IP and port are outbound. The direction of a connection is represented by the `Direction` property, which can be set to either `inbound` or `outbound`.
-To manage cost and complexity, connection records do not represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. Meaning, records in *VMConnection* table represent a logical grouping and not the individual physical connections that are being observed. Physical network connection sharing the same value for the following attributes during a given one minute interval, are aggregated into a single logical record in *VMConnection*.
+Records in these tables are generated from data reported by the Dependency agent. Every record represents an observation over a one-minute time interval. The `TimeGenerated` property indicates the start of the time interval. Each record contains information to identify the respective entity, that is, the connection or port, and the metrics associated with that entity. Currently, only network activity that occurs by using TCP over IPv4 is reported.
+
+To manage cost and complexity, connection records don't represent individual physical network connections. Multiple physical network connections are grouped into a logical connection, which is then reflected in the respective table. So records in the *VMConnection* table represent a logical grouping and not the individual physical connections that are being observed.
+
+Physical network connections that share the same value for the following attributes during a given one-minute interval are aggregated into a single logical record in *VMConnection*.
| Property | Description |
|:--|:--|
-| `Direction` |Direction of the connection, value is *inbound* or *outbound* |
-| `Machine` |The computer FQDN |
-| `Process` |Identity of process or groups of processes, initiating/accepting the connection |
-| `SourceIp` |IP address of the source |
-| `DestinationIp` |IP address of the destination |
-| `DestinationPort` |Port number of the destination |
-| `Protocol` |Protocol used for the connection. Values is *tcp*. |
+| `Direction` |Direction of the connection. The value is *inbound* or *outbound*. |
+| `Machine` |The computer FQDN. |
+| `Process` |Identity of process or groups of processes initiating or accepting the connection. |
+| `SourceIp` |IP address of the source. |
+| `DestinationIp` |IP address of the destination. |
+| `DestinationPort` |Port number of the destination. |
+| `Protocol` |Protocol used for the connection. Value is *tcp*. |
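The attributes in the preceding table form the logical-connection key. As a sketch, a query can aggregate connection records over that same key; all property names are taken from the table above.

```kusto
// Aggregate one hour of connection records by the logical-connection key
VMConnection
| where TimeGenerated > ago(1h)
| summarize Established = sum(LinksEstablished), Failed = sum(LinksFailed)
    by Direction, Machine, Process, SourceIp, DestinationIp, DestinationPort, Protocol
```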
-To account for the impact of grouping, information about the number of grouped physical connections is provided in the following properties of the record:
+To account for the impact of grouping, information about the number of grouped physical connections is provided in the following properties of the record.
| Property | Description |
|:--|:--|
-| `LinksEstablished` |The number of physical network connections that have been established during the reporting time window |
-| `LinksTerminated` |The number of physical network connections that have been terminated during the reporting time window |
+| `LinksEstablished` |The number of physical network connections that have been established during the reporting time window. |
+| `LinksTerminated` |The number of physical network connections that have been terminated during the reporting time window. |
| `LinksFailed` |The number of physical network connections that have failed during the reporting time window. This information is currently available only for outbound connections. |
-| `LinksLive` |The number of physical network connections that were open at the end of the reporting time window|
+| `LinksLive` |The number of physical network connections that were open at the end of the reporting time window.|
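Because `LinksFailed` is currently reported only for outbound connections, a failure report might look like the following sketch. The property names are taken from the table above.

```kusto
// Outbound destinations with connection failures in the last hour
VMConnection
| where TimeGenerated > ago(1h)
| where Direction == "outbound" and LinksFailed > 0
| summarize TotalFailed = sum(LinksFailed) by Machine, DestinationIp, DestinationPort
| order by TotalFailed desc
```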
#### Metrics
-In addition to connection count metrics, information about the volume of data sent and received on a given logical connection or network port are also included in the following properties of the record:
+In addition to connection count metrics, information about the volume of data sent and received on a specific logical connection or network port is also included in the following properties of the record.
| Property | Description |
|:--|:--|
-| `BytesSent` |Total number of bytes that have been sent during the reporting time window |
-| `BytesReceived` |Total number of bytes that have been received during the reporting time window |
-| `Responses` |The number of responses observed during the reporting time window.
-| `ResponseTimeMax` |The largest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
-| `ResponseTimeMin` |The smallest response time (milliseconds) observed during the reporting time window. If no value, the property is blank.|
-| `ResponseTimeSum` |The sum of all response times (milliseconds) observed during the reporting time window. If no value, the property is blank|
+| `BytesSent` |Total number of bytes that have been sent during the reporting time window. |
+| `BytesReceived` |Total number of bytes that have been received during the reporting time window. |
+| `Responses` |The number of responses observed during the reporting time window. |
+| `ResponseTimeMax` |The largest response time in milliseconds observed during the reporting time window. If there's no value, the property is blank.|
+| `ResponseTimeMin` |The smallest response time in milliseconds observed during the reporting time window. If there's no value, the property is blank.|
+| `ResponseTimeSum` |The sum of all response times in milliseconds observed during the reporting time window. If there's no value, the property is blank.|
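Because `ResponseTimeSum` and `Responses` cover the whole one-minute interval, an average response time has to be derived from them. A sketch, using only the properties in the table above:

```kusto
// Derive the average response time in milliseconds per destination port,
// guarding against division by zero
VMConnection
| where Responses > 0
| extend AvgResponseTimeMs = ResponseTimeSum / Responses
| summarize avg(AvgResponseTimeMs) by Machine, DestinationPort
```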
+
+The third type of data being reported is response time: how long a caller spends waiting for a request sent over a connection to be processed and responded to by the remote endpoint.
+
+The response time reported is an estimation of the true response time of the underlying application protocol. It's computed by using heuristics based on the observation of the flow of data between the source and destination end of a physical network connection.
-The third type of data being reported is response time - how long does a caller spend waiting for a request sent over a connection to be processed and responded to by the remote endpoint. The response time reported is an estimation of the true response time of the underlying application protocol. It is computed using heuristics based on the observation of the flow of data between the source and destination end of a physical network connection. Conceptually, it is the difference between the time the last byte of a request leaves the sender, and the time when the last byte of the response arrives back to it. These two timestamps are used to delineate request and response events on a given physical connection. The difference between them represents the response time of a single request.
+Conceptually, response time is the difference between the time the last byte of a request leaves the sender and the time when the last byte of the response arrives back to it. These two timestamps are used to delineate request and response events on a specific physical connection. The difference between them represents the response time of a single request.
-In this first release of this feature, our algorithm is an approximation that may work with varying degree of success depending on the actual application protocol used for a given network connection. For example, the current approach works well for request-response based protocols such as HTTP(S), but does not work with one-way or message queue-based protocols.
+In this first release of this feature, our algorithm is an approximation that might work with varying degrees of success depending on the actual application protocol used for a specific network connection. For example, the current approach works well for request-response-based protocols, such as HTTP/HTTPS. But this approach doesn't work with one-way or message queue-based protocols.
Here are some important points to consider:
-1. If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
-2. Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the machine is open to inbound traffic.
-3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the IsWildcardBind record property with the specific IP address, will be set to "True" to indicate that the port is exposed over every interface of the reporting machine.
-4. Ports that are bound only on a specific interface have IsWildcardBind set to "False".
+- If a process accepts connections on the same IP address but over multiple network interfaces, a separate record for each interface will be reported.
+- Records with wildcard IP will contain no activity. They're included to represent the fact that a port on the machine is open to inbound traffic.
+- To reduce verbosity and data volume, records with wildcard IP will be omitted when there's a matching record (for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the `IsWildcardBind` record property with the specific IP address will be set to `True`. This setting indicates that the port is exposed over every interface of the reporting machine.
+- Ports that are bound only on a specific interface have `IsWildcardBind` set to `False`.
-#### Naming and Classification
+#### Naming and classification
-For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the machine for RemoteIp. The RemoteDnsQuestions and RemoteClassification properties are reserved for future use.
+For convenience, the IP address of the remote end of a connection is included in the `RemoteIp` property. For inbound connections, `RemoteIp` is the same as `SourceIp`, while for outbound connections, it's the same as `DestinationIp`. The `RemoteDnsCanonicalNames` property represents the DNS canonical names reported by the machine for `RemoteIp`. The `RemoteDnsQuestions` and `RemoteClassification` properties are reserved for future use.
#### Geolocation
-*VMConnection* also includes geolocation information for the remote end of each connection record in the following properties of the record:
+*VMConnection* also includes geolocation information for the remote end of each connection record in the following properties of the record.
| Property | Description |
|:--|:--|
-| `RemoteCountry` |The name of the country/region hosting RemoteIp. For example, *United States* |
-| `RemoteLatitude` |The geolocation latitude. For example, *47.68* |
-| `RemoteLongitude` |The geolocation longitude. For example, *-122.12* |
+| `RemoteCountry` |The name of the country/region hosting `RemoteIp`. An example is *United States*. |
+| `RemoteLatitude` |The geolocation latitude. An example is *47.68*. |
+| `RemoteLongitude` |The geolocation longitude. An example is *-122.12*. |
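These geolocation properties make it straightforward to summarize traffic by location. A sketch, using the properties from the table above:

```kusto
// Outbound connection volume by remote country/region
VMConnection
| where Direction == "outbound" and isnotempty(RemoteCountry)
| summarize Connections = sum(LinksEstablished) by RemoteCountry
| order by Connections desc
```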
#### Malicious IP
-Every RemoteIp property in *VMConnection* table is checked against a set of IPs with known malicious activity. If the RemoteIp is identified as malicious the following properties will be populated (they are empty, when the IP is not considered malicious) in the following properties of the record:
+Every `RemoteIp` property in the *VMConnection* table is checked against a set of IPs with known malicious activity. If the `RemoteIp` is identified as malicious, the following properties of the record are populated. They're empty when the IP isn't considered malicious.
| Property | Description |
|:--|:--|
-| `MaliciousIp` |The RemoteIp address |
-| `IndicatorThreadType` |Threat indicator detected is one of the following values, *Botnet*, *C2*, *CryptoMining*, *Darknet*, *DDos*, *MaliciousUrl*, *Malware*, *Phishing*, *Proxy*, *PUA*, *Watchlist*. |
+| `MaliciousIp` |The `RemoteIp` address. |
+| `IndicatorThreadType` |Threat indicator detected is one of the following values: *Botnet*, *C2*, *CryptoMining*, *Darknet*, *DDos*, *MaliciousUrl*, *Malware*, *Phishing*, *Proxy*, *PUA*, or *Watchlist*. |
| `Description` |Description of the observed threat. |
-| `TLPLevel` |Traffic Light Protocol (TLP) Level is one of the defined values, *White*, *Green*, *Amber*, *Red*. |
+| `TLPLevel` |Traffic Light Protocol (TLP) Level is one of the defined values: *White*, *Green*, *Amber*, *Red*. |
| `Confidence` |Values range from *0* to *100*. |
-| `Severity` |Values are *0 ΓÇô 5*, where *5* is the most severe and *0* is not severe at all. Default value is *3*. |
+| `Severity` |Values range from *0* to *5*, where *5* is the most severe and *0* isn't severe. The default value is *3*. |
| `FirstReportedDateTime` |The first time the provider reported the indicator. |
| `LastReportedDateTime` |The last time the indicator was seen by Interflow. |
| `IsActive` |Indicates whether the indicator is active, with a *True* or *False* value. |
| `ReportReferenceLink` |Links to reports related to a given observable. |
-| `AdditionalInformation` |Provides additional information, if applicable, about the observed threat. |
+| `AdditionalInformation` |Provides more information, if applicable, about the observed threat. |
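These properties can be queried directly to surface suspicious traffic. A sketch, using only the property names listed in the table above:

```kusto
// Machines and processes that connected to a known-malicious remote IP
VMConnection
| where isnotempty(MaliciousIp)
| project TimeGenerated, Machine, Process, MaliciousIp, IndicatorThreadType, Severity, Confidence
```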
### ServiceMapComputer_CL records
-Records with a type of *ServiceMapComputer_CL* have inventory data for servers with Service Map agents. These records have the properties in the following table:
+Records with a type of *ServiceMapComputer_CL* have inventory data for servers with Service Map agents. These records have the properties in the following table.
| Property | Description |
|:--|:--|
### ServiceMapProcess_CL Type records
-Records with a type of *ServiceMapProcess_CL* have inventory data for TCP-connected processes on servers with Service Map agents. These records have the properties in the following table:
+Records with a type of *ServiceMapProcess_CL* have inventory data for TCP-connected processes on servers with Service Map agents. These records have the properties in the following table.
| Property | Description |
|:--|:--|
| `Type` | *ServiceMapProcess_CL* |
| `SourceSystem` | *OpsManager* |
| `ResourceId` | The unique identifier for a process within the workspace |
-| `ResourceName_s` | The unique identifier for a process within the machine on which it is running|
+| `ResourceName_s` | The unique identifier for a process within the machine on which it's running|
| `MachineResourceName_s` | The resource name of the machine |
| `ExecutableName_s` | The name of the process executable |
| `StartTime_t` | The process pool start time |
## Sample log searches
+This section lists log search samples.
+ ### List all known machines `ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId`
-### List the physical memory capacity of all managed computers.
+### List the physical memory capacity of all managed computers
`ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project PhysicalMemory_d, ComputerName_s`
-### List computer name, DNS, IP, and OS.
+### List computer name, DNS, IP, and OS
`ServiceMapComputer_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project ComputerName_s, OperatingSystemFullName_s, DnsNames_s, Ipv4Addresses_s`
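Following the same pattern, the process inventory can also be listed. The field names below are taken from the *ServiceMapProcess_CL* table shown earlier.

### List all known processes with their machines

`ServiceMapProcess_CL | summarize arg_max(TimeGenerated, *) by ResourceId | project ExecutableName_s, MachineResourceName_s, StartTime_t`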
All the server, process, and dependency data in Service Map is available via the Service Map REST API.
## Diagnostic and usage data
-Microsoft automatically collects usage and performance data through your use of the Service Map service. Microsoft uses this data to provide and improve the quality, security, and integrity of the Service Map service. To provide accurate and efficient troubleshooting capabilities, the data includes information about the configuration of your software, such as operating system and version, IP address, DNS name, and workstation name. Microsoft does not collect names, addresses, or other contact information.
+Microsoft automatically collects usage and performance data through your use of Service Map. Microsoft uses this data to provide and improve the quality, security, and integrity of Service Map.
+
+To provide accurate and efficient troubleshooting capabilities, the data includes information about the configuration of your software. This information can be the operating system and version, IP address, DNS name, and workstation name. Microsoft doesn't collect names, addresses, or other contact information.
For more information about data collection and usage, see the [Microsoft Online Services Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=512132).
Learn more about [log searches](../logs/log-query-overview.md) in Log Analytics
## Troubleshooting
-If you have any problems installing or running Service Map, this section can help you. If you still can't resolve your problem, please contact Microsoft Support.
+If you have any problems installing or running Service Map, this section can help you. If you still can't resolve your problem, contact Microsoft Support.
### Dependency agent installation problems
+This section addresses issues with Dependency agent installation.
+ #### Installer prompts for a reboot
-The Dependency agent *generally* does not require a reboot upon installation or removal. However, in certain rare cases, Windows Server requires a reboot to continue with an installation. This happens when a dependency, usually the Microsoft Visual C++ Redistributable library requires a reboot because of a locked file.
+
+The Dependency agent *generally* doesn't require a reboot upon installation or removal. In certain rare cases, Windows Server requires a reboot to continue with an installation. This issue happens when a dependency, usually the Microsoft Visual C++ Redistributable library, requires a reboot because of a locked file.
#### Message "Unable to install Dependency agent: Visual Studio Runtime libraries failed to install (code = [code_number])" appears
-The Microsoft Dependency agent is built on the Microsoft Visual Studio runtime libraries. You'll get a message if there's a problem during installation of the libraries.
+The Microsoft Dependency agent is built on the Microsoft Visual Studio runtime libraries. You'll get a message if there's a problem during installation of the libraries.
-The runtime library installers create logs in the %LOCALAPPDATA%\temp folder. The file is `dd_vcredist_arch_yyyymmddhhmmss.log`, where *arch* is `x86` or `amd64` and *yyyymmddhhmmss* is the date and time (24-hour clock) when the log was created. The log provides details about the problem that's blocking installation.
+The runtime library installers create logs in the %LOCALAPPDATA%\temp folder. The file is `dd_vcredist_arch_yyyymmddhhmmss.log`, where *arch* is `x86` or `amd64` and *yyyymmddhhmmss* is the date and time (based on a 24-hour clock) when the log was created. The log provides details about the problem that's blocking installation.
It might be useful to install the [latest runtime libraries](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) first.
The following table lists code numbers and suggested resolutions.
| Code | Description | Resolution |
|:--|:--|:--|
-| 0x17 | The library installer requires a Windows update that hasn't been installed. | Look in the most recent library installer log.<br><br>If a reference to `Windows8.1-KB2999226-x64.msu` is followed by a line `Error 0x80240017: Failed to execute MSU package,` you don't have the prerequisites to install KB2999226. Follow the instructions in the prerequisites section in [Universal C Runtime in Windows](https://support.microsoft.com/kb/2999226) article. You might need to run Windows Update and reboot multiple times in order to install the prerequisites.<br><br>Run the Microsoft Dependency agent installer again. |
+| 0x17 | The library installer requires a Windows update that hasn't been installed. | Look in the most recent library installer log.<br><br>If a reference to `Windows8.1-KB2999226-x64.msu` is followed by a line `Error 0x80240017: Failed to execute MSU package,` you don't have the prerequisites to install KB2999226. Follow the instructions in the prerequisites section in the [Universal C Runtime in Windows](https://support.microsoft.com/kb/2999226) article. You might need to run Windows Update and reboot multiple times to install the prerequisites.<br><br>Run the Microsoft Dependency agent installer again. |
### Post-installation issues
+This section addresses post-installation issues.
+ #### Server doesn't appear in Service Map If your Dependency agent installation succeeded, but you don't see your machine in the Service Map solution:
-* Is the Dependency agent installed successfully? You can validate this by checking to see if the service is installed and running.<br><br>
-**Windows**: Look for the service named **Microsoft Dependency agent**.
-**Linux**: Look for the running process **microsoft-dependency-agent**.
-* Are you on the [Log Analytics free tier](https://azure.microsoft.com/pricing/details/monitor/)? The Free plan allows for up to five unique Service Map machines. Any subsequent machines won't appear in Service Map, even if the prior five are no longer sending data.
+* Is the Dependency agent installed successfully? Check to see if the service is installed and running.<br><br>
+ - **Windows**: Look for the service named **Microsoft Dependency agent**.
+ - **Linux**: Look for the running process **microsoft-dependency-agent**.
-* Is your server sending log and perf data to Azure Monitor Logs? Go to Azure Monitor\Logs and run the following query for your computer:
+* Are you on the [Log Analytics free tier](https://azure.microsoft.com/pricing/details/monitor/)? The Free plan allows for up to five unique Service Map machines. Any subsequent machines won't appear in Service Map, even if the prior five are no longer sending data.
+* Is your server sending log and perf data to Azure Monitor Logs? Go to Azure Monitor\Logs and run the following query for your computer:
```kusto
Usage
| where Computer == "admdemo-appsvr"
| summarize sum(Quantity), any(QuantityUnit) by DataType
```
-Did you get a variety of events in the results? Is the data recent? If so, your Log Analytics agent is operating correctly and communicating with the workspace. If not, check the agent on your machine: [Log Analytics agent for Windows troubleshooting](../agents/agent-windows-troubleshoot.md) or [Log Analytics agent for Linux troubleshooting](../agents/agent-linux-troubleshoot.md).
+Did you get a variety of events in the results? Is the data recent? If so, your Log Analytics agent is operating correctly and communicating with the workspace. If not, check the agent on your machine. See [Log Analytics agent for Windows troubleshooting](../agents/agent-windows-troubleshoot.md) or [Log Analytics agent for Linux troubleshooting](../agents/agent-linux-troubleshoot.md).
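The agent checks above can be done from a terminal. A minimal sketch for the Linux side, using the process name documented above (on Windows, look for the **Microsoft Dependency agent** service instead):

```shell
# Linux: confirm the Dependency agent process is running
# (the [m] bracket trick prevents grep from matching itself)
if ps -ef | grep -i "[m]icrosoft-dependency-agent" > /dev/null; then
  echo "Dependency agent running"
else
  echo "Dependency agent not found"
fi
```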
#### Server appears in Service Map but has no processes
-If you see your machine in Service Map, but it has no process or connection data, that indicates that the Dependency agent is installed and running, but the kernel driver didn't load.
+You see your machine in Service Map, but it has no process or connection data. That behavior indicates the Dependency agent is installed and running but the kernel driver didn't load.
-Check the `C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log file` (Windows) or `/var/opt/microsoft/dependency-agent/log/service.log file` (Linux). The last lines of the file should indicate why the kernel didn't load. For example, the kernel might not be supported on Linux if you updated your kernel.
+Check the `C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log` file for Windows or the `/var/opt/microsoft/dependency-agent/log/service.log` file for Linux. The last lines of the file should indicate why the kernel driver didn't load. For example, the kernel might not be supported on Linux if you updated your kernel.
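A quick way to pull the relevant lines on Linux, using the log path given above (a sketch; on Windows, open the `wrapper.log` path instead):

```shell
# Show the last lines of the Dependency agent log to see why the
# kernel driver didn't load (Linux path as documented above)
LOG=/var/opt/microsoft/dependency-agent/log/service.log
if [ -f "$LOG" ]; then
  tail -n 20 "$LOG"
else
  echo "log not found: $LOG"
fi
```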
## Suggestions
-Do you have any feedback for us about Service Map or this documentation? Visit our [User Voice page](https://feedback.azure.com/d365community/forum/aa68334e-1925-ec11-b6e6-000d3a4f09d0?c=ad4304e4-1925-ec11-b6e6-000d3a4f09d0), where you can suggest features or vote up existing suggestions.
+Do you have any feedback for us about Service Map or this documentation? See our [User Voice page](https://feedback.azure.com/d365community/forum/aa68334e-1925-ec11-b6e6-000d3a4f09d0?c=ad4304e4-1925-ec11-b6e6-000d3a4f09d0) where you can suggest features or vote up existing suggestions.
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
Title: Enable VM insights overview
-description: Learn how to deploy and configure VM insights. Find out the system requirements.
+description: Learn how to deploy and configure VM insights and find out about the system requirements.
# Enable VM insights overview
-This article provides an overview of the options available to enable VM insights to monitor health and performance of the following:
+This article provides an overview of the options available to enable VM insights to monitor the health and performance of:
-- Azure virtual machines
-- Azure virtual machine scale sets
-- Hybrid virtual machines connected with Azure Arc
-- On-premises virtual machines
-- Virtual machines hosted in another cloud environment.
+- Azure virtual machines.
+- Azure virtual machine scale sets.
+- Hybrid virtual machines connected with Azure Arc.
+- On-premises virtual machines.
+- Virtual machines hosted in another cloud environment.
## Installation options and supported machines
+
The following table shows the installation methods available for enabling VM insights on supported machines.

| Method | Scope |
|:--|:--|
| [Azure portal](vminsights-enable-portal.md) | Enable individual machines with the Azure portal. |
| [Azure Policy](vminsights-enable-policy.md) | Create policy to automatically enable when a supported machine is created. |
-| [Resource Manager templates](../vm/vminsights-enable-resource-manager.md) | Enable multiple machines using any of the supported methods to deploy a Resource Manager template such as CLI and PowerShell. |
+| [Azure Resource Manager templates](../vm/vminsights-enable-resource-manager.md) | Enable multiple machines by using any of the supported methods to deploy a Resource Manager template, such as the Azure CLI and PowerShell. |
| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. Log Analytics agent only. |
-| [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises other cloud environments. Log Analytics agent only |
-
+| [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises or in other cloud environments. Log Analytics agent only. |
## Supported Azure Arc machines
-VM insights is available for Azure Arc-enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Arc Agent.
+
+VM insights is available for Azure Arc-enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Azure Arc agent.
## Supported operating systems
-VM insights supports any operating system that supports the Dependency agent and either the Azure Monitor agent (preview) or Log Analytics agent. See [Overview of Azure Monitor agents
-](../agents/agents-overview.md#supported-operating-systems) for a complete list.
+VM insights supports any operating system that supports the Dependency agent and either the Azure Monitor agent (preview) or Log Analytics agent. For a complete list, see [Azure Monitor agent overview](../agents/agents-overview.md#supported-operating-systems).
> [!IMPORTANT]
-> If the ethernet device for your virtual machine has more than nine characters, then it won't be recognized by VM insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md).
-
+> If the Ethernet device for your virtual machine has more than nine characters, it won't be recognized by VM insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md).
### Linux considerations
+
See the following list of considerations on Linux support of the Dependency agent that supports VM insights:

- Only default and SMP Linux kernel releases are supported.
-- Nonstandard kernel releases, such as Physical Address Extension (PAE) and Xen, aren't supported for any Linux distribution. For example, a system with the release string of *2.6.16.21-0.8-xen* isn't supported.
+- Nonstandard kernel releases, such as physical address extension (PAE) and Xen, aren't supported for any Linux distribution. For example, a system with the release string of *2.6.16.21-0.8-xen* isn't supported.
- Custom kernels, including recompilations of standard kernels, aren't supported.
-- For Debian distros other than version 9.4, the map feature isn't supported, and the Performance feature is available only from the Azure Monitor menu. It isn't available directly from the left pane of the Azure VM.
+- For Debian distros other than version 9.4, the Map feature isn't supported. The Performance feature is available only from the Azure Monitor menu. It isn't available directly from the left pane of the Azure VM.
- CentOSPlus kernel is supported.
-The Linux kernel must be patched for the Spectre and Meltdown vulnerabilities. Please consult your Linux distribution vendor for more details. Run the following command to check for available if Spectre/Meltdown has been mitigated:
+The Linux kernel must be patched for the Spectre and Meltdown vulnerabilities. For more information, consult your Linux distribution vendor. Run the following command to check whether Spectre/Meltdown has been mitigated:
```
$ grep . /sys/devices/system/cpu/vulnerabilities/*
Output for this command will look similar to the following and specify whether a
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline
```
-
## Log Analytics workspace
-VM insights requires a Log Analytics workspace. See [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md) for details and requirements of this workspace.
+
+VM insights requires a Log Analytics workspace. For requirements of this workspace, see [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md).
> [!NOTE]
-> VM Insights does not support sending data to more than one Log Analytics workspace (multi-homing).
->
+> VM Insights doesn't support sending data to more than one Log Analytics workspace (multi-homing).
+>
## Network requirements

-- See [Network requirements](../agents/log-analytics-agent.md#network-requirements) for the network requirements for the Log Analytics agent.
-- The dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This is the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint.
+- For the network requirements for the Log Analytics agent, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
+- The Dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This address identifies the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint.
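You can verify that connectivity from inside the VM. This sketch queries the Azure Instance Metadata Service at that address; the `Metadata: true` header and the `api-version` parameter are part of the IMDS contract. Run from an Azure VM; outside Azure, it reports the endpoint as blocked:

```shell
# From inside the Azure VM: check that the metadata service endpoint is reachable
curl -s -H "Metadata: true" --connect-timeout 5 \
  "http://169.254.169.254/metadata/instance?api-version=2021-02-01" > /dev/null \
  && echo "metadata endpoint reachable" \
  || echo "metadata endpoint blocked - check firewall settings"
```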
## Agents
-When you enable VM insights for a machine, the following agents are installed. See [Network requirements](../agents/log-analytics-agent.md#network-requirements) for the network requirements for these agents.
-> [!IMPORTANT]
-> VM insights support for Azure Monitor agent is currently in public preview. Azure Monitor agent includes several advantages over Log Analytics agent, and is the preferred agent for virtual machines and virtual machine scale sets. See [Migrate to Azure Monitor agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md) for comparison of the agent and information on migrating.
-
-- [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md). Collects data from the virtual machine or virtual machine scale set and delivers it to the Log Analytics workspace.
-- Dependency agent. Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor agent or Log Analytics agent to deliver its data to Azure Monitor.
+When you enable VM insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
-## Changes for Azure Monitor agent
-There are several changes in the process for enabling VM insights when using the Azure Monitor agent.
+> [!IMPORTANT]
+> VM insights support for the Azure Monitor agent is currently in public preview. The Azure Monitor agent has several advantages over the Log Analytics agent. It's the preferred agent for virtual machines and virtual machine scale sets. For a comparison of the agent and information on migrating, see [Migrate to Azure Monitor agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md).
-**Workspace configuration.** You no longer need to [enable VM insights on the Log Analytics workspace](vminsights-configure-workspace.md) since the VMinsights management pack isn't used by Azure Monitor agent.
+- **[Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md):** Collects data from the virtual machine or virtual machine scale set and delivers it to the Log Analytics workspace.
+- **Dependency agent**: Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor agent or Log Analytics agent to deliver its data to Azure Monitor.
-**Data collection rule.** Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure its data collection. VM insights creates a data collection rule that is automtically deployed if you enable your machine using the Azure portal. If you use other methods to onboard your machines, then you may need to install the data collection rule first.
+## Changes for the Azure Monitor agent
-**Agent deployment.** There are minor changes to the the process for onboarding virtual machines and virtual machine scale sets to VM insights in the Azure portal. You must now select which agent you want to use, and you must select a data collection rule for Azure Monitor agent. See [Enable VM insights in the Azure portal](vminsights-enable-portal.md) for details.
+There are several changes in the process for enabling VM insights when you use the Azure Monitor agent:
+- **Workspace configuration:** You no longer need to [enable VM insights on the Log Analytics workspace](vminsights-configure-workspace.md) because the Azure Monitor agent doesn't use the *VMInsights* management pack.
+- **Data collection rule (DCR):** The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure its data collection. VM insights creates a DCR that's automatically deployed if you enable your machine by using the Azure portal. If you use other methods to onboard your machines, you might need to install the DCR first.
+- **Agent deployment:** There are minor changes to the process for onboarding virtual machines and virtual machine scale sets to VM insights in the Azure portal. You must now select which agent you want to use, and you must select a DCR for the Azure Monitor agent. For more information, see [Enable VM insights in the Azure portal](vminsights-enable-portal.md).
## Data collection rule (Azure Monitor agent)
-When you enable VM insights on a machine with the Azure Monitor agent you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM insights creates a default DCR if one doesn't already exist. See [Enable VM insights for Azure Monitor agent
-](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) for more information on creating and editing the VM insights data collection rule.
+
+When you enable VM insights on a machine with the Azure Monitor agent, you must specify a [data collection rule](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM insights DCR, see [Enable VM insights for the Azure Monitor agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
> [!IMPORTANT]
-> It's not recommended to create your own DCR to support VM insights. The DCR created by VM insights includes a special data stream required for its operation. While you can edit this DCR to collect additional data such as Windows and Syslog events, you should create additional DCRs and associate with the machine.
+> Don't create your own DCR to support VM insights. The DCR created by VM insights includes a special data stream required for its operation. You can edit this DCR to collect more data, such as Windows and Syslog events, but you should create more DCRs and associate them with the machine.
-The DCR is defined by the options in the following table.
+The DCR is defined by the options in the following table.
| Option | Description |
|:--|:--|
-| Guest performance | Specifies whether to collect performance data from the guest operating system. This is required for all machines. |
-| Processes and dependencies | Collected details about processes running on the virtual machine and dependencies between machines. This enables the [map feature in VM insights](vminsights-maps.md). This is optional and enables the [VM insights map feature](vminsights-maps.md) for the machine. |
-| Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights will be listed. |
+| Guest performance | Specifies whether to collect performance data from the guest operating system. This option is required for all machines. |
+| Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This option is optional and enables the [Map feature in VM insights](vminsights-maps.md) for the machine. |
+| Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights are listed. |
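Once the DCR is active, you can confirm that guest performance data is arriving with a Log Analytics query along these lines (a sketch; `my-vm` is a placeholder computer name):

```kusto
InsightsMetrics
| where Computer == "my-vm"
| summarize count() by Namespace, Name
| sort by count_ desc
```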
## Management packs (Log Analytics agent)
-When a Log Analytics workspace is configured for VM insights, two management packs are forwarded to all the Windows computers connected to that workspace. The management packs are named *Microsoft.IntelligencePacks.ApplicationDependencyMonitor* and *Microsoft.IntelligencePacks.VMInsights* and are written to *%Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs*.
+
+When a Log Analytics workspace is configured for VM insights, two management packs are forwarded to all the Windows computers connected to that workspace. The management packs are named *Microsoft.IntelligencePacks.ApplicationDependencyMonitor* and *Microsoft.IntelligencePacks.VMInsights*. They're written to *%Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs*.
The data source used by the *ApplicationDependencyMonitor* management pack is *%Program files%\Microsoft Monitoring Agent\Agent\Health Service State\Resources\<AutoGeneratedID>\Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll*. The data source used by the *VMInsights* management pack is *%Program files%\Microsoft Monitoring Agent\Agent\Health Service State\Resources\<AutoGeneratedID>\Microsoft.VirtualMachineMonitoringModule.dll*.

## Migrate from Log Analytics agent
-The Azure Monitor agent and the Log Analytics agent can both be installed on the same machine during migration. You should be careful that running both agents may lead to duplication of data and increased cost. If a machine has both agents installed, you'll have a warning in the Azure portal that you may be collecting duplicate data.
+
+The Azure Monitor agent and the Log Analytics agent can both be installed on the same machine during migration. Running both agents might lead to duplication of data and increased cost. If a machine has both agents installed, you'll see a warning in the Azure portal that you might be collecting duplicate data.
> [!WARNING]
> Collecting duplicate data from a single machine with both the Azure Monitor agent and Log Analytics agent can result in the following consequences:
>
-> - Additional ingestion cost from sending duplicate data to the Log Analytics workspace.
-> - The map feature of VM insights may be inaccurate since it does not check for duplicate data.
+> - Extra ingestion cost from sending duplicate data to the Log Analytics workspace.
+> - The Map feature of VM insights might be inaccurate because it doesn't check for duplicate data.
-You must remove the Log Analytics agent yourself from any machines that are using it. Before you do this, ensure that the machine is not relying any other solutions that require the Log Analytics agent. See [Migrate to Azure Monitor agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md) for details.
+You must remove the Log Analytics agent yourself from any machines that are using it. Before you do this step, ensure that the machine isn't relying on any other solutions that require the Log Analytics agent. For more information, see [Migrate to Azure Monitor agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md).
-After you verify that no Log Analytics agents are still connected to your Log Analytics workspace, you can [remove the VMInsights solution from the workspace](vminsights-configure-workspace.md#remove-vminsights-solution-from-workspace) which is no longer needed.
+After you verify that no Log Analytics agents are still connected to your Log Analytics workspace, you can [remove the *VMInsights* solution from the workspace](vminsights-configure-workspace.md#remove-vminsights-solution-from-workspace). It's no longer needed.
> [!NOTE]
-> To check if you have any machines with both agents sending data to your Log Analytics workspace, run the following [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md). This will show the last heartbeat for each computer. If a computer has both agents, then it will return two records each with a different `category`. The Azure Monitor agent will have a `category` of *Azure Monitor Agent*. The Log Analytics agent will have a `category` of *Direct Agent*.
+> To check if you have any machines with both agents sending data to your Log Analytics workspace, run the following [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md). This query will show the last heartbeat for each computer. If a computer has both agents, it will return two records, each with a different `category`. The Azure Monitor agent will have a `category` of *Azure Monitor Agent*. The Log Analytics agent will have a `category` of *Direct Agent*.
>
> ```KQL
> Heartbeat
After you verify that no Log Analytics agents are still connected to your Log An
> | sort by Computer
> ```
-
## Diagnostic and usage data
-Microsoft automatically collects usage and performance data through your use of the Azure Monitor service. Microsoft uses this data to improve the quality, security, and integrity of the service.
+Microsoft automatically collects usage and performance data through your use of Azure Monitor. Microsoft uses this data to improve the quality, security, and integrity of the service.
To provide accurate and efficient troubleshooting capabilities, the Map feature includes data about the configuration of your software. The data provides information such as the operating system and version, IP address, DNS name, and workstation name. Microsoft doesn't collect names, addresses, or other contact information.
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
Title: What is VM insights?
-description: Overview of VM insights, which monitors the health and performance of the Azure VMs and automatically discovers and maps application components and their dependencies.
+description: Overview of VM insights, which monitors the health and performance of Azure VMs and automatically discovers and maps application components and their dependencies.
Last updated 06/21/2022
# Overview of VM insights
-VM insights monitors the performance and health of your virtual machines and virtual machine scale sets, including their running processes and dependencies on other resources. It can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues and can also help you understand whether an issue is related to other dependencies.
+VM insights monitors the performance and health of your virtual machines and virtual machine scale sets. It monitors their running processes and dependencies on other resources. VM insights can help deliver predictable performance and availability of vital applications by identifying performance bottlenecks and network issues. It can also help you understand whether an issue is related to other dependencies.
> [!NOTE]
-> VM insights now supports [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). See [Enable VM insights overview](vminsights-enable-overview.md#agents).
+> VM insights now supports [Azure Monitor agent](../agents/azure-monitor-agent-overview.md). For more information, see [Enable VM insights overview](vminsights-enable-overview.md#agents).
-VM insights supports Windows and Linux operating systems on the following machines:
+VM insights supports Windows and Linux operating systems on:
-- Azure virtual machines
-- Azure virtual machine scale sets
-- Hybrid virtual machines connected with Azure Arc
-- On-premises virtual machines
-- Virtual machines hosted in another cloud environment
-
+- Azure virtual machines.
+- Azure virtual machine scale sets.
+- Hybrid virtual machines connected with Azure Arc.
+- On-premises virtual machines.
+- Virtual machines hosted in another cloud environment.
-VM insights stores its data in Azure Monitor Logs, which allows it to deliver powerful aggregation and filtering and to analyze data trends over time. You can view this data in a single VM from the virtual machine directly, or you can use Azure Monitor to deliver an aggregated view of multiple VMs.
-
-![Virtual machine insights perspective in the Azure portal](media/vminsights-overview/vminsights-azmon-directvm.png)
+VM insights stores its data in Azure Monitor Logs, which allows it to deliver powerful aggregation and filtering and to analyze data trends over time. You can view this data in a single VM from the virtual machine directly. Or, you can use Azure Monitor to deliver an aggregated view of multiple VMs.
+![Screenshot that shows the VM insights perspective in the Azure portal.](media/vminsights-overview/vminsights-azmon-directvm.png)
## Pricing
+
There's no direct cost for VM insights, but you're charged for its activity in the Log Analytics workspace. Based on the pricing that's published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/), VM insights is billed for:

- Data ingested from agents and stored in the workspace.
-- Health state data collected from guest health (preview)
+- Health state data collected from guest health (preview).
- Alert rules based on log and health data.
- Notifications sent from alert rules.
-The log size varies by the string lengths of performance counters, and it can increase with the number of logical disks and network adapters allocated to the VM. If you're already using Service Map, the only change you'll see is the extra performance data that's sent to the Azure Monitor `InsightsMetrics` data type.
+The log size varies by the string lengths of performance counters. It can increase with the number of logical disks and network adapters allocated to the VM. If you're already using Service Map, the only change you'll see is the extra performance data that's sent to the Azure Monitor `InsightsMetrics` data type.
+
+## Access VM insights
+
+Access VM insights for all your virtual machines and virtual machine scale sets by selecting **Virtual Machines** from the **Monitor** menu in the Azure portal. To access VM insights for a single virtual machine or virtual machine scale set, select **Insights** from the machine's menu in the Azure portal.
-## Accessing VM insights
-Access VM insights for all your virtual machines and virtual machine scale sets by selecting **Virtual Machines** from the **Monitor** menu in the Azure portal. Access VM insights for a single virtual machine or virtual machine scale set by selecting **Insights** from the machine's menu in the Azure portal.
+## Configure VM insights
-## Configuring VM insights
-The steps to configure VM insights are as follows. Follow each link for detailed guidance on each step:
+To configure VM insights, follow the steps in each link for detailed guidance:
-- [Create Log Analytics workspace.](./vminsights-configure-workspace.md#create-log-analytics-workspace)-- [Add VMInsights solution to workspace.](./vminsights-configure-workspace.md#add-vminsights-solution-to-workspace) (Log Analytics agent only))-- [Install agents on virtual machine and virtual machine scale set to be monitored.](./vminsights-enable-overview.md)
+- [Create a Log Analytics workspace](./vminsights-configure-workspace.md#create-log-analytics-workspace).
+- [Add the VMInsights solution to a workspace](./vminsights-configure-workspace.md#add-vminsights-solution-to-workspace) (Log Analytics agent only).
+- [Install agents on the virtual machine and virtual machine scale set to be monitored](./vminsights-enable-overview.md).
> [!NOTE]
-> VM Insights does not support sending data to more than one Log Analytics workspace (multi-homing).
+> VM insights doesn't support sending data to more than one Log Analytics workspace (multi-homing).
## Next steps -- See [Deploy VM insights](./vminsights-enable-overview.md) for requirements and methods that to enable monitoring for your virtual machines.
+See [Deploy VM insights](./vminsights-enable-overview.md) for requirements and methods to enable monitoring for your virtual machines.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 09/15/2022 Last updated : 09/22/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* Germany West Central * Japan East * Japan West
+* Korea Central
* North Central US * North Europe * Norway East
Azure NetApp Files Standard network features are supported for the following reg
* South India * Southeast Asia * Switzerland North
+* UAE Central
* UK South * West Europe * West US
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 08/01/2022 Last updated : 09/21/2022 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"artifacts-parameters": { "level": "warning" },
+ "max-outputs": {
+ "level": "warning"
+ },
+ "max-params": {
+ "level": "warning"
+ },
+ "max-resources": {
+ "level": "warning"
+ },
+ "max-variables": {
+ "level": "warning"
+ },
"no-hardcoded-env-urls": { "level": "warning" },
+ "no-hardcoded-location": {
+ "level": "warning"
+ },
+ "no-loc-expr-outside-params": {
+ "level": "warning"
+ },
"no-unnecessary-dependson": { "level": "warning" },
The following example shows the rules that are available for configuration.
"secure-parameter-default": { "level": "warning" },
- "simplify-interpolation": {
+ "secure-params-in-nested-deploy": {
"level": "warning" }, "secure-secrets-in-params": { "level": "warning" },
+ "simplify-interpolation": {
+ "level": "warning"
+ },
+ "use-protectedsettings-for-commandtoexecute-secrets": {
+ "level": "warning"
+ },
"use-stable-resource-identifiers": { "level": "warning" },
azure-resource-manager Linter Rule Secure Params In Nested Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-secure-params-in-nested-deploy.md
+
+ Title: Linter rule - secure params in nested deploy
+description: Linter rule - secure params in nested deploy
+ Last updated : 09/22/2022++
+# Linter rule - secure params in nested deploy
+
+Outer-scoped nested deployment resources shouldn't be used for secure parameters or list* functions. Otherwise, the secure values could be exposed in the deployment history.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`secure-params-in-nested-deploy`
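
For example, a minimal `bicepconfig.json` fragment (a sketch following the documented analyzer structure) that raises this rule from a warning to an error:

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "secure-params-in-nested-deploy": {
          "level": "error"
        }
      }
    }
  }
}
```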
+
+## Solution
+
+Either set the [deployment's properties.expressionEvaluationOptions.scope](/azure/templates/microsoft.resources/deployments?pivots=deployment-language-bicep) to `inner` or use a Bicep module instead.
+
+The following example fails this test because a secure parameter is referenced in an outer-scoped nested deployment resource.
+
+```bicep
+@secure()
+param secureValue string
+
+resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
+ name: 'nested'
+ properties: {
+ mode: 'Incremental'
+ template: {
+ '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
+ contentVersion: '1.0.0.0'
+ variables: {}
+ resources: [
+ {
+ name: 'outerImplicit'
+ type: 'Microsoft.Network/networkSecurityGroups'
+ apiVersion: '2019-11-01'
+ location: '[resourceGroup().location]'
+ properties: {
+ securityRules: [
+ {
+ name: 'outerImplicit'
+ properties: {
+ description: format('{0}', secureValue)
+ protocol: 'Tcp'
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+}
+```
+
+You can fix it by setting the deployment's `properties.expressionEvaluationOptions.scope` to `inner`:
+
+```bicep
+@secure()
+param secureValue string
+
+resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
+ name: 'nested'
+ properties: {
+ mode: 'Incremental'
+ expressionEvaluationOptions: {
+      scope: 'inner' // Set to inner scope
+ }
+ template: {
+ '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
+ contentVersion: '1.0.0.0'
+ variables: {}
+ resources: [
+ {
+ name: 'outerImplicit'
+ type: 'Microsoft.Network/networkSecurityGroups'
+ apiVersion: '2019-11-01'
+ location: '[resourceGroup().location]'
+ properties: {
+ securityRules: [
+ {
+ name: 'outerImplicit'
+ properties: {
+ description: format('{0}', secureValue)
+ protocol: 'Tcp'
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+}
+```
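Alternatively, a hedged sketch of the module approach mentioned above (the `nsg.bicep` module file and its contents are assumed for illustration): moving the resource into a module keeps the secure value out of the outer deployment's history, because a module parameter decorated with `@secure()` isn't recorded there.

```bicep
// main.bicep - pass the secure value to a module instead of an
// outer-scoped nested deployment. 'nsg.bicep' is a hypothetical module
// that declares its own '@secure() param secureValue string' and
// creates the network security group.
@secure()
param secureValue string

module nsgModule 'nsg.bicep' = {
  name: 'nestedViaModule'
  params: {
    secureValue: secureValue
  }
}
```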
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 07/29/2022 Last updated : 09/21/2022 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [max-resources](./linter-rule-max-resources.md) - [max-variables](./linter-rule-max-variables.md) - [no-hardcoded-env-urls](./linter-rule-no-hardcoded-environment-urls.md)
+- [no-hardcoded-location](./linter-rule-no-hardcoded-location.md)
+- [no-loc-expr-outside-params](./linter-rule-no-loc-expr-outside-params.md)
- [no-unnecessary-dependson](./linter-rule-no-unnecessary-dependson.md) - [no-unused-existing-resources](./linter-rule-no-unused-existing-resources.md) - [no-unused-params](./linter-rule-no-unused-parameters.md)
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [outputs-should-not-contain-secrets](./linter-rule-outputs-should-not-contain-secrets.md) - [prefer-interpolation](./linter-rule-prefer-interpolation.md) - [prefer-unquoted-property-names](./linter-rule-prefer-unquoted-property-names.md)
+- [protect-commandtoexecute-secrets](./linter-rule-protect-commandtoexecute-secrets.md)
- [secure-parameter-default](./linter-rule-secure-parameter-default.md)
+- [secure-params-in-nested-deploy](./linter-rule-secure-params-in-nested-deploy.md)
- [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md)
You can integrate these checks as a part of your CI/CD pipelines. You can use a
## Silencing false positives
-Sometimes a rule can have false positives. For example you may need to include a link to a blob storage directly without using the [environment()](./bicep-functions-deployment.md#environment) function.
+Sometimes a rule can have false positives. For example, you may need to include a link to a blob storage directly without using the [environment()](./bicep-functions-deployment.md#environment) function.
In this case, you can disable the warning for one line only, not the entire document, by adding `#disable-next-line <rule name>` before the line with the warning.

```bicep
#disable-next-line no-hardcoded-env-urls
scriptDownloadUrl: 'https://mytools.blob.core.windows.net/...'
```
-It is good practice to add a comment explaining why the rule does not apply to this line.
+It's good practice to add a comment explaining why the rule doesn't apply to this line.
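For instance, a short sketch (rule name and property reused from the earlier example) pairing the directive with a justification comment:

```bicep
// Rule disabled: this tool is hosted in a partner storage account outside
// the configured cloud environment, so the environment() suffix doesn't apply.
#disable-next-line no-hardcoded-env-urls
scriptDownloadUrl: 'https://mytools.blob.core.windows.net/...'
```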
## Next steps
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
The following example concatenates the deployment name to the module name. If yo
module stgModule 'storageAccount.bicep' = { name: '${deployment().name}-storageDeploy' scope: resourceGroup('demoRG')
+}
```

If you need to **specify a scope** that is different from the scope for the main file, add the scope property. For more information, see [Set module scope](#set-module-scope).
azure-signalr Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md
We provide libraries and code samples that show how to handle token validation.
You can set up access token validation in a Function App efficiently, without any code changes.
-1. In the **Authentication (classic)** page, switch **App Service Authentication** to **On**.
+1. On the **Authentication** page, select **Add identity provider**.
2. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**.
-3. In the Authentication Provider, click into **Azure Active Directory**
-
-4. In the new page. Select **Express** and **Create New AD App** and then click **OK**
+3. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information about enabling the Azure AD provider, see [Configure your App Service or Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md).
:::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad":::
-5. Navigate to SignalR Service and follow [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity.
+4. Navigate to SignalR Service and follow [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity.
-6. Get into **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously.
+5. Get into **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously.
After these settings, the Function App will reject requests without an access token in the header. > [!Important]
-> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)), so the *Issuer Url* should look like `https://sts.windows.net/<tenant-id>/`. Check the *Issuer Url* configured in Azure Function. For **Authentication**, go to *Identity provider* -> *Edit* -> *Issuer Url* and for **Authentication (classic)**, go to *Azure Active Directory* -> *Advanced* -> *Issuer Url*
+> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)), so the *Issuer Url* should look like `https://sts.windows.net/<tenant-id>/`. Check the *Issuer Url* configured in Azure Function. For **Authentication**, go to *Identity provider* -> *Edit* -> *Issuer Url*
## Use a managed identity for Key Vault reference
azure-vmware Configure External Identity Source Nsx T https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-external-identity-source-nsx-t.md
+
+ Title: Configure external identity source for NSX-T
+description: Learn how to use the Azure VMware Solution to configure an external identity source for NSX-T.
++ Last updated : 09/20/2022++
+# Configure external identity source for NSX-T
+
+In this article, you'll learn how to configure an external identity source for NSX-T in Azure VMware Solution. NSX-T Data Center can be configured with an external LDAP directory service to add remote directory users or groups. Those users can be assigned an NSX-T Data Center role-based access control (RBAC) role, just as you do on-premises.
+
+## Prerequisites
+
+- Working connectivity from your Active Directory network to your Azure VMware Solution private cloud.
+- If you require Active Directory authentication with LDAPS:
+ - You'll need access to the Active Directory Domain Controller(s) with Administrator permissions.
+
+ - Your Active Directory Domain Controller(s) must have LDAPS enabled with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party CA](https://docs.microsoft.com/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
+ >[!Note]
+    > Self-signed certificates aren't recommended for production environments.
+
+- Ensure your Azure VMware Solution has DNS resolution configured to your on-premises AD. Enable the DNS forwarder from the Azure portal. For more information, see [Configure NSX-T DNS for resolution to your Active Directory Domain and Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md).
+>[!NOTE]
+> For more information about LDAPS and certificate issuance, consult your security or identity management team.
+
+## Add Active Directory as LDAPS identity source
+
+1. Sign in to NSX-T and navigate to **System** > **Users and Roles** > **LDAP**.
+
+1. Select **Add Identity Source**.
+
+1. Enter a name for the identity source. For example, `avslab.local`.
+
+1. Enter a domain name. The name must correspond to the domain name of your Active Directory server, if using Active Directory. For example, `avslab.local`.
+
+1. For the type, select **Active Directory over LDAP** if you're using Active Directory.
+
+1. Enter the Base DN. The Base DN is the starting point that an LDAP server uses when searching for user authentication within an Active Directory domain. For example, `DC=avslab,DC=local`.
+ >[!NOTE]
+ > All of the user and group entries you intend to use to control access to NSX-T Data Center must be contained within the LDAP directory tree rooted at the specified Base DN. If the Base DN is set to something too specific, such as an Organizational Unit deeper in your LDAP tree, NSX may not be able to find the entries it needs to locate users and determine group membership. Selecting a broad Base DN is a best practice if you are unsure.
+
+1. After filling in the required fields, select **Set** to configure LDAP servers. One LDAP server is supported for each domain.
+
+ | **Field** | **Value** |
+ | -- | -- |
+    |Hostname/IP | The hostname or IP address of your LDAP server. For example, `dc.avslab.local`.|
+    | LDAP Protocol | Select **LDAPS** (LDAP is unsecured). |
+    | Port | The default port is populated based on the selected protocol: 636 for LDAPS and 389 for LDAP. If your LDAP server is running on a non-standard port, you can edit this text box to give the port number. |
+    | Connection Status | After filling in the mandatory text boxes, including the LDAP server information, select **Connection Status** to test the connection. |
+    | Use StartTLS | If selected, the LDAPv3 StartTLS extension is used to upgrade the connection to use encryption. To determine if you should use this option, consult your LDAP server administrator. This option can only be used if the LDAP protocol is selected. |
+    | Certificate | If you're using LDAPS or LDAP + StartTLS, this text box should contain the PEM-encoded X.509 certificate of the server. If you leave this text box blank and select the **Check Status** link, NSX connects to the LDAP server. NSX will then retrieve the LDAP server's certificate and prompt you if you want to trust that certificate. If you've verified that the certificate is correct, select **OK**, and the certificate text box will be populated with the retrieved certificate. |
+    |Bind Identity | The format is `user@domainName`, or you can specify the distinguished name. For Active Directory, you can use either the userPrincipalName (`user@domainName`) or the distinguished name. For OpenLDAP, you must supply a distinguished name. This text box is required unless your LDAP server supports anonymous bind, in which case it's optional. Consult your LDAP server administrator if you aren't sure.|
+    |Password |Enter a password for the LDAP server. This text box is required unless your LDAP server supports anonymous bind, in which case it's optional. Consult your LDAP server administrator.|
+1. Select **Add**.
+ :::image type="content" source="./media/nsxt/set-ldap-server.png" alt-text="Screenshot showing how to set an LDAP server." border="true" lightbox="./media/nsxt/set-ldap-server.png":::
+
+
+1. Select **Save** to complete the changes.
+ :::image type="content" source="./media/nsxt/user-roles-ldap-server.png" alt-text="Screenshot showing user roles on an LDAP server." border="true" lightbox="./media/nsxt/user-roles-ldap-server.png":::
+
+## Assign other NSX-T roles to Active Directory identities
+
+After adding an external identity, you can assign NSX-T Roles to Active Directory security groups based on your organization's security controls.
+
+1. Sign in to NSX-T and navigate to **System** > **Users and Roles**.
+
+1. Select **Add** > **Role Assignment for LDAP**.
+
+ 1. Select a domain.
+ 1. Enter the first few characters of the user's name, sign in ID, or a group name to search the LDAP directory, then select a user or group from the list that appears.
+ 1. Select a role.
+ 1. Select **Save**.
+ :::image type="content" source="./media/nsxt/user-roles-ldap-review.png" alt-text="Screenshot showing how to review different roles on the LDAP server." border="true" lightbox="./media/nsxt/user-roles-ldap-review.png":::
+
+1. Verify the permission assignment is displayed underΓÇ»**Users and Roles**.
+
+1. Users should now be able to sign in to NSX-T using their Active Directory credentials.
+
+## Next steps
+Now that you've configured the external source, you can also learn about:
+
+- [Configure external identity source for vCenter Server](configure-identity-source-vcenter.md)
+- [Azure VMware Solution identity concepts](concepts-identity.md)
+- [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html)
+
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an externa
| **BaseDNUsers** | Where to look for valid users, for example, **CN=users,DC=avslab,DC=local**. Base DN is needed to use LDAP Authentication. | | **BaseDNGroups** | Where to look for groups, for example, **CN=group1, DC=avslab,DC=local**. Base DN is needed to use LDAP Authentication. | | **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. |
- | **GroupName** | The group to give cloud admin access in your external identity source, for example, **avs-admins**. |
+ | **GroupName** | The group to give cloudadmin access in your external identity source, for example, **avs-admins**. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | | **Specify name for execution** | Alphanumeric name, for example, **addexternalIdentity**. | | **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an externa
## Add existing AD group to cloudadmin group
-You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to a cloudadmin group. Users in the cloud admin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
+You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to a cloudadmin group. Users in the cloudadmin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**.
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
## Assign additional vCenter Server Roles to Active Directory Identities After you've added an external identity over LDAP or LDAPS you can assign vCenter Server Roles to Active Directory security groups based on your organization's security controls.
-1. After you sign in to vCenter Server with cloud admin privileges, you can select an item from the inventory, select **ACTIONS** menu and select **Add Permission**.
+1. After you sign in to vCenter Server with cloudadmin privileges, you can select an item from the inventory, select **ACTIONS** menu and select **Add Permission**.
:::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot displaying hot to add permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png":::
Now that you've learned about how to configure LDAP and LDAPS, you can learn mor
- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating. -- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
+- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the cloudadmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
+- [Configure external identity source for NSX-T](configure-external-identity-source-nsx-t.md)
+- [Azure VMware Solution identity concepts](concepts-identity.md)
+- [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html)
+
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
+
+ Title: Deploy vSAN stretched clusters
+description: Learn how to deploy vSAN stretched clusters.
++ Last updated : 09/02/2022+++
+# Deploy vSAN stretched clusters
+
+In this article, you'll learn how to implement a vSAN stretched cluster for an Azure VMware Solution private cloud.
+
+## Background
+
+Azure's global infrastructure is broken up into regions. Each region supports the services for a given geography. Within each region, Azure builds isolated and redundant islands of infrastructure called availability zones (AZs). An AZ acts as a boundary for resource management. The compute and other resources available to an AZ are finite and may become exhausted by customer demands. An AZ is built to be independently resilient, meaning failures in one AZ don't affect other AZs.
+
+With Azure VMware Solution, ESXi hosts deployed in a standard vSphere cluster traditionally reside in a single Azure Availability Zone (AZ) and are protected by vSphere high availability (HA). However, it doesn't protect the workloads against an Azure AZ failure. To protect against an AZ failure, a single vSAN cluster can be enabled to span two separate availability zones, called a [vSAN stretched cluster](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-4172337E-E25F-4C6B-945E-01623D314FDA.html?hWord=N4IghgNiBcIG4GcwDsAECAuAnAphgxgBY4Amq+EArpjliAL5A).
+
+Stretched clusters allow the configuration of vSAN Fault Domains across two AZs to notify vCenter Server that hosts reside in each Availability Zone (AZ). Each fault domain is named after the AZ it resides within to increase clarity. When you stretch a vSAN cluster across two AZs within a region, should an AZ go down, it's treated as a vSphere HA event and the virtual machine is restarted in the other AZ.
+
+**Stretched cluster benefits:**
+- Improve application availability.
+- Provide a zero recovery point objective (RPO) capability for enterprise applications without needing to redesign them, or deploy expensive disaster recovery (DR) solutions.
+- A private cloud with stretched clusters is designed to provide 99.99% availability due to its resilience to AZ failures.
+- Enable customers to focus on core application requirements and features, instead of infrastructure availability.
+
+To protect against split-brain scenarios and help measure site health, a managed vSAN Witness is created in a third AZ. With a copy of the data in each AZ, vSphere HA attempts to recover from any failure using a simple restart of the virtual machine.
+
+**vSAN stretched cluster**
++
+In summary, stretched clusters simplify protection needs by providing the same trusted controls and capabilities in addition to the scale and flexibility of the Azure infrastructure.
+
+It's important to understand that stretched cluster private clouds only offer an extra layer of resiliency, and they don't address all failure scenarios. For example, stretched cluster private clouds:
+- Don't protect against region-level failures within Azure or data loss scenarios caused by application issues or poorly planned storage policies.
+- Provide protection against a single zone failure, but aren't designed to provide protection against double or progressive failures. For example:
+ - Despite various layers of redundancy built into the fabric, if an inter-AZ failure results in the partitioning of the secondary site, vSphere HA starts powering off the workload VMs on the secondary site. The following diagram shows the secondary site partitioning scenario.
+
+ :::image type="content" source="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png" alt-text="Diagram shows vSphere high availability powering off the workload virtual machines on the secondary site.":::
+
+ - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state. The following diagram shows the preferred site failure or complete partitioning scenario.
+
+ :::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure or complete partitioning occurs.":::
+
+It should be noted that these types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of this, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is that a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, one NSX-T Data Center Edge VM pair.
+
+## Deploy a stretched cluster private cloud
+
+Currently, Azure VMware Solution stretched clusters are in a limited availability phase. During the limited availability phase, you must contact Microsoft to request and qualify for support.
+
+## Prerequisites
+
+To request support, send an email request to **avsStretchedCluster@microsoft.com** with the following details:
+
+- Company name
+- Point of contact (email)
+- Subscription (a new, separate subscription is required)
+- Region requested (West Europe, UK South, Germany West Central)
+- Number of nodes in first stretched cluster (minimum 6, maximum 16 - in multiples of two)
+- Estimated provisioning date (used for billing purposes)
+
+When the support request details are received, quota will be reserved for a stretched cluster environment in the requested region. The subscription is enabled to deploy a stretched cluster SDDC through the Azure portal. A confirmation email will be sent to the designated point of contact within two business days, after which you should be able to [self-deploy a stretched cluster private cloud via the Azure portal](https://docs.microsoft.com/azure/azure-vmware/tutorial-create-private-cloud?tabs=azure-portal#create-a-private-cloud). Be sure to select **Hosts in two availability zones** to ensure that a stretched cluster gets deployed in the region of your choice.
++
+Once the private cloud is created, you can peer both availability zones (AZs) to your on-premises ExpressRoute circuit with Global Reach that helps connect your on-premises data center to the private cloud. Peering both the AZs will ensure that an AZ failure doesn't result in a loss of connectivity to your private cloud. Since an ExpressRoute Auth Key is valid for only one connection, repeat the [Create an ExpressRoute auth key in the on-premises ExpressRoute circuit](https://docs.microsoft.com/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#create-an-expressroute-auth-key-in-the-on-premises-expressroute-circuit) process to generate another authorization.
++
+Next, repeat the process to [peer ExpressRoute Global Reach](https://docs.microsoft.com/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud#peer-private-cloud-to-on-premises) two availability zones to the on-premises ExpressRoute circuit.
++
+## Supported scenarios
+
+The following scenarios are supported:
+
+- Workload connectivity to internet from both AZs via Customer vWAN or On-premises data center
+- Private DNS resolution
+- Placement policies (except for VM-AZ affinity)
+- Cluster scale out and scale in
+- The following SPBM policies are supported, with a PFTT of ΓÇ£Dual Site MirroringΓÇ¥ and SFTT of ΓÇ£RAID 1 (Mirroring)ΓÇ¥ enabled as the default policies for the cluster:
+ - Site disaster tolerance settings (PFTT):
+ - Dual site mirroring
+ - None - keep data on preferred
+ - None - keep data on non-preferred
+ - Local failures to tolerate (SFTT):
+ - 1 failure – RAID 1 (Mirroring)
+ - 1 failure – RAID 5 (Erasure coding), requires a minimum of 4 hosts in each AZ
+ - 2 failures – RAID 1 (Mirroring)
+ - 2 failures – RAID 6 (Erasure coding), requires a minimum of 6 hosts in each AZ
+ - 3 failures – RAID 1 (Mirroring)
+
+In this phase, while the creation of the private cloud and the first stretched cluster is enabled via the Azure portal, open a [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for other supported scenarios and configurations listed below. While doing so, make sure you select **Stretched Clusters** as a Problem Type.
+
+Once stretched clusters are made generally available, it's expected that all the following supported scenarios will be enabled in an automated self-service fashion.
+
+- HCX installation, deployment, removal, and support for migration
+- Connect a private cloud in another region to a stretched cluster private cloud
+- Connect two stretched cluster private clouds in a single region
+- Configure Active Directory as an identity source for vCenter Server
+- A PFTT of "Keep data on preferred" or "Keep data on non-preferred" requires keeping VMs in one of the availability zones. For such VMs, open a support ticket to ensure that those VMs are pinned to an availability zone.
+- Cluster addition
+- Cluster deletion
+- Private cloud deletion
+
+## Supported regions
+
+Azure VMware Solution stretched clusters are available in the following regions:
+
+- UK South
+- West Europe
+- Germany West Central
+
+## FAQ
+
+### Are any other regions planned?
+
+Currently, only the three regions listed above are planned for stretched cluster support.
+
+### What kind of SLA does Azure VMware Solution provide with the stretched clusters limited availability release?
+
+A private cloud created with a vSAN stretched cluster is designed to offer a 99.99% infrastructure availability commitment when the following conditions exist:
+- A minimum of 6 nodes are deployed in the cluster (3 in each availability zone)
+- When a VM storage policy of PFTT of "Dual-Site Mirroring" and an SFTT of 1 is used by the workload VMs
+- Compliance with the **Additional Requirements** captured in the [SLA details of Azure VMware Solution](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) is required to achieve the availability goals
+
+### Do I get to choose the availability zone in which a private cloud is deployed?
+
+No. A stretched cluster is created between two availability zones, while the third zone is used for deploying the witness node. Because all of the zones are effectively used for deploying a stretched cluster environment, a choice isn't provided to the customer. Instead, the customer chooses to deploy hosts in multiple AZs at the time of private cloud creation.
+
+### What are the limitations I should be aware of?
+
+- Once a private cloud has been created with a stretched cluster, it can't be changed to a standard cluster private cloud. Similarly, a standard cluster private cloud can't be changed to a stretched cluster private cloud after creation.
+- Scale-out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.
+- Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority.
+- The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ.
+- Preview features for standard private cloud environments aren't supported in a stretched cluster environment. For example, external storage options like disk pools and Azure NetApp Files (ANF), customer-managed keys, Public IP via NSX-T Data Center Edge, and others.
+- Disaster recovery add-ons such as VMware SRM, Zerto, and JetStream aren't currently supported in a stretched cluster environment.
+
+### What kind of latencies should I expect between the availability zones (AZs)?
+
+vSAN stretched clusters operate within a 5-millisecond round trip time (RTT) and 10 Gb/s or greater bandwidth between the AZs that host the workload VMs. The Azure VMware Solution stretched cluster deployment follows that guiding principle. Consider that information when deploying applications (with a PFTT of dual site mirroring, which uses synchronous writes) that have stringent latency requirements.
+
+### Can I mix stretched and standard clusters in my private cloud?
+
+No. A mix of stretched and standard clusters isn't supported within the same private cloud. A stretched or standard cluster environment is selected when you create the private cloud. Once a private cloud has been created with a stretched cluster, it's assumed that all clusters created within that private cloud are stretched in nature.
+
+### How much does the solution cost?
+
+Customers will be charged based on the number of nodes deployed within the private cloud.
+
+### Will I be charged for the witness node and for inter-AZ traffic?
+
+No. While in limited availability, customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
+
+### Which SKUs are available?
+
+Stretched clusters will solely be supported on the AV36 SKU.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
With this capability, you have the following features:
The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. :::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-expanded.png":::
+>[!IMPORTANT]
+>The use of Public IP down to the NSX Edge is not compatible with reverse DNS Lookup.
+ ## Configure a Public IP in the Azure portal 1. Log on to the Azure portal. 1. Search for and select Azure VMware Solution.
azure-vmware Enable Sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-sql-azure-hybrid-benefit.md
Last updated 06/14/2022
# Enable SQL Azure hybrid benefit for Azure VMware Solution (Preview)
-In this article, youΓÇÖll learn how to apply SQL Azure hybrid benefits to an Azure VMware Solution private cloud by configuring a placement policy. The placement policy defines the number of hosts that are running SQL.
+In this article, you'll learn how to configure SQL Azure hybrid benefits for an Azure VMware Solution private cloud by configuring a placement policy. The placement policy defines the hosts that are running SQL as well as the virtual machines on that host.
>[!IMPORTANT] > It is important to note that SQL benefits are applied at the host level.
-For example, if each host in Azure VMware Solution has 36 cores and you signal that two hosts run SQL, then SQL Azure hybrid benefit will apply to 72 cores.
+For example, if each host in Azure VMware Solution has 36 cores and you signal that two hosts run SQL, then SQL Azure hybrid benefit will apply to 72 cores irrespective of the number of SQL or other virtual machines on that host.
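The host-level arithmetic in the example can be sketched as follows (a trivial illustration of how the covered core count is derived):

```python
# SQL Azure hybrid benefit is applied at the host level: every core on a
# host signaled as running SQL is covered, regardless of how many SQL or
# other VMs run on that host.
cores_per_host = 36
sql_hosts = 2
covered_cores = cores_per_host * sql_hosts
print(covered_cores)
```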
## Configure host-VM placement policy 1. From your Azure VMware Solution private cloud, select Azure hybrid benefit, then Create host-VM placement policy.
For example, if each host in Azure VMware Solution has 36 cores and you signal t
1. Fill in the required fields for creating the placement policy. 1. **Name** – Select the name that identifies this policy.
- 2. **Type** – Select the type of policy. This type must be VM-Host affinity only.
+ 2. **Type** – Select the type of policy. This type must be a VM-Host affinity rule only.
 3. **Azure hybrid benefit** – Select the checkbox to apply the SQL Azure hybrid benefit.
- 4. **Cluster** – Select the necessary cluster. The policy is applicable per cluster only.
+ 4. **Cluster** – Select the correct cluster. The policy is scoped to hosts in this cluster only.
 1. **Enabled** – Select enabled to apply the policy immediately once created. :::image type="content" source="media/sql-azure-hybrid-benefit/create-placement-policy.png" alt-text="Diagram that shows how to create a host virtual machine placement policy using the host VM affinity."::: 3. Select the hosts and VMs that will be applied to the VM-Host affinity policy.
- 1. **Add Hosts** – Select the hosts that will be running SQL.
+ 1. **Add Hosts** – Select the hosts that will be running SQL. When hosts are replaced, policies are re-created on the new hosts automatically.
2. **Add VMs** ΓÇô Select the VMs that should run on the selected hosts. 3. **Review and Create** the policy. :::image type="content" source="media/sql-azure-hybrid-benefit/select-policy-host.png" alt-text="Diagram that shows how to create a host virtual machine affinity.":::
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/faq.md
For more information about Production Support tiers and SLAs, see Product Suppor
Technically, yes. Raise a support ticket from Azure portal to get this functionality enabled.
+## Does Microsoft BareMetal service store customer data outside the Azure region that a customer has chosen?
+
+No.
+ ## Next steps Learn more:
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/get-started.md
Title: Getting started
-description: Learn how to sign up, set up, and use NC2 on Azure Public Preview.
+description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azure Public Preview.
Last updated 07/01/2021
-# Getting started with NC2 on Azure
+# Getting started with Nutanix Cloud Clusters on Azure
-Learn how to sign up for, set up, and use NC2 on Azure Public Preview.
+Learn how to sign up for, set up, and use Nutanix Cloud Clusters (NC2) on Azure Public Preview.
## Sign up for the Public Preview
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/nc2-baremetal-overview.md
Title: What is BareMetal Infrastructure for NC2 on Azure?
+ Title: What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
description: Learn about the features BareMetal Infrastructure offers for NC2 workloads. Last updated 07/01/2022
-# What is BareMetal Infrastructure for NC2 on Azure?
+# What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
In this article, we'll give an overview of the features BareMetal Infrastructure offers for Nutanix workloads.
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Captioning can accompany real time or pre-recorded speech. Whether you're showin
The Speech service supports output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
+> [!TIP]
+> The Speech service provides [profanity filter](display-text-format.md#profanity-filter) options. You can specify whether to mask, remove, or show profanity.
+ The [SRT](https://docs.fileformat.com/video/srt/) (SubRip Text) timespan output format is `hh:mm:ss,fff`. ```srt
RECOGNIZING: Text=welcome to applied mathematics
RECOGNIZED: Text=Welcome to applied Mathematics course 201. ```
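The `hh:mm:ss,fff` timespan can be produced from a recognition offset with a few lines of code. This is a minimal sketch (the helper name and the use of `timedelta` are illustrative, not part of the Speech SDK):

```python
from datetime import timedelta

# Format an offset as an SRT timespan: hh:mm:ss,fff.
# Note the comma before the milliseconds; WebVTT uses a period instead.
def srt_timestamp(offset: timedelta) -> str:
    total_ms = int(offset.total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1_000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{ms:03}"

print(srt_timestamp(timedelta(minutes=1, seconds=2, milliseconds=345)))
```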
-## Profanity filter
-
-You can specify whether to mask, remove, or show profanity in recognition results.
-
-> [!NOTE]
-> Microsoft also reserves the right to mask or remove any word that is deemed inappropriate. Such words will not be returned by the Speech service, whether or not you enabled profanity filtering.
-
-The profanity filter options are:
-- `Masked`: Replaces letters in profane words with asterisk (*) characters. This is the default option.-- `Raw`: Include the profane words verbatim.-- `Removed`: Removes profane words.-
-For example, to remove profane words from the speech recognition result, set the profanity filter to `Removed` as shown here:
-
-```csharp
-speechConfig.SetProfanity(ProfanityOption.Removed);
-```
-```cpp
-speechConfig->SetProfanity(ProfanityOption::Removed);
-```
-```go
-speechConfig.SetProfanity(common.Removed)
-```
-```java
-speechConfig.setProfanity(ProfanityOption.Removed);
-```
-```javascript
-speechConfig.setProfanity(sdk.ProfanityOption.Removed);
-```
-```objective-c
-[self.speechConfig setProfanityOptionTo:SPXSpeechConfigProfanityOption.SPXSpeechConfigProfanityOption_ProfanityRemoved];
-```
-```swift
-self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_ProfanityRemoved)
-```
-```python
-speech_config.set_profanity(speechsdk.ProfanityOption.Removed)
-```
-```console
-spx recognize --file caption.this.mp4 --format any --profanity masked --output vtt file - --output srt file -
-```
-
-Profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. Profanity filter isn't applied to the result `LexicalForm` and `NormalizedForm` properties. Neither is the filter applied to the word level results.
- ## Language identification If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected be in the audio. The Speech service returns the most likely language in the audio.
cognitive-services Display Text Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/display-text-format.md
+
+ Title: Display text formatting with speech to text - Speech service
+
+description: An overview of key concepts for display text formatting with speech to text.
++++++ Last updated : 09/19/2022+
+zone_pivot_groups: programming-languages-speech-sdk-cli
++
+# Display text formatting with speech to text
+
+Speech-to-text offers an array of formatting features to ensure that the transcribed text is clear and legible. Below is an overview of these features and how each one is used to improve the overall clarity of the final text output.
+
+## ITN
+
+Inverse Text Normalization (ITN) is a process that converts spoken words into their written form. For example, the spoken word "four" is converted to the written form "4". This process is performed by the speech-to-text service and isn't configurable. Some of the supported text formats include dates, times, decimals, currencies, addresses, emails, and phone numbers. You can speak naturally, and the service formats text as expected. The following table shows the ITN rules that are applied to the text output.
+
+|Recognized speech|Display text|
+|||
+|`that will cost nine hundred dollars`|`That will cost $900.`|
+|`my phone number is one eight hundred, four five six, eight nine ten`|`My phone number is 1-800-456-8910.`|
+|`the time is six forty five p m`|`The time is 6:45 PM.`|
+|`I live on thirty five lexington avenue`|`I live on 35 Lexington Ave.`|
+|`the answer is six point five`|`The answer is 6.5.`|
+|`send it to support at help dot com`|`Send it to support@help.com.`|
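To make one of these rules concrete, here's a toy sketch of a single ITN conversion, spoken decimals like "six point five" becoming "6.5". This is purely illustrative; the service's ITN is built in and not configurable:

```python
import re

# Map spoken digit words to their written form.
DIGITS = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

# Match "<digit word> point <digit word>" as a spoken decimal.
_DECIMAL = re.compile(r"\b({0}) point ({0})\b".format("|".join(DIGITS)))

def itn_decimal(text: str) -> str:
    """Rewrite spoken single-digit decimals into written form."""
    return _DECIMAL.sub(
        lambda m: f"{DIGITS[m.group(1)]}.{DIGITS[m.group(2)]}", text
    )

print(itn_decimal("the answer is six point five"))
```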
+
+## Capitalization
+
+Speech-to-text models recognize words that should be capitalized to improve readability, accuracy, and grammar. For example, the Speech service will automatically capitalize proper nouns and words at the beginning of a sentence. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`i got an x l t shirt`|`I got an XL t-shirt.`|
+|`my name is jennifer smith`|`My name is Jennifer Smith.`|
+|`i want to visit new york city`|`I want to visit New York City.`|
+
+## Disfluency removal
+
+When speaking, it's common for someone to stutter, duplicate words, and say filler words like "uhm" or "uh". Speech-to-text can recognize such disfluencies and remove them from the display text. Disfluency removal is great for transcribing live unscripted speeches to read them back later. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`i uh said that we can go to the uhmm movies`|`I said that we can go to the movies.`|
+|`its its not that big of uhm a deal`|`It's not that big of a deal.`|
+|`umm i think tomorrow should work`|`I think tomorrow should work.`|
+
+## Punctuation
+
+Speech-to-text automatically punctuates your text to improve clarity. Punctuation is helpful for reading back call or conversation transcriptions. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`how are you`|`How are you?`|
+|`we can go to the mall park or beach`|`We can go to the mall, park, or beach.`|
+
+When you're using speech-to-text with continuous recognition, you can configure the Speech service to recognize explicit punctuation marks. Then you can speak punctuation aloud in order to make your text more legible. This is especially useful in a situation where you want to use complex punctuation without having to merge it later. Some examples are shown in this table.
+
+|Recognized speech|Display text|
+|||
+|`they entered the room dot dot dot`|`They entered the room...`|
+|`i heart emoji you period`|`I <3 you.`|
+|`the options are apple forward slash banana forward slash orange period`|`The options are apple/banana/orange.`|
+|`are you sure question mark`|`Are you sure?`|
+
+Use the Speech SDK to enable dictation mode when you're using speech-to-text with continuous recognition. This mode will cause the speech configuration instance to interpret word descriptions of sentence structures such as punctuation.
+
+```csharp
+speechConfig.EnableDictation();
+```
+```cpp
+speechConfig->EnableDictation();
+```
+```go
+speechConfig.EnableDictation()
+```
+```java
+speechConfig.enableDictation();
+```
+```javascript
+speechConfig.enableDictation();
+```
+```objective-c
+[self.speechConfig enableDictation];
+```
+```swift
+self.speechConfig!.enableDictation()
+```
+```python
+speech_config.enable_dictation()
+```
+
+## Profanity filter
+
+You can specify whether to mask, remove, or show profanity in the final transcribed text. Masking replaces profane words with asterisk (*) characters so that you can keep the original sentiment of your text while making it more appropriate for certain situations.
+
+> [!NOTE]
+> Microsoft also reserves the right to mask or remove any word that is deemed inappropriate. Such words will not be returned by the Speech service, whether or not you enabled profanity filtering.
+
+The profanity filter options are:
+- `Masked`: Replaces letters in profane words with asterisk (*) characters. Masked is the default option.
+- `Raw`: Include the profane words verbatim.
+- `Removed`: Removes profane words.
+
+For example, to remove profane words from the speech recognition result, set the profanity filter to `Removed` as shown here:
+
+```csharp
+speechConfig.SetProfanity(ProfanityOption.Removed);
+```
+```cpp
+speechConfig->SetProfanity(ProfanityOption::Removed);
+```
+```go
+speechConfig.SetProfanity(common.Removed)
+```
+```java
+speechConfig.setProfanity(ProfanityOption.Removed);
+```
+```javascript
+speechConfig.setProfanity(sdk.ProfanityOption.Removed);
+```
+```objective-c
+[self.speechConfig setProfanityOptionTo:SPXSpeechConfigProfanityOption.SPXSpeechConfigProfanityOption_ProfanityRemoved];
+```
+```swift
+self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_ProfanityRemoved)
+```
+```python
+speech_config.set_profanity(speechsdk.ProfanityOption.Removed)
+```
+```console
+spx recognize --file caption.this.mp4 --format any --profanity masked --output vtt file - --output srt file -
+```
+
+Profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. Profanity filter isn't applied to the result `LexicalForm` and `NormalizedForm` properties. Neither is the filter applied to the word level results.
++
+## Next steps
+
+* [Speech-to-text quickstart](get-started-speech-to-text.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
- Previously updated : 03/14/2022+ Last updated : 09/21/2022 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
-# Azure Cognitive Services containers
+# What are Azure Cognitive Services containers?
Azure Cognitive Services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure Cognitive Services.
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
JSON objects can include nested JSON objects and simple property/values. An arra
``` ## Inference Explainability
-Personalizer can help you to understand which features are the most and least influential when determining the best action. When enabled, inference explainability includes feature scores from the underlying model into the Rank API response, so your application receives this information at the time of inference.
+Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions. Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration – Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration – Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
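For example, the update request body can be built like this. The surrounding configuration values here are placeholders; start from the configuration returned by the Service Configuration – Get API for your resource:

```python
import json

# Sketch: build the Service Configuration - Update request body.
# The fields below are placeholder values, not your real configuration;
# fetch the current configuration from the Get API first.
current_config = {
    "rewardWaitTime": "PT10M",
    "defaultReward": 0.0,
}

# Add the flag that enables inference explainability.
updated_config = dict(current_config, IsInferenceExplainabilityEnabled=True)
body = json.dumps(updated_config)
print(body)
```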
Enabling inference explainability will add a collection to the JSON response fro
}, { "id": "SportsArticle",
- "probability": 0
+ "probability": 0.15
}, { "id": "NewsArticle",
- "probability": 0.2
+ "probability": 0.05
} ], "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
Enabling inference explainability will add a collection to the JSON response fro
} ```
-Recall that Personalizer will either return the _best action_ as determined by the model or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](/azure/cognitive-services/personalizer/concepts-exploration).
+In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
+
+Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](/azure/cognitive-services/personalizer/concepts-exploration).
For the best actions returned by Personalizer, the feature scores can provide general insight where:
-* Larger positive scores provide more support for the model choosing the best action.
-* Larger negative scores provide more support for the model not choosing the best action.
-* Scores close to zero have a small effect on the decision to choose the best action.
+* Larger positive scores provide more support for the model choosing this action.
+* Larger negative scores provide more support for the model not choosing this action.
+* Scores close to zero have a small effect on the decision to choose this action.
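To make this concrete, here's a minimal sketch that selects the returned best action from a Rank response shaped like the JSON example above. Field names follow that example; the probability values are illustrative and no SDK is assumed:

```python
# A Rank API response fragment shaped like the example above.
# The probability for the best action is illustrative.
rank_response = {
    "ranking": [
        {"id": "EntertainmentArticle", "probability": 0.8},
        {"id": "SportsArticle", "probability": 0.15},
        {"id": "NewsArticle", "probability": 0.05},
    ]
}

# The best action is the one with the highest probability score.
best_action = max(rank_response["ranking"], key=lambda a: a["probability"])
print(best_action["id"])
```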
### Important considerations for Inference Explainability * **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements. Future versions of Inference Explainability will mitigate this issue. * **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm. Future releases will enable the use of this capability with additional exploration algorithms.
## Next steps
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
Title: Overview about managed connectors in Azure Logic Apps
-description: Learn about Microsoft-managed connectors to create automated integration workflows in Azure Logic Apps.
+ Title: Managed connector overview
+description: Learn about Microsoft-managed connectors hosted on Azure in Azure Logic Apps.
ms.suite: integration
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Network Security Groups (NSGs) needed to configure virtual networks closely rese
You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container App Environment.
-Using custom user-defined routes (UDRs) or ExpressRoutes, other than with UDRs of selected destinations that you own, are not yet supported for Container App Environments with VNETs. Therefore, securing a Container App Environment with a firewall is not yet supported.
+Using custom user-defined routes (UDRs) or ExpressRoutes, other than with UDRs of selected destinations that you own, are not yet supported for Container App Environments with VNETs. Therefore, securing outbound traffic with a firewall is not yet supported.
## NSG allow rules
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
tags: billing
Previously updated : 04/20/2022 Last updated : 09/23/2022
To set up the billing account, you must transition the billing of Azure subscrip
Before you start the setup, we recommend you do the following actions:
+- Before you transition to the Microsoft Customer Agreement, **delete users in the EA portal who don't need access to the new billing account**.
+ - Deleting the users will simplify the transition and improve the security of your new billing account.
- **Understand your new billing account** - Your new account simplifies billing for your organization. [Get a quick overview of your new billing account](../understand/mca-overview.md) - **Verify your access to complete the setup**
Before you start the setup, we recommend you do the following actions:
To complete the setup, you need the following access: - Owner of the billing account that was created when the Microsoft Customer Agreement was signed. To learn more about billing accounts, see [Your billing account](../understand/mca-overview.md#your-billing-account).
-&mdash; And &mdash;
+— And —
- Enterprise administrator on the enrollment that is renewed. ### Start migration and get permission needed to complete setup
An Azure Active Directory (AD) tenant is selected for the new billing account wh
Your new account only supports users from the tenant that was selected while signing the Microsoft Customer Agreement. If users with administrative permission on your Enterprise Agreement are part of the tenant, they'll get access to the new billing account during the setup. If they're not part of the tenant, they can't access the new billing account unless you invite them.
-When you invite the users, they're added to the tenant as guest users and get access to the billing account. To invite the users, guest access must be turned on for the tenant. For more information, see [control guest access in Azure Active Directory](/microsoftteams/teams-dependencies#control-guest-access-in-azure-active-directory). If the guest access is turned off, contact the global administrators of your tenant to turn it on. <!-- Todo - How can they find their global administrator -->
+When you invite the users, they're added to the tenant as guest users and get access to the billing account. To invite the users, guest access must be turned on for the tenant. For more information, see [control guest access in Azure Active Directory](/microsoftteams/teams-dependencies#control-guest-access-in-azure-active-directory). If the guest access is turned off, contact the global administrators of your tenant to turn it on.
## View replaced features
Support benefits don't transfer as part of the transition. Purchase a new suppor
### Past charges and balance
-Charges and credits balance prior to transition can be viewed in your Enterprise Agreement enrollment through the Azure portal. <!--Todo - Add a link for this-->
+Charges and credits balance prior to transition can be viewed in your Enterprise Agreement enrollment through the Azure portal.
### When should the setup be completed?
cost-management-billing Reservation Discount App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-app-service.md
After you buy an Azure App Service Isolated v2 Reserved Instance, the reservatio
### How the discount is applied to Azure App Service
-A reservation discount is _use-it-or-lose-it_. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours. When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+A reservation discount is _use-it-or-lose-it_. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+
+When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
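The hour-by-hour mechanics described above can be sketched as a small model. This is an illustrative sketch only; the function and variable names are mine, not an Azure API:

```python
# Illustrative model of a use-it-or-lose-it reservation (not an Azure API):
# each hour, the discount covers matching usage up to the reserved quantity;
# any shortfall for that hour is lost and never carried forward.
def apply_reservation(reserved_qty, usage_by_hour):
    """Return (discounted_hours, lost_hours) for a series of hourly usages."""
    discounted = lost = 0.0
    for usage in usage_by_hour:
        covered = min(reserved_qty, usage)   # discount applies to matching usage
        discounted += covered
        lost += reserved_qty - covered       # unused quantity this hour is lost
    return discounted, lost

# One reserved instance over three hours: two hours of matching usage, then
# one hour with no matching resource, whose benefit is lost.
print(apply_reservation(1, [1, 1, 0]))  # → (2.0, 1.0)
```

Note that a deallocated resource emits no usage for the hour, so under this model the reserved quantity for that hour goes unused unless another matching resource picks it up.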
### Reservation discount for Isolated v2 Instances
cost-management-billing Understand Azure Cache For Redis Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges.md
A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount applied to Azure Cache for Redis
+
+The Azure Cache for Redis reserved capacity discount is applied to your caches on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running caches. When these caches don't run the full hour, the reservation is automatically applied to other caches matching the reservation attributes. The discount can apply to caches that are running concurrently. If you don't have caches that run for an entire hour that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
cost-management-billing Understand Cosmosdb Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md
A reservation discount is "*use-it-or-lose-it*". So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Reservation discount applied to Azure Cosmos DB accounts
+
+A reservation discount is applied to [provisioned throughput](../../cosmos-db/request-units.md) in terms of request units per second (RU/s) on an hour-by-hour basis. For Azure Cosmos DB resources that don't run the full hour, the reservation discount is automatically applied to other Cosmos DB resources that match the reservation attributes. The discount can apply to Azure Cosmos DB resources that are running concurrently. If you don't have Cosmos DB resources that run for the full hour and that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
cost-management-billing Understand Disk Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-disk-reservations.md
The Azure disk reservation discount is a use-it-or-lose-it discount. It's applie
When you delete a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resource is found, the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount examples
+
+The following examples show how the Azure disk reservation discount applies depending on your deployment.
cost-management-billing Understand Reservation Charges Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges-mariadb.md
A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount applied to Azure Database for MariaDB
+
+The Azure Database for MariaDB reserved capacity discount is applied to your running MariaDB servers on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for MariaDB servers. For MariaDB servers that don't run the full hour, the reservation is automatically applied to other Azure Database for MariaDB servers matching the reservation attributes. The discount can apply to Azure Database for MariaDB servers that are running concurrently. If you don't have a MariaDB server that runs for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
cost-management-billing Understand Reservation Charges Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges-mysql.md
After you buy an Azure Database for MySQL reserved capacity, the reservation dis
## How reservation discount is applied
-A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.</br>
+A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount applied to Azure Database for MySQL
+
+The Azure Database for MySQL reserved capacity discount is applied to your running MySQL servers on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for MySQL servers. For MySQL servers that don't run the full hour, the reservation is automatically applied to other Azure Database for MySQL servers matching the reservation attributes. The discount can apply to Azure Database for MySQL servers that are running concurrently. If you don't have a MySQL server that runs for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
cost-management-billing Understand Reservation Charges Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges-postgresql.md
After you buy an Azure Database for PostgreSQL Single server reserved capacity,
## How reservation discount is applied
-A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.</br>
+A reservation discount is ***use-it-or-lose-it***. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+ ## Discount applied to Azure Database for PostgreSQL Single server
-The Azure Database for PostgreSQL Single server reserved capacity discount is applied to running your PostgreSQL Single server on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for PostgreSQL Single server. For PostgreSQL Single servers that don't run the full hour, the reservation is automatically applied to other Azure Database for PostgreSQL Single server matching the reservation attributes. The discount can apply to Azure Database for PostgreSQL Single servers that are running concurrently. If you don't have an PostgreSQL Single server that run for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
+The Azure Database for PostgreSQL Single server reserved capacity discount is applied to your running PostgreSQL Single servers on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running Azure Database for PostgreSQL Single servers. For PostgreSQL Single servers that don't run the full hour, the reservation is automatically applied to other Azure Database for PostgreSQL Single servers matching the reservation attributes. The discount can apply to Azure Database for PostgreSQL Single servers that are running concurrently. If you don't have a PostgreSQL Single server that runs for the full hour that matches the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
The following examples show how the Azure Database for PostgreSQL Single server reserved capacity discount applies depending on the number of cores you bought, and when they're running.
To understand and view the application of your Azure Reservations in billing usa
## Next steps
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Understand Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reservation-charges.md
A reservation discount is "*use-it-or-lose-it*". So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount applied to running SQL databases
+
+The SQL Database reserved capacity discount is applied to running SQL databases on an hourly basis. The reservation that you buy is matched to the compute usage emitted by the running SQL databases. For SQL databases that don't run the full hour, the reservation is automatically applied to other SQL databases matching the reservation attributes. The discount can apply to SQL databases that are running concurrently. If you don't have SQL databases that run for the full hour that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
cost-management-billing Understand Storage Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-storage-charges.md
For more information about Azure Blob storage and Azure Data Lake storage Gen 2
For information about Azure Blob storage reservation pricing, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) and [Azure Data Lake Storage Gen 2 pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). For information about Azure Files storage reservation pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files).

## How the reservation discount is applied

+The reserved capacity discount is applied to supported storage resources on an hourly basis. The reserved capacity discount is a "use-it-or-lose-it" discount. If you don't have any block blobs, Azure file shares, or Azure Data Lake Storage Gen2 resources that meet the terms of the reservation for a given hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours. When you delete a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Discount examples
+
+The following examples show how the reserved capacity discount applies, depending on the deployments.
cost-management-billing Understand Suse Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-suse-reservation-charges.md
A reservation discount is "*use-it-or-lose-it*". So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
+Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources, or scale in other resources, to use your available reservation hours with other workloads.
+
+## Review RedHat VM usage before you buy
+
+Get the product name from your usage data and buy the RedHat plan with the same type and size.
For example, if you buy a plan for SUSE Linux Enterprise Server for HPC Priority
- 1 deployed VM with 3 or 4 vCPUs,
- or 0.77 or about 77% of a VM with 5 or more vCPUs.
-The ratio for 5 or more vCPUs is 2.6. So a reservation for SUSE with a VM with 5 or more vCPUs covers a only portion of the software cost, which is about 77%.
+The ratio for 5 or more vCPUs is 2.6. So a reservation for SUSE with a VM with 5 or more vCPUs covers only a portion of the software cost, which is about 77%.
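The 77% figure is just ratio arithmetic, which can be checked in a couple of lines. In this sketch, the 2.6 ratio comes from the text; the purchased plan's ratio of 2.0 is an assumed value implied by the quoted percentage:

```python
# Sketch of the SUSE coverage arithmetic. The 5+ vCPU ratio (2.6) is stated in
# the text; the purchased plan's ratio of 2.0 is an assumed value implied by
# the ~77% coverage figure (2 / 2.6 ≈ 0.77).
purchased_ratio = 2.0   # meter ratio of the plan bought (assumed)
required_ratio = 2.6    # meter ratio for a VM with 5 or more vCPUs
coverage = purchased_ratio / required_ratio
print(f"{coverage:.0%}")  # → 77%
```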
The following tables show the software plans you can buy a reservation for, their associated usage meters, and the ratios for each.
data-factory Concepts Data Flow Schema Drift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-schema-drift.md
Previously updated : 09/09/2021 Last updated : 09/19/2022 # Schema drift in mapping data flow
Columns coming into your data flow from your source definition are defined as "d
In a source transformation, schema drift is defined as reading columns that aren't defined in your dataset schema. To enable schema drift, check **Allow schema drift** in your source transformation. When schema drift is enabled, all incoming fields are read from your source during execution and passed through the entire flow to the Sink. By default, all newly detected columns, known as *drifted columns*, arrive as a string data type. If you wish for your data flow to automatically infer data types of drifted columns, check **Infer drifted column types** in your source settings.
When schema drift is enabled, all incoming fields are read from your source during execution and passed through the entire flow to the Sink.
In a sink transformation, schema drift is when you write additional columns on top of what is defined in the sink data schema. To enable schema drift, check **Allow schema drift** in your sink transformation. If schema drift is enabled, make sure the **Auto-mapping** slider in the Mapping tab is turned on. With this slider on, all incoming columns are written to your destination. Otherwise you must use rule-based mapping to write drifted columns.
To explicitly reference drifted columns, you can quickly generate mappings for t
In the generated Derived Column transformation, each drifted column is mapped to its detected name and data type. In the above data preview, the column 'movieId' is detected as an integer. After **Map Drifted** is clicked, movieId is defined in the Derived Column as `toInteger(byName('movieId'))` and included in schema views in downstream transformations.

## Next steps

In the [Data Flow Expression Language](data-transformation-functions.md), you'll find additional facilities for column patterns and schema drift including "byName" and "byPosition".
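As a loose analogy for the **Infer drifted column types** option, and not the actual data flow expression language: drifted columns arrive as strings, and inference promotes a column to a stronger type only when every value parses.

```python
# Loose illustration (not the ADF expression language): drifted columns arrive
# as strings; type inference promotes a column only when every value parses.
def infer_drifted_column(values):
    try:
        return [int(v) for v in values]  # e.g. 'movieId' inferred as integer
    except ValueError:
        return values                    # otherwise it stays a string column

print(infer_drifted_column(["1", "2", "3"]))  # → [1, 2, 3]
print(infer_drifted_column(["a", "2"]))       # → ['a', '2']
```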
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-databricks-delta-lake.md
During copy activity execution, if the cluster you configured has been terminate
2. In the **Databricks Runtime Version** drop-down, select a Databricks runtime version.
-3. Turn on [Auto Optimize](/azure/databricks/delta/optimizations/auto-optimize) by adding the following properties to your [Spark configuration](/azure/databricks/clusters/configure#spark-config):
+3. Turn on [Auto Optimize](/azure/databricks/optimizations/auto-optimize) by adding the following properties to your [Spark configuration](/azure/databricks/clusters/configure#spark-config):
```
spark.databricks.delta.optimizeWrite.enabled true
```
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
When you copy data from/to Azure SQL Database with [Always Encrypted](/sql/relat
1. Store the [Column Master Key (CMK)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true) in an [Azure Key Vault](../key-vault/general/overview.md). Learn more on [how to configure Always Encrypted by using Azure Key Vault](/azure/azure-sql/database/always-encrypted-azure-key-vault-configure?tabs=azure-powershell)
-2. Make sure to great access to the key vault where the [Column Master Key (CMK)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true) is stored. Refer to this [article](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true#key-vaults) for required permissions.
+2. Make sure to grant access to the key vault where the [Column Master Key (CMK)](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true) is stored. Refer to this [article](/sql/relational-databases/security/encryption/create-and-store-column-master-keys-always-encrypted?view=sql-server-ver15&preserve-view=true#key-vaults) for required permissions.
3. Create linked service to connect to your SQL database and enable 'Always Encrypted' function by using either managed identity or service principal.
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-get-metadata-activity.md
Previously updated : 09/22/2021 Last updated : 09/20/2022
You can use the Get Metadata activity to retrieve the metadata of any data in Az
To use a Get Metadata activity in a pipeline, complete the following steps:

1. Search for _Get Metadata_ in the pipeline Activities pane, and drag a Get Metadata activity to the pipeline canvas.
-1. Select the new Get Metadata activity on the canvas if it is not already selected, and its **Dataset** tab, to edit its details.
+1. Select the new Get Metadata activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.
1. Choose a dataset, or create a new one with the New button. Then you can specify filter options and add columns from the available metadata for the dataset. :::image type="content" source="media/control-flow-get-metadata-activity/get-metadata-activity.png" alt-text="Shows the UI for a Get Metadata activity.":::
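For reference, a Get Metadata activity in pipeline JSON looks roughly like the fragment below. The activity and dataset names are placeholders, and the field list shown is only one possible selection; check the JSON your pipeline generates for the authoritative shape:

```json
{
    "name": "GetMetadataExample",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "MyDataset",
            "type": "DatasetReference"
        },
        "fieldList": ["exists", "lastModified", "childItems"]
    }
}
```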
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
Previously updated : 09/09/2021 Last updated : 09/20/2022
databox-online Azure Stack Edge Gpu 2209 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2209-release-notes.md
+
+ Title: Azure Stack Edge 2209 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2209 release.
++
+
+++ Last updated : 09/21/2022+++
+# Azure Stack Edge 2209 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2209 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2209** release, which maps to software version **2.2.2088.5593**. This software can be applied to your device if you're running at least **Azure Stack Edge 2207** (2.2.2307.5375).
+
+> [!IMPORTANT]
+> Azure Stack Edge 2209 update contains critical security fixes. As with any new release, we strongly encourage customers to apply this update at the earliest opportunity.
+
+## What's new
+
+The 2209 release has the following features and enhancements:
+
+**Security update** - This release includes a security update for the cluster connect feature of Azure Arc-enabled Kubernetes clusters. The Arc agent running on your Azure Stack Edge device will be upgraded to the latest version. No further action is required of you after the update to Azure Stack Edge 2209 is complete.
+
+If you have questions or concerns, [open a support case through the Azure portal](azure-stack-edge-contact-microsoft-support.md).
+
+## Known issues in 2209 release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> - In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> - Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> - Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> - The final command will look like this: `sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd"`. After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob not being updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service, respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. The ":" character is also required for the Event Grid IoT Edge module to function on an Azure Stack Edge device, and for other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Title: Install Update on Azure Stack Edge Pro GPU device | Microsoft Docs
-description: Describes how to apply updates using the Azure portal and local web UI for Azure Stack Edge Pro GPU device and the Kubernetes cluster on the device
+description: Describes how to apply updates using the Azure portal and local web UI for Azure Stack Edge Pro GPU device and the Kubernetes cluster on the device.
Previously updated : 08/04/2022 Last updated : 09/20/2022

# Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest update
-The current update is Update 2207. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2209. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
-- Device software version - **2.2.2037.5375**
-- Device Kubernetes version - **2.2.2037.5375**
-- Kubernetes server version - **v1.22.6**
-- IoT Edge version: **0.1.0-beta15**
-- Azure Arc version: **1.6.6**
-- GPU driver version: **515.48.07**
-- CUDA version: **11.7**
+- Device software version: Azure Stack Edge 2209 (2.2.2088.5593)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2209 (2.2.2088.5593)
+- Kubernetes server version: v1.22.6
+- IoT Edge version: 0.1.0-beta15
+- Azure Arc version: 1.7.18
+- GPU driver version: 515.48.07
+- CUDA version: 11.7
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2207-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2209-release-notes.md).
-**To apply 2207 update, your device must be running 2106 or later.**
+**To apply the 2209 update, your device must be running version 2207.**
-- If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
-- You can update to 2106 from an older version and then install 2207.
+- If you are not running the minimum required version, you'll see this error:
+
+ *Update package cannot be installed as its dependencies are not met.*
+- You can update to 2207 from 2106, and then install 2209.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2207**.
+ The update listing appears as **Azure Stack Edge Update 2209**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
databox-online Azure Stack Edge Gpu Troubleshoot Activation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-activation.md
The following table summarizes the errors related to device activation and the c
| Error message| Recommended resolution |
||--|
-| If the diagnostic setting creation fails for your key vault, you'll see this error. <!--<br> ![Key vault error 3](./medi#create-a-storage-account-for-your-logs). |
-| If the storage account creation fails, for example, because an account already exists for the name you specified, you'll see this error. <!--<br> ![Key vault error 3](./medi#create-a-storage-account-for-your-logs). |
+| If the diagnostic setting creation fails for your key vault, you'll see this error. <!--<br> ![Key vault error 3](./medi). |
+| If the storage account creation fails, for example, because an account already exists for the name you specified, you'll see this error. <!--<br> ![Key vault error 3](./medi). |
|If the system assigned managed identity for your Azure Stack Edge resource is deleted, you'll see this error. <!--<br> ![Key vault error 3](./medi#recover-key-vault) |
| If the managed identity doesn't have access to the key vault, you'll see this error. <!--<br> ![Key vault error 3](./medi#recover-key-vault). |
The following table summarizes the errors related to device activation and the c
||--|
| If the key vault resource is moved across resource groups or subscriptions, you'll see this error. <!--<br> ![Key vault error 3](./medi#recover-key-vault). |
| If the subscription you are using is moved across tenants, you'll see this error. <!--<br> ![Key vault error 3](./medi#recover-key-vault). |
-| If the storage account resource that is used for audit logs, is moved across resource groups or subscriptions, you won't see an error. | You can [Create a new storage account and configure it to store the audit logs](../key-vault/general/howto-logging.md#create-a-storage-account-for-your-logs). |
+| If the storage account resource that is used for audit logs is moved across resource groups or subscriptions, you won't see an error. | You can [Create a new storage account and configure it to store the audit logs](../key-vault/general/howto-logging.md). |
## Other errors
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Microsoft Sentinel is a scalable cloud solution for security information event m
The Defender for IoT and Microsoft Sentinel integration delivers out-of-the-box capabilities to SOC teams. This helps them to efficiently and effectively view, analyze, and respond to OT security alerts, and the incidents they generate in a broader organizational threat context.
-Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **IoT OT Threat Monitoring with Defender for IoT** solution.
+Bring Defender for IoT's rich telemetry into Microsoft Sentinel to bridge the gap between OT and SOC teams with the Microsoft Sentinel data connector for Defender for IoT and the **Microsoft Defender for IoT** solution.
-The **IoT OT Threat Monitoring with Defender for IoT** solution installs out-of-the-box security content to your Microsoft Sentinel, including analytics rules to automatically open incidents, workbooks to visualize and monitor data, and playbooks to automate response actions
+The **Microsoft Defender for IoT** solution installs out-of-the-box security content to your Microsoft Sentinel, including analytics rules to automatically open incidents, workbooks to visualize and monitor data, and playbooks to automate response actions.
Once Defender for IoT data is ingested into Microsoft Sentinel, security experts can work with IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).

### Workbooks
-To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the **IoT OT Threat Monitoring with Defender for IoT** solution.
+To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the **Microsoft Defender for IoT** solution.
-Defenders for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
+Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
For example, workbooks can display alerts by any of the following dimensions:
The following table shows how both the OT team, on the Defender for IoT side, an
For more information, see:

-- [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
-- [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/detect-threats-custom.md)
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md)
+- [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-advanced-threat-monitoring.md#detect-threats-out-of-the-box-with-defender-for-iot-data)
+- [Create custom analytics rules to detect threats](../../sentinel/detect-threats-custom.md)
- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Make the downloaded activation file accessible to the sensor console admin so th
+## Site management options from the Azure portal
+
+When onboarding a new OT sensor to Defender for IoT, you can add it to a new or existing site. When working with OT networks, organizing your sensors into sites allows you to manage your sensors more efficiently. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
+
+To edit a site's details, select the site's name on the **Sites and sensors** page. In the **Edit site** pane that opens on the right, modify any of the following values:
+
+- **Display name**: Enter a meaningful name for your site.
+
+- **Tags**: (Optional) Enter values for the **Key** and **Value** fields for each new tag you want to add to your site. Select **+ Add** to add a new tag.
+
+- **Owner**: For sites with OT sensors only. Enter one or more email addresses for the user you want to designate as the owner of the devices at this site. The site owner is inherited by all devices at the site, and is shown on the IoT device entity pages and in incident details in Microsoft Sentinel.
+
+ In Microsoft Sentinel, use the **AD4IoT-SendEmailtoIoTOwner** and **AD4IoT-CVEAutoWorkflow** playbooks to automatically notify device owners about important alerts or incidents. For more information, see [Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md).
+
+When you're done, select **Save** to save your changes.
+ ## Sensor management options from the Azure portal

Sensors that you've onboarded to Defender for IoT are listed on the Defender for IoT **Sites and sensors** page. Select a specific sensor name to drill down to more details for that sensor.
-Use the options on the **Sites and sensor** page and a sensor details page to do any of the following tasks. If you're on the **Sites and sensors** page, select multiple sensors to apply your actions in bulk using toolbar options. For individual sensors, use the **Sites and sensors** toolbar options, the **...** options menu at the right of a sensor row, or the options on a sensor details page.
+Use the options on the **Sites and sensors** page and a sensor details page to do any of the following tasks. If you're on the **Sites and sensors** page, select multiple sensors to apply your actions in bulk using toolbar options. For individual sensors, use the **Sites and sensors** toolbar options, the **...** options menu at the right of a sensor row, or the options on a sensor details page.
|Task |Description |
|||
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-recover.png" border="false"::: **Recover a password** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. Enter the secret identifier obtained on the sensor's sign-in screen. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. |
-|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then elect a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
+|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
|:::image type="icon" source="medi#install-the-sensor-software). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
defender-for-iot Release Notes Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md
+
+ Title: Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+description: Learn about the updates available in each version of the Microsoft Defender for IoT solution, available from the Microsoft Sentinel content hub.
Last updated : 09/22/2022
+# Release notes for the Microsoft Defender for IoT solution in Microsoft Sentinel
+
+This article lists the updates to out-of-the-box security content available from each version of the **Microsoft Defender for IoT** solution. The **Microsoft Defender for IoT** solution is available from the Microsoft Sentinel content hub.
+
+The **Microsoft Defender for IoT** solution enhances the integration between Defender for IoT and Microsoft Sentinel, helping to streamline SOC workflows to analyze, investigate, and respond efficiently to OT incidents.
+
+For more information, see:
+
+- [What's new in Microsoft Defender for IoT?](release-notes.md)
+- [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json)
+- [Tutorial: Investigate and detect threats for IoT devices](/azure/sentinel/iot-advanced-threat-monitoring?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json)
+
+## Version 2.1
+
+**Released**: September 2022
+
+New features in this version include:
+
+- Solution name changed to **Microsoft Defender for IoT**
+
+- Workbook improvements:
+
+ - A new overview dashboard
+ - A new vulnerability dashboard
+ - Inventory dashboard improvements
+
+- New SOC playbooks for automation with CVEs, for triaging incidents that involve sensitive devices, and for sending email notifications to device owners about new incidents.
+
+For more information, see [Updates to the Microsoft Defender for IoT solution](release-notes.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub).
+
+## Version 2.0
+
+**Released**: September 2022
+
+This version provides enhanced experiences for managing, installing, and updating the solution package in the Microsoft Sentinel content hub.
+
+For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](/azure/sentinel/sentinel-solutions-deploy).
+
+## Version 1.0.14
+
+**Released**: July 2022
+
+New features in this version include:
+
+- [Microsoft Sentinel incident synch with Defender for IoT alerts](release-notes.md#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts)
+- IoT device entities displayed in related Microsoft Sentinel incidents.
+## Version 1.0.13
+
+**Released**: March 2022
+
+New features in this version include:
+
+- A bug fix to prevent new incidents from being created in Microsoft Sentinel each time an alert in Defender for IoT is updated or deleted.
+- A new analytics rule for the **No traffic on sensor detected** Defender for IoT alert.
+- Updates in the **Unauthorized PLC changes** analytics rule to support the **Illegal Beckhoff AMS Command** Defender for IoT alert.
+- A new, deep link to Defender for IoT alerts directly from related Microsoft Sentinel incidents.
+
+## Earlier versions
+
+For more information about earlier versions of the **Microsoft Defender for IoT** solution, contact us via the [Defender for IoT community](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/bd-p/MicrosoftDefenderIoT).
+
+## Next steps
+
+Learn more in [What's new in Microsoft Defender for IoT?](release-notes.md) and the [Microsoft Sentinel documentation](/azure/sentinel/).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Last updated 08/08/2022
# What's new in Microsoft Defender for IoT?

This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last nine months. Features released earlier than nine months ago are listed in [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates |
|||
-|**OT networks** |**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm |
+|**OT networks** |**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>- **Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub) |
+
+### Investigation enhancements with IoT device entities in Microsoft Sentinel
+
+Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
+
+The IoT device entity page provides contextual device information about an IoT device, with basic device details and device owner contact information. Device owners are defined by site in the **Sites and sensors** page in Defender for IoT.
+
+The IoT device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
+You can also now hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
+For more information, see [Investigate further with IoT device entities](../../sentinel/iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities) and [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Updates to the Microsoft Defender for IoT solution in Microsoft Sentinel's content hub
+
+This month, we've released version 2.0 of the **Microsoft Defender for IoT** solution in Microsoft Sentinel's content hub, previously known as the **IoT/OT Threat Monitoring with Defender for IoT** solution.
+
+Updates in this version of the solution include:
+
+- **A name change**. If you'd previously installed the **IoT/OT Threat Monitoring with Defender for IoT** solution in your Microsoft Sentinel workspace, the solution is automatically renamed to **Microsoft Defender for IoT**, even if you don't update the solution.
+
+- **Workbook improvements**: The **Defender for IoT** workbook now includes:
+
+ - A new **Overview** dashboard with key metrics on the device inventory, threat detection, and security posture. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-overview.png" alt-text="Screenshot of the new Overview tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-overview.png":::
+
+ - A new **Vulnerabilities** dashboard with details about CVEs shown in your network and their related vulnerable devices. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-vulnerabilities.png" alt-text="Screenshot of the new Vulnerability tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-vulnerabilities.png":::
+
+ - Improvements on the **Device inventory** dashboard, including access to device recommendations, vulnerabilities, and direct links to the Defender for IoT device details pages. The **Device inventory** dashboard in the **IoT/OT Threat Monitoring with Defender for IoT** workbook is fully aligned with the Defender for IoT [device inventory data](how-to-manage-device-inventory-for-organizations.md).
+
+- **Playbook updates**: The **Microsoft Defender for IoT** solution now supports the following SOC automation functionality with new playbooks:
+
+ - **Automation with CVE details**: Use the *AD4IoT-CVEAutoWorkflow* playbook to enrich incident comments with CVEs of related devices based on Defender for IoT data. The incidents are triaged, and if the CVE is critical, the asset owner is notified about the incident by email.
+
+ - **Automation for email notifications to device owners**. Use the *AD4IoT-SendEmailtoIoTOwner* playbook to have a notification email automatically sent to a device's owner about new incidents. Device owners can then reply to the email to update the incident as needed. Device owners are defined at the site level in Defender for IoT.
+
+ - **Automation for incidents with sensitive devices**: Use the *AD4IoT-AutoTriageIncident* playbook to automatically update an incident's severity based on the devices involved in the incident, and their sensitivity level or importance to your organization. For example, any incident involving a sensitive device can be automatically escalated to a higher severity level.
+
+For more information, see [Investigate Microsoft Defender for IoT incidents with Microsoft Sentinel](/azure/sentinel/iot-advanced-threat-monitoring?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json).
## August 2022
The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that
This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
-Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new **AD4IoT-AutoAlertStatusSync** playbook. After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended#update-alert-statuses-in-defender-for-iot) to ensure that the new playbook works as expected.
+Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new [**AD4IoT-AutoAlertStatusSync** playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot). After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-advanced-threat-monitoring.md#playbook-prerequisites) to ensure that the new playbook works as expected.
For more information, see:

-- [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [Integrate Defender for IoT and Sentinel](../../sentinel/iot-advanced-threat-monitoring.md)
+- [Update alert statuses playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot)
- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
- [View alerts on your sensor](how-to-view-alerts.md)
For more information, see [Use Azure Monitor workbooks in Microsoft Defender for
The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
-For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md).
### Edit and delete devices from the Azure portal (Public preview)
The following Defender for IoT options and configurations have been moved, remov
The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+For information on integrating with Microsoft Sentinel, see [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) and [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md).
### Apache Log4j vulnerability
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 09/20/2022 Last updated : 09/22/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is available in the following regions:
- East US 2
- West Europe
+## Data residency
+
+Azure DNS Private Resolver doesn't move or store customer data out of the region where the resolver is deployed.
+ ## DNS resolver endpoints

For more information about endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
For an example of setting an expiration, see [Subscribe with advanced filters](h
## Event handlers
-From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. Event Grid supports several handler types. You can use a supported Azure service, your own webhook or a [partner destination](#partner-destination) as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event handlers, the event is retried until the handler returns a status code of `200 ΓÇô OK`. For Azure Storage Queue, the events are retried until the Queue service successfully processes the message push into the queue.
+From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes some further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own webhook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. For HTTP webhook event handlers, the event is retried until the handler returns a status code of `200 – OK`. For Azure Storage Queue, the events are retried until the Queue service successfully processes the message pushed into the queue.
For information about delivering events to any of the supported Event Grid handlers, see [Event handlers in Azure Event Grid](event-handlers.md).
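The webhook retry contract above can be sketched as a minimal, framework-agnostic handler. The validation event type and `validationResponse` field follow the documented Event Grid webhook handshake; the function name and plumbing are illustrative, not part of any SDK.

```python
import json

VALIDATION_EVENT = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_event_grid_request(body: str):
    """Return (status_code, response_body) for one webhook delivery."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == VALIDATION_EVENT:
            # Echo the validation code so Event Grid can confirm endpoint ownership.
            code = event["data"]["validationCode"]
            return 200, json.dumps({"validationResponse": code})
    # Acknowledge with 200 OK so Event Grid stops retrying this delivery;
    # heavy processing should be deferred, for example to a queue.
    return 200, ""
```

Any non-200 response (or a timeout) causes Event Grid to retry the delivery, so a handler should acknowledge quickly and do long-running work asynchronously.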
-### Partner destination
-A partner destination is a resource that is provisioned by a [partner](#partners) and represents a webhook URL on a partner service or application. Partner destinations are created for the purpose of forwarding events to a partner system to enable event-driven integration across platforms. This way, a partner destination can be seen as a type of [event handler](#event-handlers) that you can configure in your event subscription for any kind of topic. For more information, see [Partner Events Overview](partner-events-overview.md).
- ## Security Event Grid provides security for subscribing to topics, and publishing topics. When subscribing, you must have adequate permissions on the resource or Event Grid topic. When publishing, you must have a SAS token or key authentication for the topic. For more information, see [Event Grid security and authentication](security-authentication.md).
event-grid Deliver Events To Partner Destinations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/deliver-events-to-partner-destinations.md
- Title: Azure Event Grid - deliver events to partner destinations
-description: This article explains how to use a partner destination as a handler for events.
- Previously updated : 03/31/2022--
-# Deliver events to a partner destination (Azure Event Grid)
-In the Azure portal, when creating an event subscription for a topic (system topic, custom topic, domain topic, or partner topic) or a domain, you can specify a partner destination as an endpoint. This article shows you how to create an event subscription using a partner destination so that events are delivered to a partner system.
-
-## Overview
-As an end user, you give your partner the authorization to create a partner destination in a resource group within your Azure subscription. For details, see [Authorize partner to create a partner destination](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic).
-
-A partner creates a channel that in turn creates a partner destination in the Azure subscription and a resource group you provided to the partner. Prior to using it, you must activate the partner destination. Once activated, you can select the partner destination as a delivery endpoint when creating or updating event subscriptions.
-
-## Activate a partner destination
-Before you can use a partner destination as an endpoint for an event subscription, you need to activate the partner destination.
-
-1. In the search bar of the Azure portal, search for and select **Event Grid Partner Destinations**.
-1. On the **Event Grid Partner Destinations** page, select the partner destination in the list.
-1. Review the activate message, and select **Activate** on the page or on the command bar to activate the partner topic before the expiration time mentioned on the page.
-1. Confirm that the activation status is set to **Activated**.
--
-## Create an event subscription using partner destination
-
-In the Azure portal, when creating an [event subscription](subscribe-through-portal.md), follow these steps:
-
-1. In the **Endpoint details** section, select **Partner Destination** for **Endpoint Type**.
-1. Click **Select an endpoint**.
-
- :::image type="content" source="./media/deliver-events-to-partner-destinations/select-endpoint-link.png" alt-text="Screenshot showing the Create Event Subscription page with Select an endpoint link selected.":::
-1. On the **Select Partner Destination** page, select the **Azure subscription** and **resource group** that contains the partner destination.
-1. For **Partner Destination**, select a partner destination.
-1. Select **Confirm selection**.
-
- :::image type="content" source="./media/deliver-events-to-partner-destinations/subscription-partner-destination.png" alt-text="Screenshot showing the Select Partner Destination page.":::
-1. On the **Create Event Subscription** page, confirm that you see **Endpoint Type** is set to **Partner Destination**, and the endpoint is set to a partner destination, and then select **Create**.
-
- :::image type="content" source="./media/deliver-events-to-partner-destinations/partner-destination-configure.png" alt-text="Screenshot showing the Create Event Subscription page with a partner destination configured.":::
-
-## Next steps
-See the following articles:
--- [Authorize partner to create a partner destination](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic)-- [Create a channel](onboard-partner.md#create-a-channel) - see the steps to create a channel with partner destination as the channel type.
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
Title: Onboard as an Azure Event Grid partner using Azure portal description: Use Azure portal to onboard an Azure Event Grid partner. Previously updated : 03/31/2022 Last updated : 09/21/2022 # Onboard as an Azure Event Grid partner using the Azure portal
In a nutshell, enabling your service's events to be consumed by users typicall
1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription. 1. [Create a **partner registration**](#create-a-partner-registration). 1. [Create a **namespace**](#create-a-partner-namespace).
-1. [Create a **channel** and a **partner topic** or a **partner destination** in a single step](#create-a-channel).
+1. [Create a **channel** and a **partner topic** in a single step](#create-a-channel).
> [!IMPORTANT]
- > You may be able to create an event channel (legacy), which supports only partner topics, not partner destinations. **Channel** is the new routing resource type and is the preferred option, which supports both sending events via partner topics and receiving events via partner destinations. An **event channel** is a legacy resource and will be deprecated soon.
+ > You may be able to create an event channel (legacy), which supports partner topics. **Channel** is the new routing resource type and is the preferred option, which supports sending events via partner topics. An **event channel** is a legacy resource and will be deprecated soon.
1. Test the Partner Events functionality end to end. For step #5, you should decide what kind of user experience you want to provide. You have the following options:
If you selected **Channel name header** for **Partner topic routing mode**, crea
:::image type="content" source="./media/onboard-partner/create-channel-button.png" lightbox="./media/onboard-partner/create-channel-button.png" alt-text="Image showing the selection of Create Channel button on the command bar of the Event Grid Partner Namespace page."::: 1. On the **Create Channel - Basics** page, follow these steps. 1. Enter a **name** for the channel. The channel name should be unique across the region in which it's created.
- 1. For the channel type, select **Partner Topic** or **Partner Destination**.
+ 1. For the channel type, select **Partner Topic**.
- Partner topics are resources that hold published events. Partner destinations define target endpoints or services to which events are delivered.
-
- Select **Partner Topic** if you want to **forward events to a partner topic** that holds events to be processed by a handler later.
-
- Select **Partner Destination** if you want to **forward events to a partner service** that processes the events.
+ Partner topics are resources that hold published events. Select **Partner Topic** if you want to **forward events to a partner topic** that holds events to be processed by a handler later.
3. If you selected **Partner Topic**, enter the following details: 1. **ID of the subscription** in which the partner topic will be created. 1. **Resource group** in which the partner topic will be created.
If you selected **Channel name header** for **Partner topic routing mode**, crea
:::image type="content" source="./media/onboard-partner/event-type-definition-2.png" alt-text="Screenshot that shows the definition of a sample event type."::: :::image type="content" source="./media/onboard-partner/event-type-definition-3.png" alt-text="Screenshot that shows a list with the event type definition that was added.":::
- 1. If you selected **Partner Destination**, enter the following details:
- 1. **ID of the subscription** in which the partner topic will be created.
- 1. **Resource group** in which the partner topic will be created.
- 1. **Name** of the partner topic.
- 1. In the **Endpoint Details** section, specify the following values.
- 1. For **Endpoint URL**, specify the endpoint URL to which events are delivered.
- 1. For **Endpoint context**, enter additional information about the destination to which events will be sent that can help end users understand the location to which events are delivered.
- 1. For **Azure AD tenant ID**, specify the Azure Active Directory tenant ID used by Event Grid to authenticate to the destination endpoint URL.
- 1. For **Azure AD app ID or URI**, specify the Azure AD application ID (also called client ID) or application URI used by Event Grid to authenticate to the destination endpoint URL.
-
- :::image type="content" source="./media/onboard-partner/create-channel-partner-destination.png" alt-text="Image showing the Create Channel page with partner destination options.":::
1. Select the **Next: Additional Features** link at the bottom of the page. 1. On the **Additional Features** page, follow these steps: 1. To set your own activation message that can help end users activate the associated partner topic, select the check box next to **Set your own activation message**, and enter the message.
If you selected **Channel name header** for **Partner topic routing mode**, crea
**Partner topic** option: :::image type="content" source="./media/onboard-partner/create-channel-review-create.png" alt-text="Image showing the Create Channel - Review + create page.":::
- **Partner destination** option:
- :::image type="content" source="./media/onboard-partner/create-channel-review-create-destination.png" alt-text="Image showing the Create Channel - Review + create page when the Partner Destination option is selected.":::
## Manage a channel
If you created a channel you may be interested to update the configuration once
If you selected **Source attribute in event** for **Partner topic routing mode**, create an event channel by following steps in this section. > [!IMPORTANT]
-> - **Channel** is the new routing resource type and is the preferred option. An **event channel** is a legacy resource and will be deprecated soon.
+> - **Channel** is the new routing resource type and is the preferred option.
1. Go to the **Overview** page of the namespace you created.
If you selected **Source attribute in event** for **Partner topic routing mode**
:::image type="content" source="./media/onboard-partner/create-event-channel-additional-features-page.png" alt-text="Create event channel - additional features page"::: 1. On the **Review + create**, review the settings, and select **Create** to create the event channel.
-## Activate partner topics and partner destinations
+## Activate partner topics
Before your users can subscribe to partner topics you create in their Azure subscriptions, they'll have to activate the partner topics first. For details, see [Activate a partner topic](subscribe-to-partner-events.md#activate-a-partner-topic).
-Similarly, before your user can use the partner destinations you create in their subscriptions, they'll have to activate partner destinations first. For details, see [Activate a partner destination](deliver-events-to-partner-destinations.md#activate-a-partner-destination).
## Next steps
See the following articles for more details about the Partner Events feature:
- [Partner Events overview for partners](partner-events-overview-for-partners.md) - [Subscribe to partner events](subscribe-to-partner-events.md) - [Subscribe to Auth0 events](auth0-how-to.md)-- [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
For either publishing events or receiving events, you create the same kind of Ev
1. Communicate your interest in becoming a partner by sending an email to [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com). Once you contact us, we'll guide you through the onboarding process and help your service get an entry card on our [Azure Event Grid gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) so that your service can be found on the Azure portal. 2. Create a [partner registration](#partner-registration). This is a global resource that you usually need to create only once. 3. Create a [partner namespace](#partner-namespace). This resource exposes an endpoint that you use to publish events to Azure. When creating the partner namespace, provide the partner registration you created.
-4. Customer authorizes you to create a partner resource, either a [partner topic](concepts.md#partner-topics) or a [partner destination](concepts.md#partner-destination), in customer's Azure subscription.
+4. Customer authorizes you to create a [partner topic](concepts.md#partner-topics) in customer's Azure subscription.
5. The customer accesses your web page or executes a command (you define the user experience) to request either the flow of your events to Azure or the ability to receive Microsoft events into your system. In response to that request, you set up your system to do so with input from the customer. For example, the customer may have the option to select certain events from your system that should be forwarded to Azure.
-6. According to customer's requirements, you create a partner topic or a partner destination under the customer's Azure subscription, resource group and with the name the customer provides to you. It's achieved by using channels. Create a [channel](#channel) of type `partner topic`, if the customer wants to receive your events on Azure, or `partner destination` if the customer wants to send events to your system. Channels are resources contained by partner namespaces.
-7. Customer activates the partner topic or the partner destination that you created in their Azure subscription and resource group.
-8. If you created a partner topic, start publishing events to your partner namespace. If you created a partner destination, expect events coming to your system endpoints defined in the partner definition.
+6. Create a partner topic in the customer's Azure subscription and resource group by using channels. [Channels](#channel) are resources contained by partner namespaces.
+7. Customer activates the partner topic that you created in their Azure subscription and resource group.
+8. Start publishing events to your partner namespace.
>[!NOTE] > You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) to every Azure subscription where you want create Event Grid resources. Otherwise, operations to create resources will fail.
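Step 8 (publishing to the partner namespace endpoint) might be sketched as follows. The `aeg-sas-key` header is Event Grid's documented key-authentication header; the endpoint URL, key, and event fields below are placeholders, and real code would send the assembled request with any HTTP client.

```python
import json

def build_publish_request(endpoint: str, access_key: str, events: list):
    """Assemble URL, headers, and body for a key-authenticated publish."""
    headers = {
        "aeg-sas-key": access_key,        # Event Grid key-authentication header
        "Content-Type": "application/json",
    }
    return endpoint, headers, json.dumps(events)
```

A POST of the returned body with the returned headers to the namespace endpoint publishes the batch of events.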
Registrations are global. That is, they aren't associated with a particular Azur
### Channel A Channel is a nested resource to a Partner Namespace. A channel has two main purposes:
- - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is a customer's resource to which events are routed when a partner system publishes events. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel is the kind of resource, along with partner topics and partner destinations that enable bi-directional event integration.
+ - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is a customer's resource to which events are routed when a partner system publishes events.
A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted. - It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes.
A Channel is a nested resource to a Partner Namespace. A channel has two main pu
>[!IMPORTANT] >Event types can be managed on the channel and once the values are updated, changes are reflected immediately on the associated partner topic.
- A channel of type ``partner destination`` is used to route events to a partner system. When creating a channel of this type, you provide your webhook URL where you receive the events published by Azure Event Grid. Once the channel is created, a customer can use the partner destination resource when creating an [event subscription](subscribe-through-portal.md) as the destination to deliver events to the partner system. Event Grid publishes events with the request including an http header `aeg-channel-name` too. Its value can be used to associate the incoming events with a specific user who in the first place requested the partner destination.
-
- A customer can use your partner destination to send your service any kind of events available to [Event Grid](overview.md).
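As a rough sketch (not part of any Event Grid SDK), a publisher-side router for the two routing modes mentioned above might look like this. `aeg-channel-name` is the documented channel-name header; the function and the source-to-channel mapping are hypothetical.

```python
def resolve_channel(headers: dict, event: dict, source_to_channel: dict) -> str:
    """Pick the target channel under the two routing modes."""
    # Channel name header mode: the publish request names the channel directly.
    name = headers.get("aeg-channel-name")
    if name:
        return name
    # Source attribute mode: the event's source attribute selects the channel.
    return source_to_channel[event["source"]]
```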
- ### Partner namespace A partner namespace is a regional resource that has an endpoint to publish events to Azure Event Grid. Partner namespaces contain either channels or event channels (legacy resource). You must create partner namespaces in regions where customers request partner topics or destinations because channels and their corresponding partner resources must reside in the same region. You can't have a channel in a given region with its related partner topic, for example, located in a different region.
An Event channel is the resource that was first released with Partner Events to
A verified partner is a partner organization whose identity has been validated by Microsoft. We strongly encourage your organization to get verified. Customers seek to engage with partners that have been verified, as such verification provides greater assurance that they're dealing with a legitimate organization. Once verified, you benefit from having a presence on the [Event Grid Gallery](https://portal.azure.com/#create/Microsoft.EventGridPartnerTopic) where customers can discover your service easily and have a first-party experience when subscribing to your events, for example.
-## Customer's authorization to create partner topics and partner destinations
+## Customer's authorization to create partner topics
-Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer Azure subscription and it's time bound. You must create the channel before the expiration date set by the customer. You should have documentation suggesting the customer an adequate window of time for configuring your system to send or receive events and to create the channel before the authorization expires. If you attempt to create a channel without authorization or after it has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription.
+Customers authorize you to create partner topics in their Azure subscription. The authorization is granted for a given resource group in the customer's Azure subscription, and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest that the customer allow an adequate window of time for you to configure your system to send or receive events and to create the channel before the authorization expires. If you attempt to create a channel without authorization, or after it has expired, the channel creation fails and no resource is created in the customer's Azure subscription.
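The time-bound authorization rule can be stated as a one-line check. This is only an illustration of the rule; the function and parameter names are not part of any Event Grid API.

```python
from datetime import datetime, timezone

def can_create_channel(expiration_utc: datetime, now_utc: datetime) -> bool:
    """Channel creation is allowed only before the authorization expires."""
    return now_utc < expiration_utc
```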
> [!NOTE]
-> Event Grid started **enforcing authorization checks to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid started **enforcing authorization checks to create partner topics** around June 30th, 2022. Your documentation should ask your customers to grant you the authorization as a prerequisite before you create a channel.
>[!IMPORTANT]
-> **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
+> **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic in the customer's Azure subscription.
-## Partner topic and partner destination activation
+## Partner topic activation
The customer activates the partner topic you've created for them. At that point, the channel's activation status changes to **Activated**. Once a channel is activated, you can start publishing events to the partner namespace endpoint that contains the channel.
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
You receive events from a partner in a [partner topic](concepts.md#partner-topic
> [!NOTE] > You must [register the Azure Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with every Azure subscription where you want create Event Grid resources. Otherwise, operations to create resources will fail.
-## Send events to a partner
-
-The process to send events to a partner is similar to that of receiving events from a partner. You send events to a partner using a [partner destination](concepts.md#partner-destination) that's created by the partner upon your request. A partner destination is a kind of resource that contains information such as the partner's endpoint URL to which Event Grid sends events. Here are the steps to send events to a partner.
-
-1. [Authorize partner](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic) to create a partner destination in a resource group you designate. Authorizations are stored in partner configurations.
-2. **Request partner to create a partner destination** resource in the specified Azure resource group in your Azure subscription. Prior to creating a partner destination, the partner should configure its system to be able to receive and, if supported, route your Microsoft events within its platform.
-1. After the partner creates a partner destination in your Azure subscription and resource group, [activate](deliver-events-to-partner-destinations.md#activate-a-partner-destination) your partner destination.
-1. [Subscribe to events](deliver-events-to-partner-destinations.md#create-an-event-subscription-using-partner-destination) using an event subscriptions on any kind of topic available to you: system topic (Azure services), custom topic or domain (your custom solutions) or a partner topic from another partner. When configuring your event subscription, select partner destination as the endpoint type and select the partner destination to which your events are going to start flowing.
-- ## Why should I use Partner Events? You may want to use the Partner Events feature if you've one or more of the following requirements.
A verified partner is a partner organization whose identity has been validated b
You manage the following types of resources. - **Partner topic** is the resource where you receive your events from the partner. -- **Partner destination** is a resource that represents the partner system to which you can send events.-- **[Event subscriptions](subscribe-through-portal.md)** is where you select what events to forward to an Azure service, a partner destination or to a public webhook on Azure or elsewhere.
+- **[Event subscriptions](subscribe-through-portal.md)** is where you select what events to forward to an Azure service or to a public webhook on Azure or elsewhere.
- **Partner configurations** is the resource that holds your authorizations to partners to create partner resources. ## Grant authorization to create partner topics
-You must authorize partners to create partner topics or partner destinations before they attempt to create those resources. If you don't grant your authorization, the partners' attempt to create the partner resource will fail.
+You must authorize partners to create partner topics before they attempt to create those resources. If you don't grant your authorization, the partners' attempt to create the partner resource will fail.
-You consent the partner to create partner topics or partner destinations by creating a **partner configuration** resource. You add a partner authorization to a partner configuration identifying the partner and providing an authorization expiration time by which a partner topic/destination must be created. The only types of resources that partners can create with your permission are partner topics and partner destinations.
+You authorize the partner to create partner topics by creating a **partner configuration** resource. You add a partner authorization to a partner configuration, identifying the partner and providing an authorization expiration time by which a partner topic must be created. The only type of resource that partners can create with your permission is a partner topic.
>[!IMPORTANT] > A verified partner isn't an authorized partner. Even if a partner has been vetted by Microsoft, you still need to authorize it before the partner can create resources on your behalf.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time. > [!NOTE]
-> Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
+> Event Grid started enforcing authorization checks to create partner topics around June 30th, 2022.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples leave a sample expiration time in the UTC format.
Following example shows the way to create a partner configuration resource that
:::image type="content" source="./media/subscribe-to-partner-events/partner-configurations.png" alt-text="Event Grid Partner Configurations page showing the list of partner configurations and the link to create a partner registration."::: 1. On the **Create Partner Configuration** page, do the following steps:
- 1. In the **Project Details** section, select the **Azure subscription** and the **resource group** where you want to allow the partner to create a partner topic or partner destination.
+ 1. In the **Project Details** section, select the **Azure subscription** and the **resource group** where you want to allow the partner to create a partner topic.
1. In the **Partner Authorizations** section, specify a default expiration time for partner authorizations defined in this configuration.
- 1. To provide your authorization for a partner to create partner topics or partner destinations in the specified resource group, select **+ Partner Authorization** link.
+ 1. To provide your authorization for a partner to create partner topics in the specified resource group, select **+ Partner Authorization** link.
:::image type="content" source="./media/subscribe-to-partner-events/partner-authorization-configuration.png" alt-text="Create Partner Configuration page with the Partner Authorization link selected.":::
-1. On the **Add partner authorization to create resources** page, you see a list of **verified partners**. A verified partner is a partner whose identity has been validated by Microsoft. You can select a verified partner, and select **Add** button at the bottom to give the partner the authorization to add a partner topic or a partner destination in your resource group. This authorization is effective up to the expiration time.
+1. On the **Add partner authorization to create resources** page, you see a list of **verified partners**. A verified partner is a partner whose identity has been validated by Microsoft. You can select a verified partner, and then select the **Add** button at the bottom to give the partner the authorization to add a partner topic in your resource group. This authorization is effective up to the expiration time.
You also have an option to authorize a **non-verified partner.** Unless the partner is an entity that you know well, for example, an organization within your company, it's strongly encouraged that you only work with verified partners. If the partner isn't yet verified, encourage them to get verified by asking them to contact the Event Grid team at askgrid@microsoft.com.
Subscribing to the partner topic tells Event Grid where you want your partner ev
1. On the **Create Event Subscription** page, do the following steps: 1. Enter a **name** for the event subscription. 1. For **Filter to Event Types**, select types of events that your subscription will receive.
- 1. For **Endpoint Type**, select an Azure service (Azure Function, Storage Queues, Event Hubs, Service Bus Queue, Service Bus Topic, Hybrid Connections. etc.), Web Hook, or Partner Destination.
+ 1. For **Endpoint Type**, select an Azure service (Azure Function, Storage Queues, Event Hubs, Service Bus Queue, Service Bus Topic, Hybrid Connections, and so on), or webhook.
1. Click the **Select an endpoint** link. In this example, let's use an Azure Event Hubs destination as the endpoint. :::image type="content" source="./media/subscribe-to-partner-events/select-endpoint.png" lightbox="./media/subscribe-to-partner-events/select-endpoint.png" alt-text="Image showing the configuration of an endpoint for an event subscription.":::
See the following articles for more details about the Partner Events feature:
- [Partner Events overview for customers](partner-events-overview.md) - [Partner Events overview for partners](partner-events-overview-for-partners.md) - [Onboard as a partner](onboard-partner.md)-- [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
This release corresponds to REST API version 2021-10-15-preview, which includes
- [Partner Events overview for partners](partner-events-overview-for-partners.md) - [Onboard as an Event Grid partner](onboard-partner.md) - [Subscribe to partner events](subscribe-to-partner-events.md)
- - [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
- New REST API - [Channels](/rest/api/eventgrid/controlplane-version2021-10-15-preview/channels) - [Partner Configurations](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-configurations)
- - [Partner Destinations](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-destinations)
- [Verified Partners](/rest/api/eventgrid/controlplane-version2021-10-15-preview/verified-partners)
firewall Long Running Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/long-running-sessions.md
Previously updated : 09/21/2022 Last updated : 09/22/2022
Azure Firewall is designed to be available and redundant. Every effort is made to avoid service disruptions. However, there are a few scenarios where Azure Firewall can potentially drop long running TCP sessions.
-## Scenarios impacting long running connections
+## Scenarios that impact long running TCP sessions
The following scenarios can potentially drop long running TCP sessions: - Scale down
Azure Firewall constantly monitors VM instances and recovers them automatically
## Applications sensitive to TCP session resets
-Session disconnection isn't an issue for resilient applications that can handle session reset gracefully. However, there are few applications (like traditional SAP GUI and SAP RFC based apps) which are sensitive to sessions resets. Secure such sensitive applications with Network Security Groups (NSGs).
+Session disconnection isn't an issue for resilient applications that can handle session reset gracefully. However, there are a few applications (like traditional SAP GUI and SAP RFC based apps) that are sensitive to session resets. Secure sensitive applications with Network Security Groups (NSGs).
## Network security groups
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 05/12/2022 Last updated : 09/21/2022
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
value. However, the scope isn't locked to the value and it can be changed to another scope. The following example of `parameterScopes` is for a _strongType_ parameter named
- **backupPolicyId** that sets a scope for resource selection when the assignment is edited in the
+ `backupPolicyId` that sets a scope for resource selection when the assignment is edited in the
Portal. ```json
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
any. - `updatedOn` (string): The Universal ISO 8601 DateTime format of the assignment update time, if any.
+- `evidenceStorages` (object): An array of storage containers that holds attestation evidence for policy assignments with a `manual` effect. The `displayName` property is the name of the storage account. The `evidenceStorageAccountID` property is the resource ID of the storage account. The `evidenceBlobContainer` property is the blob container name in which you plan to store the evidence.
-## Enforcement Mode
+ ```json
+ {
+ "properties": {
+      "displayName": "A contingency plan should be in place to ensure operational continuity for each Azure subscription.",
+ "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionId}",
+ "metadata": {
+ "evidenceStorages": [
+ {
+ "displayName": "Default evidence storage",
+ "evidenceStorageAccountId": "/subscriptions/{subscriptionId}/resourceGroups/{rg-name}/providers/Microsoft.Storage/storageAccounts/{storage-account-name}",
+ "evidenceBlobContainer": "evidence-container"
+ }
+ ]
+ }
+ }
+ }
+ ```
+
+## Enforcement mode
The **enforcementMode** property provides customers the ability to test the outcome of a policy on existing resources without initiating the policy effect or triggering entries in the
-[Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md). This scenario is
+[Azure Activity log](../../../azure-monitor/essentials/platform-logs-overview.md).
+
+This scenario is
commonly referred to as "What If" and aligns to safe deployment practices. **enforcementMode** is different from the [Disabled](./effects.md#disabled) effect, as that effect prevents resource evaluation from happening at all.
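As a minimal sketch of this "What If" pattern (the display name and definition ID are placeholders), an assignment that evaluates compliance without enforcing the effect sets `enforcementMode` to `DoNotEnforce`:

```json
{
  "properties": {
    "displayName": "Audit-only rollout of a tagging policy",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionId}",
    "enforcementMode": "DoNotEnforce"
  }
}
```

Setting the property back to `Default` (or omitting it) re-enables enforcement without changing anything else about the assignment.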
same policy definition is reusable with a different set of parameters for a diff
reducing the duplication and complexity of policy definitions while providing flexibility. ## Identity
-For policy assignments with effect set to **deployIfNotExisit** or **modify**, it is required to have an identity property to do remediation on non-compliant resources. When using identity, the user must also specify a location for the assignment.
+
+For policy assignments with effect set to **deployIfNotExists** or **modify**, an identity property is required to do remediation on non-compliant resources. When using identity, the user must also specify a location for the assignment.
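As an illustrative sketch (the location, display name, and definition ID are placeholders), an assignment that uses a system-assigned managed identity pairs the `identity` object with a top-level `location`:

```json
{
  "location": "westus",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "displayName": "Deploy diagnostic settings if they don't exist",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionId}"
  }
}
```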
> [!NOTE] > A single policy assignment can be associated with only one system- or user-assigned managed identity. However, that identity can be assigned more than one role if necessary.
For policy assignments with effect set to **deployIfNotExisit** or **modify**, i
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).-
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 09/01/2021+ Last updated : 09/21/2022 + # Understand Azure Policy effects
These effects are currently supported in a policy definition:
- [Deny](#deny) - [DeployIfNotExists](#deployifnotexists) - [Disabled](#disabled)
+- [Manual (preview)](#manual-preview)
- [Modify](#modify) The following effects are _deprecated_:
definitions as `constraintTemplate` is deprecated.
location must be publicly accessible. > [!WARNING]
- > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret.
+ > Don't use SAS URIs, URL tokens, or anything else that could expose secrets in plain text.
- If _Base64Encoded_, paired with property `content` to provide the base 64 encoded constraint template. See
definitions as `constraintTemplate` is deprecated.
- An _array_ that includes the [kind](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) of Kubernetes object to limit evaluation to.
- - Defining `["*"]` for _kinds_ is disallowed.
+ - Defining `["*"]` for _kinds_ is disallowed.
- **values** (optional) - Defines any parameters and values to pass to the Constraint. Each value must exist in the Constraint template CRD.
related resources to match.
- Doesn't apply if **type** is a resource that would be underneath the **if** condition resource. - For _ResourceGroup_, would limit to the **if** condition resource's resource group or the resource group specified in **ResourceGroupName**.
- - For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation.
+ - For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation.
- Default is _ResourceGroup_. - **EvaluationDelay** (optional) - Specifies when the existence of the related resources should be evaluated. The delay is only
location of the Constraint template to use in Kubernetes to limit the allowed co
## DeployIfNotExists Similar to AuditIfNotExists, a DeployIfNotExists policy definition executes a template deployment
-when the condition is met. Policy assignments with effect set as DeployIfNotExists require a [managed identity](../how-to/remediate-resources.md) to do remediation.
+when the condition is met. Policy assignments with effect set as DeployIfNotExists require a [managed identity](../how-to/remediate-resources.md) to do remediation.
> [!NOTE] > [Nested templates](../../../azure-resource-manager/templates/linked-templates.md#nested-template)
related resources to match and the template deployment to execute.
- Doesn't apply if **type** is a resource that would be underneath the **if** condition resource. - For _ResourceGroup_, would limit to the **if** condition resource's resource group or the resource group specified in **ResourceGroupName**.
- - For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation.
+ - For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation.
- Default is _ResourceGroup_. - **EvaluationDelay** (optional) - Specifies when the existence of the related resources should be evaluated. The delay is only
Example: Gatekeeper v2 admission control rule to allow only the specified contai
} ```
+## Manual (preview)
+
+The new `manual` (preview) effect enables you to define and track your own custom attestation
+resources. Unlike other policy definitions that actively scan for evaluation, the `manual` effect
+allows for manual changes to the compliance state. To change the compliance state for a manual policy,
+you'll need to create an attestation for that compliance state.
+
+> [!NOTE]
+> During Public Preview, support for manual policy is available through various Microsoft Defender
+> for Cloud regulatory compliance initiatives. If you are a Microsoft Defender for Cloud [Premium tier](https://azure.microsoft.com/pricing/details/defender-for-cloud/) customer, refer to their experience overview.
+
+The following example targets Azure subscriptions and sets the initial compliance state to `Unknown`.
+
+```json
+{
+ "if": {
+ "field": "type",
+ "equals": "Microsoft.Resources/subscriptions"
+ },
+ "then": {
+ "effect": "manual",
+ "details": {
+ "defaultState": "Unknown"
+ }
+ }
+}
+```
+
+The `defaultState` property has three possible values:
+
+- **Unknown**: The initial, default state of the targeted resources.
+- **Compliant**: Resource is compliant according to your manual policy standards.
+- **Non-compliant**: Resource is non-compliant according to your manual policy standards.
+
+The Azure Policy compliance engine evaluates all applicable resources to the default state specified
+in the definition (`Unknown` if not specified). An `Unknown` compliance state indicates that you
+must manually attest the compliance state of the resource.
+
+The following screenshot shows how a manual policy assignment with the `Unknown`
+state appears in the Azure portal:
+
+![Resource compliance table in the Azure portal showing an assigned manual policy with a compliance reason of 'unknown.'](./manual-policy-portal.png)
+
+When a policy definition with the `manual` effect is assigned, you have the option to include **evidence**: optional supplemental information that supports the custom compliance attestation. Evidence itself is stored in Azure Storage, and you can specify the storage blob container in the [policy assignment's metadata](../concepts/assignment-structure.md#metadata) under the `evidenceStorages` property. Further details of the evidence file are described in the attestation JSON resource.
+
+### Attestations
+
+`Microsoft.PolicyInsights/attestations`, called an attestation resource, is a new proxy resource type
+that sets the compliance state for targeted resources in a manual policy. You can have only one
+attestation per resource for an individual policy. In preview, attestations are available
+only through the Azure Resource Manager (ARM) API.
+
+Below is an example of creating a new attestation resource:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
+```
+
+#### Request body
+
+Below is a sample attestation resource JSON object:
+
+```json
+"properties": {
+ "policyAssignmentId": "/subscriptions/{subscriptionID}/providers/microsoft.authorization/policyassignments/{assignmentID}",
+ "policyDefinitionReferenceId": "{definitionReferenceID}",
+ "complianceState": "Compliant",
+ "expiresOn": "2023-07-14T00:00:00Z",
+ "owner": "{AADObjectID}",
+ "comments": "This subscription has passed a security audit. See attached details for evidence",
+ "evidence": [
+ {
+ "description": "The results of the security audit.",
+ "sourceUri": "https://gist.github.com/contoso/9573e238762c60166c090ae16b814011"
+ },
+ {
+ "description": "Description of the attached evidence document.",
+ "sourceUri": "https://storagesamples.blob.core.windows.net/sample-container/contingency_evidence_adendum.docx"
+    }
+  ]
+}
+```
+
+|Property |Description |
+|||
+|policyAssignmentId |Required assignment ID for which the state is being set. |
+|policyDefinitionReferenceId |Optional definition reference ID, if within a policy initiative. |
+|complianceState |Desired state of the resources. Allowed values are `Compliant`, `NonCompliant`, and `Unknown`. |
+|owner |Optional Azure AD object ID of responsible party. |
+|comments |Optional description of why state is being set. |
+|evidence |Optional link array for attestation evidence. |
+
+Because attestations are a separate resource from policy assignments, they have their own lifecycle. You can PUT, GET, and DELETE attestations by using the ARM API. For more details, see the [Policy REST API Reference](/rest/api/policy).
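+For example, the same resource URI used in the `PUT` request above can be reused to read or remove an attestation (all path segments remain placeholders):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
+
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/attestations/{name}?api-version=2019-10-01
+```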
+ ## Modify Modify is used to add, update, or remove properties or tags on a subscription or resource during creation or update. A common example is updating tags on resources such as costCenter. Existing non-compliant resources can be remediated with a [remediation task](../how-to/remediate-resources.md). A single Modify rule can have any number of
-operations. Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation.
+operations. Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation.
The following operations are supported by Modify:
The following operations are supported by Modify:
Modify evaluates before the request gets processed by a Resource Provider during the creation or updating of a resource. The Modify operations are applied to the request content when the **if** condition of the policy rule is met. Each Modify operation can specify a condition that determines
-when it's applied. Operations with conditions that are evaluated to _false_ are skipped.
+when it's applied. Operations whose conditions evaluate to _false_ are skipped.
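As an illustrative sketch (the role definition ID and tag name are placeholders), a Modify rule with a conditional `addOrReplace` operation might look like:

```json
"then": {
  "effect": "modify",
  "details": {
    "roleDefinitionIds": [
      "/providers/Microsoft.Authorization/roleDefinitions/{roleDefinitionId}"
    ],
    "operations": [
      {
        "condition": "[greaterOrEquals(requestContext().apiVersion, '2019-10-01')]",
        "operation": "addOrReplace",
        "field": "tags['costCenter']",
        "value": "DefaultCostCenter"
      }
    ]
  }
}
```

Here the tag is only added or replaced when the request's API version meets the condition; otherwise the operation is skipped.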
When an alias is specified, the following additional checks are performed to ensure that the Modify operation doesn't change the request content in a way that causes the resource provider to reject
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
+
+ Title: Azure Policy applicability logic
+description: Describes the rules Azure Policy uses to determine whether the policy is applied to its assigned resources.
Last updated : 09/22/2022++++
+# What is applicability in Azure Policy?
+
+When a policy definition is assigned to a scope, Azure Policy scans every resource in that scope to determine what should be considered for compliance evaluation. A resource will only be assessed for compliance if it is considered **applicable** to the given policy assignment.
+
+Applicability is determined by several factors:
+- **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure.md#policy-rule).
+- **Mode** of the policy definition.
+- **Excluded scopes** specified in the assignment.
+- **Exemptions** of resources or resource hierarchies.
+
+Condition(s) in the `if` block of the policy rule are evaluated for applicability in slightly different ways based on the effect.
+
+> [!NOTE]
+> Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable**, it is relevant to the policy. If a resource is **compliant**, it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
+
+## Applicability logic for Append/Modify/Audit/Deny/DataPlane effects
+
+Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy rule `if` expression and treats other conditions as true (or false when negated). If the final evaluation result is true, the policy is applicable. Otherwise, it's not applicable.
+
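+For example, in the following sketch (the SKU alias condition is illustrative), only the `type` condition decides applicability, so every storage account in scope is applicable; the SKU condition affects only the compliance result:
+
+```json
+{
+  "if": {
+    "allOf": [
+      {
+        "field": "type",
+        "equals": "Microsoft.Storage/storageAccounts"
+      },
+      {
+        "field": "Microsoft.Storage/storageAccounts/sku.name",
+        "equals": "Standard_LRS"
+      }
+    ]
+  },
+  "then": {
+    "effect": "audit"
+  }
+}
+```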
+Following are special cases to the previously described applicability logic:
+
+|Scenario |Result |
+|||
+|Any invalid aliases in the `if` conditions |The policy is not applicable |
+|When the `if` conditions consist of only `kind` conditions |The policy is applicable to all resources |
+|When the `if` conditions consist of only `name` conditions |The policy is applicable to all resources |
+|When the `if` conditions consist of only `type` and `kind` or `type` and `name` conditions |Only type conditions are considered when deciding applicability |
+|When any conditions (including deployment parameters) include a `location` condition |The policy is not applicable to subscriptions |
+
+## Applicability logic for AuditIfNotExists and DeployIfNotExists policy effects
+
+The applicability of AuditIfNotExists and DeployIfNotExists policies is based on the entire `if` condition of the policy rule. When the `if` condition evaluates to false, the policy is not applicable.
+
+## Next steps
+
+- Learn how to [Get compliance data of Azure resources](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Review the [update in policy compliance for resource type policies](https://azure.microsoft.com/updates/general-availability-update-in-policy-compliance-for-resource-type-policies/).
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 08/23/2022 Last updated : 09/21/2022
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Warning |
+|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |**Description**: This policy setting determines the length of time that must pass before a locked account is unlocked and a user can try to log on again. The setting does this by specifying the number of minutes a locked out account will remain unavailable. If the value for this policy setting is configured to 0, locked out accounts will remain locked out until an administrator manually unlocks them. Although it might seem like a good idea to configure the value for this policy setting to a high value, such a configuration will likely increase the number of calls that the help desk receives to unlock accounts locked by mistake. Users should be aware of the length of time a lock remains in place, so that they realize they only need to call the help desk if they have an extremely urgent need to regain access to their computer. The recommended state for this setting is: `15 or more minute(s)`. **Note:** Password Policy settings (section 1.1) and Account Lockout Policy settings (section 1.2) must be applied via the **Default Domain Policy** GPO in order to be globally in effect on **domain** user accounts as their default behavior. If these settings are configured in another GPO, they will only affect **local** user accounts on the computers that receive the GPO. However, custom exceptions to the default password policy and account lockout policy rules for specific domain users and/or groups can be defined using Password Settings Objects (PSOs), which are completely separate from Group Policy and most easily configured using Active Directory Administrative Center.<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Warning |
-## Administrative Template - Window Defender
+## Administrative Template - Windows Defender
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Configure detection for potentially unwanted applications<br /><sub>(AZ-WIN-202219)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\PUAProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Scan all downloaded files and attachments<br /><sub>(AZ-WIN-202221)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableIOAVProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn off Microsoft Defender AntiVirus<br /><sub>(AZ-WIN-202220)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\DisableAntiSpyware<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Turn off real-time protection<br /><sub>(AZ-WIN-202222)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableRealtimeMonitoring<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn on e-mail scanning<br /><sub>(AZ-WIN-202218)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableEmailScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn on script scanning<br /><sub>(AZ-WIN-202223)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableScriptScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure detection for potentially unwanted applications<br /><sub>(AZ-WIN-202219)</sub> |**Description**: This policy setting controls detection and action for Potentially Unwanted Applications (PUA), which are sneaky unwanted application bundlers or their bundled applications that can deliver adware or malware. The recommended state for this setting is: `Enabled: Block`. For more information, see this link: [Block potentially unwanted applications with Microsoft Defender Antivirus | Microsoft Docs](/windows/security/threat-protection/windows-defender-antivirus/detect-block-potentially-unwanted-apps-windows-defender-antivirus)<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\PUAProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Scan all downloaded files and attachments<br /><sub>(AZ-WIN-202221)</sub> |**Description**: This policy setting configures scanning for all downloaded files and attachments. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableIOAVProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off Microsoft Defender AntiVirus<br /><sub>(AZ-WIN-202220)</sub> |**Description**: This policy setting turns off Microsoft Defender Antivirus. If the setting is configured to Disabled, Microsoft Defender Antivirus runs and computers are scanned for malware and other potentially unwanted software. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\DisableAntiSpyware<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off real-time protection<br /><sub>(AZ-WIN-202222)</sub> |**Description**: This policy setting configures real-time protection prompts for known malware detection. Microsoft Defender Antivirus alerts you when malware or potentially unwanted software attempts to install itself or to run on your computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableRealtimeMonitoring<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on e-mail scanning<br /><sub>(AZ-WIN-202218)</sub> |**Description**: This policy setting allows you to configure e-mail scanning. When e-mail scanning is enabled, the engine will parse the mailbox and mail files, according to their specific format, in order to analyze the mail bodies and attachments. Several e-mail formats are currently supported, for example: pst (Outlook), dbx, mbx, mime (Outlook Express), binhex (Mac). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableEmailScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on script scanning<br /><sub>(AZ-WIN-202223)</sub> |**Description**: This policy setting allows script scanning to be turned on/off. Script scanning intercepts scripts then scans them before they are executed on the system. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableScriptScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - Control Panel
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependsOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
-|WDigest Authentication must be disabled.<br /><sub>(AZ-WIN-73497)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\Wdigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Important |
+|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |**Description**: SMBv1 is a legacy protocol that uses the MD5 algorithm as part of SMB. MD5 is known to be vulnerable to a number of attacks such as collision and preimage attacks as well as not being FIPS compliant.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
+|WDigest Authentication<br /><sub>(AZ-WIN-73497)</sub> |**Description**: When WDigest authentication is enabled, Lsass.exe retains a copy of the user's plaintext password in memory, where it can be at risk of theft. If this setting is not configured, WDigest authentication is disabled in Windows 8.1 and in Windows Server 2012 R2; it is enabled by default in earlier versions of Windows and Windows Server. For more information about local accounts and credential theft, review the "[Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft Techniques](https://www.microsoft.com/download/details.aspx?id=36036)" documents. For more information about `UseLogonCredential`, see Microsoft Knowledge Base article 2871997: [Microsoft Security Advisory Update to improve credentials protection and management May 13, 2014](https://support.microsoft.com/en-us/kb/2871997). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Important |
## Administrative Templates - MSS |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
-|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
-|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | 90<br /><sub>(Registry)</sub> |Informational |
-|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
+|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should follow through the network. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should take through the network. It is recommended to configure this setting to Not Defined for enterprise environments and to Highest Protection for high security environments to completely disable source routing. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |**Description**: NetBIOS over TCP/IP is a network protocol that among other things provides a way to easily resolve NetBIOS names that are registered on Windows-based systems to the IP addresses that are configured on those systems. This setting determines whether the computer releases its NetBIOS name when it receives a name-release request. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |**Description**: The DLL search order can be configured to search for DLLs that are requested by running processes in one of two ways: - Search folders specified in the system path first, and then search the current working folder. - Search current working folder first, and then search the folders specified in the system path. When enabled, the registry value is set to 1. With a setting of 1, the system first searches the folders that are specified in the system path and then searches the current working folder. When disabled, the registry value is set to 0 and the system first searches the current working folder and then searches the folders that are specified in the system path. Applications will be forced to search for DLLs in the system path first. For applications that require unique versions of these DLLs that are included with the application, this entry could cause performance or stability problems. The recommended state for this setting is: `Enabled`. **Note:** More information on how Safe DLL search mode works is available at this link: [Dynamic-Link Library Search Order - Windows applications | Microsoft Docs](/windows/win32/dlls/dynamic-link-library-search-order)<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |**Description**: This setting can generate a security audit in the Security event log when the log reaches a user-defined threshold. The recommended state for this setting is: `Enabled: 90% or less`. **Note:** If log settings are configured to Overwrite events as needed or Overwrite events older than x days, this event will not be generated.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | 90<br /><sub>(Registry)</sub> |Informational |
+|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |**Description**: Internet Control Message Protocol (ICMP) redirects cause the IPv4 stack to plumb host routes. These routes override the Open Shortest Path First (OSPF) generated routes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
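The MSS rows above each boil down to a registry value path and an expected DWORD. A minimal audit sketch (hypothetical helper, not part of the Azure Policy guest configuration tooling) showing how those expectations could be expressed as data and compared against observed values:

```python
# Hypothetical audit sketch. The HKLM-relative key paths and expected DWORDs
# below are copied from the MSS table above; the helper itself is illustrative.
MSS_BASELINE = {
    r"System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting": 2,
    r"System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting": 2,
    r"System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand": 1,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode": 1,
    r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect": 0,
}

def audit(observed: dict) -> list:
    """Return (key_path, expected, actual) for every value that deviates.

    `observed` maps key paths to the DWORDs read from a machine; a missing
    value is reported with actual=None, since an absent value does not
    satisfy an '= n' expectation in the table.
    """
    findings = []
    for key_path, expected in MSS_BASELINE.items():
        actual = observed.get(key_path)  # None if the value is absent
        if actual != expected:
            findings.append((key_path, expected, actual))
    return findings
```

On a real host the `observed` dictionary would be populated with `winreg` (Windows-only); the comparison logic itself is platform-neutral.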
## Administrative Templates - Network |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
-|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths\\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
+|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |**Description**: This policy setting configures secure access to UNC paths.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
+|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |**Description**: This policy setting configures secure access to UNC paths.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain-based network and a non-domain-based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning | |Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control users' ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning | |Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now freshly applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
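Unlike the single-DWORD rows, the hardened UNC path entries expect a `REG_SZ` flag string of the form `RequireMutualAuthentication=1, RequireIntegrity=1`. A small parsing sketch (hypothetical helper, assuming only the comma-separated `Flag=Value` format shown in the Expected value column):

```python
def parse_hardened_path_flags(value: str) -> dict:
    """Parse a hardened UNC path value string, e.g.
    'RequireMutualAuthentication=1, RequireIntegrity=1', into {flag: int}."""
    flags = {}
    for part in value.split(","):
        name, _, raw = part.strip().partition("=")
        flags[name] = int(raw)
    return flags

def meets_baseline(value: str) -> bool:
    # The table expects both mutual authentication and integrity
    # to be required (both flags present and set to 1).
    flags = parse_hardened_path_flags(value)
    return (flags.get("RequireMutualAuthentication") == 1
            and flags.get("RequireIntegrity") == 1)
```

A value that omits either flag, or sets one to 0, fails the check, matching the table's requirement that both appear with value 1.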
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Warning |
+|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |**Description**: Windows includes support for Structured Exception Handling Overwrite Protection (SEHOP). We recommend enabling this feature to improve the security profile of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |**Description**: This setting determines which method NetBIOS over TCP/IP (NetBT) uses to register and resolve names. The available methods are: - The B-node (broadcast) method only uses broadcasts. - The P-node (point-to-point) method only uses name queries to a name server (WINS). - The M-node (mixed) method broadcasts first, then queries a name server (WINS) if broadcast failed. - The H-node (hybrid) method queries a name server (WINS) first, then broadcasts if the query failed. The recommended state for this setting is: `Enabled: P-node (recommended)` (point-to-point). **Note:** Resolution through LMHOSTS or DNS follows these methods. If the `NodeType` registry value is present, it overrides any `DhcpNodeType` registry value. If neither `NodeType` nor `DhcpNodeType` is present, the computer uses B-node (broadcast) if there are no WINS servers configured for the network, or H-node (hybrid) if there is at least one WINS server configured.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - System
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |**Description**: This policy setting prevents connected users from being enumerated on domain-joined computers. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. 
The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |**Description**: Some versions of the CredSSP protocol that is used by some applications (such as Remote Desktop Connection) are vulnerable to an encryption oracle attack against the client. This policy controls compatibility with vulnerable clients and servers and allows you to set the level of protection desired for the encryption oracle vulnerability. The recommended state for this setting is: `Enabled: Force Updated Clients`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Configure registry policy processing: Do not apply during periodic background processing' is set to 'Enabled: FALSE'<br /><sub>(CCE-36169-1)</sub> |**Description**: The "Do not apply during periodic background processing" option prevents the system from updating affected policies in the background while the computer is in use. When background updates are disabled, policy changes will not take effect until the next user logon or system restart. The recommended state for this setting is: `Enabled: FALSE` (unchecked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoBackgroundPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Ensure 'Configure registry policy processing: Process even if the Group Policy objects have not changed' is set to 'Enabled: TRUE'<br /><sub>(CCE-36169-1a)</sub> |**Description**: The "Process even if the Group Policy objects have not changed" option updates and reapplies policies even if the policies have not changed. The recommended state for this setting is: `Enabled: TRUE` (checked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoGPOListChanges<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). 
The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |**Description**: This policy setting allows local users to be enumerated on domain-joined computers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |**Description**: This policy setting allows you to prevent Windows from retrieving device metadata from the Internet. The recommended state for this setting is: `Enabled`. **Note:** This will not prevent the installation of basic hardware drivers, but does prevent associated 3rd-party utility software from automatically being installed under the context of the `SYSTEM` account.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |**Description**: Remote host allows delegation of non-exportable credentials. When you use credential delegation, devices provide an exportable version of credentials to the remote host. This exposes users to the risk of credential theft from attackers on the remote host. The Restricted Admin Mode and Windows Defender Remote Credential Guard features are two options to help protect against this risk. The recommended state for this setting is: `Enabled`. **Note:** More detailed information on Windows Defender Remote Credential Guard and how it compares to Restricted Admin Mode can be found at this link: [Protect Remote Desktop credentials with Windows Defender Remote Credential Guard (Windows 10) | Microsoft Docs](/windows/access-protection/remote-credential-guard)<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |**Description**: This policy setting prevents Group Policy from being updated while the computer is in use. This policy setting applies to Group Policy for computers, users and Domain Controllers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. 
The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Turn off cloud consumer account state content<br /><sub>(AZ-WIN-202217)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableConsumerAccountStateContent<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off cloud consumer account state content<br /><sub>(AZ-WIN-202217)</sub> |**Description**: This policy setting determines whether cloud consumer account state content is allowed in all Windows experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableConsumerAccountStateContent<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - Windows Components |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Do not allow drive redirection<br /><sub>(AZ-WIN-73569)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fDisableCdm<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn on PowerShell Transcription<br /><sub>(AZ-WIN-202208)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription\EnableTranscripting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Do not allow drive redirection<br /><sub>(AZ-WIN-73569)</sub> |**Description**: This policy setting prevents users from sharing the local drives on their client computers to Remote Desktop Servers that they access. Mapped drives appear in the session folder tree in Windows Explorer in the following format: `\\TSClient\<driveletter>$`. If local drives are shared, they are left vulnerable to intruders who want to exploit the data that is stored on them. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fDisableCdm<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn on PowerShell Transcription<br /><sub>(AZ-WIN-202208)</sub> |**Description**: This policy setting lets you capture the input and output of Windows PowerShell commands into text-based transcripts. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription\EnableTranscripting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - Windows Security

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Prevent users from modifying settings<br /><sub>(AZ-WIN-202209)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender Security Center\App and Browser protection\DisallowExploitProtectionOverride<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Prevent users from modifying settings<br /><sub>(AZ-WIN-202209)</sub> |**Description**: This policy setting prevents users from making changes to the Exploit protection settings area in the Windows Security settings. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender Security Center\App and Browser protection\DisallowExploitProtectionOverride<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-## Administrative Template - Windows Defender
+## Administrative Templates - Windows Defender
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Configure Attack Surface Reduction rules<br /><sub>(AZ_WIN_202205)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\ASR\ExploitGuard_ASR_Rules<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Prevent users and apps from accessing dangerous websites<br /><sub>(AZ_WIN_202207)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\Network Protection\EnableNetworkProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Configure Attack Surface Reduction rules<br /><sub>(AZ_WIN_202205)</sub> |**Description**: This policy setting controls the state for the Attack Surface Reduction (ASR) rules. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\ASR\ExploitGuard_ASR_Rules<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Prevent users and apps from accessing dangerous websites<br /><sub>(AZ_WIN_202207)</sub> |**Description**: This policy setting controls Microsoft Defender Exploit Guard network protection. The recommended state for this setting is: `Enabled: Block`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Windows Defender Exploit Guard\Network Protection\EnableNetworkProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Audit Computer Account Management

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Computer Account Management<br /><sub>(CCE-38004-8)</sub> |<br />**Key Path**: {0CCE9236-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Computer Account Management<br /><sub>(CCE-38004-8)</sub> |**Description**: This subcategory reports each event of computer account management, such as when a computer account is created, changed, deleted, renamed, disabled, or enabled. Events for this subcategory include: - 4741: A computer account was created. - 4742: A computer account was changed. - 4743: A computer account was deleted. The recommended state for this setting is to include: `Success`.<br />**Key Path**: {0CCE9236-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\= Success<br /><sub>(Audit)</sub> |Critical |
## Secured Core

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Enable boot DMA protection<br /><sub>(AZ-WIN-202250)</sub> |<br />**Key Path**: BootDMAProtection<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
-|Enable hypervisor enforced code integrity<br /><sub>(AZ-WIN-202246)</sub> |<br />**Key Path**: HypervisorEnforcedCodeIntegrityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
-|Enable secure boot<br /><sub>(AZ-WIN-202248)</sub> |<br />**Key Path**: SecureBootState<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
-|Enable system guard<br /><sub>(AZ-WIN-202247)</sub> |<br />**Key Path**: SystemGuardStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
-|Enable virtualization based security<br /><sub>(AZ-WIN-202245)</sub> |<br />**Key Path**: VirtualizationBasedSecurityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
-|Set TPM version<br /><sub>(AZ-WIN-202249)</sub> |<br />**Key Path**: TPMVersion<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | 2.0<br /><sub>(OsConfig)</sub> |Critical |
+|Enable boot DMA protection<br /><sub>(AZ-WIN-202250)</sub> |**Description**: Secured-core capable servers support system firmware which provides protection against malicious and unintended Direct Memory Access (DMA) attacks for all DMA-capable devices during the boot process.<br />**Key Path**: BootDMAProtection<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
+|Enable hypervisor enforced code integrity<br /><sub>(AZ-WIN-202246)</sub> |**Description**: HVCI and VBS improve the threat model of Windows and provide stronger protections against malware trying to exploit the Windows Kernel. HVCI is a critical component that protects and hardens the isolated virtual environment created by VBS by running kernel mode code integrity within it and restricting kernel memory allocations that could be used to compromise the system.<br />**Key Path**: HypervisorEnforcedCodeIntegrityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Enable secure boot<br /><sub>(AZ-WIN-202248)</sub> |**Description**: Secure boot is a security standard developed by members of the PC industry to help make sure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM).<br />**Key Path**: SecureBootState<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(OsConfig)</sub> |Critical |
+|Enable system guard<br /><sub>(AZ-WIN-202247)</sub> |**Description**: Using processor support for Dynamic Root of Trust of Measurement (DRTM) technology, System Guard puts firmware in a hardware-based sandbox helping to limit the impact of vulnerabilities in millions of lines of highly privileged firmware code.<br />**Key Path**: SystemGuardStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Enable virtualization based security<br /><sub>(AZ-WIN-202245)</sub> |**Description**: Virtualization-based security, or VBS, uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. This helps to ensure that servers remain devoted to running critical workloads and helps protect related applications and data from attack and exfiltration. VBS is enabled and locked by default on Azure Stack HCI.<br />**Key Path**: VirtualizationBasedSecurityStatus<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(OsConfig)</sub> |Critical |
+|Set TPM version<br /><sub>(AZ-WIN-202249)</sub> |**Description**: Trusted Platform Module (TPM) technology is designed to provide hardware-based, security-related functions. TPM 2.0 is required for the Secured-core features.<br />**Key Path**: TPMVersion<br />**OS**: WSASHCI22H2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | 2.0<br /><sub>(OsConfig)</sub> |Critical |
## Security Options - Accounts

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Accounts: Block Microsoft accounts<br /><sub>(AZ-WIN-202201)</sub> |<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\NoConnectedUser<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 3<br /><sub>(Registry)</sub> |Warning |
+|Accounts: Block Microsoft accounts<br /><sub>(AZ-WIN-202201)</sub> |**Description**: This policy setting prevents users from adding new Microsoft accounts on this computer. The recommended state for this setting is: `Users can't add or log on with Microsoft accounts`.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\NoConnectedUser<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 3<br /><sub>(Registry)</sub> |Warning |
|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Network access: Allow anonymous SID/Name translation<br /><sub>(CCE-10024-8)</sub> |<br />**Key Path**: [System Access]LSAAnonymousNameLookup<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Policy)</sub> |Warning |
+|Accounts: Rename guest account<br /><sub>(AZ-WIN-202255)</sub> |**Description**: The built-in local guest account is another well-known name to attackers. It is recommended to rename this account to something that does not indicate its purpose. Even if you disable this account, which is recommended, ensure that you rename it for added security. On Domain Controllers, since they do not have their own local accounts, this rule refers to the built-in Guest account that was established when the domain was first created.<br />**Key Path**: [System Access]NewGuestName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Guest<br /><sub>(Policy)</sub> |Warning |
+|Network access: Allow anonymous SID/Name translation<br /><sub>(CCE-10024-8)</sub> |**Description**: This policy setting determines whether an anonymous user can request security identifier (SID) attributes for another user, or use a SID to obtain its corresponding user name. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]LSAAnonymousNameLookup<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Policy)</sub> |Warning |
## Security Options - Audit
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Limits print driver installation to Administrators<br /><sub>(AZ_WIN_202202)</sub> |**Description**: This policy setting controls whether users that aren't Administrators can install print drivers on the system. The recommended state for this setting is: `Enabled`. **Note:** On August 10, 2021, Microsoft announced a [Point and Print Default Behavior Change](https://msrc-blog.microsoft.com/2021/08/10/point-and-print-default-behavior-change/) which modifies the default Point and Print driver installation and update behavior to require Administrator privileges. This is documented in [KB5005652: Manage new Point and Print default driver installation behavior (CVE-2021-34481)](https://support.microsoft.com/en-gb/topic/kb5005652-manage-new-point-and-print-default-driver-installation-behavior-cve-2021-34481-873642bf-2634-49c5-a23b-6d8e9a302872).<br />**Key Path**: Software\Policies\Microsoft\Windows NT\Printers\PointAndPrint\RestrictDriverInstallationToAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - Domain member

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Ensure 'Domain member: Digitally encrypt or sign secure channel data (always)' is set to 'Enabled'<br /><sub>(CCE-36142-8)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireSignOrSeal<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Domain member: Digitally encrypt secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37130-2)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SealSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Digitally encrypt or sign secure channel data (always)' is set to 'Enabled'<br /><sub>(CCE-36142-8)</sub> |**Description**: This policy setting determines whether all secure channel traffic that is initiated by the domain member must be signed or encrypted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireSignOrSeal<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Digitally encrypt secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37130-2)</sub> |**Description**: This policy setting determines whether a domain member should attempt to negotiate encryption for all secure channel traffic that it initiates. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SealSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Digitally sign secure channel data (when possible)' is set to 'Enabled'<br /><sub>(CCE-37222-7)</sub> |**Description**: This policy setting determines whether a domain member should attempt to negotiate whether all secure channel traffic that it initiates must be digitally signed. Digital signatures protect the traffic from being modified by anyone who captures the data as it traverses the network. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\SignSecureChannel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Disable machine account password changes' is set to 'Disabled'<br /><sub>(CCE-37508-9)</sub> |**Description**: This policy setting determines whether a domain member can periodically change its computer account password. Computers that cannot automatically change their account passwords are potentially vulnerable, because an attacker might be able to determine the password for the system's domain account. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\DisablePasswordChange<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Ensure 'Domain member: Maximum machine account password age' is set to '30 or fewer days, but not 0'<br /><sub>(CCE-37431-4)</sub> |**Description**: This policy setting determines the maximum allowable age for a computer account password. By default, domain members automatically change their domain passwords every 30 days. If you increase this interval significantly so that the computers no longer change their passwords, an attacker would have more time to undertake a brute force attack against one of the computer accounts. The recommended state for this setting is: `30 or fewer days, but not 0`. **Note:** A value of `0` does not conform to the benchmark as it disables maximum password age.<br />**Key Path**: System\CurrentControlSet\Services\Netlogon\Parameters\MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |In 1-30<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Domain member: Require strong (Windows 2000 or later) session key' is set to 'Enabled'<br /><sub>(CCE-37614-5)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireStrongKey<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Domain member: Require strong (Windows 2000 or later) session key' is set to 'Enabled'<br /><sub>(CCE-37614-5)</sub> |**Description**: When this policy setting is enabled, a secure channel can only be established with Domain Controllers that are capable of encrypting secure channel data with a strong (128-bit) session key. To enable this policy setting, all Domain Controllers in the domain must be able to encrypt secure channel data with a strong key, which means all Domain Controllers must be running Microsoft Windows 2000 or newer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\RequireStrongKey<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
## Security Options - Interactive Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Caching of logon credentials must be limited<br /><sub>(AZ-WIN-73651)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-4<br /><sub>(Registry)</sub> |Informational |
+|Caching of logon credentials must be limited<br /><sub>(AZ-WIN-73651)</sub> |**Description**: This policy setting determines whether a user can log on to a Windows domain using cached account information. Logon information for domain accounts can be cached locally to allow users to log on even if a Domain Controller cannot be contacted. This policy setting determines the number of unique users for whom logon information is cached locally. If this value is set to 0, the logon cache feature is disabled. An attacker who is able to access the file system of the server could locate this cached information and use a brute force attack to determine user passwords. The recommended state for this setting is: `4 or fewer logon(s)`.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-4<br /><sub>(Registry)</sub> |Informational |
|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Interactive logon: Machine inactivity limit<br /><sub>(AZ-WIN-73645)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\InactivityTimeoutSecs<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-900<br /><sub>(Registry)</sub> |Important |
-|Interactive logon: Message text for users attempting to log on<br /><sub>(AZ-WIN-202253)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeText<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
-|Interactive logon: Message title for users attempting to log on<br /><sub>(AZ-WIN-202254)</sub> |<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeCaption<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
-|Interactive logon: Prompt user to change password before expiration<br /><sub>(CCE-10930-6)</sub> |<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Winlogon\PasswordExpiryWarning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 5-14<br /><sub>(Registry)</sub> |Informational |
+|Interactive logon: Machine inactivity limit<br /><sub>(AZ-WIN-73645)</sub> |**Description**: Windows notices inactivity of a logon session, and if the amount of inactive time exceeds the inactivity limit, then the screen saver will run, locking the session. The recommended state for this setting is: `900 or fewer second(s), but not 0`. **Note:** A value of `0` does not conform to the benchmark as it disables the machine inactivity limit.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\InactivityTimeoutSecs<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-900<br /><sub>(Registry)</sub> |Important |
+|Interactive logon: Message text for users attempting to log on<br /><sub>(AZ-WIN-202253)</sub> |**Description**: This policy setting specifies a text message that displays to users when they log on. Configure this setting in a manner that is consistent with the security and operational requirements of your organization.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeText<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
+|Interactive logon: Message title for users attempting to log on<br /><sub>(AZ-WIN-202254)</sub> |**Description**: This policy setting specifies the text displayed in the title bar of the window that users see when they log on to the system. Configure this setting in a manner that is consistent with the security and operational requirements of your organization.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LegalNoticeCaption<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | <br /><sub>(Registry)</sub> |Warning |
+|Interactive logon: Prompt user to change password before expiration<br /><sub>(CCE-10930-6)</sub> |**Description**: This policy setting determines how far in advance users are warned that their password will expire. It is recommended that you configure this policy setting to at least 5 days but no more than 14 days to sufficiently warn users when their passwords will expire. The recommended state for this setting is: `between 5 and 14 days`.<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Winlogon\PasswordExpiryWarning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 5-14<br /><sub>(Registry)</sub> |Informational |
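Several Interactive Logon rows above express ranges rather than exact values (`In 1-900` seconds of inactivity, `In 5-14` days of password-expiry warning). The sketch below audits a registry snapshot against those two ranges; the snapshot dict and helper name are illustrative assumptions, with the real values read via `winreg` on an actual host.

```python
# Illustrative audit of two range-valued Interactive Logon settings from the
# table above. The snapshot dict stands in for registry reads on a real host.
INTERACTIVE_LOGON_RANGES = {
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\InactivityTimeoutSecs": range(1, 901),  # In 1-900
    r"Software\Microsoft\Windows NT\CurrentVersion\Winlogon\PasswordExpiryWarning": range(5, 15),       # In 5-14
}

def noncompliant_keys(snapshot: dict) -> list:
    """Return key paths whose value is missing or outside the expected range."""
    return [key for key, expected in INTERACTIVE_LOGON_RANGES.items()
            if snapshot.get(key) not in expected]
```

A missing value (`snapshot.get` returning `None`) is flagged, since neither range rule has a "Doesn't exist or" alternative.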
## Security Options - Microsoft Network Client
|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network server: Server SPN target name validation level<br /><sub>(CCE-10617-9)</sub> |<br />**Key Path**: System\CurrentControlSet\Services\LanManServer\Parameters\SMBServerNameHardeningLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Microsoft network server: Server SPN target name validation level<br /><sub>(CCE-10617-9)</sub> |**Description**: This policy setting controls the level of validation a computer with shared folders or printers (the server) performs on the service principal name (SPN) that is provided by the client computer when it establishes a session using the server message block (SMB) protocol. The server message block (SMB) protocol provides the basis for file and print sharing and other networking operations, such as remote Windows administration. The SMB protocol supports validating the SMB server service principal name (SPN) within the authentication blob provided by a SMB client; this behavior prevents SMB relay attacks. This setting will affect both SMB1 and SMB2. The recommended state for this setting is: `Accept if provided by client`. Configuring this setting to `Required from client` also conforms to the benchmark. **Note:** Since the release of the MS [KB3161561](https://support.microsoft.com/en-us/kb/3161561) security patch, this setting can cause significant issues (such as replication problems, group policy editing issues and blue screen crashes) on Domain Controllers when used _simultaneously_ with UNC path hardening (i.e. Rule 18.5.14.1). **CIS therefore recommends against deploying this setting on Domain Controllers.**<br />**Key Path**: System\CurrentControlSet\Services\LanManServer\Parameters\SMBServerNameHardeningLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
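The rows above all follow the same check pattern: a registry key path, an expected value, and in some cases a "Doesn't exist or \= N" rule that also passes when the value is absent. A minimal sketch of that evaluation logic, assuming a mock registry snapshot and a hypothetical rule encoding (this is not the actual Azure Policy guest configuration engine):

```python
# Illustrative sketch of how "Expected value" rules like those in the table
# could be evaluated. The rule tuples and the mock registry snapshot are
# assumptions for illustration only.

LANMAN = r"SYSTEM\CurrentControlSet\Services\LanManServer\Parameters"

# (rule name, key path, expected value, passes when the key is missing?)
RULES = [
    ("Digitally sign communications (always)",
     LANMAN + r"\RequireSecuritySignature", 1, False),
    ("Digitally sign communications (if client agrees)",
     LANMAN + r"\EnableSecuritySignature", 1, False),
    ("Disconnect clients when logon hours expire",
     LANMAN + r"\EnableForcedLogoff", 1, True),
    ("Server SPN target name validation level",
     LANMAN + r"\SMBServerNameHardeningLevel", 1, False),
]

def is_compliant(registry, key_path, expected, missing_ok):
    """Apply one '= N' or "Doesn't exist or = N" rule to a registry snapshot."""
    if key_path not in registry:
        return missing_ok
    return registry[key_path] == expected

# Mock snapshot: EnableForcedLogoff is deliberately absent, which still
# satisfies its "Doesn't exist or = 1" rule.
mock_registry = {
    LANMAN + r"\RequireSecuritySignature": 1,
    LANMAN + r"\EnableSecuritySignature": 1,
    LANMAN + r"\SMBServerNameHardeningLevel": 1,
}

results = {name: is_compliant(mock_registry, key, expected, missing_ok)
           for name, key, expected, missing_ok in RULES}
```

With this encoding, a "Doesn't exist or \= 1" row passes on a machine that never sets the key, while a plain "\= 1" row fails until the key is present with the expected value.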
## Security Options - Microsoft Network Server
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Accounts: Rename administrator account<br /><sub>(CCE-10976-9)</sub> |<br />**Key Path**: [System Access]NewAdministratorName<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrator<br /><sub>(Policy)</sub> |Warning |
+|Accounts: Rename administrator account<br /><sub>(CCE-10976-9)</sub> |**Description**: The built-in local administrator account is a well-known account name that attackers will target. It is recommended to choose another name for this account, and to avoid names that denote administrative or elevated access accounts. Be sure to also change the default description for the local administrator (through the Computer Management console). On Domain Controllers, since they do not have their own local accounts, this rule refers to the built-in Administrator account that was established when the domain was first created.<br />**Key Path**: [System Access]NewAdministratorName<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrator<br /><sub>(Policy)</sub> |Warning |
|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Users must be required to enter a password to access private keys stored on the computer.<br /><sub>(AZ-WIN-73699)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Cryptography\ForceKeyProtection<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Important |
-|Windows Server must be configured to use FIPS-compliant algorithms for encryption, hashing, and signing.<br /><sub>(AZ-WIN-73701)</sub> |<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Important |
+|Users must be required to enter a password to access private keys stored on the computer.<br /><sub>(AZ-WIN-73699)</sub> |**Description**: If the private key is discovered, an attacker can use the key to authenticate as an authorized user and gain access to the network infrastructure. The cornerstone of the PKI is the private key used to encrypt or digitally sign information. If the private key is stolen, this will lead to the compromise of the authentication and non-repudiation gained through PKI because the attacker can use the private key to digitally sign documents and pretend to be the authorized user. Both the holders of a digital certificate and the issuing authority must protect the computers, storage devices, or whatever they use to keep the private keys.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Cryptography\ForceKeyProtection<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Important |
+|Windows Server must be configured to use FIPS-compliant algorithms for encryption, hashing, and signing.<br /><sub>(AZ-WIN-73701)</sub> |**Description**: This setting ensures the system uses algorithms that are FIPS-compliant for encryption, hashing, and signing. FIPS-compliant algorithms meet specific standards established by the U.S. Government and must be the algorithms used for all OS encryption functions.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Important |
## Security Options - System objects
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Account lockout threshold.<br /><sub>(AZ-WIN-73311)</sub> |<br />**Key Path**: [System Access]LockoutBadCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-3<br /><sub>(Policy)</sub> |Important |
+|Account lockout threshold<br /><sub>(AZ-WIN-73311)</sub> |**Description**: This policy setting determines the number of failed logon attempts before the account is locked. Setting this policy to `0` does not conform to the benchmark as doing so disables the account lockout threshold. The recommended state for this setting is: `5 or fewer invalid logon attempt(s), but not 0`. **Note:** Password Policy settings (section 1.1) and Account Lockout Policy settings (section 1.2) must be applied via the **Default Domain Policy** GPO in order to be globally in effect on **domain** user accounts as their default behavior. If these settings are configured in another GPO, they will only affect **local** user accounts on the computers that receive the GPO. However, custom exceptions to the default password policy and account lockout policy rules for specific domain users and/or groups can be defined using Password Settings Objects (PSOs), which are completely separate from Group Policy and most easily configured using Active Directory Administrative Center.<br />**Key Path**: [System Access]LockoutBadCount<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-3<br /><sub>(Policy)</sub> |Important |
|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: <p><span>This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: '24 or more password(s)'.</span></p><br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters, however you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
-|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26^7 (approximately 8 x 10^9 or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52^7 combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62^7 combinations. An eight-character password has 26^8 (or 2 x 10^11) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
-|Reset account lockout counter.<br /><sub>(AZ-WIN-73309)</sub> |<br />**Key Path**: [System Access]ResetLockoutCount<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Important |
+|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26^7 (approximately 8 x 10^9 or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52^7 combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62^7 combinations. An eight-character password has 26^8 (or 2 x 10^11) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Policy)</sub> |Critical |
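The category rules and the brute-force figures quoted in the complexity description can be checked with a short script. This is a simplified sketch, not the Windows implementation: the account-name test below only rejects passwords containing the literal account name, and the non-alphanumeric test merges the "non-alphabetic" and catch-all Unicode categories.

```python
# Simplified sketch of the complexity rule described above, plus a check of
# the quoted keyspace arithmetic. Not the actual Windows password filter.

def meets_complexity(password: str, account_name: str) -> bool:
    if len(password) < 6:                       # "at least six characters"
        return False
    if account_name and account_name.lower() in password.lower():
        return False                            # simplified account-name test
    categories = sum([
        any("A" <= c <= "Z" for c in password),  # English uppercase
        any("a" <= c <= "z" for c in password),  # English lowercase
        any("0" <= c <= "9" for c in password),  # base-10 digits
        any(not c.isalnum() for c in password),  # non-alphanumeric (merged
    ])                                           # with the catch-all category)
    return categories >= 3                       # "three of the four categories"

# Keyspace figures from the description, at 1,000,000 guesses per second:
lowercase_7 = 26 ** 7                         # 8,031,810,176 ~ 8 x 10^9
minutes_7 = lowercase_7 / 1_000_000 / 60      # ~134 minutes (text says 133)
lowercase_8 = 26 ** 8                         # ~2 x 10^11
hours_8 = lowercase_8 / 1_000_000 / 3600      # ~58 hours (text rounds to 59)
mixed_case_7 = 52 ** 7                        # case-sensitive alphabetic
alnum_7 = 62 ** 7                             # case-sensitive alphanumeric
```

For example, `meets_complexity("P@ssw0rd", "alice")` passes (four categories), while `"password"` fails (one category) and `"Alice123!"` fails because it contains the account name.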
+|Reset account lockout counter after<br /><sub>(AZ-WIN-73309)</sub> |**Description**: This policy setting determines the length of time before the Account lockout threshold resets to zero. The default value for this policy setting is Not Defined. If the Account lockout threshold is defined, this reset time must be less than or equal to the value for the Account lockout duration setting. If you leave this policy setting at its default value or configure the value to an interval that is too long, your environment could be vulnerable to a DoS attack. An attacker could maliciously perform a number of failed logon attempts on all users in the organization, which will lock out their accounts. If no policy were determined to reset the account lockout, it would be a manual task for administrators. Conversely, if a reasonable time value is configured for this policy setting, users would be locked out for a set period until all of the accounts are unlocked automatically. The recommended state for this setting is: `15 or more minute(s)`. **Note:** Password Policy settings (section 1.1) and Account Lockout Policy settings (section 1.2) must be applied via the **Default Domain Policy** GPO in order to be globally in effect on **domain** user accounts as their default behavior. If these settings are configured in another GPO, they will only affect **local** user accounts on the computers that receive the GPO. However, custom exceptions to the default password policy and account lockout policy rules for specific domain users and/or groups can be defined using Password Settings Objects (PSOs), which are completely separate from Group Policy and most easily configured using Active Directory Administrative Center.<br />**Key Path**: [System Access]ResetLockoutCount<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Important |
|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
## Security Settings - Windows Firewall
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|---|---|---|---|
|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Inbound connections<br /><sub>(AZ-WIN-202252)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Logging: Log dropped packets<br /><sub>(AZ-WIN-202226)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Domain: Logging: Log successful connections<br /><sub>(AZ-WIN-202227)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Domain: Logging: Name<br /><sub>(AZ-WIN-202224)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\domainfw.log<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Domain: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202225)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Inbound connections<br /><sub>(AZ-WIN-202252)</sub> |**Description**: This setting determines the behavior for inbound connections that do not match an inbound firewall rule. The recommended state for this setting is: `Block (default)`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Logging: Log dropped packets<br /><sub>(AZ-WIN-202226)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security discards an inbound packet for any reason. The log records why and when the packet was dropped. Look for entries with the word `DROP` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Domain: Logging: Log successful connections<br /><sub>(AZ-WIN-202227)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security allows an inbound connection. The log records why and when the connection was formed. Look for entries with the word `ALLOW` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Logging: Name<br /><sub>(AZ-WIN-202224)</sub> |**Description**: Use this option to specify the path and name of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `%SystemRoot%\System32\logfiles\firewall\domainfw.log`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\domainfw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Domain: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202225)</sub> |**Description**: Use this option to specify the size limit of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `16,384 KB or greater`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
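The logging rows above follow a per-profile naming pattern (`domainfw.log` for the Domain profile, `privatefw.log` for the Private profile) under a common directory, with a 16,384 KB size floor. A small sketch of those two recommendations; the helper names are hypothetical:

```python
# Hypothetical helpers expressing the per-profile log recommendations above:
# a file named <profile>fw.log under %SystemRoot%\System32\logfiles\firewall,
# and a log size limit of at least 16,384 KB.

RECOMMENDED_MIN_LOG_KB = 16384
LOG_DIR = r"%SystemRoot%\System32\logfiles\firewall"

def recommended_log_path(profile: str) -> str:
    # e.g. "Domain" -> %SystemRoot%\System32\logfiles\firewall\domainfw.log
    return LOG_DIR + "\\" + profile.lower() + "fw.log"

def log_size_compliant(size_kb: int) -> bool:
    # Matches the table's ">= 16384" expected value for LogFileSize.
    return size_kb >= RECOMMENDED_MIN_LOG_KB
```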
|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the user is not logged in; popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’; this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Inbound connections<br /><sub>(AZ-WIN-202228)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Logging: Log dropped packets<br /><sub>(AZ-WIN-202231)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Private: Logging: Log successful connections<br /><sub>(AZ-WIN-202232)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Logging: Name<br /><sub>(AZ-WIN-202229)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\privatefw.log<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Private: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202230)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Inbound connections<br /><sub>(AZ-WIN-202228)</sub> |**Description**: This setting determines the behavior for inbound connections that do not match an inbound firewall rule. The recommended state for this setting is: `Block (default)`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Logging: Log dropped packets<br /><sub>(AZ-WIN-202231)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security discards an inbound packet for any reason. The log records why and when the packet was dropped. Look for entries with the word `DROP` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Private: Logging: Log successful connections<br /><sub>(AZ-WIN-202232)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security allows an inbound connection. The log records why and when the connection was formed. Look for entries with the word `ALLOW` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Logging: Name<br /><sub>(AZ-WIN-202229)</sub> |**Description**: Use this option to specify the path and name of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `%SystemRoot%\System32\logfiles\firewall\privatefw.log`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\privatefw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Private: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202230)</sub> |**Description**: Use this option to specify the size limit of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `16,384 KB or greater`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 16384<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. 
The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical | |Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span> Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful because no user is logged in; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state for this setting to ‘No’, which sets the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Inbound connections<br /><sub>(AZ-WIN-202234)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Logging: Log dropped packets<br /><sub>(AZ-WIN-202237)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Public: Logging: Log successful connections<br /><sub>(AZ-WIN-202233)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Logging: Name<br /><sub>(AZ-WIN-202235)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\publicfw.log<br /><sub>(Registry)</sub> |Informational |
-|Windows Firewall: Public: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202236)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 16384<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Inbound connections<br /><sub>(AZ-WIN-202234)</sub> |**Description**: This setting determines the behavior for inbound connections that do not match an inbound firewall rule. The recommended state for this setting is: `Block (default)`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultInboundAction<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Logging: Log dropped packets<br /><sub>(AZ-WIN-202237)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security discards an inbound packet for any reason. The log records why and when the packet was dropped. Look for entries with the word `DROP` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogDroppedPackets<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Logging: Log successful connections<br /><sub>(AZ-WIN-202233)</sub> |**Description**: Use this option to log when Windows Firewall with Advanced Security allows an inbound connection. The log records why and when the connection was formed. Look for entries with the word `ALLOW` in the action column of the log. The recommended state for this setting is: `Yes`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogSuccessfulConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Logging: Name<br /><sub>(AZ-WIN-202235)</sub> |**Description**: Use this option to specify the path and name of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `%SystemRoot%\System32\logfiles\firewall\publicfw.log`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFilePath<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= %SystemRoot%\System32\logfiles\firewall\publicfw.log<br /><sub>(Registry)</sub> |Informational |
+|Windows Firewall: Public: Logging: Size limit (KB)<br /><sub>(AZ-WIN-202236)</sub> |**Description**: Use this option to specify the size limit of the file in which Windows Firewall will write its log information. The recommended state for this setting is: `16,384 KB or greater`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\Logging\LogFileSize<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 16384<br /><sub>(Registry)</sub> |Informational |
|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful because no user is logged in; they are unnecessary and can add confusion for the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
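Each firewall row above pairs a registry value with a compliance expression in the Expected value column, such as `= 1`, `>= 16384`, or `Doesn't exist or = 0`. As a minimal sketch (not part of any official baseline tool; the rule grammar is inferred from this table), an evaluator for those expressions could look like this:

```python
# Illustrative sketch: evaluate the "Expected value" expressions used in the
# firewall rows above (e.g. "= 1", ">= 16384", "Doesn't exist or = 0").
# The rule grammar here is an assumption inferred from the table, not an
# official format.

def is_compliant(actual, rule):
    """actual: registry DWORD as an int, or None if the value doesn't exist."""
    for clause in rule.split(" or "):
        clause = clause.strip()
        if clause == "Doesn't exist":
            if actual is None:
                return True
        elif clause.startswith(">="):
            if actual is not None and actual >= int(clause[2:]):
                return True
        elif clause.startswith("="):
            if actual is not None and actual == int(clause[1:]):
                return True
    return False

# Examples drawn from rows above:
# PrivateProfile\Logging\LogFileSize must be >= 16384 KB
print(is_compliant(32768, ">= 16384"))             # True
# PublicProfile\AllowLocalPolicyMerge: absent or 0 is compliant
print(is_compliant(None, "Doesn't exist or = 0"))  # True
```

On an actual server the `actual` argument would come from the listed key path (for example via Python's `winreg` module); it is passed in directly here so the sketch stays platform-independent.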
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: <p><span>This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Kerberos Authentication Service<br /><sub>(AZ-WIN-00004)</sub> |<br />**Key Path**: {0CCE9242-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Kerberos Authentication Service<br /><sub>(AZ-WIN-00004)</sub> |**Description**: This subcategory reports the results of events generated after a Kerberos authentication TGT request. Kerberos is a distributed authentication service that allows a client running on behalf of a user to prove its identity to a server without sending data across the network. This helps prevent an attacker or server from impersonating a user. - 4768: A Kerberos authentication ticket (TGT) was requested. - 4771: Kerberos pre-authentication failed. - 4772: A Kerberos authentication ticket request failed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9242-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Account Management |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Audit Distribution Group Management<br /><sub>(CCE-36265-7)</sub> |<br />**Key Path**: {0CCE9238-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Distribution Group Management<br /><sub>(CCE-36265-7)</sub> |**Description**: This subcategory reports each event of distribution group management, such as when a distribution group is created, changed, or deleted or when a member is added to or removed from a distribution group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of group accounts. Events for this subcategory include: - 4744: A security-disabled local group was created. - 4745: A security-disabled local group was changed. - 4746: A member was added to a security-disabled local group. - 4747: A member was removed from a security-disabled local group. - 4748: A security-disabled local group was deleted. - 4749: A security-disabled global group was created. - 4750: A security-disabled global group was changed. - 4751: A member was added to a security-disabled global group. - 4752: A member was removed from a security-disabled global group. - 4753: A security-disabled global group was deleted. - 4759: A security-disabled universal group was created. - 4760: A security-disabled universal group was changed. - 4761: A member was added to a security-disabled universal group. - 4762: A member was removed from a security-disabled universal group. - 4763: A security-disabled universal group was deleted. The recommended state for this setting is to include: `Success`.<br />**Key Path**: {0CCE9238-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: - 4782: The password hash of an account was accessed. - 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical | |Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. 
- 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical | |Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. 
The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | ||||| |Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-114d5f4b-ca8e-9bf0-63f1-eebfa94c5e74) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - DS Access |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Audit Directory Service Access<br /><sub>(CCE-37433-0)</sub> |<br />**Key Path**: {0CCE923B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Directory Service Changes<br /><sub>(CCE-37616-0)</sub> |<br />**Key Path**: {0CCE923C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Directory Service Replication<br /><sub>(AZ-WIN-00093)</sub> |<br />**Key Path**: {0CCE923D-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= No Auditing<br /><sub>(Audit)</sub> |Critical |
+|Audit Directory Service Access<br /><sub>(CCE-37433-0)</sub> |**Description**: This subcategory reports when an AD DS object is accessed. Only objects with SACLs cause audit events to be generated, and only when they are accessed in a manner that matches their SACL. These events are similar to the directory service access events in previous versions of Windows Server. This subcategory applies only to Domain Controllers. Events for this subcategory include: - 4662 : An operation was performed on an object. The recommended state for this setting is to include: `Failure`.<br />**Key Path**: {0CCE923B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Directory Service Changes<br /><sub>(CCE-37616-0)</sub> |**Description**: This subcategory reports changes to objects in Active Directory Domain Services (AD DS). The types of changes that are reported are create, modify, move, and undelete operations that are performed on an object. DS Change auditing, where appropriate, indicates the old and new values of the changed properties of the objects that were changed. Only objects with SACLs cause audit events to be generated, and only when they are accessed in a manner that matches their SACL. Some objects and properties do not cause audit events to be generated due to settings on the object class in the schema. This subcategory applies only to Domain Controllers. Events for this subcategory include: - 5136: A directory service object was modified. - 5137: A directory service object was created. - 5138: A directory service object was undeleted. - 5139: A directory service object was moved. The recommended state for this setting is to include: `Success`.<br />**Key Path**: {0CCE923C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Directory Service Replication<br /><sub>(AZ-WIN-00093)</sub> |**Description**: This subcategory reports when replication between two domain controllers begins and ends. Events for this subcategory include: - 4932: Synchronization of a replica of an Active Directory naming context has begun. - 4933: Synchronization of a replica of an Active Directory naming context has ended. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting.<br />**Key Path**: {0CCE923D-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= No Auditing<br /><sub>(Audit)</sub> |Critical |
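The Expected value column uses two comparison forms: `=` requires an exact match, while `>=` is satisfied by any configured value whose enabled flags include the expected ones (so `Success and Failure` satisfies `>= Success`, and `>= No Auditing` is always met). A small sketch of that check, treating each audit value as a set of enabled flags (the set model is an assumption for illustration, not the policy engine's actual implementation):

```python
# Flag sets for the audit values that appear in the Expected value column.
AUDIT_VALUES = {
    "No Auditing": frozenset(),
    "Success": frozenset({"Success"}),
    "Failure": frozenset({"Failure"}),
    "Success and Failure": frozenset({"Success", "Failure"}),
}

def satisfies(configured: str, operator: str, expected: str) -> bool:
    """Check a configured audit value against an expected value.
    '>=' means the configured flags must include the expected flags;
    '=' requires an exact match."""
    got, want = AUDIT_VALUES[configured], AUDIT_VALUES[expected]
    if operator == ">=":
        return want <= got  # subset test on the enabled flags
    if operator == "=":
        return got == want
    raise ValueError(f"unknown operator: {operator}")
```

For example, `satisfies("Success and Failure", ">=", "Failure")` holds, while `satisfies("Success", "=", "Success and Failure")` does not.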
## System Audit Policies - Logon-Logoff
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Audit Detailed File Share<br /><sub>(AZ-WIN-00100)</sub> |<br />**Key Path**: {0CCE9244-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit File Share<br /><sub>(AZ-WIN-00102)</sub> |<br />**Key Path**: {0CCE9224-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Detailed File Share<br /><sub>(AZ-WIN-00100)</sub> |**Description**: This subcategory allows you to audit attempts to access files and folders on a shared folder. Events for this subcategory include: - 5145: A network share object was checked to see whether the client can be granted desired access. The recommended state for this setting is to include: `Failure`.<br />**Key Path**: {0CCE9244-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit File Share<br /><sub>(AZ-WIN-00102)</sub> |**Description**: This policy setting allows you to audit attempts to access a shared folder. The recommended state for this setting is: `Success and Failure`. **Note:** There are no system access control lists (SACLs) for shared folders. If this policy setting is enabled, access to all shared folders on the system is audited.<br />**Key Path**: {0CCE9224-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: - 4671: An application attempted to access a blocked ordinal through the TBS. - 4691: Indirect access to an object was requested. - 4698: A scheduled task was created. - 4699: A scheduled task was deleted. - 4700: A scheduled task was enabled. - 4701: A scheduled task was disabled. - 4702: A scheduled task was updated. - 5888: An object in the COM+ Catalog was modified. - 5889: An object was deleted from the COM+ Catalog. - 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated only for all objects for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on a removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on a removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
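Once these subcategories generate events, the event IDs quoted in each description can drive log triage. A minimal sketch that filters exported Security-log records by the scheduled-task IDs from the Other Object Access row (the JSON-style export shape with an `EventID` field is an assumption for illustration):

```python
# Event IDs quoted in the subcategory descriptions above.
SCHEDULED_TASK_EVENTS = {4698, 4699, 4700, 4701, 4702}  # Other Object Access
DETAILED_FILE_SHARE_EVENTS = {5145}                     # Detailed File Share

def filter_events(records: list[dict], wanted_ids: set[int]) -> list[dict]:
    """Keep only records whose EventID belongs to the given subcategory."""
    return [r for r in records if r.get("EventID") in wanted_ids]

# Hypothetical exported records for demonstration.
sample = [
    {"EventID": 4698, "TaskName": "\\Updater"},   # scheduled task created
    {"EventID": 4624, "User": "alice"},           # logon event, not wanted here
    {"EventID": 5145, "Share": "\\\\SRV1\\data"}, # detailed file share access check
]
print(filter_events(sample, SCHEDULED_TASK_EVENTS))
```

The same set-membership filter applies to any of the tables here; only the ID set changes per subcategory.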
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4713: Kerberos policy was changed. - 4716: Trusted domain information was modified. - 4717: System security access was granted to an account. - 4718: System security access was removed from an account. - 4739: Domain Policy was changed. - 4864: A namespace collision was detected. - 4865: A trusted forest information entry was added. - 4866: A trusted forest information entry was removed. - 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Authorization Policy Change<br /><sub>(CCE-36320-0)</sub> |<br />**Key Path**: {0CCE9231-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Authorization Policy Change<br /><sub>(CCE-36320-0)</sub> |**Description**: This subcategory reports changes in authorization policy. Events for this subcategory include: - 4704: A user right was assigned. - 4705: A user right was removed. - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4714: Encrypted data recovery policy was changed. The recommended state for this setting is to include: `Success`.<br />**Key Path**: {0CCE9231-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: - 4944: The following policy was active when the Windows Firewall started. - 4945: A rule was listed when the Windows Firewall started. - 4946: A change has been made to Windows Firewall exception list. A rule was added. - 4947: A change has been made to Windows Firewall exception list. A rule was modified. - 4948: A change has been made to Windows Firewall exception list. A rule was deleted. - 4949: Windows Firewall settings were restored to the default values. - 4950: A Windows Firewall setting has changed. - 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. - 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. - 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. - 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. - 4956: Windows Firewall has changed the active profile. - 4957: Windows Firewall did not apply the following rule: - 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Other Policy Change Events<br /><sub>(AZ-WIN-00114)</sub> |<br />**Key Path**: {0CCE9234-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Policy Change Events<br /><sub>(AZ-WIN-00114)</sub> |**Description**: This subcategory contains events about EFS Data Recovery Agent policy changes, changes in Windows Filtering Platform filter, status on Security policy settings updates for local Group Policy settings, Central Access Policy changes, and detailed troubleshooting events for Cryptographic Next Generation (CNG) operations. - 5063: A cryptographic provider operation was attempted. - 5064: A cryptographic context operation was attempted. - 5065: A cryptographic context modification was attempted. - 5066: A cryptographic function operation was attempted. - 5067: A cryptographic function modification was attempted. - 5068: A cryptographic function provider operation was attempted. - 5069: A cryptographic function property operation was attempted. - 5070: A cryptographic function property modification was attempted. - 6145: One or more errors occurred while processing security policy in the group policy objects. The recommended state for this setting is to include: `Failure`.<br />**Key Path**: {0CCE9234-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: - 4715: The audit policy (SACL) on an object was changed. - 4719: System audit policy was changed. - 4902: The Per-user audit policy table was created. - 4904: An attempt was made to register a security event source. - 4905: An attempt was made to unregister a security event source. - 4906: The CrashOnAuditFail value has changed. - 4907: Auditing settings on object were changed. - 4908: Special Groups Logon table modified. - 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |

## System Audit Policies - Privilege Use

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Audit Sensitive Privilege Use<br /><sub>(CCE-36267-3)</sub> |**Description**: This subcategory reports when a user account or service uses a sensitive privilege. A sensitive privilege includes the following user rights: Act as part of the operating system, Backup files and directories, Create a token object, Debug programs, Enable computer and user accounts to be trusted for delegation, Generate security audits, Impersonate a client after authentication, Load and unload device drivers, Manage auditing and security log, Modify firmware environment values, Replace a process-level token, Restore files and directories, and Take ownership of files or other objects. Auditing this subcategory will create a high volume of events. Events for this subcategory include: - 4672: Special privileges assigned to new logon. - 4673: A privileged service was called. - 4674: An operation was attempted on a privileged object. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9228-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Sensitive Privilege Use<br /><sub>(CCE-36267-3)</sub> |**Description**: This subcategory reports when a user account or service uses a sensitive privilege. A sensitive privilege includes the following user rights: Act as part of the operating system, Back up files and directories, Create a token object, Debug programs, Enable computer and user accounts to be trusted for delegation, Generate security audits, Impersonate a client after authentication, Load and unload device drivers, Manage auditing and security log, Modify firmware environment values, Replace a process-level token, Restore files and directories, and Take ownership of files or other objects. Auditing this subcategory will create a high volume of events. Events for this subcategory include: - 4672: Special privileges assigned to new logon. - 4673: A privileged service was called. - 4674: An operation was attempted on a privileged object. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9228-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Audit IPsec Driver<br /><sub>(CCE-37853-9)</sub> |<br />**Key Path**: {0CCE9213-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Other System Events<br /><sub>(CCE-38030-3)</sub> |<br />**Key Path**: {0CCE9214-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit IPsec Driver<br /><sub>(CCE-37853-9)</sub> |**Description**: This subcategory reports on the activities of the Internet Protocol security (IPsec) driver. Events for this subcategory include: - 4960: IPsec dropped an inbound packet that failed an integrity check. If this problem persists, it could indicate a network issue or that packets are being modified in transit to this computer. Verify that the packets sent from the remote computer are the same as those received by this computer. This error might also indicate interoperability problems with other IPsec implementations. - 4961: IPsec dropped an inbound packet that failed a replay check. If this problem persists, it could indicate a replay attack against this computer. - 4962: IPsec dropped an inbound packet that failed a replay check. The inbound packet had too low a sequence number to ensure it was not a replay. - 4963: IPsec dropped an inbound clear text packet that should have been secured. This is usually due to the remote computer changing its IPsec policy without informing this computer. This could also be a spoofing attack attempt. - 4965: IPsec received a packet from a remote computer with an incorrect Security Parameter Index (SPI). This is usually caused by malfunctioning hardware that is corrupting packets. If these errors persist, verify that the packets sent from the remote computer are the same as those received by this computer. This error may also indicate interoperability problems with other IPsec implementations. In that case, if connectivity is not impeded, then these events can be ignored. - 5478: IPsec Services has started successfully. - 5479: IPsec Services has been shut down successfully. The shutdown of IPsec Services can put the computer at greater risk of network attack or expose the computer to potential security risks. - 5480: IPsec Services failed to get the complete list of network interfaces on the computer. 
This poses a potential security risk because some of the network interfaces may not get the protection provided by the applied IPsec filters. Use the IP Security Monitor snap-in to diagnose the problem. - 5483: IPsec Services failed to initialize RPC server. IPsec Services could not be started. - 5484: IPsec Services has experienced a critical failure and has been shut down. The shutdown of IPsec Services can put the computer at greater risk of network attack or expose the computer to potential security risks. - 5485: IPsec Services failed to process some IPsec filters on a plug-and-play event for network interfaces. This poses a potential security risk because some of the network interfaces may not get the protection provided by the applied IPsec filters. Use the IP Security Monitor snap-in to diagnose the problem. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9213-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other System Events<br /><sub>(CCE-38030-3)</sub> |**Description**: This subcategory reports on other system events. Events for this subcategory include: - 5024: The Windows Firewall Service has started successfully. - 5025: The Windows Firewall Service has been stopped. - 5027: The Windows Firewall Service was unable to retrieve the security policy from the local storage. The service will continue enforcing the current policy. - 5028: The Windows Firewall Service was unable to parse the new security policy. The service will continue with currently enforced policy. - 5029: The Windows Firewall Service failed to initialize the driver. The service will continue to enforce the current policy. - 5030: The Windows Firewall Service failed to start. - 5032: Windows Firewall was unable to notify the user that it blocked an application from accepting incoming connections on the network. - 5033: The Windows Firewall Driver has started successfully. - 5034: The Windows Firewall Driver has been stopped. - 5035: The Windows Firewall Driver failed to start. - 5037: The Windows Firewall Driver detected critical runtime error. Terminating. - 5058: Key file operation. - 5059: Key migration operation. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9214-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: - 4608: Windows is starting up. - 4609: Windows is shutting down. - 4616: The system time was changed. - 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: - 4610: An authentication package has been loaded by the Local Security Authority. - 4611: A trusted logon process has been registered with the Local Security Authority. - 4614: A notification package has been loaded by the Security Account Manager. - 4622: A security package has been loaded by the Local Security Authority. - 4697: A service was installed in the system. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
|Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: - 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. - 4615: Invalid use of LPC port. - 4618: A monitored security event pattern has occurred. - 4816: RPC detected an integrity violation while decrypting an incoming message. - 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. - 5056: A cryptographic self-test was performed. - 5057: A cryptographic primitive operation failed. - 5060: Verification operation failed. - 5061: Cryptographic operation. - 5062: A kernel-mode cryptographic self-test was performed. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
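A whole set of these expectations can be verified at once against `auditpol /get /category:* /r`, which emits CSV with `Subcategory GUID` and `Inclusion Setting` columns. A sketch of such a comparison (exact-match only for brevity; the `>=` rows would also need to accept supersets, and the header text may vary by OS version):

```python
import csv
import io

# Expected values for two System rows above, keyed by subcategory GUID.
BASELINE = {
    "{0CCE9213-69AE-11D9-BED3-505054503030}": "Success and Failure",  # Audit IPsec Driver
    "{0CCE9214-69AE-11D9-BED3-505054503030}": "Success and Failure",  # Audit Other System Events
}

def check_auditpol_csv(csv_text: str, baseline: dict[str, str]) -> dict[str, bool]:
    """Map each baseline GUID to True if the machine's configured
    Inclusion Setting matches the expected value exactly."""
    reader = csv.DictReader(io.StringIO(csv_text))
    current = {row["Subcategory GUID"]: row["Inclusion Setting"] for row in reader}
    return {guid: current.get(guid) == want for guid, want in baseline.items()}

# Hypothetical auditpol /r output captured from a server.
sample_csv = """Machine Name,Policy Target,Subcategory,Subcategory GUID,Inclusion Setting,Exclusion Setting
SRV1,System,IPsec Driver,{0CCE9213-69AE-11D9-BED3-505054503030},Success and Failure,
SRV1,System,Other System Events,{0CCE9214-69AE-11D9-BED3-505054503030},Success,
"""
print(check_auditpol_csv(sample_csv, BASELINE))
```

In the sample, the IPsec Driver row passes and Other System Events fails, because only `Success` is configured where `Success and Failure` is expected.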
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Access Credential Manager as a trusted caller<br /><sub>(CCE-37056-9)</sub> |**Description**: This security setting is used by Credential Manager during Backup and Restore. No accounts should have this user right, as it is only assigned to Winlogon. Users' saved credentials might be compromised if this user right is assigned to other entities. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTrustedCredManAccessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Access Credential Manager as a trusted caller<br /><sub>(CCE-37056-9)</sub> |**Description**: This security setting is used by Credential Manager during Backup and Restore. No accounts should have this user right, as it is only assigned to the Winlogon process. Users' saved credentials might be compromised if this user right is assigned to other entities. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTrustedCredManAccessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
|Access this computer from the network<br /><sub>(CCE-35818-4)</sub> |**Description**: <p><span>This policy setting allows other users on the network to connect to the computer and is required by various network protocols that include Server Message Block (SMB) based protocols, NetBIOS, Common Internet File System (CIFS), and Component Object Model Plus (COM+). - *Level 1 - Domain Controller.* The recommended state for this setting is: 'Administrators, Authenticated Users, ENTERPRISE DOMAIN CONTROLLERS'. - *Level 1 - Member Server.* The recommended state for this setting is: 'Administrators, Authenticated Users'.</span></p><br />**Key Path**: [Privilege Rights]SeNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users<br /><sub>(Policy)</sub> |Critical | |Act as part of the operating system<br /><sub>(CCE-36876-1)</sub> |**Description**: This policy setting allows a process to assume the identity of any user and thus gain access to the resources that the user is authorized to access. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTcbPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical | |Allow log on locally<br /><sub>(CCE-37659-0)</sub> |**Description**: This policy setting determines which users can interactively log on to computers in your environment. Logons that are initiated by pressing the CTRL+ALT+DEL key sequence on the client computer keyboard require this user right. Users who attempt to log on through Terminal Services or IIS also require this user right. The Guest account is assigned this user right by default. Although this account is disabled by default, Microsoft recommends that you enable this setting through Group Policy. 
However, this user right should generally be restricted to the Administrators and Users groups. Assign this user right to the Backup Operators group if your organization requires that they have this capability. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical | |Allow log on through Remote Desktop Services<br /><sub>(CCE-37072-6)</sub> |**Description**: <p><span>This policy setting determines which users or groups have the right to log on as a Terminal Services client. Remote desktop users require this user right. If your organization uses Remote Assistance as part of its help desk strategy, create a group and assign it this user right through Group Policy. If the help desk in your organization does not use Remote Assistance, assign this user right only to the Administrators group or use the restricted groups feature to ensure that no user accounts are part of the Remote Desktop Users group. Restrict this user right to the Administrators group, and possibly the Remote Desktop Users group, to prevent unwanted users from gaining access to computers on your network by means of the Remote Assistance feature. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators, Remote Desktop Users'. **Note:** A Member Server that holds the _Remote Desktop Services_ Role with _Remote Desktop Connection Broker_ Role Service will require a special exception to this recommendation, to allow the 'Authenticated Users' group to be granted this user right. 
**Note 2:** The above lists are to be treated as allowlists, which implies that the above principals need not be present for assessment of this recommendation to pass.</span></p><br />**Key Path**: [Privilege Rights]SeRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Remote Desktop Users<br /><sub>(Policy)</sub> |Critical |
-|Back up files and directories<br /><sub>(CCE-35912-5)</sub> |**Description**: This policy setting allows users to circumvent file and directory permissions to backup the system. This user right is enabled only when an application (such as NTBACKUP) attempts to access a file or directory through the NTFS file system backup application programming interface (API). Otherwise, the assigned file and directory permissions apply. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeBackupPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators, Server Operators<br /><sub>(Policy)</sub> |Critical |
+|Back up files and directories<br /><sub>(CCE-35912-5)</sub> |**Description**: This policy setting allows users to circumvent file and directory permissions to back up the system. This user right is enabled only when an application (such as NTBACKUP) attempts to access a file or directory through the NTFS file system backup application programming interface (API). Otherwise, the assigned file and directory permissions apply. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeBackupPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators, Server Operators<br /><sub>(Policy)</sub> |Critical |
|Bypass traverse checking<br /><sub>(AZ-WIN-00184)</sub> |**Description**: This policy setting allows users who do not have the Traverse Folder access permission to pass through folders when they browse an object path in the NTFS file system or the registry. This user right does not allow users to list the contents of a folder. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeChangeNotifyPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users, Backup Operators, Local Service, Network Service<br /><sub>(Policy)</sub> |Critical | |Change the system time<br /><sub>(CCE-37452-0)</sub> |**Description**: This policy setting determines which users and groups can change the time and date on the internal clock of the computers in your environment. Users who are assigned this user right can affect the appearance of event logs. When a computer's time setting is changed, logged events reflect the new time, not the actual time that the events occurred. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers. **Note:** Discrepancies between the time on the local computer and on the domain controllers in your environment may cause problems for the Kerberos authentication protocol, which could make it impossible for users to log on to the domain or obtain authorization to access domain resources after they are logged on. Also, problems will occur when Group Policy is applied to client computers if the system time is not synchronized with the domain controllers. 
The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeSystemtimePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Server Operators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical | |Change the time zone<br /><sub>(CCE-37700-2)</sub> |**Description**: This setting determines which users can change the time zone of the computer. This ability holds no great danger for the computer and may be useful for mobile workers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeTimeZonePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
-|Create a pagefile<br /><sub>(CCE-35821-8)</sub> |**Description**: This policy setting allows users to change the size of the pagefile. By making the pagefile extremely large or extremely small, an attacker could easily affect the performance of a compromised computer. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeCreatePagefilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Create a pagefile<br /><sub>(CCE-35821-8)</sub> |**Description**: This policy setting allows users to change the size of the pagefile. By configuring the pagefile to be either extremely large or extremely small, an attacker could easily affect the performance of a compromised computer. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeCreatePagefilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
|Create a token object<br /><sub>(CCE-36861-3)</sub> |**Description**: This policy setting allows a process to create an access token, which may provide elevated rights to access sensitive data. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreateTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning | |Create global objects<br /><sub>(CCE-37453-8)</sub> |**Description**: This policy setting determines whether users can create global objects that are available to all sessions. Users can still create objects that are specific to their own session if they do not have this user right. Users who can create global objects could affect processes that run under other users' sessions. This capability could lead to a variety of problems, such as application failure or data corruption. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeCreateGlobalPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, SERVICE, LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning | |Create permanent shared objects<br /><sub>(CCE-36532-0)</sub> |**Description**: This user right is useful to kernel-mode components that extend the object namespace. However, components that run in kernel mode have this user right inherently. Therefore, it is typically not necessary to specifically assign this user right. 
The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreatePermanentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning | |Create symbolic links<br /><sub>(CCE-35823-4)</sub> |**Description**: <p><span>This policy setting determines which users can create symbolic links. In Windows Vista, existing NTFS file system objects, such as files and folders, can be accessed by referring to a new kind of file system object called a symbolic link. A symbolic link is a pointer (much like a shortcut or .lnk file) to another file system object, which can be a file, folder, shortcut or another symbolic link. The difference between a shortcut and a symbolic link is that a shortcut only works from within the Windows shell. To other programs and applications, shortcuts are just another file, whereas with symbolic links, the concept of a shortcut is implemented as a feature of the NTFS file system. Symbolic links can potentially expose security vulnerabilities in applications that are not designed to use them. For this reason, the privilege for creating symbolic links should only be assigned to trusted users. By default, only Administrators can create symbolic links. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators' and (when the _Hyper-V_ Role is installed) 'NT VIRTUAL MACHINE\Virtual Machines'.</span></p><br />**Key Path**: [Privilege Rights]SeCreateSymbolicLinkPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT VIRTUAL MACHINE\Virtual Machines<br /><sub>(Policy)</sub> |Critical |
+|Debug programs<br /><sub>(AZ-WIN-73755)</sub> |**Description**: This policy setting determines which user accounts will have the right to attach a debugger to any process or to the kernel, which provides complete access to sensitive and critical operating system components. Developers who are debugging their own applications do not need to be assigned this user right; however, developers who are debugging new system components will need it. The recommended state for this setting is: `Administrators`. **Note:** This user right is considered a "sensitive privilege" for the purposes of auditing.<br />**Key Path**: [Privilege Rights]SeDebugPrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
|Deny access to this computer from the network<br /><sub>(CCE-37954-5)</sub> |**Description**: <p><span>This policy setting prohibits users from connecting to a computer from across the network, which would allow users to access and potentially modify data remotely. In high security environments, there should be no need for remote users to access data on a computer. Instead, file sharing should be accomplished through the use of network servers. - **Level 1 - Domain Controller.** The recommended state for this setting is to include: 'Guests, Local account'. - **Level 1 - Member Server.** The recommended state for this setting is to include: 'Guests, Local account and member of Administrators group'. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server. **Note:** Configuring a member server or standalone server as described above may adversely affect applications that create a local service account and place it in the Administrators group - in which case you must either convert the application to use a domain-hosted service account, or remove Local account and member of Administrators group from this User Right Assignment. Using a domain-hosted service account is strongly preferred over making an exception to this rule, where possible.</span></p><br />**Key Path**: [Privilege Rights]SeDenyNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical | |Deny log on as a batch job<br /><sub>(CCE-36923-1)</sub> |**Description**: This policy setting determines which accounts will not be able to log on to the computer as a batch job. A batch job is not a batch (.bat) file, but rather a batch-queue facility. Accounts that use the Task Scheduler to schedule jobs need this user right. 
The **Deny log on as a batch job** user right overrides the **Log on as a batch job** user right, which could be used to allow accounts to schedule jobs that consume excessive system resources. Such an occurrence could cause a DoS condition. Failure to assign this user right to the recommended accounts can be a security risk. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyBatchLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical | |Deny log on as a service<br /><sub>(CCE-36877-9)</sub> |**Description**: This security setting determines which service accounts are prevented from registering a process as a service. This policy setting supersedes the **Log on as a service** policy setting if an account is subject to both policies. The recommended state for this setting is to include: `Guests`. **Note:** This security setting does not apply to the System, Local Service, or Network Service accounts.<br />**Key Path**: [Privilege Rights]SeDenyServiceLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Profile system performance<br /><sub>(CCE-36052-9)</sub> |**Description**: This policy setting allows users to use tools to view the performance of different system processes, which could be abused to allow attackers to determine a system's active processes and provide insight into the potential attack surface of the computer. The recommended state for this setting is: `Administrators, NT SERVICE\WdiServiceHost`.<br />**Key Path**: [Privilege Rights]SeSystemProfilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT SERVICE\WdiServiceHost<br /><sub>(Policy)</sub> |Warning | |Replace a process level token<br /><sub>(CCE-37430-6)</sub> |**Description**: This policy setting allows one process or service to start another service or process with a different security access token, which can be used to modify the security access token of that sub-process and result in the escalation of privileges. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. 
**Note #2:** A Member Server with Microsoft SQL Server installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAssignPrimaryTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning | |Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
-|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
|Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|The Debug programs user right must only be assigned to the Administrators group.<br /><sub>(AZ-WIN-73755)</sub> |<br />**Key Path**: [Privilege Rights]SeDebugPrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators,Service,Local Service,Network Service<br /><sub>(Policy)</sub> |Important |
+|The Impersonate a client after authentication user right must only be assigned to Administrators, Service, Local Service, and Network Service.<br /><sub>(AZ-WIN-73785)</sub> |**Description**: This policy setting allows programs that run on behalf of a user to impersonate that user (or another specified account) so that they can act on behalf of the user. If this user right is required for this kind of impersonation, an unauthorized user will not be able to convince a client to connect, for example by remote procedure call (RPC) or named pipes, to a service that they have created to impersonate that client, which could elevate the unauthorized user's permissions to administrative or system levels. Services that are started by the Service Control Manager have the built-in Service group added by default to their access tokens. COM servers that are started by the COM infrastructure and configured to run under a specific account also have the Service group added to their access tokens. As a result, these processes are assigned this user right when they are started. Also, a user can impersonate an access token if any of the following conditions exist: - The access token that is being impersonated is for this user. - The user, in this logon session, logged on to the network with explicit credentials to create the access token. - The requested level is less than Impersonate, such as Anonymous or Identify. An attacker with the **Impersonate a client after authentication** user right could create a service, trick a client to make them connect to the service, and then impersonate that client to elevate the attacker's level of access to that of the client. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** This user right is considered a "sensitive privilege" for the purposes of auditing. 
**Note #2:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeImpersonatePrivilege<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Service, Local Service, Network Service<br /><sub>(Policy)</sub> |Important |
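Each user-rights row above pairs a **Key Path** in the `[Privilege Rights]` section of a security template with an expected principal list and an operator: `=` rows are treated as allowlists (no principals beyond the listed set may hold the right, and per the note above the listed principals need not all be present), while `>=` rows require the listed principals to be included. As a minimal, hypothetical sketch of that evaluation in Python — assuming principals are already resolved to names, whereas a real `secedit /export` INF lists SIDs such as `*S-1-5-32-544` — the check might look like:

```python
def parse_privilege_rights(inf_text: str) -> dict[str, set[str]]:
    """Parse the [Privilege Rights] section of a secedit-style INF export
    into {privilege name -> set of assigned principals}."""
    rights, in_section = {}, False
    for line in inf_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Track whether we are inside the [Privilege Rights] section.
            in_section = line == "[Privilege Rights]"
            continue
        if in_section and "=" in line:
            name, _, value = line.partition("=")
            rights[name.strip()] = {p.strip() for p in value.split(",") if p.strip()}
    return rights


def check(assigned: set[str], expected: set[str], op: str) -> bool:
    """Evaluate one baseline row. '=' is an allowlist (nothing beyond the
    expected set is granted); '>=' means the expected principals must all
    be present (used by the Deny* rights)."""
    if op == "=":
        return assigned <= expected
    if op == ">=":
        return expected <= assigned
    raise ValueError(f"unsupported operator: {op}")
```

For example, the `Act as part of the operating system` row (`= No One`) passes only when `SeTcbPrivilege` has an empty assignment set, and the `Deny access to this computer from the network` row (`>= Guests`) passes only when `Guests` appears in `SeDenyNetworkLogonRight`. The subset semantics for `=` are an interpretation of the allowlist note in the table, not the documented behavior of the guest configuration engine.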
## Windows Components

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Allow Diagnostic Data<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Registry)</sub> |Warning |
|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning |
|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Always install with elevated privileges<br /><sub>(CCE-37490-0)</sub> |**Description**: This setting controls whether or not Windows Installer should use system permissions when it installs any program on the system. **Note:** This setting appears both in the Computer Configuration and User Configuration folders. 
To make this setting effective, you must enable the setting in both folders. **Caution:** If enabled, skilled users can take advantage of the permissions this setting grants to change their privileges and gain permanent access to restricted files and folders. Note that the User Configuration version of this setting is not guaranteed to be secure. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\AlwaysInstallElevated<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning | |Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. **Note:** If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full" policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
-|Block all consumer Microsoft account user authentication<br /><sub>(AZ-WIN-20198)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\MicrosoftAccount\DisableUserAuth<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Block all consumer Microsoft account user authentication<br /><sub>(AZ-WIN-20198)</sub> |**Description**: This setting determines whether applications and services on the device can utilize new consumer Microsoft account authentication via the Windows `OnlineID` and `WebAccountManager` APIs. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\MicrosoftAccount\DisableUserAuth<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting, the local preference setting will take priority over Group Policy. If you disable or do not configure this setting, Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning |
+|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen. If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
|Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default 3389<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical | |Disable Windows Search Service<br /><sub>(AZ-WIN-00176)</sub> |**Description**: This registry setting disables the Windows Search Service<br />**Key Path**: System\CurrentControlSet\Services\Wsearch\Start<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 4<br /><sub>(Registry)</sub> |Critical | |Disallow Autoplay for non-volume devices<br /><sub>(CCE-37636-8)</sub> |**Description**: This policy setting disallows AutoPlay for MTP devices like cameras or phones. If you enable this policy setting, AutoPlay is not allowed for MTP devices like cameras or phones. If you disable or do not configure this policy setting, AutoPlay is enabled for non-volume devices.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoAutoplayfornonVolume<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Do not delete temp folders upon exit<br /><sub>(CCE-37946-1)</sub> |**Description**: This policy setting specifies whether Remote Desktop Services retains a user's per-session temporary folders at logoff. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DeleteTempDirsOnExit<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning | |Do not display the password reveal button<br /><sub>(CCE-37534-5)</sub> |**Description**: This policy setting allows you to configure the display of the password reveal button in password entry user experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredUI\DisablePasswordReveal<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the session ID. This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
|Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|PowerShell script block logging must be enabled.<br /><sub>(AZ-WIN-73591)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging\EnableScriptBlockLogging<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Important |
|Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. 
Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: Require user authentication for remote connections by using Network Level Authentication<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical | |Scan removable drives<br /><sub>(AZ-WIN-00177)</sub> |**Description**: This policy setting allows you to manage whether or not to scan for malicious software and unwanted software in the contents of removable drives such as USB flash drives when running a full scan. If you enable this setting removable drives will be scanned during any type of scan. If you disable or do not configure this setting removable drives will not be scanned during a full scan. Removable drives may still be scanned during quick scan and custom scan.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableRemovableDriveScanning<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Security: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37145-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Security: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37145-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full" policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Security: Specify the maximum log file size (KB)<br /><sub>(CCE-37695-4)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 196608<br /><sub>(Registry)</sub> |Critical | |Send file samples when further analysis is required<br /><sub>(AZ-WIN-00126)</sub> |**Description**: This policy setting configures behavior of samples submission when opt-in for MAPS telemetry is set. 
Possible options are: (0x0) Always prompt (0x1) Send safe samples automatically (0x2) Never send (0x3) Send all samples automatically<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\SubmitSamplesConsent<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning | |Set client connection encryption level<br /><sub>(CCE-36627-8)</sub> |**Description**: This policy setting specifies whether the computer that is about to host the remote connection will enforce an encryption level for all data sent between it and the client computer for the remote session.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MinEncryptionLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Critical | |Set the default behavior for AutoRun<br /><sub>(CCE-38217-6)</sub> |**Description**: This policy setting sets the default behavior for Autorun commands. Autorun commands are generally stored in autorun.inf files. They often launch the installation program or other routines. Prior to Windows Vista, when media containing an autorun command is inserted, the system will automatically execute the program without user intervention. This creates a major security concern as code may be executed without user's knowledge. The default behavior starting with Windows Vista is to prompt the user whether autorun command is to be run. The autorun command is represented as a handler in the Autoplay dialog. If you enable this policy setting, an Administrator can change the default Windows Vista or later behavior for autorun to: a) Completely disable autorun commands, or b) Revert back to pre-Windows Vista behavior of automatically executing the autorun command. 
If you disable or do not configure this policy setting, Windows Vista or later will prompt the user whether the autorun command is to be run.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoAutorun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Setup: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-38276-2)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Setup: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-38276-2)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full" policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|Setup: Specify the maximum log file size (KB)<br /><sub>(CCE-37526-1)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical | |Sign-in last interactive user automatically after a system-initiated restart<br /><sub>(CCE-36977-7)</sub> |**Description**: This policy setting controls whether a device will automatically sign-in the last interactive user after Windows Update restarts the system. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableAutomaticRestartSignOn<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical | |Specify the interval to check for definition updates<br /><sub>(AZ-WIN-00152)</sub> |**Description**: This policy setting allows you to specify an interval at which to check for definition updates. The time value is represented as the number of hours between update checks. Valid values range from 1 (every hour) to 24 (once per day). If you enable this setting, checking for definition updates will occur at the interval specified. 
If you disable or do not configure this setting, checking for definition updates will occur at the default interval.<br />**Key Path**: SOFTWARE\Microsoft\Microsoft Antimalware\Signature Updates\SignatureUpdateInterval<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 8<br /><sub>(Registry)</sub> |Critical |
-|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full" policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
|System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
-|The Application Compatibility Program Inventory must be prevented from collecting data and sending the information to Microsoft.<br /><sub>(AZ-WIN-73543)</sub> |<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\AppCompat\DisableInventory<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
+|The Application Compatibility Program Inventory must be prevented from collecting data and sending the information to Microsoft.<br /><sub>(AZ-WIN-73543)</sub> |**Description**: Some features may communicate with the vendor, sending system information or downloading data or components for the feature. Turning off this capability will prevent potentially sensitive information from being sent outside the enterprise and will prevent uncontrolled updates to the system. This setting will prevent the Program Inventory from collecting data about a system and sending the information to Microsoft.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\AppCompat\DisableInventory<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
|Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. Note You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical | |Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. 
Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical | |Turn off Microsoft consumer experiences<br /><sub>(AZ-WIN-00144)</sub> |**Description**: This policy setting turns off experiences that help consumers make the most of their devices and Microsoft account. If you enable this policy setting, users will no longer see personalized recommendations from Microsoft and notifications about their Microsoft account. If you disable or do not configure this policy setting, users may see suggestions from Microsoft and notifications about their Microsoft account. Note: This setting only applies to Enterprise and Education SKUs.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableWindowsConsumerFeatures<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol, applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
|Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting behavior monitoring will be enabled. If you disable this setting behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on PowerShell Script Block Logging<br /><sub>(AZ-WIN-73591)</sub> |**Description**: This policy setting enables logging of all PowerShell script input to the `Applications and Services Logs\Microsoft\Windows\PowerShell\Operational` Event Log channel. The recommended state for this setting is: `Enabled`. **Note:** If logging of _Script Block Invocation Start/Stop Events_ is enabled (option box checked), PowerShell will log additional events when invocation of a command, script block, function, or script starts or stops. Enabling this option generates a high volume of event logs. CIS has intentionally chosen not to make a recommendation for this option, since it generates a large volume of events. **If an organization chooses to enable the optional setting (checked), this also conforms to the benchmark.**<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging\EnableScriptBlockLogging<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Important |
## Windows Settings - Security Settings |Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity | |||||
-|Adjust memory quotas for a process<br /><sub>(CCE-10849-8)</sub> |<br />**Key Path**: [Privilege Rights]SeIncreaseQuotaPrivilege<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrators, Local Service, Network Service<br /><sub>(Policy)</sub> |Warning |
+|Adjust memory quotas for a process<br /><sub>(CCE-10849-8)</sub> |**Description**: This policy setting allows a user to adjust the maximum amount of memory that is available to a process. The ability to adjust memory quotas is useful for system tuning, but it can be abused. In the wrong hands, it could be used to launch a denial of service (DoS) attack. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server with Microsoft SQL Server installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeIncreaseQuotaPrivilege<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | Administrators, Local Service, Network Service<br /><sub>(Policy)</sub> |Warning |
> [!NOTE] > Availability of specific Azure Policy guest configuration settings may vary in Azure Government
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
+
+ Title: Before you start with Azure HDInsight
+description: Points to consider before you create an Azure HDInsight cluster.
++ Last updated : 09/22/2022++
+# Points to consider before creating a cluster
+
+As a best practice, consider the following points before you create a cluster.
+
+## Bring your own database
+
+HDInsight has two options for configuring the databases used by a cluster:
+
+1. Bring your own database (external)
+1. Default database (internal)
+
+During cluster creation, the default configuration uses the internal database. After the cluster is created, the database type can't be changed. Hence, we recommend that you create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
+
+For more information, see [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db.md).
+
+## Keep your clusters up to date
+
+To take advantage of the latest HDInsight features, we recommend regularly migrating your HDInsight clusters to the latest version. HDInsight doesn't support in-place upgrades where existing clusters are upgraded to new component versions. You need to create a new cluster with the desired components and platform version and migrate your application to use the new cluster.
+
+As a best practice, we recommend that you update your clusters on a regular basis.
+
+HDInsight releases happen every 30 to 60 days, so it's best to move to the latest release as early as possible. We recommend that you upgrade your clusters within six months at most.
+
+For more information, see [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster.md).
+
+## Next steps
+
+* [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md)
+* [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Hdinsight Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview.md
description: An introduction to HDInsight, and the Apache Hadoop and Apache Spar
Previously updated : 07/28/2022 Last updated : 09/20/2022 #Customer intent: As a data analyst, I want to understand what Hadoop is and how it is offered in Azure HDInsight so that I can decide on using HDInsight instead of on-premises clusters. # What is Azure HDInsight?
-Azure HDInsight is a managed, full-spectrum, open-source analytics service in the cloud for enterprises. With HDInsight, you can use open-source frameworks such as Hadoop, Apache Spark, Apache Hive, LLAP, Apache Kafka, and more, in your Azure environment.
+Azure HDInsight is a managed, full-spectrum, open-source analytics service in the cloud for enterprises. With HDInsight, you can use open-source frameworks such as Apache Spark, Apache Hive, LLAP, Apache Kafka, Hadoop, and more, in your Azure environment.
## What is HDInsight and the Hadoop technology stack?
Familiar business intelligence (BI) tools retrieve, analyze, and report data tha
* [Connect Excel to Apache Hadoop with the Microsoft Hive ODBC Driver](./hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md) (requires Windows) - ## In-region data residency
-Spark, Hadoop, and LLAP don't store customer data, so these services automatically satisfy in-region data residency requirements including those specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
-
-Kafka and HBase do store customer data. This data is automatically stored by Kafka and HBase in a single region, so this service satisfies in-region data residency requirements including those specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+Spark, Hadoop, and LLAP don't store customer data, so these services automatically satisfy in-region data residency requirements specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+Kafka and HBase do store customer data. This data is automatically stored by Kafka and HBase in a single region, so this service satisfies in-region data residency requirements specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
Familiar business intelligence (BI) tools retrieve, analyze, and report data that is integrated with HDInsight by using either the Power Query add-in or the Microsoft Hive ODBC Driver.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: Disable events and delete workspaces - Azure Health Data Services
-description: This article provides resources on how to disable Events and delete workspaces.
+ Title: Disable Events and delete workspaces - Azure Health Data Services
+description: This article provides resources on how to disable the Events service and delete workspaces.
Previously updated : 07/06/2022 Last updated : 09/22/2022
-# Disable events and delete workspaces
+# Disable Events and delete workspaces
In this article, you'll learn how to disable the Events feature and delete workspaces in the Azure Health Data Services.
-## Disable events
+## Disable Events
-To disable events from sending event messages for a single Event Subscription, the Event Subscription must be deleted.
+To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
-1. Select the Event Subscription to be deleted. In this example, we'll be selecting an Event Subscription named **fhir-events**.
+1. Select the **Event Subscription** to be deleted. In this example, we'll be selecting an Event Subscription named **fhir-events**.
:::image type="content" source="media/disable-delete-workspaces/events-select-subscription.png" alt-text="Screenshot of Events subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription.png":::
To disable events from sending event messages for a single Event Subscription, t
:::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png":::
-3. To completely disable Events, delete all Event Subscriptions so that no Event Subscriptions remain.
+3. To completely disable Events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
:::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png"::: > [!NOTE]
->
> The Fast Healthcare Interoperability Resources (FHIR&#174;) service will automatically go into an **Updating** status to disable the Events extension when a full delete of Event Subscriptions is executed. The FHIR service will remain online while the operation is completing. ## Delete workspaces
As an example:
## Next steps
-For more information about how to troubleshoot Events, see
+For more information about troubleshooting Events, see the Events troubleshooting guide:
>[!div class="nextstepaction"] >[Troubleshoot Events](./events-troubleshooting-guide.md)
iot-dps Concepts Custom Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-custom-allocation.md
+
+ Title: Using custom allocation policies with Azure IoT Hub Device Provisioning Service
+description: Understand custom allocation policies with the Azure IoT Hub Device Provisioning Service (DPS)
++ Last updated : 09/09/2022++++++
+# Understand custom allocation policies with Azure IoT Hub Device Provisioning Service
+
+Custom allocation policies give you more control over how devices are assigned to your IoT hubs. By using custom allocation policies, you can define your own allocation policies when the built-in policies provided by the Device Provisioning Service (DPS) don't meet the requirements of your scenario.
+
+For example, maybe you want to examine the certificate a device is using during provisioning and assign the device to an IoT hub based on a certificate property. Or, maybe you have information stored in a database for your devices and need to query the database to determine which IoT hub a device should be assigned to or how the device's initial twin should be set.
+
+You implement a custom allocation policy in a webhook hosted in [Azure Functions](../azure-functions/functions-overview.md). You can then configure the webhook in one or more individual enrollments and enrollment groups. When a device registers through a configured enrollment entry, DPS calls the webhook, which returns the IoT hub to register the device to and, optionally, the initial twin settings for the device and any information to be returned directly to the device.
+
+## Overview
+
+The following steps describe how custom allocation policies work:
+
+1. A custom allocation developer develops a webhook that implements the intended allocation policy and deploys it as an HTTP Trigger function to Azure Functions. The webhook takes information about the DPS enrollment entry and the device and returns the IoT hub that the device should be registered to and, optionally, information about the device's initial state.
+
+1. An IoT operator configures one or more individual enrollments and/or enrollment groups for custom allocation and provides calling details for the custom allocation webhook in Azure Functions.
+
+1. When a device [registers](/rest/api/iot-dps/device/runtime-registration/register-device) through an enrollment entry configured for the custom allocation webhook, DPS sends a POST request to the webhook with the request body set to an **AllocationRequest** request object. The **AllocationRequest** object contains information about the device trying to provision and the individual enrollment or enrollment group it's provisioning through. The device information can include an optional custom payload sent from the device in its registration request. For more information, see [Custom allocation policy request](#custom-allocation-policy-request).
+
+1. The Azure Function executes and returns an **AllocationResponse** object on success. The **AllocationResponse** object contains the IoT hub the device should be provisioned to, the initial twin state, and an optional custom payload to return to the device. For more information, see [Custom allocation policy response](#custom-allocation-policy-response).
+
+1. DPS assigns the device to the IoT hub indicated in the response, and, if an initial twin is returned, sets the initial twin for the device accordingly. If a custom payload is returned by the webhook, it's passed to the device along with the assigned IoT hub and authentication details in the [registration response](/rest/api/iot-dps/device/runtime-registration/register-device#deviceregistrationresult) from DPS.
+
+1. The device connects to the assigned IoT hub and downloads its initial twin state. If a custom payload is returned in the registration response, the device uses it according to its own client-side logic.
+
+The following sections provide more detail about the custom allocation request and response, custom payloads, and policy implementation. For a complete end-to-end example of a custom allocation policy, see [Use custom allocation policies](tutorial-custom-allocation-policies.md).
+
+## Custom allocation policy request
+
+DPS sends a POST request to your webhook on the following endpoint: `https://{your-function-app-name}.azurewebsites.net/api/{your-http-trigger}`
+
+The request body is an **AllocationRequest** object:
+
+| Property name | Description |
+||-|
+| individualEnrollment | An [individual enrollment record](/rest/api/iot-dps/service/individual-enrollment/get#individualenrollment) that contains properties associated with the individual enrollment that the allocation request originated from. Present if the device is registering through an individual enrollment. |
+| enrollmentGroup | An [enrollment group record](/rest/api/iot-dps/service/enrollment-group/get#enrollmentgroup) that contains the properties associated with the enrollment group that the allocation request originated from. Present if the device is registering through an enrollment group. |
+| deviceRuntimeContext | An object that contains properties associated with the device that is registering. Always present. |
+| linkedHubs | An array that contains the hostnames of the IoT hubs that are linked to the enrollment entry that the allocation request originated from. The device may be assigned to any one of these IoT hubs. Always present. |
+
+The **DeviceRuntimeContext** object has the following properties:
+
+| Property | Type | Description |
+|-||-|
+| registrationId | string | The registration ID provided by the device at runtime. Always present. |
+| currentIotHubHostName | string | The hostname of the IoT hub the device was previously assigned to (if any). Not present if this is an initial assignment. You can use this property to determine whether this is an initial assignment for the device or whether the device has been previously assigned. |
+| currentDeviceId | string | The device ID from the device's previous assignment (if any). Not present if this is an initial assignment. |
+| x509 | X509DeviceAttestation | For X.509 attestation, contains certificate details. |
+| symmetricKey | SymmetricKeyAttestation | For symmetric key attestation, contains primary and secondary key details. |
+| tpm | TpmAttestation | For TPM attestation, contains endorsement key and storage root key details. |
+| payload | object | Contains properties specified by the device in the payload property during registration. Present if the device sends a custom payload in the DPS registration request. |
+
+The following JSON shows the **AllocationRequest** object sent by DPS for a device registering through a symmetric key based enrollment group.
+
+```json
+{
+ "enrollmentGroup":{
+ "enrollmentGroupId":"contoso-custom-allocated-devices",
+ "attestation":{
+ "type":"symmetricKey"
+ },
+ "capabilities":{
+ "iotEdge":false
+ },
+ "etag":"\"13003fea-0000-0300-0000-62d1d5e50000\"",
+ "provisioningStatus":"enabled",
+ "reprovisionPolicy":{
+ "updateHubAssignment":true,
+ "migrateDeviceData":true
+ },
+ "createdDateTimeUtc":"2022-07-05T21:27:16.8123235Z",
+ "lastUpdatedDateTimeUtc":"2022-07-15T21:02:29.5922255Z",
+ "allocationPolicy":"custom",
+ "iotHubs":[
+ "custom-allocation-toasters-hub.azure-devices.net",
+ "custom-allocation-heatpumps-hub.azure-devices.net"
+ ],
+ "customAllocationDefinition":{
+ "webhookUrl":"https://custom-allocation-function-app-3.azurewebsites.net/api/HttpTrigger1?****",
+ "apiVersion":"2021-10-01"
+ }
+ },
+ "deviceRuntimeContext":{
+ "registrationId":"breakroom499-contoso-tstrsd-007",
+ "symmetricKey":{
+
+ }
+ },
+ "linkedHubs":[
+ "custom-allocation-toasters-hub.azure-devices.net",
+ "custom-allocation-heatpumps-hub.azure-devices.net"
+ ]
+}
+```
+
+Because this is the initial registration for the device, the **deviceRuntimeContext** property contains only the registration ID and the authentication details for the device. The following JSON shows the **deviceRuntimeContext** for a subsequent call to register the same device. Notice that the current IoT Hub hostname and device ID are included in the request.
+
+```json
+{
+ "deviceRuntimeContext":{
+ "registrationId":"breakroom499-contoso-tstrsd-007",
+ "currentIotHubHostName":"custom-allocation-toasters-hub.azure-devices.net",
+ "currentDeviceId":"breakroom499-contoso-tstrsd-007",
+ "symmetricKey":{
+
+ }
+ },
+}
+```
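A webhook can tell an initial assignment apart from a reprovisioning attempt by checking whether `currentIotHubHostName` is present in the device runtime context. A minimal sketch (the helper name is illustrative, not part of any SDK):

```python
def is_initial_assignment(device_runtime_context):
    """Return True when the device has never been assigned to an IoT hub.

    DPS includes currentIotHubHostName in the device runtime context
    only when the device was previously assigned to a hub.
    """
    return "currentIotHubHostName" not in device_runtime_context

# Contexts shaped like the two examples above.
first_registration = {
    "registrationId": "breakroom499-contoso-tstrsd-007",
    "symmetricKey": {},
}
re_registration = {
    "registrationId": "breakroom499-contoso-tstrsd-007",
    "currentIotHubHostName": "custom-allocation-toasters-hub.azure-devices.net",
    "currentDeviceId": "breakroom499-contoso-tstrsd-007",
    "symmetricKey": {},
}
```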
+
+## Custom allocation policy response
+
+A successful request returns an **AllocationResponse** object.
+
+| Property | Description |
+|-|-|
+| initialTwin | Optional. An object that contains the desired properties and tags to set in the initial twin on the assigned IoT hub. DPS uses the initialTwin property to set the initial twin on the assigned IoT hub on initial assignment or when re-provisioning if the enrollment entry's migration policy is set to *Re-provision and reset to initial config*. In both of these cases, if the initialTwin is not returned or is set to null, DPS sets the twin on the assigned IoT hub to the initial twin settings in the enrollment entry. DPS ignores the initialTwin for all other re-provisioning settings in the enrollment entry. To learn more, see [Implementation details](#implementation-details). |
+| iotHubHostName | Required. The hostname of the IoT hub to assign the device to. This must be one of the IoT hubs passed in the **linkedHubs** property in the request. |
+| payload | Optional. An object that contains data to be passed back to the device in the Registration response. The exact data will depend on the implicit contract defined by the developer between the device and the custom allocation function. |
+
+The following JSON shows the **AllocationResponse** object returned by a custom allocation function to DPS for the example registration above.
+
+```json
+{
+ "iotHubHostName":"custom-allocation-toasters-hub.azure-devices.net",
+ "initialTwin":{
+ "properties":{
+ "desired":{
+ "state":"ready",
+ "darknessSetting":"medium"
+ }
+ },
+ "tags":{
+ "deviceType":"toaster"
+ }
+ }
+}
+```
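Putting the request and response shapes together, the core decision logic behind a custom allocation webhook can be sketched in plain Python. The `allocate` function and its toaster-routing rule are hypothetical, and the Azure Functions HTTP-trigger boilerplate is omitted:

```python
def allocate(allocation_request):
    """Pick an IoT hub and build an AllocationResponse-shaped dict.

    Sketch of a webhook's core logic; the routing rule (send "toaster"
    devices to a hub whose name contains "toasters") is illustrative.
    """
    ctx = allocation_request["deviceRuntimeContext"]
    linked_hubs = allocation_request["linkedHubs"]
    payload = ctx.get("payload", {})

    preferred = None
    if payload.get("deviceType") == "toaster":
        preferred = next((h for h in linked_hubs if "toasters" in h), None)

    return {
        # Required: must be one of the hubs in linkedHubs.
        "iotHubHostName": preferred or linked_hubs[0],
        # Optional: omit (or return null) to fall back to the
        # enrollment entry's initial twin settings.
        "initialTwin": {
            "tags": {"deviceType": payload.get("deviceType", "unknown")},
            "properties": {"desired": {"state": "ready"}},
        },
    }

# A request shaped like the AllocationRequest example above.
sample_request = {
    "deviceRuntimeContext": {
        "registrationId": "breakroom499-contoso-tstrsd-007",
        "payload": {"deviceType": "toaster"},
    },
    "linkedHubs": [
        "custom-allocation-toasters-hub.azure-devices.net",
        "custom-allocation-heatpumps-hub.azure-devices.net",
    ],
}
```

In an Azure Functions HTTP trigger, you would deserialize the request body into `allocation_request` and serialize the returned dict as the response body.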
+
+## Use device payloads in custom allocation
+
+Devices can send a custom payload that is passed by DPS to your custom allocation webhook, which can then use that data in its logic. The webhook may use this data in a number of ways, perhaps to determine which IoT hub to assign the device to, or to look up information in an external database that might be used to set properties on the initial twin. Conversely, your webhook can return data back to the device through DPS, which may be used in the device's client-side logic.
+
+For example, you may want to allocate devices based on the device model. In this case, you can configure the device to report its model information in the request payload when it registers with DPS. DPS will pass this payload to the custom allocation webhook, which will determine which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can return data back to the DPS as a JSON object in the webhook response, and DPS will return this data to your device in the registration response.
+
+### Device sends data payload to DPS
+
+A device calls the [register](/rest/api/iot-dps/device/runtime-registration/register-device) API to register with DPS. The request can be enhanced with the optional **payload** property. This property can contain any valid JSON object. The exact contents will depend on the requirements of your solution.
+
+For attestation with TPM, the request body looks like the following:
+
+```json
+{
+ "registrationId": "mydevice",
+ "tpm": {
+ "endorsementKey": "xxxx-device-endorsement-key-xxxxx",
+ "storageRootKey": "xxxx-device-storage-root-key-xxxxx"
+ },
+ "payload": { "property1": "value1", "property2": {"propertyA":"valueA", "property2-2":1234}, .. }
+}
+```
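As a sketch of how a device client might assemble this body, the following helper (hypothetical, not part of any SDK) adds the optional payload only when one is supplied; attestation-specific fields such as the `tpm` object are omitted because the SDK or transport layer typically supplies them:

```python
import json

def build_registration_body(registration_id, payload=None):
    """Build the JSON body for a DPS register call.

    The custom payload is included only when provided; attestation
    fields are left to the SDK/transport layer in this sketch.
    """
    body = {"registrationId": registration_id}
    if payload is not None:
        body["payload"] = payload
    return json.dumps(body)
```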
+
+### DPS sends data payload to custom allocation webhook
+
+If a device includes a payload in its registration request, DPS passes the payload in the **AllocationRequest.deviceRuntimeContext.payload** property when it calls the custom allocation webhook.
+
+For the TPM registration request in the previous section, the device runtime context will look like the following:
+
+```json
+{
+ "registrationId": "mydevice",
+ "tpm": {
+ "endorsementKey": "xxxx-device-endorsement-key-xxxxx",
+ "storageRootKey": "xxxx-device-storage-root-key-xxxxx"
+ },
+ "payload": { "property1": "value1", "property2": {"propertyA":"valueA", "property2-2":1234}, .. }
+}
+```
+
+If this isn't the initial registration for the device, then the runtime context will also include the **currentIoTHubHostname** and the **currentDeviceId** properties.
+
+### Custom allocation webhook returns data to DPS
+
+The custom allocation policy webhook can return data intended for a device to DPS in a JSON object using the **AllocationResponse.payload** property in the webhook response.
+
+The following JSON shows a webhook response that includes a payload:
+
+```json
+{
+ "iotHubHostName":"custom-allocation-toasters-hub.azure-devices.net",
+ "initialTwin":{
+ "properties":{
+ "desired":{
+ "state":"ready",
+ "darknessSetting":"medium"
+ }
+ },
+ "tags":{
+ "deviceType":"toaster"
+ }
+ },
+ "payload": { "property1": "value1" }
+}
+```
+
+### DPS sends data payload to device
+
+If DPS receives a payload in the webhook response, it passes this data back to the device in the **RegistrationOperationStatus.registrationState.payload** property in the response on a successful registration. The **registrationState** property is of type [DeviceRegistrationResult](/rest/api/iot-dps/device/runtime-registration/register-device#deviceregistrationresult).
+
+The following JSON shows a successful registration response for a TPM device that includes the **payload** property:
+
+```json
+{
+ "operationId":"5.316aac5bdc130deb.b1e02da8-xxxx-xxxx-xxxx-7ea7a6b7f550",
+ "status":"assigned",
+ "registrationState":{
+ "assignedHub":"myIotHub",
+ "createdDateTimeUtc" : "2022-08-01T22:57:47Z",
+ "deviceId" : "myDeviceId",
+ "etag" : "xxxx-etag-value-xxxxx",
+ "lastUpdatedDateTimeUtc" : "2022-08-01T22:57:47Z",
+ "payload": { "property1": "value1" },
+ "registrationId": "mydevice",
+    "status": "assigned",
+    "substatus": "initialAssignment",
+ "tpm": {"authenticationKey": "xxxx-encrypted-authentication-key-xxxxx"}
+ }
+}
+```
+
+## Implementation details
+
+The custom allocation webhook can be called for a device that has not been previously registered through DPS (initial assignment) or for a device that has been previously registered through DPS (reprovisioning). DPS supports the following reprovisioning policies: *Re-provision and migrate data*, *Re-provision and reset to initial config*, and *Never re-provision*. These policies are applied whenever a previously provisioned device is assigned to a new IoT hub. For more details, see [Reprovisioning](concepts-device-reprovision.md).
+
+The following points describe the requirements that your custom allocation webhook must observe and behavior that you should be aware of when designing your webhook:
+
+* The device should be assigned to one of the IoT hubs in the **AllocationRequest.linkedHubs** property. This property contains the list of IoT hubs by hostname that the device can be assigned to. This is typically composed of the IoT hubs selected for the enrollment entry. If no IoT hubs are selected in the enrollment entry, it will contain all the IoT hubs linked to the DPS instance. Finally, if the device is reprovisioning and the *Never re-provision* policy is set on the enrollment entry, it will contain only the IoT hub that the device is currently assigned to.
+
+* On initial assignment, if the **initialTwin** property is returned by the webhook, DPS will set the initial twin for the device on the assigned IoT hub accordingly. If the **initialTwin** property is omitted or is **null**, DPS sets the initial twin for the device to the initial twin setting specified in the enrollment entry.
+
+* On reprovisioning, DPS follows the reprovisioning policy set in the enrollment entry. DPS only uses the **initialTwin** property in the response if the current IoT hub is changed and the reprovisioning policy set on the enrollment entry is *Re-provision and reset to initial config*. In this case, DPS sets the initial twin for the device on the new IoT hub exactly as it would during initial assignment in the previous bullet. In all other cases, DPS ignores the **initialTwin** property.
+
+* If the **payload** property is set in the response, DPS will always return it to the device regardless of whether the request is for initial assignment or reprovisioning.
+
+* If a device has previously been provisioned to an IoT hub, the **AllocationRequest.deviceRuntimeContext** will contain a **currentIotHubHostName** property, which will be set to the hostname of the IoT hub where the device is currently assigned.
+
+* You can determine which of the reprovisioning policies is currently set on the enrollment entry by examining the **reprovisionPolicy** property of either the **AllocationRequest.individualEnrollment** or the **AllocationRequest.enrollmentGroup** property in the request. The following JSON shows the settings for the *Re-provision and migrate data* policy:
+
+ ```json
+ "reprovisionPolicy":{
+ "updateHubAssignment":true,
+ "migrateDeviceData":true
+ }
+ ```
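For quick reference when reading these flags in a webhook, the two booleans map onto the three named policies; a small illustrative helper (not part of any SDK) decodes them:

```python
def reprovision_policy_name(reprovision_policy):
    """Map the reprovisionPolicy flags from an AllocationRequest to the
    policy name used in the documentation. Illustrative helper only."""
    update = reprovision_policy.get("updateHubAssignment", False)
    migrate = reprovision_policy.get("migrateDeviceData", False)
    if not update:
        return "Never re-provision"
    return ("Re-provision and migrate data" if migrate
            else "Re-provision and reset to initial config")
```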
+
+## SDK support
+
+The DPS device SDKs provide APIs in C, C#, Java, and Node.js to help you register devices with DPS. Both the IoT Hub SDKs and the DPS SDKs provide classes that represent device and service artifacts like device twins and enrollment entries that might be helpful when developing custom allocation webhooks. To learn more about the Azure IoT SDKs available for IoT Hub and IoT Hub Device Provisioning service, see [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) and [Azure DPS SDKs](./libraries-sdks.md).
+
+## Next steps
+
+* For an end-to-end example using a custom allocation policy, see [Use custom allocation policies](tutorial-custom-allocation-policies.md)
+
+* To learn more about Azure Functions, see the [Azure Functions documentation](../azure-functions/index.yml)
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
Depending on the scenario, a device could send a request to a provisioning servi
* **Never re-provision**: The device is never reassigned to a different hub. This policy is provided for managing backwards compatibility. > [!NOTE]
-> DPS will always call the custom allocation webhook regardless of re-provisioning policy in case there is new [ReturnData](how-to-send-additional-data.md) for the device. If the re-provisioning policy is set to **never re-provision**, the webhook will be called but the device will not change its assigned hub.
+> DPS will always call the custom allocation webhook regardless of re-provisioning policy in case there is new [ReturnData](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation) for the device. If the re-provisioning policy is set to **never re-provision**, the webhook will be called but the device will not change its assigned hub.
When designing your solution and defining a reprovisioning logic there are a few things to consider. For example:
iot-dps Concepts Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-service.md
The service-level setting that determines how Device Provisioning Service assign
* **Static configuration via the enrollment list**: specification of the desired IoT hub in the enrollment list takes priority over the service-level allocation policy.
-* **Custom (Use Azure Function)**: A [custom allocation policy](how-to-use-custom-allocation-policies.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an Azure Function to assign devices to an IoT hub. The device provisioning service calls your Azure Function code providing all relevant information about the device and the enrollment to your code. Your function code is executed and returns the IoT hub information used to provisioning the device.
+* **Custom (Use Azure Function)**: A [custom allocation policy](concepts-custom-allocation.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an Azure Function to assign devices to an IoT hub. The device provisioning service calls your Azure Function code, providing all relevant information about the device and the enrollment to your code. Your function code is executed and returns the IoT hub information used to provision the device.
## Enrollment
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
Title: How to transfer a payload between device and Azure Device Provisioning Se
description: This document describes how to transfer a payload between device and Device Provisioning Service (DPS) Previously updated : 12/03/2021 Last updated : 09/21/2022
# How to transfer payloads between devices and DPS
-Sometimes DPS needs more data from devices to properly provision them to the right IoT Hub, and that data needs to be provided by the device. Vice versa, DPS can return data to the device to facilitate client-side logic.
+Devices that register with DPS are required to provide a registration ID and valid credentials (keys or X.509 certificates) when they register. However, there may be IoT solutions or scenarios in which additional data is needed from the device. For example, a custom allocation policy webhook may use information like a device model number to select an IoT hub to provision the device to. Likewise, a device may require additional data in the registration response to facilitate its client-side logic. DPS provides the capability for devices to both send and receive an optional payload when they register.
## When to use it
-This feature can be used as an enhancement for [custom allocation](./how-to-use-custom-allocation-policies.md). For example, you want to allocate your devices based on the device model without human intervention. In this case, you can configure the device to report its model information as part of the [register device call](/rest/api/iot-dps/device/runtime-registration/register-device). DPS will pass the device's payload to the custom allocation webhook. Then your function can decide which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can return data back to the device as a JSON object in the webhook response.
+Common scenarios for sending optional payloads are:
+
+* [Custom allocation policies](concepts-custom-allocation.md) can use the device payload to help select an IoT hub for a device or set its initial twin. For example, you may want to allocate your devices based on the device model. In this case, you can configure the device to report its model information when it registers. DPS will pass the device's payload to the custom allocation webhook. Then your webhook can decide which IoT hub the device will be provisioned to based on the device model information. If needed, the webhook can also return data back to the device as a JSON object in the webhook response. To learn more, see [Use device payloads in custom allocation](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation).
+
+* [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices *may* use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+
+* [IoT Central](../iot-central/core/overview-iot-central.md) devices that connect through DPS *should* follow [IoT Plug and Play conventions](../iot-develop/concepts-convention.md) and send their model ID when they register. IoT Central uses the model ID to assign the device to the correct device template. To learn more, see [Device implementation and best practices for IoT Central](../iot-central/core/concepts-device-implementation.md).
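Taken together, a custom allocation webhook's reply to DPS is itself just a JSON object. As a minimal sketch (the helper name and values here are hypothetical, not part of the DPS API), assembling a webhook response that assigns a hub and optionally returns data to the device might look like:

```python
import json

def build_webhook_response(iot_hub_host_name, initial_twin=None, payload=None):
    """Assemble the JSON body a custom allocation webhook returns to DPS."""
    # iotHubHostName tells DPS which linked IoT hub to assign the device to.
    response = {"iotHubHostName": iot_hub_host_name}
    if initial_twin is not None:
        # Optional initial twin (tags and desired properties) for the device.
        response["initialTwin"] = initial_twin
    if payload is not None:
        # Optional data that DPS relays back to the device in the
        # registrationState.payload property of the registration response.
        response["payload"] = payload
    return json.dumps(response)
```

The optional properties are omitted entirely when unused, matching the shape shown in the webhook response examples in this article.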
## Device sends data payload to DPS
-When your device is sending a [register device call](/rest/api/iot-dps/device/runtime-registration/register-device) to DPS, The register call can be enhanced to take other fields in the body. The body looks like the following:
+When your device calls [Register Device](/rest/api/iot-dps/device/runtime-registration/register-device) to register with DPS, it can include additional data in the **payload** property. For example, the following JSON shows the body for a request to register using TPM attestation:
```json
{
  "registrationId": "mydevice",
  "tpm": {
-    "endorsementKey": "stuff",
-    "storageRootKey": "things"
+    "endorsementKey": "xxxx-device-endorsement-key-xxxx",
+    "storageRootKey": "xxx-device-storage-root-key-xxxx"
  },
  "payload": { A JSON object that contains your additional data }
}
```
+The **payload** property must be a JSON object and can contain any data relevant to your IoT solution or scenario.
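As a rough sketch of assembling such a request body (the helper name and placeholder values are hypothetical, not from the DPS SDKs):

```python
import json

def build_registration_body(registration_id, endorsement_key,
                            storage_root_key, payload=None):
    """Assemble a DPS Register Device request body with an optional payload."""
    body = {
        "registrationId": registration_id,
        "tpm": {
            "endorsementKey": endorsement_key,
            "storageRootKey": storage_root_key,
        },
    }
    if payload is not None:
        # "payload" can be any JSON object relevant to your scenario, for
        # example a model number consumed by a custom allocation webhook.
        body["payload"] = payload
    return json.dumps(body)
```

When no payload is supplied, the property is simply left out of the body.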
+ ## DPS returns data to the device
-If the custom allocation policy webhook wishes to return some data to the device, it will pass the data back as a JSON object in the webhook response. The change is in the payload section below.
+DPS can return data back to the device in the registration response. Currently, this feature is exclusively used in custom allocation scenarios. If the custom allocation policy webhook needs to return data to the device, it can pass the data back as a JSON object in the webhook response. DPS will then pass that data back in the **registrationState.payload** property in the [Register Device response](/rest/api/iot-dps/device/runtime-registration/register-device#registrationoperationstatus). For example, the following JSON shows the body of a successful response to register using TPM attestation.
```json
-{
- "iotHubHostName": "sample-iot-hub-1.azure-devices.net",
- "initialTwin": {
- "tags": {
- "tag1": true
- },
- "properties": {
- "desired": {
- "prop1": true
- }
- }
- },
- "payload": { A JSON object that contains the data returned by the webhook }
-}
+{
+ "operationId":"5.316aac5bdc130deb.b1e02da8-xxxx-xxxx-xxxx-7ea7a6b7f550",
+ "status":"assigned",
+ "registrationState":{
+ "registrationId":"my-tpm-device",
+ "createdDateTimeUtc":"2022-08-31T22:02:50.5163352Z",
+ "assignedHub":"sample-iot-hub-1.azure-devices.net",
+ "deviceId":"my-tpm-device",
+ "status":"assigned",
+ "substatus":"initialAssignment",
+ "lastUpdatedDateTimeUtc":"2022-08-31T22:02:50.7370676Z",
+ "etag":"xxxx-etag-value-xxxx",
+ "tpm": {"authenticationKey": "xxxx-encrypted-authentication-key-xxxxx"},
+ "payload": { A JSON object that contains the data returned by the webhook }
+ }
+}
```
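On the device side, reading the webhook's returned data out of a response like the one above amounts to parsing the nested **registrationState.payload** property. A minimal sketch (hypothetical helper, standard library only):

```python
import json

def extract_webhook_payload(response_body):
    """Return the optional webhook payload from a Register Device response.

    Returns None when the response carries no registrationState.payload.
    """
    response = json.loads(response_body)
    # payload is nested under registrationState and may be absent.
    return response.get("registrationState", {}).get("payload")
```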
-## SDK support
+The **payload** property must be a JSON object and can contain any data relevant to your IoT solution or scenario.
-This feature is available in C, C#, JAVA and Node.js client SDKs. To learn more about the Azure IoT SDKs available for IoT Hub and the IoT Hub Device Provisioning service, see [Microsoft Azure IoT SDKs]( https://github.com/Azure/azure-iot-sdks).
+## SDK support
-[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+This feature is available in the C, C#, Java, and Node.js client SDKs. To learn more about the Azure IoT SDKs available for IoT Hub and the IoT Hub Device Provisioning Service, see [Microsoft Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks).
## IoT Edge support
The payload file (in this case `/home/aziot/payload/json`) can contain any valid
## Next steps
-* To learn how to provision devices using a custom allocation policy, see [How to use custom allocation policies](./how-to-use-custom-allocation-policies.md)
+* For an overview of custom allocation policies, see [Understand custom allocation policies](./concepts-custom-allocation.md)
+
+* To learn how to provision devices using a custom allocation policy, see [Use custom allocation policies](./tutorial-custom-allocation-policies.md)
iot-dps How To Use Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-custom-allocation-policies.md
- Title: Custom allocation policies with Azure IoT Hub Device Provisioning Service
-description: How to use custom allocation policies with the Azure IoT Hub Device Provisioning Service (DPS)
-- Previously updated : 01/26/2021------
-# How to use custom allocation policies
-
-A custom allocation policy gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an [Azure Function](../azure-functions/functions-overview.md) to assign devices to an IoT hub. The device provisioning service calls your Azure Function code providing all relevant information about the device and the enrollment. Your function code is executed and returns the IoT hub information used to provisioning the device.
-
-By using custom allocation policies, you define your own allocation policies when the policies provided by the Device Provisioning Service don't meet the requirements of your scenario.
-
-For example, maybe you want to examine the certificate a device is using during provisioning and assign the device to an IoT hub based on a certificate property. Or, maybe you have information stored in a database for your devices and need to query the database to determine which IoT hub a device should be assigned to.
-
-This article demonstrates a custom allocation policy using an Azure Function written in C#. Two new IoT hubs are created representing a *Contoso Toasters Division* and a *Contoso Heat Pumps Division*. Devices requesting provisioning must have a registration ID with one of the following suffixes to be accepted for provisioning:
-
-* **-contoso-tstrsd-007**: Contoso Toasters Division
-* **-contoso-hpsd-088**: Contoso Heat Pumps Division
-
-The devices will be provisioned based on one of these required suffixes on the registration ID. These devices will be simulated using a provisioning sample included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
-
-You perform the following steps in this article:
-
-* Use the Azure CLI to create two Contoso division IoT hubs (**Contoso Toasters Division** and **Contoso Heat Pumps Division**)
-* Create a new group enrollment using an Azure Function for the custom allocation policy
-* Create device keys for two device simulations.
-* Set up the development environment for the Azure IoT C SDK
-* Simulate the devices and verify that they are provisioned according to the example code in the custom allocation policy
--
-## Prerequisites
-
-The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-
-- [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
-
-- Latest version of [Git](https://git-scm.com/download/) installed.
-
-## Create the provisioning service and two divisional IoT hubs
-
-In this section, you use the Azure Cloud Shell to create a provisioning service and two IoT hubs representing the **Contoso Toasters Division** and the **Contoso Heat Pumps division**.
-
-> [!TIP]
-> The commands used in this article create the provisioning service and other resources in the West US location. We recommend that you create your resources in the region nearest you that supports Device Provisioning Service. You can view a list of available locations by running the command `az provider show --namespace Microsoft.Devices --query "resourceTypes[?resourceType=='ProvisioningServices'].locations | [0]" --out table` or by going to the [Azure Status](https://azure.microsoft.com/status/) page and searching for "Device Provisioning Service". In commands, locations can be specified either in one word or multi-word format; for example: westus, West US, WEST US, etc. The value is not case sensitive. If you use multi-word format to specify location, enclose the value in quotes; for example, `-- location "West US"`.
->
-
-1. Use the Azure Cloud Shell to create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
-
- The following example creates a resource group named *contoso-us-resource-group* in the *westus* region. It is recommended that you use this group for all resources created in this article. This approach will make clean up easier after you're finished.
-
- ```azurecli-interactive
- az group create --name contoso-us-resource-group --location westus
- ```
-
-2. Use the Azure Cloud Shell to create a device provisioning service (DPS) with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command. The provisioning service will be added to *contoso-us-resource-group*.
-
- The following example creates a provisioning service named *contoso-provisioning-service-1098* in the *westus* location. You must use a unique service name. Make up your own suffix in the service name in place of **1098**.
-
- ```azurecli-interactive
- az iot dps create --name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --location westus
- ```
-
- This command may take a few minutes to complete.
-
-3. Use the Azure Cloud Shell to create the **Contoso Toasters Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. The IoT hub will be added to *contoso-us-resource-group*.
-
- The following example creates an IoT hub named *contoso-toasters-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
-
- > [!CAUTION]
- > The example Azure Function code for the custom allocation policy requires the substring `-toasters-` in the hub name. Make sure to use a name containing the required toasters substring.
-
- ```azurecli-interactive
- az iot hub create --name contoso-toasters-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
- ```
-
- This command may take a few minutes to complete.
-
-4. Use the Azure Cloud Shell to create the **Contoso Heat Pumps Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. This IoT hub will also be added to *contoso-us-resource-group*.
-
- The following example creates an IoT hub named *contoso-heatpumps-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
-
- > [!CAUTION]
- > The example Azure Function code for the custom allocation policy requires the substring `-heatpumps-` in the hub name. Make sure to use a name containing the required heatpumps substring.
-
- ```azurecli-interactive
- az iot hub create --name contoso-heatpumps-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
- ```
-
- This command may take a few minutes to complete.
-
-5. The IoT hubs must be linked to the DPS resource.
-
- Run the following two commands to get the connection strings for the hubs you just created. Replace the hub resource names with the names you chose in each command:
-
- ```azurecli-interactive
- hubToastersConnectionString=$(az iot hub connection-string show --hub-name contoso-toasters-hub-1098 --key primary --query connectionString -o tsv)
- hubHeatpumpsConnectionString=$(az iot hub connection-string show --hub-name contoso-heatpumps-hub-1098 --key primary --query connectionString -o tsv)
- ```
-
- Run the following commands to link the hubs to the DPS resource. Replace the DPS resource name with the name you chose in each command:
-
- ```azurecli-interactive
- az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubToastersConnectionString --location westus
- az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubHeatpumpsConnectionString --location westus
- ```
----
-## Create the custom allocation function
-
-In this section, you create an Azure function that implements your custom allocation policy. This function decides which divisional IoT hub a device should be registered to based on whether its registration ID contains the string **-contoso-tstrsd-007** or **-contoso-hpsd-088**. It also sets the initial state of the device twin based on whether the device is a toaster or a heat pump.
-
-1. Sign in to the [Azure portal](https://portal.azure.com). From your home page, select **+ Create a resource**.
-
-2. In the *Search the Marketplace* search box, type "Function App". From the drop-down list select **Function App**, and then select **Create**.
-
-3. On **Function App** create page, under the **Basics** tab, enter the following settings for your new function app and select **Review + create**:
-
- **Resource Group**: Select the **contoso-us-resource-group** to keep all resources created in this article together.
-
- **Function App name**: Enter a unique function app name. This example uses **contoso-function-app-1098**.
-
- **Publish**: Verify that **Code** is selected.
-
- **Runtime Stack**: Select **.NET Core** from the drop-down.
-
- **Version**: Select **3.1** from the drop-down.
-
- **Region**: Select the same region as your resource group. This example uses **West US**.
-
- > [!NOTE]
- > By default, Application Insights is enabled. Application Insights is not necessary for this article, but it might help you understand and investigate any issues you encounter with the custom allocation. If you prefer, you can disable Application Insights by selecting the **Monitoring** tab and then selecting **No** for **Enable Application Insights**.
-
- ![Create an Azure Function App to host the custom allocation function](./media/how-to-use-custom-allocation-policies/create-function-app.png)
-
-4. On the **Summary** page, select **Create** to create the function app. Deployment may take several minutes. When it completes, select **Go to resource**.
-
-5. On the left pane of the function app **Overview** page, click **Functions** and then **+ Add** to add a new function.
-
-6. On the **Add function** page, click **HTTP Trigger**, then click the **Add** button.
-
-7. On the next page, click **Code + Test**. This allows you to edit the code for the function named **HttpTrigger1**. The **run.csx** code file should be opened for editing.
-
-8. Reference required NuGet packages. To create the initial device twin, the custom allocation function uses classes that are defined in two NuGet packages that must be loaded into the hosting environment. With Azure Functions, NuGet packages are referenced using a *function.proj* file. In this step, you save and upload a *function.proj* file for the required assemblies. For more information, see [Using NuGet packages with Azure Functions](../azure-functions/functions-reference-csharp.md#using-nuget-packages).
-
- 1. Copy the following lines into your favorite editor and save the file on your computer as *function.proj*.
-
- ```xml
- <Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>netstandard2.0</TargetFramework>
- </PropertyGroup>
- <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Devices.Provisioning.Service" Version="1.16.3" />
- <PackageReference Include="Microsoft.Azure.Devices.Shared" Version="1.27.0" />
- </ItemGroup>
- </Project>
- ```
-
- 2. Click the **Upload** button located above the code editor to upload your *function.proj* file. After uploading, select the file in the code editor using the drop down box to verify the contents.
-
-9. Make sure *run.csx* for **HttpTrigger1** is selected in the code editor. Replace the code for the **HttpTrigger1** function with the following code and select **Save**:
-
- ```csharp
- #r "Newtonsoft.Json"
-
- using System.Net;
- using Microsoft.AspNetCore.Mvc;
- using Microsoft.Extensions.Primitives;
- using Newtonsoft.Json;
-
- using Microsoft.Azure.Devices.Shared; // For TwinCollection
- using Microsoft.Azure.Devices.Provisioning.Service; // For TwinState
-
- public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
- {
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- // Get request body
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- dynamic data = JsonConvert.DeserializeObject(requestBody);
-
- log.LogInformation("Request.Body:...");
- log.LogInformation(requestBody);
-
- // Get registration ID of the device
- string regId = data?.deviceRuntimeContext?.registrationId;
-
- string message = "Uncaught error";
- bool fail = false;
- ResponseObj obj = new ResponseObj();
-
- if (regId == null)
- {
- message = "Registration ID not provided for the device.";
- log.LogInformation("Registration ID : NULL");
- fail = true;
- }
- else
- {
- string[] hubs = data?.linkedHubs?.ToObject<string[]>();
-
- // Must have hubs selected on the enrollment
- if (hubs == null)
- {
- message = "No hub group defined for the enrollment.";
- log.LogInformation("linkedHubs : NULL");
- fail = true;
- }
- else
- {
- // This is a Contoso Toaster Model 007
- if (regId.Contains("-contoso-tstrsd-007"))
- {
- //Find the "-toasters-" IoT hub configured on the enrollment
- foreach(string hubString in hubs)
- {
- if (hubString.Contains("-toasters-"))
- obj.iotHubHostName = hubString;
- }
-
- if (obj.iotHubHostName == null)
- {
- message = "No toasters hub found for the enrollment.";
- log.LogInformation(message);
- fail = true;
- }
- else
- {
- // Specify the initial tags for the device.
- TwinCollection tags = new TwinCollection();
- tags["deviceType"] = "toaster";
-
- // Specify the initial desired properties for the device.
- TwinCollection properties = new TwinCollection();
- properties["state"] = "ready";
- properties["darknessSetting"] = "medium";
-
- // Add the initial twin state to the response.
- TwinState twinState = new TwinState(tags, properties);
- obj.initialTwin = twinState;
- }
- }
- // This is a Contoso Heat pump Model 008
- else if (regId.Contains("-contoso-hpsd-088"))
- {
- //Find the "-heatpumps-" IoT hub configured on the enrollment
- foreach(string hubString in hubs)
- {
- if (hubString.Contains("-heatpumps-"))
- obj.iotHubHostName = hubString;
- }
-
- if (obj.iotHubHostName == null)
- {
- message = "No heat pumps hub found for the enrollment.";
- log.LogInformation(message);
- fail = true;
- }
- else
- {
- // Specify the initial tags for the device.
- TwinCollection tags = new TwinCollection();
- tags["deviceType"] = "heatpump";
-
- // Specify the initial desired properties for the device.
- TwinCollection properties = new TwinCollection();
- properties["state"] = "on";
- properties["temperatureSetting"] = "65";
-
- // Add the initial twin state to the response.
- TwinState twinState = new TwinState(tags, properties);
- obj.initialTwin = twinState;
- }
- }
- // Unrecognized device.
- else
- {
- fail = true;
- message = "Unrecognized device registration.";
- log.LogInformation("Unknown device registration");
- }
- }
- }
-
- log.LogInformation("\nResponse");
- log.LogInformation((obj.iotHubHostName != null) ? JsonConvert.SerializeObject(obj) : message);
-
- return (fail)
- ? new BadRequestObjectResult(message)
- : (ActionResult)new OkObjectResult(obj);
- }
-
- public class ResponseObj
- {
- public string iotHubHostName {get; set;}
- public TwinState initialTwin {get; set;}
- }
- ```
-
-## Create the enrollment
-
-In this section, you'll create a new enrollment group that uses the custom allocation policy. For simplicity, this article uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
-
-1. Still on the [Azure portal](https://portal.azure.com), open your provisioning service.
-
-2. Select **Manage enrollments** on the left pane, and then select the **Add enrollment group** button at the top of the page.
-
-3. On **Add Enrollment Group**, enter the following information, and select the **Save** button.
-
- **Group name**: Enter **contoso-custom-allocated-devices**.
-
- **Attestation Type**: Select **Symmetric Key**.
-
- **Auto Generate Keys**: This checkbox should already be checked.
-
- **Select how you want to assign devices to hubs**: Select **Custom (Use Azure Function)**.
-
- **Subscription**: Select the subscription where you created your Azure Function.
-
- **Function App**: Select your function app by name. **contoso-function-app-1098** was used in this example.
-
- **Function**: Select the **HttpTrigger1** function.
-
- ![Add custom allocation enrollment group for symmetric key attestation](./media/how-to-use-custom-allocation-policies/create-custom-allocation-enrollment.png)
-
-4. After saving the enrollment, reopen it and make a note of the **Primary Key**. You must save the enrollment first to have the keys generated. This key will be used to generate unique device keys for simulated devices later.
-
-## Derive unique device keys
-
-In this section, you create two unique device keys. One key will be used for a simulated toaster device. The other key will be used for a simulated heat pump device.
-
-To generate the device key, you use the **Primary Key** you noted earlier to compute the [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the device registration ID for each device and convert the result into Base64 format. For more information on creating derived device keys with enrollment groups, see the group enrollments section of [Symmetric key attestation](concepts-symmetric-key-attestation.md).
-
-For the example in this article, use the following two device registration IDs and compute a device key for both devices. Both registration IDs have a valid suffix to work with the example code for the custom allocation policy:
-
-* **breakroom499-contoso-tstrsd-007**
-* **mainbuilding167-contoso-hpsd-088**
--
-# [Windows](#tab/windows)
-
-If you're using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
-
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
-
-```powershell
-$KEY='oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA=='
-
-$REG_ID1='breakroom499-contoso-tstrsd-007'
-$REG_ID2='mainbuilding167-contoso-hpsd-088'
-
-$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
-$hmacsha256.key = [Convert]::FromBase64String($KEY)
-$sig1 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID1))
-$sig2 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID2))
-$derivedkey1 = [Convert]::ToBase64String($sig1)
-$derivedkey2 = [Convert]::ToBase64String($sig2)
-
-echo "`n`n$REG_ID1 : $derivedkey1`n$REG_ID2 : $derivedkey2`n`n"
-```
-
-```powershell
-breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
-```
-
-# [Linux](#tab/linux)
-
-If you're using a Linux workstation, you can use openssl to generate your derived device keys as shown in the following example.
-
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
-
-```bash
-KEY=oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA==
-
-REG_ID1=breakroom499-contoso-tstrsd-007
-REG_ID2=mainbuilding167-contoso-hpsd-088
-
-keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
-devkey1=$(echo -n $REG_ID1 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
-devkey2=$(echo -n $REG_ID2 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
-
-echo -e $"\n\n$REG_ID1 : $devkey1\n$REG_ID2 : $devkey2\n\n"
-```
-
-```bash
-breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
-```
---
-The simulated devices will use the derived device keys with each registration ID to perform symmetric key attestation.
-
-## Prepare an Azure IoT C SDK development environment
-
-In this section, you prepare the development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence.
-
-This section is oriented toward a Windows-based workstation. For a Linux example, see the set-up of the VMs in [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
-
-1. Download the [CMake build system](https://cmake.org/download/).
-
- It is important that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system.
-
-2. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the SDK.
-
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
-
- ```cmd/sh
- git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
- ```
-
- You should expect this operation to take several minutes to complete.
-
-4. Create a `cmake` subdirectory in the root directory of the git repository, and navigate to that folder. Run the following commands from the `azure-iot-sdk-c` directory:
-
- ```cmd/sh
- mkdir cmake
- cd cmake
- ```
-
-5. Run the following command, which builds a version of the SDK specific to your development client platform. A Visual Studio solution for the simulated device will be generated in the `cmake` directory.
-
- ```cmd
- cmake -Dhsm_type_symm_key:BOOL=ON -Duse_prov_client:BOOL=ON ..
- ```
-
- If `cmake` doesn't find your C++ compiler, you might get build errors while running the command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
-
- Once the build succeeds, the last few output lines will look similar to the following output:
-
- ```cmd/sh
- $ cmake -Dhsm_type_symm_key:BOOL=ON -Duse_prov_client:BOOL=ON ..
- -- Building for: Visual Studio 15 2017
- -- Selecting Windows SDK version 10.0.16299.0 to target Windows 10.0.17134.
- -- The C compiler identification is MSVC 19.12.25835.0
- -- The CXX compiler identification is MSVC 19.12.25835.0
-
- ...
-
- -- Configuring done
- -- Generating done
- -- Build files have been written to: E:/IoT Testing/azure-iot-sdk-c/cmake
- ```
-
-## Simulate the devices
-
-In this section, you update a provisioning sample named **prov\_dev\_client\_sample** located in the Azure IoT C SDK you set up previously.
-
-This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence will cause the toaster device to be recognized and assigned to the IoT hub using the custom allocation policy.
-
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note down the **_ID Scope_** value.
-
- ![Extract Device Provisioning Service endpoint information from the portal blade](./media/quick-create-simulated-device-x509/copy-id-scope.png)
-
-2. In Visual Studio, open the **azure_iot_sdks.sln** solution file that was generated by running CMake earlier. The solution file should be in the following location:
-
- ```
- azure-iot-sdk-c\cmake\azure_iot_sdks.sln
- ```
-
-3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
-
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
-
- ```c
- static const char* id_scope = "0ne00002193";
- ```
-
-5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown below:
-
- ```c
- SECURE_DEVICE_TYPE hsm_type;
- //hsm_type = SECURE_DEVICE_TYPE_TPM;
- //hsm_type = SECURE_DEVICE_TYPE_X509;
- hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY;
- ```
-
-6. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
-
-### Simulate the Contoso toaster device
-
-1. To simulate the toaster device, find the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** which is commented out.
-
- ```c
- // Set the symmetric key if using the auth type
- //prov_dev_set_symmetric_key_info("<symm_registration_id>", "<symmetric_Key>");
- ```
-
- Uncomment the function call and replace the placeholder values (including the angle brackets) with the toaster registration ID and derived device key you generated previously. The key value **JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=** shown below is only given as an example.
-
- ```c
- // Set the symmetric key if using the auth type
- prov_dev_set_symmetric_key_info("breakroom499-contoso-tstrsd-007", "JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=");
- ```
-
- Save the file.
-
-2. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
-
- The following output is an example of the simulated toaster device successfully booting up and connecting to the provisioning service instance to be assigned to the toasters IoT hub by the custom allocation policy:
-
- ```cmd
- Provisioning API Version: 1.3.6
-
- Registering Device
-
- Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service: contoso-toasters-hub-1098.azure-devices.net, deviceId: breakroom499-contoso-tstrsd-007
-
- Press enter key to exit:
- ```
-
-### Simulate the Contoso heat pump device
-
-1. To simulate the heat pump device, update the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** again with the heat pump registration ID and derived device key you generated earlier. The key value **6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=** shown below is also only given as an example.
-
- ```c
- // Set the symmetric key if using the auth type
- prov_dev_set_symmetric_key_info("mainbuilding167-contoso-hpsd-088", "6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=");
- ```
-
- Save the file.
-
-2. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
-
- The following output is an example of the simulated heat pump device successfully booting up and connecting to the provisioning service instance to be assigned to the Contoso heat pumps IoT hub by the custom allocation policy:
-
- ```cmd
- Provisioning API Version: 1.3.6
-
- Registering Device
-
- Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service: contoso-heatpumps-hub-1098.azure-devices.net, deviceId: mainbuilding167-contoso-hpsd-088
-
- Press enter key to exit:
- ```
-
-## Troubleshooting custom allocation policies
-
-The following table shows expected scenarios and the result error codes you might receive. Use this table to help troubleshoot custom allocation policy failures with your Azure Function.
-
-| Scenario | Registration result from Provisioning Service | Provisioning SDK Results |
-| --- | --- | --- |
-| The webhook returns 200 OK with 'iotHubHostName' set to a valid IoT hub host name | Result status: Assigned | SDK returns PROV_DEVICE_RESULT_OK along with hub information |
-| The webhook returns 200 OK with 'iotHubHostName' present in the response, but set to an empty string or null | Result status: Failed<br><br> Error code: CustomAllocationIotHubNotSpecified (400208) | SDK returns PROV_DEVICE_RESULT_HUB_NOT_SPECIFIED |
-| The webhook returns 401 Unauthorized | Result status: Failed<br><br>Error code: CustomAllocationUnauthorizedAccess (400209) | SDK returns PROV_DEVICE_RESULT_UNAUTHORIZED |
-| An Individual Enrollment was created to disable the device | Result status: Disabled | SDK returns PROV_DEVICE_RESULT_DISABLED |
-| The webhook returns error code >= 429 | DPS' orchestration will retry a number of times. The retry policy is currently:<br><br>&nbsp;&nbsp;- Retry count: 10<br>&nbsp;&nbsp;- Initial interval: 1s<br>&nbsp;&nbsp;- Increment: 9s | SDK will ignore error and submit another get status message in the specified time |
-| The webhook returns any other status code | Result status: Failed<br><br>Error code: CustomAllocationFailed (400207) | SDK returns PROV_DEVICE_RESULT_DEV_AUTH_ERROR |
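For reference, the success path in the first row corresponds to a webhook response body shaped like the following sketch. This is illustrative only: the field names (`iotHubHostName`, `initialTwin`) follow the example function's `ResponseObj`, while the hub name and twin values are placeholders.

```python
import json

# Hedged sketch of a successful custom-allocation webhook response body.
# The hub name and twin contents below are placeholder example values.
response_body = {
    "iotHubHostName": "contoso-toasters-hub-1098.azure-devices.net",
    "initialTwin": {
        "tags": {"deviceType": "toaster"},
        "properties": {
            "desired": {"state": "ready", "darknessSetting": "medium"}
        },
    },
}

# A 200 OK carrying a valid iotHubHostName results in an Assigned status.
payload = json.dumps(response_body)
```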
-
-## Clean up resources
-
-If you plan to continue working with the resources created in this article, you can leave them. If you don't plan to continue using the resources, use the following steps to delete all of the resources created in this article to avoid unnecessary charges.
-
-The steps here assume you created all resources in this article as instructed in the same resource group named **contoso-us-resource-group**.
-
-> [!IMPORTANT]
-> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the IoT Hub inside an existing resource group that contains resources you want to keep, only delete the IoT Hub resource itself instead of deleting the resource group.
->
-
-To delete the resource group by name:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-
-2. In the **Filter by name...** textbox, type the name of the resource group containing your resources, **contoso-us-resource-group**.
-
-3. To the right of your resource group in the result list, select **...** then **Delete resource group**.
-
-4. You'll be asked to confirm the deletion of the resource group. Type the name of your resource group again to confirm, and then select **Delete**. After a few moments, the resource group and all of its contained resources are deleted.
-
-## Next steps
-
-* To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-* To learn more about deprovisioning, see [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
Title: Tutorial for using custom allocation policies with Azure IoT Hub Device Provisioning Service (DPS)
-description: Tutorial for using custom allocation policies with the Azure IoT Hub Device Provisioning Service (DPS)
+ Title: Tutorial - Use custom allocation policies with Azure IoT Hub Device Provisioning Service
+description: This tutorial shows how to provision devices using a custom allocation policy in your Azure IoT Hub Device Provisioning Service (DPS) instance.
Previously updated : 04/23/2021 Last updated : 09/13/2022 -
-#Customer intent: As a new IoT developer, I want use my own code with DPS to allocate devices to IoT hubs.
+ # Tutorial: Use custom allocation policies with Device Provisioning Service (DPS)
-A custom allocation policy gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an [Azure Function](../azure-functions/functions-overview.md) that runs during provisioning to assign devices to an IoT hub. The device provisioning service calls your Azure Function code providing all relevant information about the device and the enrollment. Your function code is executed and returns the IoT hub information used to provisioning the device.
+Custom allocation policies give you more control over how devices are assigned to your IoT hubs. With custom allocation policies, you can define your own allocation policies when the policies provided by the Azure IoT Hub Device Provisioning Service (DPS) don't meet the requirements of your scenario. A custom allocation policy is implemented in a webhook hosted in [Azure functions](../azure-functions/functions-overview.md) and configured on one or more individual enrollments and/or enrollment groups. When a device registers with DPS using a configured enrollment entry, DPS calls the webhook to find out which IoT hub the device should be registered to and, optionally, its initial state. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md).
-By using custom allocation policies, you define your own allocation policies when the policies provided by the Device Provisioning Service don't meet the requirements of your scenario.
+This tutorial demonstrates a custom allocation policy using an Azure Function written in C#. Devices are assigned to one of two IoT hubs representing a *Contoso Toasters Division* and a *Contoso Heat Pumps Division*. Devices requesting provisioning must have a registration ID with one of the following suffixes to be accepted for provisioning:
-For example, maybe you want to examine the certificate a device is using during provisioning and assign the device to an IoT hub based on a certificate property. Or, maybe you have information stored in a database for your devices and need to query the database to determine which IoT hub a device should be assigned to.
+* **-contoso-tstrsd-007** for the Contoso Toasters Division
+* **-contoso-hpsd-088** for the Contoso Heat Pumps Division
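The routing rule these suffixes imply can be sketched in Python. This is illustrative only; the tutorial's actual policy is the C# Azure Function shown later, and `target_division` is a hypothetical helper name.

```python
# Illustrative sketch of the suffix-based routing rule (not the tutorial's
# actual C# function). `target_division` is a hypothetical helper name.
def target_division(registration_id: str):
    if "-contoso-tstrsd-007" in registration_id:
        return "Contoso Toasters Division"
    if "-contoso-hpsd-088" in registration_id:
        return "Contoso Heat Pumps Division"
    return None  # unrecognized devices are not provisioned
```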
-This article demonstrates an enrollment group with a custom allocation policy that uses an Azure Function written in C# to provision toaster devices using symmetric keys. Any device not recognized as a toaster device will not be provisioned to an IoT hub.
+Devices will be simulated using a provisioning sample included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
-Devices will request provisioning using provisioning sample code included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
--
-In this tutorial you will do the following:
+In this tutorial, you'll do the following:
> [!div class="checklist"]
-> * Create a new Azure Function App to support a custom allocation function
-> * Create a new group enrollment using an Azure Function for the custom allocation policy
-> * Create device keys for two devices
-> * Set up the development environment for example device code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c)
-> * Run the devices and verify that they are provisioned according to the custom allocation policy
-
+> * Use the Azure CLI to create a DPS instance and to create and link two Contoso division IoT hubs (**Contoso Toasters Division** and **Contoso Heat Pumps Division**) to it
+> * Create an Azure Function that implements the custom allocation policy
+> * Create a new enrollment group that uses the Azure Function for the custom allocation policy
+> * Create device symmetric keys for two simulated devices
+> * Set up the development environment for the Azure IoT C SDK
+> * Simulate the devices and verify that they are provisioned according to the example code in the custom allocation policy
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]

## Prerequisites
-* This article assumes you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) to create your IoT Hub and DPS instance.
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+
+- [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+
+- Latest version of [Git](https://git-scm.com/download/) installed.
++
+## Create the provisioning service and two divisional IoT hubs
+
+In this section, you use the Azure Cloud Shell to create a provisioning service and two IoT hubs representing the **Contoso Toasters Division** and the **Contoso Heat Pumps division**.
+
+> [!TIP]
+> The commands used in this tutorial create the provisioning service and other resources in the West US location. We recommend that you create your resources in the region nearest you that supports Device Provisioning Service. You can view a list of available locations by running the command `az provider show --namespace Microsoft.Devices --query "resourceTypes[?resourceType=='ProvisioningServices'].locations | [0]" --out table` or by going to the [Azure Status](https://azure.microsoft.com/status/) page and searching for "Device Provisioning Service". In commands, locations can be specified either in one-word or multi-word format; for example: westus, West US, WEST US, etc. The value is not case sensitive. If you use multi-word format to specify location, enclose the value in quotes; for example, `--location "West US"`.
+>
+
+1. Use the Azure Cloud Shell to create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+ The following example creates a resource group named *contoso-us-resource-group* in the *westus* region. It is recommended that you use this group for all resources created in this tutorial. This approach will make cleanup easier after you're finished.
+
+ ```azurecli-interactive
+ az group create --name contoso-us-resource-group --location westus
+ ```
+
+2. Use the Azure Cloud Shell to create a device provisioning service (DPS) with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command. The provisioning service will be added to *contoso-us-resource-group*.
+
+ The following example creates a provisioning service named *contoso-provisioning-service-1098* in the *westus* location. You must use a unique service name. Make up your own suffix in the service name in place of **1098**.
+
+ ```azurecli-interactive
+ az iot dps create --name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --location westus
+ ```
+
+ This command may take a few minutes to complete.
+
+3. Use the Azure Cloud Shell to create the **Contoso Toasters Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. The IoT hub will be added to *contoso-us-resource-group*.
+
+ The following example creates an IoT hub named *contoso-toasters-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
+
+ > [!CAUTION]
+ > The example Azure Function code for the custom allocation policy requires the substring `-toasters-` in the hub name. Make sure to use a name containing the required toasters substring.
+
+ ```azurecli-interactive
+ az iot hub create --name contoso-toasters-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
+ ```
+
+ This command may take a few minutes to complete.
+
+4. Use the Azure Cloud Shell to create the **Contoso Heat Pumps Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. This IoT hub will also be added to *contoso-us-resource-group*.
-* Latest version of [Git](https://git-scm.com/download/) installed.
+ The following example creates an IoT hub named *contoso-heatpumps-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
-* For a Windows development environment, [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled is required. Visual Studio 2015 and Visual Studio 2017 are also supported.
+ > [!CAUTION]
+ > The example Azure Function code for the custom allocation policy requires the substring `-heatpumps-` in the hub name. Make sure to use a name containing the required heatpumps substring.
+
+ ```azurecli-interactive
+ az iot hub create --name contoso-heatpumps-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
+ ```
+
+ This command may take a few minutes to complete.
+
+5. The IoT hubs must be linked to the DPS resource.
+
+ Run the following two commands to get the connection strings for the hubs you just created. Replace the hub resource names with the names you chose in each command:
+
+ ```azurecli-interactive
+ hubToastersConnectionString=$(az iot hub connection-string show --hub-name contoso-toasters-hub-1098 --key primary --query connectionString -o tsv)
+ hubHeatpumpsConnectionString=$(az iot hub connection-string show --hub-name contoso-heatpumps-hub-1098 --key primary --query connectionString -o tsv)
+ ```
+
+ Run the following commands to link the hubs to the DPS resource. Replace the DPS resource name with the name you chose in each command:
+
+ ```azurecli-interactive
+ az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubToastersConnectionString --location westus
+ az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubHeatpumpsConnectionString --location westus
+ ```
-* For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) documentation.
## Create the custom allocation function
-In this section, you create an Azure function that implements your custom allocation policy. This function decides whether a device should be registered to your IoT Hub based on whether its registration ID contains the string prefix **contoso-toaster**.
+In this section, you create an Azure function that implements your custom allocation policy. This function decides which divisional IoT hub a device should be registered to based on whether its registration ID contains the string **-contoso-tstrsd-007** or **-contoso-hpsd-088**. It also sets the initial state of the device twin based on whether the device is a toaster or a heat pump.
1. Sign in to the [Azure portal](https://portal.azure.com). From your home page, select **+ Create a resource**.

2. In the *Search the Marketplace* search box, type "Function App". From the drop-down list select **Function App**, and then select **Create**.
-3. On **Function App** create page, under the **Basics** tab, enter the following settings for your new function app and select **Review + create**:
+3. On the **Function App** create page, under the **Basics** tab, enter the following settings for your new function app and select **Review + create**:
- **Subscription**: If you have multiple subscriptions and the desired subscription is not selected, select the subscription you want to use.
+ **Resource Group**: Select the **contoso-us-resource-group** to keep all resources created in this tutorial together.
- **Resource Group**: This field allows you to create a new resource group, or choose an existing one to contain the function app. Choose the same resource group that contains the Iot hub you created for testing previously, for example, **TestResources**. By putting all related resources in a group together, you can manage them together.
-
- **Function App name**: Enter a unique function app name. This example uses **contoso-function-app**.
+ **Function App name**: Enter a unique function app name. This example uses **contoso-function-app-1098**.
**Publish**: Verify that **Code** is selected.
- **Runtime Stack**: Select **.NET Core** from the drop-down.
+ **Runtime Stack**: Select **.NET** from the drop-down.
+
+ **Version**: Select **3.1** from the drop-down.
 **Region**: Select the same region as your resource group. This example uses **West US**.

 > [!NOTE]
- > By default, Application Insights is enabled. Application Insights is not necessary for this article, but it might help you understand and investigate any issues you encounter with the custom allocation. If you prefer, you can disable Application Insights by selecting the **Monitoring** tab and then selecting **No** for **Enable Application Insights**.
+ > By default, Application Insights is enabled. Application Insights is not necessary for this tutorial, but it might help you understand and investigate any issues you encounter with the custom allocation. If you prefer, you can disable Application Insights by selecting the **Monitoring** tab and then selecting **No** for **Enable Application Insights**.
 ![Create an Azure Function App to host the custom allocation function](./media/tutorial-custom-allocation-policies/create-function-app.png)

4. On the **Summary** page, select **Create** to create the function app. Deployment may take several minutes. When it completes, select **Go to resource**.
-5. On the left pane under **Functions** click **Functions** and then **+ Add** to add a new function.
+5. On the left pane of the function app **Overview** page, select **Functions** and then **+ Create** to add a new function.
+
+6. On the **Create function** page, make sure that **Development environment** is set to **Develop in portal**. Then select the **HTTP Trigger** template followed by the **Create** button.
-6. On the templates page, select the **HTTP Trigger** tile, then select **Create Function**. A function named **HttpTrigger1** is created, and the portal displays the overview page for your function.
+7. When the **HttpTrigger1** function opens, select **Code + Test** on the left pane. This allows you to edit the code for the function. The **run.csx** code file should be opened for editing.
-7. Click **Code + Test** for your new function. The portal displays the contents of the **run.csx** code file.
+8. Reference required NuGet packages. To create the initial device twin, the custom allocation function uses classes that are defined in two NuGet packages that must be loaded into the hosting environment. With Azure Functions, NuGet packages are referenced using a *function.proj* file. In this step, you save and upload a *function.proj* file for the required assemblies. For more information, see [Using NuGet packages with Azure Functions](../azure-functions/functions-reference-csharp.md#using-nuget-packages).
-8. Replace the code for the **HttpTrigger1** function with the following code and select **Save**. Your custom allocation code is ready to be used.
+ 1. Copy the following lines into your favorite editor and save the file on your computer as *function.proj*.
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>netstandard2.0</TargetFramework>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Devices.Provisioning.Service" Version="1.18.1" />
+ <PackageReference Include="Microsoft.Azure.Devices.Shared" Version="1.30.1" />
+ </ItemGroup>
+ </Project>
+ ```
+
+ 2. Click the **Upload** button located above the code editor to upload your *function.proj* file.
+
+ 3. Select the *function.proj* file in the code editor using the drop-down box and verify its contents. If the file is empty, copy the lines above into it and save it. (Sometimes the upload creates the file without uploading the contents.)
+
+9. Make sure *run.csx* for **HttpTrigger1** is selected in the code editor. Replace the code for the **HttpTrigger1** function with the following code and select **Save**:
```csharp #r "Newtonsoft.Json"
In this section, you create an Azure function that implements your custom alloca
using Microsoft.Extensions.Primitives; using Newtonsoft.Json;
+ using Microsoft.Azure.Devices.Shared; // For TwinCollection
+ using Microsoft.Azure.Devices.Provisioning.Service; // For TwinState
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request.");
In this section, you create an Azure function that implements your custom alloca
} else {
- string[] hubs = data?.linkedHubs.ToObject<string[]>();
+ string[] hubs = data?.linkedHubs?.ToObject<string[]>();
// Must have hubs selected on the enrollment if (hubs == null)
In this section, you create an Azure function that implements your custom alloca
} else {
- // This is a Contoso Toaster
- if (regId.Contains("contoso-toaster"))
+ // This is a Contoso Toaster Model 007
+ if (regId.Contains("-contoso-tstrsd-007"))
{
- //Log IoT hubs configured for the enrollment
+ //Find the "-toasters-" IoT hub configured on the enrollment
foreach(string hubString in hubs) {
- log.LogInformation("linkedHub : " + hubString);
+ if (hubString.Contains("-toasters-"))
+ obj.iotHubHostName = hubString;
}
- obj.iotHubHostName = hubs[0];
- log.LogInformation("Selected hub : " + obj.iotHubHostName);
+ if (obj.iotHubHostName == null)
+ {
+ message = "No toasters hub found for the enrollment.";
+ log.LogInformation(message);
+ fail = true;
+ }
+ else
+ {
+ // Specify the initial tags for the device.
+ TwinCollection tags = new TwinCollection();
+ tags["deviceType"] = "toaster";
+
+ // Specify the initial desired properties for the device.
+ TwinCollection properties = new TwinCollection();
+ properties["state"] = "ready";
+ properties["darknessSetting"] = "medium";
+
+ // Add the initial twin state to the response.
+ TwinState twinState = new TwinState(tags, properties);
+ obj.initialTwin = twinState;
+ }
}
+ // This is a Contoso Heat Pump Model 088
+ else if (regId.Contains("-contoso-hpsd-088"))
+ {
+ //Find the "-heatpumps-" IoT hub configured on the enrollment
+ foreach(string hubString in hubs)
+ {
+ if (hubString.Contains("-heatpumps-"))
+ obj.iotHubHostName = hubString;
+ }
+
+ if (obj.iotHubHostName == null)
+ {
+ message = "No heat pumps hub found for the enrollment.";
+ log.LogInformation(message);
+ fail = true;
+ }
+ else
+ {
+ // Specify the initial tags for the device.
+ TwinCollection tags = new TwinCollection();
+ tags["deviceType"] = "heatpump";
+
+ // Specify the initial desired properties for the device.
+ TwinCollection properties = new TwinCollection();
+ properties["state"] = "on";
+ properties["temperatureSetting"] = "65";
+
+ // Add the initial twin state to the response.
+ TwinState twinState = new TwinState(tags, properties);
+ obj.initialTwin = twinState;
+ }
+ }
+ // Unrecognized device.
else { fail = true;
In this section, you create an Azure function that implements your custom alloca
public class ResponseObj { public string iotHubHostName {get; set;}
+ public TwinState initialTwin {get; set;}
} ```
-9. Just below the bottom of the **run.csx** code file, click **Logs** to monitor the logging from the custom allocation function.
-
## Create the enrollment
-In this section, you'll create a new enrollment group that uses the custom allocation policy. For simplicity, this article uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
+In this section, you'll create a new enrollment group that uses the custom allocation policy. For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
1. Still on the [Azure portal](https://portal.azure.com), open your provisioning service.

2. Select **Manage enrollments** on the left pane, and then select the **Add enrollment group** button at the top of the page.
-3. On **Add Enrollment Group**, enter the information in the table below and click the **Save** button.
+3. On **Add Enrollment Group**, enter the following information, and select the **Save** button.
+
+ **Group name**: Enter **contoso-custom-allocated-devices**.
+
+ **Attestation Type**: Select **Symmetric Key**.
+
+ **Auto Generate Keys**: This checkbox should already be checked.
+
+ **Select how you want to assign devices to hubs**: Select **Custom (Use Azure Function)**.
+
+ **Subscription**: Select the subscription where you created your Azure Function.
- | Field | Description and/or suggested value |
- | :- | :-- |
- | **Group name** | Enter **contoso-custom-allocated-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
- | **Attestation Type** | Select **Symmetric Key** |
- | **Auto Generate Keys** | This checkbox should already be checked. |
- | **Select how you want to assign devices to hubs** | Select **Custom (Use Azure Function)** |
- | **Select the IoT hubs this group can be assigned to** | Select the IoT hub you created previously when you completed the quick start. |
- | **Select Azure Function** | Select the subscription that contains the function app you created. Then select the **contoso-function-app** and **HttpTrigger1** for the function. |
+ **Function App**: Select your function app by name. **contoso-function-app-1098** was used in this example.
+
+ **Function**: Select the **HttpTrigger1** function.
![Add custom allocation enrollment group for symmetric key attestation](./media/tutorial-custom-allocation-policies/create-custom-allocation-enrollment.png)
-4. After saving the enrollment, reopen it and make a note of the **Primary Key**. You must save the enrollment first to have the keys generated. This primary symmetric key will be used to generate unique device keys for devices that attempt provisioning later.
+4. After saving the enrollment, reopen it and make a note of the **Primary Key**. You must save the enrollment first to have the keys generated. This key will be used to generate unique device keys for simulated devices later.
## Derive unique device keys
-Devices don't use the primary symmetric key directly. Instead, you use the primary key to derive a device key for each device. In this section, you create two unique device keys. One key will be used for a simulated toaster device. The other key will be used for a simulated heat pump device. The keys generated will allow both devices to attempt a registration. Only one device registration ID will have a valid suffix to be accepted by the custom allocation policy example code. As a result, one will be accepted and the other rejected.
-
-To derive the device key, you use the symmetric key you noted earlier to compute the [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the device registration ID for each device and convert the result into Base64 format. For more information on creating derived device keys with enrollment groups, see the group enrollments section of [Symmetric key attestation](concepts-symmetric-key-attestation.md).
+Devices don't use the enrollment group's primary symmetric key directly. Instead, you use the primary key to derive a device key for each device. In this section, you create two unique device keys. One key will be used for a simulated toaster device. The other key will be used for a simulated heat pump device.
-For the example in this article, use the following two device registration IDs with the code below to compute a device key for both devices:
+To derive the device key, you use the enrollment group **Primary Key** you noted earlier to compute the [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the device registration ID for each device and convert the result into Base64 format. For more information on creating derived device keys with enrollment groups, see the group enrollments section of [Symmetric key attestation](concepts-symmetric-key-attestation.md).
-* **contoso-toaster-007**
-* **contoso-heatpump-088**
+For the example in this tutorial, use the following two device registration IDs and compute a device key for both devices. Both registration IDs have a valid suffix to work with the example code for the custom allocation policy:
+* **breakroom499-contoso-tstrsd-007**
+* **mainbuilding167-contoso-hpsd-088**
# [Azure CLI](#tab/azure-cli)
-The IoT extension for the Azure CLI provides the [`compute-device-key`](/cli/azure/iot/dps#az-iot-dps-compute-device-key) command for generating derived device keys. This command can be used on Windows-based or Linux systems, from PowerShell or a Bash shell.
+The IoT extension for the Azure CLI provides the [`iot dps enrollment-group compute-device-key`](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-compute-device-key) command for generating derived device keys. This command can be used on Windows-based or Linux systems, from PowerShell or a Bash shell.
Replace the value of the `--key` argument with the **Primary Key** from your enrollment group.

```azurecli
-az iot dps compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id contoso-toaster-007
+az iot dps enrollment-group compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id breakroom499-contoso-tstrsd-007
"JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs="
```

```azurecli
-az iot dps compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id contoso-heatpump-088
+az iot dps enrollment-group compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id mainbuilding167-contoso-hpsd-088
"6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg="
```
+> [!NOTE]
+> You can also supply the enrollment group ID rather than the symmetric key to the `iot dps enrollment-group compute-device-key` command. For example:
+>
+> ```azurecli
+> az iot dps enrollment-group compute-device-key -g contoso-us-resource-group --dps-name contoso-provisioning-service-1098 --enrollment-id contoso-custom-allocated-devices --registration-id breakroom499-contoso-tstrsd-007
+> ```
+
# [PowerShell](#tab/powershell)

If you're using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
-Replace the value of **KEY** variable with the **Primary Key** you noted earlier after your enrollment group was created. The key value and output shown with the code below is only an example.
+Replace the value of **KEY** with the **Primary Key** you noted earlier.
```powershell
$KEY='oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA=='
-$REG_ID1='contoso-toaster-007'
-$REG_ID2='contoso-heatpump-088'
+$REG_ID1='breakroom499-contoso-tstrsd-007'
+$REG_ID2='mainbuilding167-contoso-hpsd-088'
$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
-$hmacsha256.key = [Convert]::FromBase64String($key)
+$hmacsha256.key = [Convert]::FromBase64String($KEY)
$sig1 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID1))
$sig2 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID2))
$derivedkey1 = [Convert]::ToBase64String($sig1)
$derivedkey2 = [Convert]::ToBase64String($sig2)
echo "`n`n$REG_ID1 : $derivedkey1`n$REG_ID2 : $derivedkey2`n`n"
```

```powershell
-contoso-toaster-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-contoso-heatpump-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
+breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
+mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
```

# [Bash](#tab/bash)
-If you're using a Linux workstation, you can use openssl to generate your derived device keys as shown in the following Bash example.
-
-Replace the value of **KEY** variable with the **Primary Key** you noted earlier after your enrollment group was created. The key value and output shown with the code below is only an example.
+If you're using a Linux workstation, you can use openssl to generate your derived device keys as shown in the following example.
+Replace the value of **KEY** with the **Primary Key** you noted earlier.
```bash
KEY=oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA==
-REG_ID1=contoso-toaster-007
-REG_ID2=contoso-heatpump-088
+REG_ID1=breakroom499-contoso-tstrsd-007
+REG_ID2=mainbuilding167-contoso-hpsd-088
keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
devkey1=$(echo -n $REG_ID1 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
devkey2=$(echo -n $REG_ID2 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
echo -e $"\n\n$REG_ID1 : $devkey1\n$REG_ID2 : $devkey2\n\n"
```

```bash
-contoso-toaster-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-contoso-heatpump-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
+breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
+mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
```
-## Prepare an Azure IoT C SDK development environment
+The simulated devices will use the derived device keys with each registration ID to perform symmetric key attestation.
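The derivation performed by the CLI, PowerShell, and Bash tabs above can also be sketched in Python. This helper is not part of the tutorial, and the group key shown is the example value used throughout this article:

```python
import base64
import hashlib
import hmac

def derive_device_key(group_primary_key_b64: str, registration_id: str) -> str:
    """Derive a per-device key: Base64(HMAC-SHA256(group key, registration ID))."""
    key_bytes = base64.b64decode(group_primary_key_b64)
    digest = hmac.new(key_bytes, registration_id.encode("ascii"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Example enrollment group primary key from this tutorial (example value only).
GROUP_KEY = "oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA=="

for reg_id in ("breakroom499-contoso-tstrsd-007", "mainbuilding167-contoso-hpsd-088"):
    print(reg_id, ":", derive_device_key(GROUP_KEY, reg_id))
```

Each registration ID produces a different derived key from the same group key, which is why every device needs its own computed key.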
-Devices will request provisioning using provisioning sample code included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
+## Prepare an Azure IoT C SDK development environment
In this section, you prepare the development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes the sample code for the simulated device. This simulated device will attempt provisioning during the device's boot sequence.
This sample code simulates a device boot sequence that sends the provisioning re
hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY;
```
-6. In the `main()` function, find the call to `Prov_Device_Register_Device()`. Just before that call, add the following lines of code that use [`Prov_Device_Set_Provisioning_Payload()`](/azure/iot-hub/iot-c-sdk-ref/prov-device-client-h/prov-device-set-provisioning-payload) to pass a custom JSON payload during provisioning. This can be used to provide more information to your custom allocation functions. This could also be used to pass the device type instead of examining the registration ID. For more information on sending and receiving custom data payloads with DPS, see [How to transfer payloads between devices and DPS](how-to-send-additional-data.md).
+6. In the `main()` function, find the call to `Prov_Device_Register_Device()`. Just before that call, add the following lines of code that use [`Prov_Device_Set_Provisioning_Payload()`](/azure/iot-hub/iot-c-sdk-ref/prov-device-client-h/prov-device-set-provisioning-payload) to pass a custom JSON payload during provisioning. This can be used to provide more information to your custom allocation functions. This could also be used to pass the device type instead of examining the registration ID. For more information on sending and receiving custom data payloads with DPS, see [How to transfer payloads between devices and DPS](concepts-custom-allocation.md#use-device-payloads-in-custom-allocation).
```c
// An example custom payload
```c
// Set the symmetric key if using they auth type
- prov_dev_set_symmetric_key_info("contoso-toaster-007", "JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=");
+ prov_dev_set_symmetric_key_info("breakroom499-contoso-tstrsd-007", "JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=");
```

    Save the file.

2. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
- The following text is example logging output from the custom allocation function code running for the toaster device. Notice a hub is successfully selected for a toaster device. Also notice the `payload` member that contains the custom JSON content you added to the code. This is available for your code to use within the `deviceRuntimeContext`.
-
- This logging is available by clicking **Logs** under the function code in the portal:
-
- ```cmd
- 2020-09-23T11:44:37.505 [Information] Executing 'Functions.HttpTrigger1' (Reason='This function was programmatically called via the host APIs.', Id=4596d45e-086f-4e86-929b-4a02814eee40)
- 2020-09-23T11:44:41.380 [Information] C# HTTP trigger function processed a request.
- 2020-09-23T11:44:41.381 [Information] Request.Body:...
- 2020-09-23T11:44:41.381 [Information] {"enrollmentGroup":{"enrollmentGroupId":"contoso-custom-allocated-devices","attestation":{"type":"symmetricKey"},"capabilities":{"iotEdge":false},"etag":"\"e8002126-0000-0100-0000-5f6b2a570000\"","provisioningStatus":"enabled","reprovisionPolicy":{"updateHubAssignment":true,"migrateDeviceData":true},"createdDateTimeUtc":"2020-09-23T10:58:31.62286Z","lastUpdatedDateTimeUtc":"2020-09-23T10:58:31.62286Z","allocationPolicy":"custom","iotHubs":["contoso-toasters-hub-1098.azure-devices.net"],"customAllocationDefinition":{"webhookUrl":"https://contoso-function-app.azurewebsites.net/api/HttpTrigger1?****","apiVersion":"2019-03-31"}},"deviceRuntimeContext":{"registrationId":"contoso-toaster-007","symmetricKey":{},"payload":{"MyDeviceFirmwareVersion":"12.0.2.5","MyDeviceProvisioningVersion":"1.0.0.0"}},"linkedHubs":["contoso-toasters-hub-1098.azure-devices.net"]}
- 2020-09-23T11:44:41.687 [Information] linkedHub : contoso-toasters-hub-1098.azure-devices.net
- 2020-09-23T11:44:41.688 [Information] Selected hub : contoso-toasters-hub-1098.azure-devices.net
- 2020-09-23T11:44:41.688 [Information] Response
- 2020-09-23T11:44:41.688 [Information] {"iotHubHostName":"contoso-toasters-hub-1098.azure-devices.net"}
- 2020-09-23T11:44:41.689 [Information] Executed 'Functions.HttpTrigger1' (Succeeded, Id=4596d45e-086f-4e86-929b-4a02814eee40, Duration=4347ms)
- ```
-
- The following example device output shows the simulated toaster device successfully booting up and connecting to the provisioning service instance to be assigned to the toasters IoT hub by the custom allocation policy:
+ The following output is an example of the simulated toaster device successfully booting up and connecting to the provisioning service instance to be assigned to the toasters IoT hub by the custom allocation policy:
```cmd
- Provisioning API Version: 1.3.6
+ Provisioning API Version: 1.8.0
Registering Device
Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Registration Information received from service: contoso-toasters-hub-1098.azure-devices.net, deviceId: contoso-toaster-007
+ Registration Information received from service: contoso-toasters-hub-1098.azure-devices.net, deviceId: breakroom499-contoso-tstrsd-007
Press enter key to exit: ```
+ The following output is an example of the logging output from the custom allocation function code running for the toaster device. Notice that a hub is successfully selected for the toaster device. Also notice the `payload` property that contains the custom JSON content you added to the code. This is available for your code to use within the `deviceRuntimeContext`.
+
+ This logging is available by clicking **Logs** under the function code in the portal:
+
+ ```output
+ 2022-08-03T20:34:41.178 [Information] Executing 'Functions.HttpTrigger1' (Reason='This function was programmatically called via the host APIs.', Id=12950752-6d75-4f41-844b-c253a6653d4f)
+ 2022-08-03T20:34:41.340 [Information] C# HTTP trigger function processed a request.
+ 2022-08-03T20:34:41.341 [Information] Request.Body:...
+ 2022-08-03T20:34:41.341 [Information] {"enrollmentGroup":{"enrollmentGroupId":"contoso-custom-allocated-devices","attestation":{"type":"symmetricKey"},"capabilities":{"iotEdge":false},"etag":"\"0000f176-0000-0700-0000-62eaad1e0000\"","provisioningStatus":"enabled","reprovisionPolicy":{"updateHubAssignment":true,"migrateDeviceData":true},"createdDateTimeUtc":"2022-08-03T17:15:10.8464255Z","lastUpdatedDateTimeUtc":"2022-08-03T17:15:10.8464255Z","allocationPolicy":"custom","iotHubs":["contoso-toasters-hub-1098.azure-devices.net","contoso-heatpumps-hub-1098.azure-devices.net"],"customAllocationDefinition":{"webhookUrl":"https://contoso-function-app-1098.azurewebsites.net/api/HttpTrigger1?****","apiVersion":"2021-10-01"}},"deviceRuntimeContext":{"registrationId":"breakroom499-contoso-tstrsd-007","currentIotHubHostName":"contoso-toasters-hub-1098.azure-devices.net","currentDeviceId":"breakroom499-contoso-tstrsd-007","symmetricKey":{},"payload":{"MyDeviceFirmwareVersion":"12.0.2.5","MyDeviceProvisioningVersion":"1.0.0.0"}},"linkedHubs":["contoso-toasters-hub-1098.azure-devices.net","contoso-heatpumps-hub-1098.azure-devices.net"]}
+ 2022-08-03T20:34:41.382 [Information] Response
+ 2022-08-03T20:34:41.398 [Information] {"iotHubHostName":"contoso-toasters-hub-1098.azure-devices.net","initialTwin":{"properties":{"desired":{"state":"ready","darknessSetting":"medium"}},"tags":{"deviceType":"toaster"}}}
+ 2022-08-03T20:34:41.399 [Information] Executed 'Functions.HttpTrigger1' (Succeeded, Id=12950752-6d75-4f41-844b-c253a6653d4f, Duration=227ms)
+ ```
+
### Simulate the Contoso heat pump device

1. To simulate the heat pump device, update the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** again with the heat pump registration ID and derived device key you generated earlier. The key value **6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=** shown below is also only given as an example.

```c
// Set the symmetric key if using they auth type
- prov_dev_set_symmetric_key_info("contoso-heatpump-088", "6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=");
+ prov_dev_set_symmetric_key_info("mainbuilding167-contoso-hpsd-088", "6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=");
```

    Save the file.

2. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
- The following text is example logging output from the custom allocation function code running for the heat pump device. The custom allocation policy rejects this registration with a HTTP error 400 Bad Request. Notice the `payload` member that contains the custom JSON content you added to the code. This is available for your code to use within the `deviceRuntimeContext`.
-
- This logging is available by clicking **Logs** under the function code in the portal:
+ The following output is an example of the simulated heat pump device successfully booting up and connecting to the provisioning service instance to be assigned to the Contoso heat pumps IoT hub by the custom allocation policy:
```cmd
- 2020-09-23T11:50:23.652 [Information] Executing 'Functions.HttpTrigger1' (Reason='This function was programmatically called via the host APIs.', Id=2fa77f10-42f8-43fe-88d9-a8c01d4d3f68)
- 2020-09-23T11:50:23.653 [Information] C# HTTP trigger function processed a request.
- 2020-09-23T11:50:23.654 [Information] Request.Body:...
- 2020-09-23T11:50:23.654 [Information] {"enrollmentGroup":{"enrollmentGroupId":"contoso-custom-allocated-devices","attestation":{"type":"symmetricKey"},"capabilities":{"iotEdge":false},"etag":"\"e8002126-0000-0100-0000-5f6b2a570000\"","provisioningStatus":"enabled","reprovisionPolicy":{"updateHubAssignment":true,"migrateDeviceData":true},"createdDateTimeUtc":"2020-09-23T10:58:31.62286Z","lastUpdatedDateTimeUtc":"2020-09-23T10:58:31.62286Z","allocationPolicy":"custom","iotHubs":["contoso-toasters-hub-1098.azure-devices.net"],"customAllocationDefinition":{"webhookUrl":"https://contoso-function-app.azurewebsites.net/api/HttpTrigger1?****","apiVersion":"2019-03-31"}},"deviceRuntimeContext":{"registrationId":"contoso-heatpump-088","symmetricKey":{},"payload":{"MyDeviceFirmwareVersion":"12.0.2.5","MyDeviceProvisioningVersion":"1.0.0.0"}},"linkedHubs":["contoso-toasters-hub-1098.azure-devices.net"]}
- 2020-09-23T11:50:23.654 [Information] Unknown device registration
- 2020-09-23T11:50:23.654 [Information] Response
- 2020-09-23T11:50:23.654 [Information] Unrecognized device registration.
- 2020-09-23T11:50:23.655 [Information] Executed 'Functions.HttpTrigger1' (Succeeded, Id=2fa77f10-42f8-43fe-88d9-a8c01d4d3f68, Duration=11ms)
- ```
+ Provisioning API Version: 1.8.0
- The following example device output shows the simulated heat pump device booting up and connecting to the provisioning service instance to attempt registration to an IoT hub using the custom allocation policy. This fails with error (`Custom allocation failed with status code: 400`) since the custom allocation policy was designed to only allows toaster devices:
--
- ```cmd
- Provisioning API Version: 1.3.7
-
Registering Device
-
+ Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Error: Time:Wed Sep 23 13:06:01 2020 File:d:\testing\azure-iot-sdk-c\provisioning_client\src\prov_device_ll_client.c Func:_prov_transport_process_json_reply Line:658 Provisioning Failure: OperationId: 4.eb89f3e8407a3711.2525bd34-02e9-4e91-a9c0-4dbc4ad5de66 - Date: 2020-09-23T17:05:58.2363145Z - Msg: Custom allocation failed with status code: 400
- Error: Time:Wed Sep 23 13:06:01 2020 File:d:\testing\azure-iot-sdk-c\provisioning_client\src\prov_transport_mqtt_common.c Func:_prov_transport_common_mqtt_dowork Line:1014 Unable to process registration reply.
- Error: Time:Wed Sep 23 13:06:01 2020 File:d:\testing\azure-iot-sdk-c\provisioning_client\src\prov_device_ll_client.c Func:_on_transport_registration_data Line:770 Failure retrieving data from the provisioning service
-
- Failure registering device: PROV_DEVICE_RESULT_DEV_AUTH_ERROR
- Press enter key to exit:
+
+ Registration Information received from service: contoso-heatpumps-hub-1098.azure-devices.net, deviceId: mainbuilding167-contoso-hpsd-088
+
+ Press enter key to exit:
```
-
+
+## Troubleshooting custom allocation policies
+
+The following table shows the expected scenarios and the result or error codes you might receive. Use this table to help troubleshoot custom allocation policy failures with your Azure Functions.
+
+| Scenario | Registration result from Provisioning Service | Provisioning SDK Results |
+| -- | -- | -- |
+| The webhook returns 200 OK with 'iotHubHostName' set to a valid IoT hub host name | Result status: Assigned | SDK returns PROV_DEVICE_RESULT_OK along with hub information |
+| The webhook returns 200 OK with 'iotHubHostName' present in the response, but set to an empty string or null | Result status: Failed<br><br>Error code: CustomAllocationIotHubNotSpecified (400208) | SDK returns PROV_DEVICE_RESULT_HUB_NOT_SPECIFIED |
+| The webhook returns 401 Unauthorized | Result status: Failed<br><br>Error code: CustomAllocationUnauthorizedAccess (400209) | SDK returns PROV_DEVICE_RESULT_UNAUTHORIZED |
+| An Individual Enrollment was created to disable the device | Result status: Disabled | SDK returns PROV_DEVICE_RESULT_DISABLED |
+| The webhook returns error code >= 429 | DPS' orchestration will retry a number of times. The retry policy is currently:<br><br>&nbsp;&nbsp;- Retry count: 10<br>&nbsp;&nbsp;- Initial interval: 1s<br>&nbsp;&nbsp;- Increment: 9s | SDK will ignore the error and submit another get status message in the specified time |
+| The webhook returns any other status code | Result status: Failed<br><br>Error code: CustomAllocationFailed (400207) | SDK returns PROV_DEVICE_RESULT_DEV_AUTH_ERROR |
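The first two table rows describe the webhook's success paths, and the last row its generic failure path. The core decision can be modeled as a small function; the sketch below is a hypothetical Python stand-in for the tutorial's C# Azure Function, using the toaster/heat pump registration ID suffixes from this tutorial:

```python
import json

def allocate(request_body: str) -> tuple[int, str]:
    """Pick an IoT hub for a device based on its registration ID suffix.

    Registration IDs containing '-contoso-tstrsd-' go to the toasters hub;
    IDs containing '-contoso-hpsd-' go to the heat pumps hub.
    Returns (HTTP status code, response body).
    """
    req = json.loads(request_body)
    reg_id = req["deviceRuntimeContext"]["registrationId"]
    linked_hubs = req["linkedHubs"]
    if "-contoso-tstrsd-" in reg_id:
        hub = next(h for h in linked_hubs if "-toasters-" in h)
    elif "-contoso-hpsd-" in reg_id:
        hub = next(h for h in linked_hubs if "-heatpumps-" in h)
    else:
        # Any status other than 200 with a valid hub name surfaces to the
        # device as a failed or retried registration.
        return 400, json.dumps({"message": "Unrecognized device registration."})
    return 200, json.dumps({"iotHubHostName": hub})
```

Returning 200 with a valid `iotHubHostName` yields an Assigned result; returning 400 yields CustomAllocationFailed (400207) on the device side.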
+ ## Clean up resources
-If you plan to continue working with the resources created in this article, you can leave them. If you don't plan to continue using the resources, use the following steps to delete all of the resources created in this article to avoid unnecessary charges.
+If you plan to continue working with the resources created in this tutorial, you can leave them. If you don't plan to continue using the resources, use the following steps to delete all of the resources created in this tutorial to avoid unnecessary charges.
-The steps here assume you created all resources in this article as instructed in the same resource group named **contoso-us-resource-group**.
+The steps here assume you created all resources in this tutorial as instructed in the same resource group named **contoso-us-resource-group**.
> [!IMPORTANT]
> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the IoT Hub inside an existing resource group that contains resources you want to keep, only delete the IoT Hub resource itself instead of deleting the resource group.
To delete the resource group by name:
## Next steps
-For a more in-depth custom allocation policy example, see
+* To learn more about custom allocation policies, see
-> [!div class="nextstepaction"]
-> [How to use custom allocation policies](how-to-use-custom-allocation-policies.md)
+ > [!div class="nextstepaction"]
+ > [Understand custom allocation policies](concepts-custom-allocation.md)
* To learn more about reprovisioning, see
-> [!div class="nextstepaction"]
-> [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
+ > [!div class="nextstepaction"]
+ > [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
* To learn more about deprovisioning, see
-> [!div class="nextstepaction"]
-> [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
+ > [!div class="nextstepaction"]
+ > [How to deprovision devices that were previously autoprovisioned](how-to-unprovision-devices.md)
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/virtual-network-support.md
Note the following current limitations for DPS when using private endpoints:
* Private endpoints will not work with DPS when the DPS resource and the linked Hub are in different clouds. For example, [Azure Government and global Azure](../azure-government/documentation-government-welcome.md).
-* Currently, [custom allocation policies with Azure Functions](how-to-use-custom-allocation-policies.md) for DPS will not work when the Azure function is locked down to a VNET and private endpoints.
+* Currently, [custom allocation policies with Azure Functions](concepts-custom-allocation.md) for DPS will not work when the Azure function is locked down to a VNET and private endpoints.
* Current DPS VNET support is for data ingress into DPS only. Data egress, which is the traffic from DPS to IoT Hub, uses an internal service-to-service mechanism rather than a dedicated VNET. Support for full VNET-based egress lockdown between DPS and IoT Hub is not currently available.
For example, the provisioning device client sample ([pro_dev_client_sample](http
:::code language="c" source="~/iot-samples-c/provisioning_client/samples/prov_dev_client_sample/prov_dev_client_sample.c" range="138-144" highlight="3":::
-To use the sample with a private endpoint, the highlighted code above would be changed to use the service endpoint for your DPS resource. For example, if you service endpoint was `mydps.azure-devices-provisioning.net`, the code would look as follows.
+To use the sample with a private endpoint, the highlighted code above would be changed to use the service endpoint for your DPS resource. For example, if your service endpoint was `mydps.azure-devices-provisioning.net`, the code would look as follows.
```C static const char* global_prov_uri = "global.azure-devices-provisioning.net";
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L
If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server). ## Deployment on Windows VM on VMware ESXi
-Both Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions support nested virtualization needed for hosting Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
+Both Intel-based VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions can host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
+
+>[!NOTE]
+> Per [VMware KB2009916](https://kb.vmware.com/s/article/2009916), currently nested virtualization is limited to Microsoft Hyper-V, strictly for VBS only and not for virtualizing multiple VMs. We are working to extend this support to EFLOW.
To set up Azure IoT Edge for Linux on Windows on a VMware ESXi Windows virtual machine, use the following steps:

1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
# Create an IoT hub using the resource provider REST API (.NET)
-You can use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource) to create and manage Azure IoT hubs programmatically. This tutorial shows you how to use the IoT Hub resource provider REST API to create an IoT hub from a C# program.
+You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to create and manage Azure IoT hubs programmatically. This article shows you how to use the IoT Hub Resource to create an IoT hub using **Postman**. Alternatively, you can use **cURL**. If any of these REST commands fail, find help with the [IoT Hub API common error codes](/rest/api/iothub/common-error-codes).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]

## Prerequisites
-* Visual Studio
+* [Azure PowerShell module](/powershell/azure/install-az-ps) or [Azure Cloud Shell](/azure/cloud-shell/overview)
-* [Azure PowerShell module](/powershell/azure/install-az-ps)
+* [Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) or [cURL](https://curl.se/)
+## Get an Azure access token
-## Prepare your Visual Studio project
+1. In Azure PowerShell or the Azure Cloud Shell, sign in, and then retrieve a token with the following command. If you're using Cloud Shell, you're already signed in, so you can skip the sign-in step.
-1. In Visual Studio, create a Visual C# Windows Classic Desktop project using the **Console App (.NET Framework)** project template. Name the project **CreateIoTHubREST**.
+ ```azurecli-interactive
+ az account get-access-token --resource https://management.azure.com
+ ```
+ You should see a response in the console similar to the following JSON (the access token is truncated here):
-2. In Solution Explorer, right-click on your project and then click **Manage NuGet Packages**.
+ ```json
+ {
+ "accessToken": "eyJ ... pZA",
+ "expiresOn": "2022-09-16 20:57:52.000000",
+ "subscription": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "tokenType": "Bearer"
+ }
+ ```
-3. In NuGet Package Manager, check **Include prerelease**, and on the **Browse** page search for **Microsoft.Azure.Management.ResourceManager**. Select the package, click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the licenses.
+1. In a new **Postman** request, from the **Auth** tab, select the **Type** dropdown list and choose **Bearer Token**.
-4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license.
- > [!IMPORTANT]
- > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade. For more information see the [migration guide](../active-directory/develop/msal-migration.md).
+ :::image type="content" source="media/iot-hub-rm-rest/select-bearer-token.png" alt-text="Screenshot that shows how to select the Bearer Token type of authorization in **Postman**.":::
-5. In Program.cs, replace the existing **using** statements with the following code:
+1. Paste the access token into the field labeled **Token**.
- ```csharp
- using System;
- using System.Net.Http;
- using System.Net.Http.Headers;
- using System.Text;
- using Microsoft.Azure.Management.ResourceManager;
- using Microsoft.Azure.Management.ResourceManager.Models;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Newtonsoft.Json;
- using Microsoft.Rest;
- using System.Linq;
- using System.Threading;
- ```
+Keep in mind the access token expires after 5-60 minutes, so you may need to generate another one.
-6. In Program.cs, add the following static variables replacing the placeholder values. You made a note of **ApplicationId**, **SubscriptionId**, **TenantId**, and **Password** earlier in this tutorial. **Resource group name** is the name of the resource group you use when you create the IoT hub. You can use a pre-existing or a new resource group. **IoT Hub name** is the name of the IoT Hub you create, such as **MyIoTHub**. The name of your IoT hub must be globally unique. **Deployment name** is a name for the deployment, such as **Deployment_01**.
+## Create an IoT hub
- ```csharp
- static string applicationId = "{Your ApplicationId}";
- static string subscriptionId = "{Your SubscriptionId}";
- static string tenantId = "{Your TenantId}";
- static string password = "{Your application Password}";
+1. Select the REST command dropdown list and choose the PUT command. Copy the URL below, replacing the values in the `{}` with your own values. The `{resourceName}` value is the name you'd like for your new IoT hub. Paste the URL into the field next to the PUT command.
- static string rgName = "{Resource group name}";
- static string iotHubName = "{IoT Hub name including your initials}";
- ```
+ ```rest
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2021-04-12
+ ```
- [!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
+ :::image type="content" source="media/iot-hub-rm-rest/paste-put-command.png" alt-text="Screenshot that shows how to add a PUT command in Postman.":::
+ See the [PUT command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/create-or-update?tabs=HTTP).
-## Use the resource provider REST API to create an IoT hub
+1. From the **Body** tab, select **raw** and **JSON** from the dropdown lists.
-Use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource) to create an IoT hub in your resource group. You can also use the resource provider REST API to make changes to an existing IoT hub.
+ :::image type="content" source="media/iot-hub-rm-rest/add-body-for-put.png" alt-text="Screenshot that shows how to add JSON to the body of your request in Postman.":::
-1. Add the following method to Program.cs:
+1. Copy the following JSON, replacing the values in `<>` with your own. Paste the JSON into the box on the **Body** tab in **Postman**. Make sure your IoT hub name matches the one in your PUT URL, and change the location to the region assigned to your resource group.
- ```csharp
- static void CreateIoTHub(string token)
+ ```json
   + {
+ "name": "<my-iot-hub>",
+ "location": "<region>",
+ "tags": {},
+ "properties": {},
+ "sku": {
+ "name": "S1",
+ "tier": "Standard",
+ "capacity": 1
+ }
   + }
   + ```
-2. Add the following code to the **CreateIoTHub** method. This code creates an **HttpClient** object with the authentication token in the headers:
- ```csharp
- HttpClient client = new HttpClient();
- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
- ```
+1. Select **Send** to send your request and create a new IoT hub. A successful request will return a **201 Created** response with a JSON printout of your IoT hub specifications. You can save your request if you're using **Postman**.
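The Postman steps above can also be sketched outside Postman with curl. This is a minimal example under stated assumptions: the subscription, resource group, and hub names are hypothetical placeholders, and the actual request (left commented out) needs a valid bearer token:

```shell
# Build the same PUT URL Postman uses (placeholder values — replace with your own).
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroupName="MyResourceGroup"
resourceName="my-iot-hub"
apiVersion="2021-04-12"

url="https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Devices/IotHubs/${resourceName}?api-version=${apiVersion}"
echo "${url}"

# With a valid token in $token and the JSON body saved as iothub-body.json,
# the equivalent request would be (not executed here):
# curl -X PUT "$url" \
#   -H "Authorization: Bearer $token" \
#   -H "Content-Type: application/json" \
#   -d @iothub-body.json
```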
-3. Add the following code to the **CreateIoTHub** method. This code describes the IoT hub to create and generates a JSON representation. For the current list of locations that support IoT Hub see [Azure Status](https://azure.microsoft.com/status/):
+## View an IoT hub
- ```csharp
- var description = new
- {
- name = iotHubName,
- location = "East US",
- sku = new
- {
- name = "S1",
- tier = "Standard",
- capacity = 1
- }
- };
-
- var json = JsonConvert.SerializeObject(description, Formatting.Indented);
- ```
+To see all the specifications of your new IoT hub, use a GET request. You can use the same URL that you used with the PUT request, but you must erase the **Body** of that request (if it isn't already blank), because a GET request can't have a body. Here's the GET request template:
-4. Add the following code to the **CreateIoTHub** method. This code submits the REST request to Azure. The code then checks the response and retrieves the URL you can use to monitor the state of the deployment task:
+```rest
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2021-04-12
+```
- ```csharp
- var content = new StringContent(JsonConvert.SerializeObject(description), Encoding.UTF8, "application/json");
- var requestUri = string.Format("https://management.azure.com/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.devices/IotHubs/{2}?api-version=2021-04-12", subscriptionId, rgName, iotHubName);
- var result = client.PutAsync(requestUri, content).Result;
+See the [GET command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/get?tabs=HTTP).
- if (!result.IsSuccessStatusCode)
- {
- Console.WriteLine("Failed {0}", result.Content.ReadAsStringAsync().Result);
- return;
- }
+## Update an IoT hub
- var asyncStatusUri = result.Headers.GetValues("Azure-AsyncOperation").First();
- ```
+To update your IoT hub, send the same PUT request you used to create it, with the JSON body edited to contain the parameters of your choosing. For example, add a **tags** property to the body, then run the PUT request.
-5. Add the following code to the end of the **CreateIoTHub** method. This code uses the **asyncStatusUri** address retrieved in the previous step to wait for the deployment to complete:
-
- ```csharp
- string body;
- do
- {
- Thread.Sleep(10000);
- HttpResponseMessage deploymentstatus = client.GetAsync(asyncStatusUri).Result;
- body = deploymentstatus.Content.ReadAsStringAsync().Result;
- } while (body == "{\"status\":\"Running\"}");
- ```
-
-6. Add the following code to the end of the **CreateIoTHub** method. This code retrieves the keys of the IoT hub you created and prints them to the console:
-
- ```csharp
- var listKeysUri = string.Format("https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Devices/IotHubs/{2}/IoTHubKeys/listkeys?api-version=2021-04-12", subscriptionId, rgName, iotHubName);
- var keysresults = client.PostAsync(listKeysUri, null).Result;
-
- Console.WriteLine("Keys: {0}", keysresults.Content.ReadAsStringAsync().Result);
- ```
+```json
+{
+ "name": "<my-iot-hub>",
+ "location": "westus2",
+ "tags": {
+ "Animal": "Cat"
+ },
+ "properties": {},
+ "sku": {
+ "name": "S1",
+ "tier": "Standard",
+ "capacity": 1
+ }
+}
+```
-## Complete and run the application
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2021-04-12
+```
-You can now complete the application by calling the **CreateIoTHub** method before you build and run it.
+The response shows the new tag in the returned JSON. Remember, you may need to refresh your access token if too much time has passed since you generated it.
-1. Add the following code to the end of the **Main** method:
+See the [PUT command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/create-or-update?tabs=HTTP).
- ```csharp
- CreateIoTHub(token.AccessToken);
- Console.ReadLine();
- ```
+Alternatively, use the [PATCH command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/update?tabs=HTTP) to update tags.
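As a sketch of the PATCH alternative (the tag values are illustrative, and a real call needs your resource URL and a valid bearer token), note that a PATCH body only needs the fields being changed — here, just the tags:

```shell
# Minimal PATCH body: unlike PUT, only the properties being updated are required.
patchBody='{"tags": {"Animal": "Cat"}}'
echo "${patchBody}"

# Hypothetical equivalent request (not executed here; $url and $token as in the
# earlier PUT request):
# curl -X PATCH "$url" \
#   -H "Authorization: Bearer $token" \
#   -H "Content-Type: application/json" \
#   -d "${patchBody}"
```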
-2. Click **Build** and then **Build Solution**. Correct any errors.
+## Delete an IoT hub
-3. Click **Debug** and then **Start Debugging** to run the application. It may take several minutes for the deployment to run.
+If you're only testing, you might want to clean up your resources by deleting your new IoT hub with a DELETE request. Be sure to replace the values in `{}` with your own values. The `{resourceName}` value is the name of your IoT hub.
-4. To verify that your application added the new IoT hub, visit the [Azure portal](https://portal.azure.com/) and view your list of resources. Alternatively, use the **Get-AzResource** PowerShell cmdlet.
+```rest
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2021-04-12
+```
-> [!NOTE]
-> This example application adds an S1 Standard IoT Hub for which you are billed. When you are finished, you can delete the IoT hub through the [Azure portal](https://portal.azure.com/) or by using the **Remove-AzResource** PowerShell cmdlet when you are finished.
+See the [DELETE command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/delete?tabs=HTTP).
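A design point worth noting across the sections above, sketched below with hypothetical placeholder values: create, view, and delete all target the same resource URL — only the HTTP method (and, for PUT, the body) changes:

```shell
# One resource URL serves every lifecycle operation on the IoT hub.
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroupName="MyResourceGroup"
resourceName="my-iot-hub"
url="https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Devices/IotHubs/${resourceName}?api-version=2021-04-12"

for method in PUT GET DELETE; do
  echo "${method} ${url}"
done
```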
## Next steps
To learn more about developing for IoT Hub, see the following articles:
To further explore the capabilities of IoT Hub, see:
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
The following table represents the mapping of x509 key usage policy to effective
|-|--|--|
|DataEncipherment|encrypt, decrypt| N/A |
|DecipherOnly|decrypt| N/A |
-|DigitalSignature|sign, verify| Key Vault default without a usage specification at certificate creation time |
+|DigitalSignature|sign, verify| Key Vault default without a usage specification at certificate creation time |
|EncipherOnly|encrypt| N/A |
|KeyCertSign|sign, verify|N/A|
-|KeyEncipherment|wrapKey, unwrapKey| Key Vault default without a usage specification at certificate creation time |
+|KeyEncipherment|wrapKey, unwrapKey| Key Vault default without a usage specification at certificate creation time |
|NonRepudiation|sign, verify| N/A |
|crlsign|sign, verify| N/A |
Before a certificate issuer can be created in a Key Vault, following prerequisit
- An organization administrator must on-board their company (ex. Contoso) with at least one CA provider.
-2. Admin creates requester credentials for Key Vault to enroll (and renew) TLS/SSL certificates
+1. Admin creates requester credentials for Key Vault to enroll (and renew) TLS/SSL certificates
- Provides the configuration to be used to create an issuer object of the provider in the key vault
Certificate contacts contain contact information to send notifications triggered
Access control for certificates is managed by Key Vault, and is provided by the Key Vault that contains those certificates. The access control policy for certificates is distinct from the access control policies for keys and secrets in the same Key Vault. Users may create one or more vaults to hold certificates, to maintain scenario appropriate segmentation and management of certificates. For more information on certificate access control, see [here](certificate-access-control.md) - ## Certificate Use Cases ### Secure communication and authentication
TLS certificates can help encrypt communications over the internet and establish
* Cloud/Multi-Cloud: secure cloud-based applications on-prem, cross-cloud, or in your cloud provider's tenant.

### Code signing

A certificate can help secure the code/script of software, thereby ensuring that the author can share the software over the internet without it being changed by malicious entities. Furthermore, once the author signs the code using a certificate with code signing technology, the software is marked with a stamp of authentication displaying the author and their website. Therefore, the certificate used in code signing helps validate the software's authenticity, promoting end-to-end security.

## Next steps
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
In this case, we will create a certificate called **ExampleCertificate**, or imp
# [Azure portal](#tab/azure-portal)
-1. On the Key Vault properties pages, select **Certificates**.
+1. On the page for your key vault, select **Certificates**.
2. Click on **Generate/Import**.
3. On the **Create a certificate** screen, choose the following values:
   - **Method of Certificate Creation**: Import.
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/howto-logging.md
What is logged:
## Prerequisites
-To complete this tutorial, you must have the following:
+To complete this tutorial, you will need an Azure key vault. You can create a new key vault using one of these methods:
+ - [Create a key vault using the Azure CLI](quick-create-cli.md)
+ - [Create a key vault using Azure PowerShell](quick-create-powershell.md)
+ - [Create a key vault using the Azure portal](quick-create-portal.md)
-* An existing key vault that you have been using.
-* [Azure Cloud Shell](https://shell.azure.com) - Bash environment.
-* Sufficient storage on Azure for your Key Vault logs.
+You will also need a destination for your logs. This can be an existing or new Azure storage account and/or Log Analytics workspace.
-In this article, commands are formatted for [Cloud Shell](https://shell.azure.com) with Bash as an environment.
+> [!IMPORTANT]
+> If you use an existing Azure storage account or Log Analytics workspace, it must be in the same subscription as your key vault. It must also use the Azure Resource Manager deployment model, rather than the classic deployment model.
+>
+> If you create a new Azure storage account or Log Analytics workspace, we recommend you create it in the same resource group as your key vault, for ease of management.
+
+You can create a new Azure storage account using one of these methods:
+ - [Create a storage account using the Azure CLI](../../storage/common/storage-account-create.md?tabs=azure-cli)
+ - [Create a storage account using Azure PowerShell](../../storage/common/storage-account-create.md?tabs=azure-powershell)
+ - [Create a storage account using the Azure portal](../../storage/common/storage-account-create.md?tabs=azure-portal)
+
+You can create a new Log Analytics workspace using one of these methods:
+ - [Create a Log Analytics workspace using the Azure CLI](../../azure-monitor/logs/quick-create-workspace.md?tabs=azure-cli)
+ - [Create a Log Analytics workspace using Azure PowerShell](../../azure-monitor/logs/quick-create-workspace.md?tabs=azure-powershell)
+ - [Create a Log Analytics workspace using the Azure portal](../../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal)
## Connect to your Key Vault subscription
Get-AzSubscription
Set-AzContext -SubscriptionId "<subscriptionID>"
```
-## Create a storage account for your logs
-
-Although you can use an existing storage account for your logs, here you create a new storage account dedicated to Key Vault logs.
-
-For additional ease of management, you'll also use the same resource group as the one that contains the key vault. In the [Azure CLI quickstart](quick-create-cli.md) and [Azure PowerShell quickstart](quick-create-powershell.md), this resource group is named **myResourceGroup**, and the location is *eastus*. Replace these values with your own, as applicable.
-
-You also need to provide a storage account name. Storage account names must be unique, between 3 and 24 characters in length, and use numbers and lowercase letters only. Lastly, you create a storage account of the `Standard_LRS` SKU.
-
-With the Azure CLI, use the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command.
-
-```azurecli-interactive
-az storage account create --name "<your-unique-storage-account-name>" -g "myResourceGroup" --sku "Standard_LRS"
-```
-
-With Azure PowerShell, use the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet. You will need to provide the location that corresponds to the resource group.
+## Obtain resource IDs
-```powershell
- New-AzStorageAccount -ResourceGroupName myResourceGroup -Name "<your-unique-storage-account-name>" -Type "Standard_LRS" -Location "eastus"
-```
-
-In either case, note the ID of the storage account. The Azure CLI operation returns the ID in the output. To obtain the ID with Azure PowerShell, use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount), and assign the output to the variable `$sa`. You can then see the storage account with `$sa.id`. (The `$sa.Context` property is also used later in this article.)
-
-```powershell-interactive
-$sa = Get-AzStorageAccount -Name "<your-unique-storage-account-name>" -ResourceGroup "myResourceGroup"
-$sa.id
-```
-
-The ID of the storage account is in the following format: "/subscriptions/*your-subscription-ID*/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/*your-unique-storage-account-name*".
-
-> [!NOTE]
-> If you decide to use an existing storage account, it must use the same subscription as your key vault. It must use the Azure Resource Manager deployment model, rather than the classic deployment model.
+To enable logging on a key vault, you will need the resource ID of the key vault, as well as that of the destination (Azure storage account or Log Analytics workspace).
-## Obtain your key vault resource ID
-
-In the [CLI quickstart](quick-create-cli.md) and [PowerShell quickstart](quick-create-powershell.md), you created a key with a unique name. Use that name again in the following steps. If you can't remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az-keyvault-list) command, or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet, to list them.
+If you can't remember the name of your key vault, you can use the Azure CLI [az keyvault list](/cli/azure/keyvault#az-keyvault-list) command, or the Azure PowerShell [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault) cmdlet, to find it.
Use the name of your key vault to find its resource ID. With the Azure CLI, use the [az keyvault show](/cli/azure/keyvault#az-keyvault-show) command.
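As a sketch of the CLI lookup (the vault name below is a hypothetical placeholder, and the block prints the command rather than calling Azure, so it runs without a signed-in session):

```shell
# Retrieve just the resource ID of a key vault with a JMESPath query.
vaultName="my-key-vault"   # hypothetical placeholder — use your vault's name
showCmd="az keyvault show --name ${vaultName} --query id --output tsv"
echo "${showCmd}"
```

The `--query id --output tsv` combination strips the surrounding JSON so the bare resource ID can be passed straight to the diagnostic-settings command.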
Set-AzDiagnosticSetting "<key-vault-resource-id>" -StorageAccountId $sa.id -Enab
To configure diagnostic settings in the Azure portal, follow these steps:
-1. From the **Resource** pane menu, select **Diagnostic settings**.
+1. From the **Resource** pane menu, select **Diagnostic settings**, and then **Add diagnostic setting**.
:::image type="content" source="../media/diagnostics-portal-1.png" alt-text="Screenshot that shows how to select diagnostic settings.":::
-1. Select **+ Add diagnostic setting**.
-
- :::image type="content" source="../media/diagnostics-portal-2.png" alt-text="Screenshot that shows adding a diagnostic setting.":::
-
-1. Select a name for your diagnostic setting. To configure logging for Azure Monitor for Key Vault, select **AuditEvent** and **Send to Log Analytics workspace**. Then choose the subscription and Log Analytics workspace to which you want to send your logs. You can also select the option to **Archive to a storage account**.
+1. Under **Category groups**, select both **audit** and **allLogs**.
+1. If Azure Log Analytics is the destination, select **Send to Log Analytics workspace** and choose your subscription and workspace from the drop-down menus. You may also select **Archive to a storage account** and choose your subscription and storage account from the drop-down menus.
- :::image type="content" source="../media/diagnostics-portal-3.png" alt-text="Screenshot of diagnostic settings options.":::
+ :::image type="content" source="../media/diagnostics-portal-2.png" alt-text="Screenshot of diagnostic settings options.":::
- Otherwise, select the options that pertain to the logs that you want to select.
1. When you have selected your desired options, select **Save**.
- :::image type="content" source="../media/diagnostics-portal-4.png" alt-text="Screenshot that shows how to save the options you selected.":::
+ :::image type="content" source="../media/diagnostics-portal-3.png" alt-text="Screenshot that shows how to save the options you selected.":::
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
After configuring the key vault basics, select the Networking tab and follow the
1. Click the "+ Add" Button to add a private endpoint.

   ![Screenshot that shows the 'Networking' tab on the 'Create key vault' page.](../media/private-link-service-1.png)
-
+
1. In the "Location" field of the Create Private Endpoint Blade, select the region in which your virtual network is located.
1. In the "Name" field, create a descriptive name that will allow you to identify this private endpoint.
1. Select the virtual network and subnet you want this private endpoint to be created in from the dropdown menu.
After configuring the key vault basics, select the Networking tab and follow the
1. Select "Ok".

   ![Screenshot that shows the 'Create private endpoint' page with settings selected.](../media/private-link-service-8.png)
-
+ You will now be able to see the configured private endpoint. You now have the option to delete and edit this private endpoint. Select the "Review + Create" button and create the key vault. It will take 5-10 minutes for the deployment to complete.
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
Individual keys, secrets, and certificates permissions should be used
only for specific scenarios:

- Sharing individual secrets between multiple applications, e.g., one application needs to access data from the other application
- Cross-tenant encryption with customer key, e.g., an ISV using a key from a customer key vault to encrypt its data

For more about Azure Key Vault management guidelines, see:
For full details, see [Assign Azure roles using Azure PowerShell](../../role-bas
### Secret scope role assignment
+> [!NOTE]
+> Role assignments at the scope of an individual key vault secret, certificate, or key should only be used for the limited scenarios described [here](rbac-guide.md#best-practices-for-individual-keys-secrets-and-certificates-role-assignments) to comply with security best practices.
+
1. Open a previously created secret.
1. Click the Access control (IAM) tab.
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
When you start the load test, Azure Load Testing service injects the following A
These resources are ephemeral and exist only for the duration of the load test run. If you restrict access to your virtual network, you need to [configure your virtual network](#configure-your-virtual-network) to enable communication between the Azure Load Testing service and the injected VMs.

> [!NOTE]
-> Virtual network support for Azure Load Testing is available in the following Azure regions: Australia East, East US, East US 2, and North Europe.
+> Virtual network support for Azure Load Testing is available in the following Azure regions: Australia East, East US, East US 2, North Europe, and South Central US.
> [!IMPORTANT]
> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
Title: Overview for Azure Logic Apps
-description: Azure Logic Apps is a cloud platform for automating workflows that integrate apps, data, services, and systems using little to no code. Workflows can run in a multi-tenant, single-tenant, or dedicated environment.
+ Title: Overview
+description: Azure Logic Apps is a cloud platform for creating and running automated workflows that integrate apps, data, services, and systems using little to no code. Workflows can run in a multi-tenant, single-tenant, or dedicated environment.
ms.suite: integration
 Previously updated : 01/27/2022
 Last updated : 09/22/2022

# What is Azure Logic Apps?
-[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps) is a cloud-based platform for creating and running automated [*workflows*](#workflow) that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. As a member of [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/), Azure Logic Apps simplifies the way that you connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments.
+Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. As a member of [Azure Integration Services](https://azure.microsoft.com/product-categories/integration/), Azure Logic Apps simplifies the way that you connect legacy, modern, and cutting-edge systems across cloud, on premises, and hybrid environments. Learn more about [Azure Logic Apps on the Azure website](https://azure.microsoft.com/services/logic-apps).
-The following list describes just a few example tasks, business processes, and workloads that you can automate using the Azure Logic Apps service:
+The following list describes just a few example tasks, business processes, and workloads that you can automate using Azure Logic Apps:
* Schedule and send email notifications using Office 365 when a specific event happens, for example, a new file is uploaded.
The following list describes just a few example tasks, business processes, and w
* Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.
-> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
-
-Based on the logic app resource type that you choose and create, your logic apps run in multi-tenant Azure Logic Apps, [single-tenant Azure Logic Apps](single-tenant-overview-compare.md), or a dedicated [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md) when accessing an Azure virtual network. To run logic apps in containers, [create single-tenant based logic apps using Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) and [Resource type and host environment differences for logic apps](#resource-environment-differences).
+Based on the logic app resource type that you choose, your logic app workflows can run in either multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, a dedicated integration service environment, or an App Service Environment (v3). With the last three environments, your workflows can access an Azure virtual network more easily. You can also run logic app workflows in containers when you create single tenant-based workflows using Azure Arc enabled Logic Apps.
-To securely access and run operations on various data sources, you can use [*managed connectors*](#managed-connector) in your workflows. Choose from [many hundreds of connectors in an abundant and growing Azure ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors), for example:
+To communicate with any service endpoint, run your own code, organize your workflow, or manipulate data, you can use [*built-in* connector operations](#built-in-operations) in your workflow. These operations run natively on the Azure Logic Apps runtime. To securely access and run operations on data and entities in many services such as Azure, Microsoft, and other web apps or on-premises systems, you can use [*managed* (Azure-hosted) connector operations](#managed-connector) in your workflows. Choose from [many hundreds of connectors in an abundant and growing Azure ecosystem](/connectors/connector-reference/connector-reference-logicapps-connectors), for example:
* Azure services such as Blob Storage and Service Bus
To securely access and run operations on various data sources, you can use [*man
* File shares such as FTP and SFTP
-To communicate with any service endpoint, run your own code, organize your workflow, or manipulate data, you can use [*built-in*](#built-in-operations) triggers and actions, which run natively within the Azure Logic Apps service. For example, built-in triggers include Request, HTTP, and Recurrence. Built-in actions include Condition, For each, Execute JavaScript code, and operations that call Azure Functions, web apps or API apps hosted in Azure, and other Azure Logic Apps workflows.
- For B2B integration scenarios, Azure Logic Apps includes capabilities from [BizTalk Server](/biztalk/core/introducing-biztalk-server). To define business-to-business (B2B) artifacts, you create [*integration account*](#integration-account) where you store these artifacts. After you link this account to your logic app, your workflows can use these B2B artifacts and exchange messages that comply with Electronic Data Interchange (EDI) and Enterprise Application Integration (EAI) standards.
-For more information about the ways workflows can access and work with apps, data, services, and systems, review the following documentation:
+For more information, review the following documentation:
-* [Connectors for Azure Logic Apps](../connectors/apis-list.md)
+* [Connectors overview for Azure Logic Apps](../connectors/apis-list.md)
-* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
+* [Managed connectors](../connectors/managed.md)
-* [Built-in triggers and actions for Azure Logic Apps](../connectors/built-in.md)
+* [Built-in connectors](../connectors/built-in.md)
* [B2B enterprise integration solutions with Azure Logic Apps](logic-apps-enterprise-integration-overview.md)
+* [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+
+* [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
+
+> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
+
<a name="logic-app-concepts"></a>

## Key terms
-The following terms are important concepts in the Azure Logic Apps service.
+The following list briefly defines terms and core concepts in Azure Logic Apps.
### Logic app
-A *logic app* is the Azure resource you create when you want to develop a workflow. There are [multiple logic app resource types that run in different environments](#resource-environment-differences).
+A *logic app* is the Azure resource you create when you want to build a workflow. There are [multiple logic app resource types that run in different environments](#resource-environment-differences).
### Workflow
For example, you can define trading partners, agreements, schemas, maps, and oth
## How logic apps work
-In a logic app, each workflow always starts with a single [trigger](#trigger). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. Following the trigger, one or more [actions](#action) run operations that, for example, process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step.
+In a logic app resource, each workflow always starts with a single [trigger](#trigger). A trigger fires when a condition is met, for example, when a specific event happens or when data meets specific criteria. Many triggers include [scheduling capabilities](concepts-schedule-automated-recurring-tasks-workflows.md) that control how often your workflow runs. After the trigger fires, one or more [actions](#action) run operations that process, handle, or convert data that travels through the workflow, or that advance the workflow to the next step.
The following screenshot shows part of an example enterprise workflow. This workflow uses conditions and switches to determine the next action. Let's say you have an order system, and your workflow processes incoming orders. You want to review orders above a certain cost manually. Your workflow already has previous steps that determine how much an incoming order costs. So, you create an initial condition based on that cost value. For example:
You can visually create workflows using the Azure Logic Apps workflow designer i
To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
-The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn the differences between the *single-tenant environment*, *multi-tenant environment*, *integration service environment (ISE)*, and *App Service Environment v3 (ASEv3)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn the differences between the *single-tenant environment*, *multi-tenant environment*, *integration service environment* (ISE), and *App Service Environment v3 (ASEv3)* for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The Azure Logic Apps integration platform provides prebuilt Microsoft-managed AP
You usually won't have to write any code. However, if you do need to write code, you can create code snippets using [Azure Functions](../azure-functions/functions-overview.md) and run that code from your workflow. You can also create code snippets that run in your workflow by using the [**Inline Code** action](logic-apps-add-run-inline-code.md). If your workflow needs to interact with events from Azure services, custom apps, or other solutions, you can monitor, route, and publish events using [Azure Event Grid](../event-grid/overview.md).
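As an illustration, the kind of small data-shaping snippet you might host in an Azure Function and call from a workflow could look like the following (the payload shape and field names are hypothetical; note that the built-in Inline Code action runs JavaScript, so Python code like this would live in a Function):

```python
# Hypothetical snippet: normalize an incoming order payload so that
# downstream workflow actions receive a consistent shape.
# Field names are illustrative, not part of any Azure Logic Apps contract.

def normalize_order(payload: dict) -> dict:
    """Trim strings and coerce the cost field to a number."""
    return {
        "id": str(payload["id"]).strip(),
        "customer": payload.get("customer", "").strip().title(),
        "cost": float(payload["cost"]),
    }

print(normalize_order({"id": " 7 ", "customer": "contoso ltd", "cost": "19.99"}))
```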
-Azure Logic Apps is fully managed by Microsoft Azure, which frees you from worrying about hosting, scaling, managing, monitoring, and maintaining solutions built with these services. When you use these capabilities to create ["serverless" apps and solutions](../logic-apps/logic-apps-serverless-overview.md), you can just focus on the business logic and functionality. These services automatically scale to meet your needs, make integrations faster, and help you build robust cloud apps using little to no code.
+Azure Logic Apps is fully managed by Microsoft Azure, which frees you from worrying about hosting, scaling, managing, monitoring, and maintaining solutions built with these services. When you use these capabilities to create ["serverless" apps and solutions](logic-apps-serverless-overview.md), you can just focus on the business logic and functionality. These services automatically scale to meet your needs, make integrations faster, and help you build robust cloud apps using little to no code.
To learn how other companies improved their agility and increased focus on their core businesses when they combined Azure Logic Apps with other Azure services and Microsoft products, check out these [customer stories](https://aka.ms/logic-apps-customer-stories).
The following sections provide more information about the capabilities and benef
Save time and simplify complex processes by using the visual design tools in Azure Logic Apps. Create your workflows from start to finish by using the Azure Logic Apps workflow designer in the Azure portal, Visual Studio Code, or Visual Studio. Just start your workflow with a trigger, and add any number of actions from the [connectors gallery](/connectors/connector-reference/connector-reference-logicapps-connectors).
-If you're creating a multi-tenant based logic app, get started faster when you [create a workflow from the templates gallery](../logic-apps/logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
+If you're creating a multi-tenant based logic app, get started faster when you [create a workflow from the templates gallery](logic-apps-create-logic-apps-from-templates.md). These templates are available for common workflow patterns, which range from simple connectivity for Software-as-a-Service (SaaS) apps to advanced B2B solutions plus "just for fun" templates.
#### Connect different systems across various environments
Some patterns and processes are easy to describe but hard to implement in code.
#### Write once, reuse often
-Create your logic apps as Azure Resource Manager templates so that you can [set up and automate deployments](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) across multiple environments and regions.
+Create your logic apps as Azure Resource Manager templates so that you can [set up and automate deployments](logic-apps-azure-resource-manager-templates-overview.md) across multiple environments and regions.
#### First-class support for enterprise integration and B2B scenarios
-Businesses and organizations electronically communicate with each other by using industry-standard but different message protocols and formats, such as EDIFACT, AS2, X12, and RosettaNet. By using the [enterprise integration capabilities](../logic-apps/logic-apps-enterprise-integration-overview.md) supported by Azure Logic Apps, you can create workflows that transform message formats used by trading partners into formats that your organization's systems can interpret and process. Azure Logic Apps handles these exchanges smoothly and securely with encryption and digital signatures.
+Businesses and organizations electronically communicate with each other by using industry-standard but different message protocols and formats, such as EDIFACT, AS2, X12, and RosettaNet. By using the [enterprise integration capabilities](logic-apps-enterprise-integration-overview.md) supported by Azure Logic Apps, you can create workflows that transform message formats used by trading partners into formats that your organization's systems can interpret and process. Azure Logic Apps handles these exchanges smoothly and securely with encryption and digital signatures.
You can start small with your current systems and services, and then grow incrementally at your own pace. When you're ready, the Azure Logic Apps platform helps you implement and scale up to more mature integration scenarios by providing these capabilities and more: * Integrate and build off [Microsoft BizTalk Server](/biztalk/core/introducing-biztalk-server), [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md), [Azure Functions](../azure-functions/functions-overview.md), [Azure API Management](../api-management/api-management-key-concepts.md), and more.
-* Exchange messages using [EDIFACT](../logic-apps/logic-apps-enterprise-integration-edifact.md), [AS2](../logic-apps/logic-apps-enterprise-integration-as2.md), [X12](../logic-apps/logic-apps-enterprise-integration-x12.md), and [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) protocols.
+* Exchange messages using [EDIFACT](logic-apps-enterprise-integration-edifact.md), [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) protocols.
-* Process [XML messages](../logic-apps/logic-apps-enterprise-integration-xml.md) and [flat files](../logic-apps/logic-apps-enterprise-integration-flatfile.md).
+* Process [XML messages](logic-apps-enterprise-integration-xml.md) and [flat files](logic-apps-enterprise-integration-flatfile.md).
-* Create an [integration account](./logic-apps-enterprise-integration-create-integration-account.md) to store and manage B2B artifacts, such as [trading partners](../logic-apps/logic-apps-enterprise-integration-partners.md), [agreements](../logic-apps/logic-apps-enterprise-integration-agreements.md), [transform maps](../logic-apps/logic-apps-enterprise-integration-maps.md), [validation schemas](../logic-apps/logic-apps-enterprise-integration-schemas.md), and more.
+* Create an [integration account](./logic-apps-enterprise-integration-create-integration-account.md) to store and manage B2B artifacts, such as [trading partners](logic-apps-enterprise-integration-partners.md), [agreements](logic-apps-enterprise-integration-agreements.md), [maps](logic-apps-enterprise-integration-maps.md), [schemas](logic-apps-enterprise-integration-schemas.md), and more.
-For example, if you use Microsoft BizTalk Server, your workflows can communicate with your BizTalk Server using the [BizTalk Server connector](../connectors/managed.md#on-premises-connectors). You can then run or extend BizTalk-like operations in your workflows by using [integration account connectors](../connectors/managed.md#integration-account-connectors). Going in the other direction, BizTalk Server can communicate with your workflows by using the [Microsoft BizTalk Server Adapter for Azure Logic Apps](https://www.microsoft.com/download/details.aspx?id=54287). Learn how to [set up and use the BizTalk Server Adapter](/biztalk/core/logic-app-adapter) in your BizTalk Server.
+For example, if you use Microsoft BizTalk Server, your workflows can communicate with your BizTalk Server using the [BizTalk Server connector](../connectors/managed.md#on-premises-connectors). You can then run or extend BizTalk-like operations in your workflows by using [integration account connectors](../connectors/managed.md#integration-account-connectors). In the other direction, BizTalk Server can communicate with your workflows by using the [Microsoft BizTalk Server Adapter for Azure Logic Apps](https://www.microsoft.com/download/details.aspx?id=54287). Learn how to [set up and use the BizTalk Server Adapter](/biztalk/core/logic-app-adapter) in your BizTalk Server.
#### Built-in extensibility
-If no suitable connector is available to run the code you want, you can create and call your own code snippets from your workflow by using [Azure Functions](../azure-functions/functions-overview.md). Or, create your own [APIs](../logic-apps/logic-apps-create-api-app.md) and [custom connectors](../logic-apps/custom-connector-overview.md) that you can call from your workflows.
+If no suitable connector is available to run the code you want, you can create and call your own code snippets from your workflow by using [Azure Functions](../azure-functions/functions-overview.md). Or, create your own [APIs](logic-apps-create-api-app.md) and [custom connectors](custom-connector-overview.md) that you can call from your workflows.
#### Access resources inside Azure virtual networks
-Logic app workflows can access secured resources, such as virtual machines (VMs) and other systems or services, that are inside an [Azure virtual network](../virtual-network/virtual-networks-overview.md) when you create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is a dedicated instance of the Azure Logic Apps service that uses dedicated resources and runs separately from the global multi-tenant Azure Logic Apps service.
+Logic app workflows can access secured resources such as virtual machines (VMs), other services, and systems that are inside an [Azure virtual network](../virtual-network/virtual-networks-overview.md) when you create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is a dedicated instance of the Azure Logic Apps service that uses dedicated resources and runs separately from the global multi-tenant Azure Logic Apps service.
Running logic apps in your own dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits: * Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multi-tenant service. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
-* Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md).
+* Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md).
-When you create an ISE, Azure *injects* or deploys that ISE into your Azure virtual network. You can then use this ISE as the location for the logic apps and integration accounts that need access. For more information about creating an ISE, review [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md).
+When you create an ISE, Azure *injects* or deploys that ISE into your Azure virtual network. You can then use this ISE as the location for the logic apps and integration accounts that need access. For more information about creating an ISE, review [Connect to Azure virtual networks from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment.md).
#### Pricing options
-Each logic app type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](../logic-apps/logic-apps-pricing.md). For example, multi-tenant based logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](../logic-apps/logic-apps-pricing.md) for Azure Logic Apps.
+Each logic app resource type, which differs by capabilities and where they run (multi-tenant, single-tenant, integration service environment), has a different [pricing model](logic-apps-pricing.md). For example, multi-tenant based logic apps use consumption pricing, while logic apps in an integration service environment use fixed pricing. Learn more about [pricing and metering](logic-apps-pricing.md) for Azure Logic Apps.
## How does Azure Logic Apps differ from Functions, WebJobs, and Power Automate?
Before you can start with Azure Logic Apps, you need an Azure subscription. If y
When you're ready, try one or more of the following quickstart guides for Azure Logic Apps. Learn how to create a basic workflow that monitors an RSS feed and sends an email for new content.
-* [Create a multi-tenant based logic app in the Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+* [Create a multi-tenant based logic app in the Azure portal](quickstart-create-first-logic-app-workflow.md)
* [Create a multi-tenant based logic app in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md)
Learn more about the Azure Logic Apps platform with these introductory videos:
## Next steps
-* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+* [Quickstart: Create your first logic app workflow](quickstart-create-first-logic-app-workflow.md)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
> * You can use Azure Machine Learning workspaces as your tracking server for any experiment you're running with MLflow, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen. > * You can run any training routine that uses MLflow in Azure Machine Learning without changes. MLflow also supports model management and model deployment capabilities.
-MLflow can manage the complete machine learning lifecycle by using four core capabilities:
-
-* [Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training job metrics, parameters, and model artifacts. It doesn't matter where your experiment's environment is--locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
-* [Model Registry](https://mlflow.org/docs/latest/model-registry.html) is a component of MLflow that manages a model's versions in a centralized repository.
-* [Model deployment](https://mlflow.org/docs/latest/models.html#deploy-a-python-function-model-on-microsoft-azure-ml) is a capability of MLflow that deploys models registered through the MLflow format to compute targets. Because of how MLflow models are stored, there's no need to provide scoring scripts for models in such a format.
-* [Project](https://mlflow.org/docs/latest/projects.html) is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. It's supported in preview on Azure Machine Learning.
- ## Tracking with MLflow
Azure Machine Learning uses MLflow Tracking for metric logging and artifact stor
> [!NOTE] > Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 (preview). We recommend that you use MLflow for logging.
-With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models. Capabilities include:
+With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
+
+Capabilities include:
* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning. * [Track Azure Databricks machine learning experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning. * [Track Azure Synapse Analytics machine learning experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
-> [!IMPORTANT]
-> - MLflow in R support is limited to tracking an experiment's metrics, parameters, and models on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. Model registries are not supported if you're using the MLflow R SDK. As an alternative, use the Azure Machine Learning CLI or Azure Machine Learning studio for model registration and management. View an [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
-> - MLflow in Java support is limited to tracking an experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked via the MLflow Java SDK. View a [Java example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
+You can also use MLflow to [query and compare experiments and runs](how-to-track-experiments-mlflow.md).

-To learn how to use MLflow to query experiments and runs in Azure Machine Learning, see [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+
+> [!IMPORTANT]
+> - MLflow in R support is limited to tracking an experiment's metrics, parameters, and models on Azure Machine Learning jobs. Interactive training on RStudio or Jupyter Notebooks with R kernels is not supported. Model management and registration are not supported with the MLflow R SDK. As an alternative, use the Azure Machine Learning CLI or Azure Machine Learning studio for model registration and management. View the [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
+> - MLflow in Java support is limited to tracking an experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked by using the MLflow Java SDK. As an alternative, use the `Outputs` folder in jobs along with the `mlflow.save_model` method to save the models or artifacts that you want to capture. View the [Java example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
## Model registries with MLflow
The following table shows which operations are supported by each of the tools av
| Feature | MLflow SDK | Azure Machine Learning v2 (CLI/SDK) | Azure Machine Learning studio | | :- | :-: | :-: | :-: | | Track and log metrics, parameters, and models | **&check;** | | |
-| Retrieve metrics, parameters, and models | **&check;**<sup>1</sup> | <sup>2</sup> | **&check;** |
-| Submit training jobs with MLflow projects | **&check;** | | |
+| Retrieve metrics, parameters, and models | **&check;** | <sup>1</sup> | **&check;** |
+| Submit training jobs with MLflow projects | **&check;**<sup>2</sup> | | |
| Submit training jobs with inputs and outputs | | **&check;** | **&check;** |
-| Submit training jobs by using machine learning pipelines | | **&check;** | |
-| Manage experiments and runs | **&check;**<sup>1</sup> | **&check;** | **&check;** |
+| Submit training jobs by using machine learning pipelines | | **&check;** | **&check;** |
+| Manage experiments and runs | **&check;** | **&check;** | **&check;** |
| Manage MLflow models | **&check;**<sup>3</sup> | **&check;** | **&check;** | | Manage non-MLflow models | | **&check;** | **&check;** |
-| Deploy MLflow models to Azure Machine Learning | **&check;**<sup>4</sup> | **&check;** | **&check;** |
+| Deploy MLflow models to Azure Machine Learning (Online & Batch) | **&check;**<sup>4</sup> | **&check;** | **&check;** |
| Deploy non-MLflow models to Azure Machine Learning | | **&check;** | **&check;** | > [!NOTE]
-> - <sup>1</sup> View [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) for details.
-> - <sup>2</sup> Only artifacts and models can be downloaded.
-> - <sup>3</sup> View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
-> - <sup>4</sup> View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details. Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment.
+> - <sup>1</sup> Only artifacts and models can be downloaded.
+> - <sup>2</sup> In preview.
+> - <sup>3</sup> Some operations may not be supported. View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
+> - <sup>4</sup> Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment. View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details.
## Example notebooks
If you're getting started with MLflow in Azure Machine Learning, we recommend th
* [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks. * [Migrating models with a scoring script to MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code deployment with MLflow. * [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with the MLflow REST API when you're connected to Azure Machine Learning.+
+## Next steps
+
+* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
Last updated 06/23/2022
The Data Science Virtual Machine (DSVM) is a customized VM image on the Azure cloud platform built specifically for doing data science. It has many popular data science tools preinstalled and pre-configured to jump-start building intelligent applications for advanced analytics.
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ The DSVM is available on: + Windows Server 2019 + Ubuntu 18.04 LTS + Ubuntu 20.04 LTS
+Additionally, we're excited to offer the Azure DSVM for PyTorch (preview), an Ubuntu 20.04 image from Azure Marketplace that's optimized for large, distributed deep learning workloads. It comes preinstalled and validated with the latest PyTorch version to reduce setup costs and accelerate time to value, and is packaged with various optimization functionalities (ONNX Runtime, DeepSpeed, MSCCL, ORTMoE, Fairscale, NVIDIA Apex), as well as an up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, and CUDA.
+ ## Comparison with Azure Machine Learning The DSVM is a customized VM image for Data Science but [Azure Machine Learning](../overview-what-is-azure-machine-learning.md) (AzureML) is an end-to-end platform that encompasses:
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
To use Azure AD security groups:
3. Assign the group an RBAC role on the workspace, such as AzureML Data Scientist, Reader or Contributor. 4. [Add group members](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). The members consequently gain access to the workspace. - ## Create custom role If the built-in roles are insufficient, you can create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level.
If you anticipate that you will need to recreate complex role assignments, an Az
## Common scenarios
-The following table is a summary of Azure Machine Learning activities and the permissions required to perform them at the least scope. For example, if an activity can be performed with a workspace scope (Column 4), then all higher scope with that permission will also work automatically:
+The following table is a summary of Azure Machine Learning activities and the permissions required to perform them at the least scope. For example, if an activity can be performed with a workspace scope (column 4), then all higher scopes with that permission also work automatically. For certain activities, the permissions differ between the V1 and V2 APIs.
> [!IMPORTANT] > All paths in this table that start with `/` are **relative paths** to `Microsoft.MachineLearningServices/` :
The following table is a summary of Azure Machine Learning activities and the pe
| Request subscription level Amlcompute quota or set workspace level quota | Owner, or contributor, or custom role </br>allowing `/locations/updateQuotas/action`</br> at subscription scope | Not Authorized | Not Authorized | | Create new compute cluster | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` | | Create new compute instance | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` |
-| Submitting any type of run | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/*/read", "/workspaces/environments/write", "/workspaces/experiments/runs/write", "/workspaces/metadata/artifacts/write", "/workspaces/metadata/snapshots/write", "/workspaces/environments/build/action", "/workspaces/experiments/runs/submit/action", "/workspaces/environments/readSecrets/action"` |
-| Publishing pipelines and endpoints | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/endpoints/pipelines/*", "/workspaces/pipelinedrafts/*", "/workspaces/modules/*"` |
+| Submitting any type of run (V1) | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/*/read", "/workspaces/environments/write", "/workspaces/experiments/runs/write", "/workspaces/metadata/artifacts/write", "/workspaces/metadata/snapshots/write", "/workspaces/environments/build/action", "/workspaces/experiments/runs/submit/action", "/workspaces/environments/readSecrets/action"` |
+| Submitting any type of run (V2) | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/*/read", "/workspaces/environments/write", "/workspaces/jobs/*", "/workspaces/metadata/artifacts/write", "/workspaces/metadata/codes/*/write", "/workspaces/environments/build/action", "/workspaces/environments/readSecrets/action"` |
+| Publishing pipelines and endpoints (V1) | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/endpoints/pipelines/*", "/workspaces/pipelinedrafts/*", "/workspaces/modules/*"` |
+| Publishing pipelines and endpoints (V2) | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/endpoints/pipelines/*", "/workspaces/pipelinedrafts/*", "/workspaces/components/*"` |
| Attach an AKS resource <sub>2</sub> | Not required | Owner or contributor on the resource group that contains AKS | | Deploying a registered model on an AKS/ACI resource | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/write", "/workspaces/services/aci/write"` | | Scoring against a deployed AKS endpoint | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/score/action", "/workspaces/services/aks/listkeys/action"` (when you are not using Azure Active Directory auth) OR `"/workspaces/read"` (when you are using token auth) |
The following table is a summary of Azure Machine Learning activities and the pe
2: When attaching an AKS cluster, you also need the [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster.
+### Differences between actions for V1 and V2 APIs
+
+There are certain differences between actions for V1 APIs and V2 APIs.
+
+| Asset | Action path for V1 API | Action path for V2 API |
+| -- | -- | -- |
+| Dataset | Microsoft.MachineLearningServices/workspaces/datasets | Microsoft.MachineLearningServices/workspaces/datasets/versions |
+| Experiment runs and jobs | Microsoft.MachineLearningServices/workspaces/experiments | Microsoft.MachineLearningServices/workspaces/jobs |
+| Models | Microsoft.MachineLearningServices/workspaces/models | Microsoft.MachineLearningServices/workspaces/models/versions |
+| Snapshots and code | Microsoft.MachineLearningServices/workspaces/snapshots | Microsoft.MachineLearningServices/workspaces/codes/versions |
+| Modules and components | Microsoft.MachineLearningServices/workspaces/modules | Microsoft.MachineLearningServices/workspaces/components |
+
+You can make custom roles compatible with both V1 and V2 APIs by including both actions, or by using wildcards that cover both, for example `Microsoft.MachineLearningServices/workspaces/datasets/*/read`.
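For instance, a custom role definition that grants dataset read access across both API surfaces might look like the following sketch (the role name and subscription ID are placeholders, and the action list is illustrative, not a complete role):

```json
{
  "Name": "Dataset Reader (V1 and V2)",
  "IsCustom": true,
  "Description": "Read datasets via both V1 and V2 action paths.",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/read",
    "Microsoft.MachineLearningServices/workspaces/datasets/*/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

The wildcard in the second action covers the versioned V2 dataset paths as well as the V1 ones, so one entry serves both.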
+ ### Create a workspace using a customer-managed key When using a customer-managed key (CMK), an Azure Key Vault is used to store the key. The user or service principal used to create the workspace must have owner or contributor access to the key vault.
To perform MLflow operations with your Azure Machine Learning workspace, use the
| MLflow operation | Scope |
| -- | -- |
-| List all experiments in the workspace tracking store, get an experiment by id, get an experiment by name | `Microsoft.MachineLearningServices/workspaces/experiments/read` |
-| Create an experiment with a name , set a tag on an experiment, restore an experiment marked for deletion| `Microsoft.MachineLearningServices/workspaces/experiments/write` |
-| Delete an experiment | `Microsoft.MachineLearningServices/workspaces/experiments/delete` |
-| Get a run and related data and metadata, get a list of all values for the specified metric for a given run, list artifacts for a run | `Microsoft.MachineLearningServices/workspaces/experiments/runs/read` |
-| Create a new run within an experiment, delete runs, restore deleted runs, log metrics under the current run, set tags on a run, delete tags on a run, log params (key-value pair) used for a run, log a batch of metrics, params, and tags for a run, update run status | `Microsoft.MachineLearningServices/workspaces/experiments/runs/write` |
-| Get registered model by name, fetch a list of all registered models in the registry, search for registered models, latest version models for each requests stage, get a registered model's version, search model versions, get URI where a model version's artifacts are stored, search for runs by experiment ids | `Microsoft.MachineLearningServices/workspaces/models/read` |
-| Create a new registered model, update a registered model's name/description, rename existing registered model, create new version of the model, update a model version's description, transition a registered model to one of the stages | `Microsoft.MachineLearningServices/workspaces/models/write` |
-| Delete a registered model along with all its version, delete specific versions of a registered model | `Microsoft.MachineLearningServices/workspaces/models/delete` |
+| (V1) List, read, create, update or delete experiments | `Microsoft.MachineLearningServices/workspaces/experiments/*` |
+| (V2) List, read, create, update or delete jobs | `Microsoft.MachineLearningServices/workspaces/jobs/*` |
+| Get registered model by name, fetch a list of all registered models in the registry, search for registered models, get the latest version of models for each requested stage, get a registered model's version, search model versions, get the URI where a model version's artifacts are stored, search for runs by experiment IDs | `Microsoft.MachineLearningServices/workspaces/models/*/read` |
+| Create a new registered model, update a registered model's name/description, rename existing registered model, create new version of the model, update a model version's description, transition a registered model to one of the stages | `Microsoft.MachineLearningServices/workspaces/models/*/write` |
+| Delete a registered model along with all its version, delete specific versions of a registered model | `Microsoft.MachineLearningServices/workspaces/models/*/delete` |
<a id="customroles"></a>
A more restricted role definition without wildcards in the allowed actions. It c
"Microsoft.MachineLearningServices/workspaces/computes/stop/action", "Microsoft.MachineLearningServices/workspaces/computes/restart/action", "Microsoft.MachineLearningServices/workspaces/computes/applicationaccess/action",
- "Microsoft.MachineLearningServices/workspaces/notebooks/storage/read",
"Microsoft.MachineLearningServices/workspaces/notebooks/storage/write", "Microsoft.MachineLearningServices/workspaces/notebooks/storage/delete",
- "Microsoft.MachineLearningServices/workspaces/notebooks/samples/read",
"Microsoft.MachineLearningServices/workspaces/experiments/runs/write", "Microsoft.MachineLearningServices/workspaces/experiments/write", "Microsoft.MachineLearningServices/workspaces/experiments/runs/submit/action",
A more restricted role definition without wildcards in the allowed actions. It c
"Microsoft.MachineLearningServices/workspaces/metadata/snapshots/write", "Microsoft.MachineLearningServices/workspaces/metadata/artifacts/write", "Microsoft.MachineLearningServices/workspaces/environments/write",
- "Microsoft.MachineLearningServices/workspaces/models/write",
+ "Microsoft.MachineLearningServices/workspaces/models/*/write",
"Microsoft.MachineLearningServices/workspaces/modules/write",
- "Microsoft.MachineLearningServices/workspaces/datasets/registered/write",
- "Microsoft.MachineLearningServices/workspaces/datasets/registered/delete",
- "Microsoft.MachineLearningServices/workspaces/datasets/unregistered/write",
- "Microsoft.MachineLearningServices/workspaces/datasets/unregistered/delete",
+ "Microsoft.MachineLearningServices/workspaces/components/*/write",
+ "Microsoft.MachineLearningServices/workspaces/datasets/*/write",
+ "Microsoft.MachineLearningServices/workspaces/datasets/*/delete",
"Microsoft.MachineLearningServices/workspaces/computes/listNodes/action", "Microsoft.MachineLearningServices/workspaces/environments/build/action" ],
Allows a data scientist to perform all MLflow AzureML supported operations **exc
"IsCustom": true, "Description": "Can perform azureml mlflow integrated functionalities that includes mlflow tracking, projects, model registry", "Actions": [
- "Microsoft.MachineLearningServices/workspaces/experiments/read",
- "Microsoft.MachineLearningServices/workspaces/experiments/write",
- "Microsoft.MachineLearningServices/workspaces/experiments/delete",
- "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
- "Microsoft.MachineLearningServices/workspaces/experiments/runs/write",
- "Microsoft.MachineLearningServices/workspaces/models/read",
- "Microsoft.MachineLearningServices/workspaces/models/write",
- "Microsoft.MachineLearningServices/workspaces/models/delete"
+ "Microsoft.MachineLearningServices/workspaces/experiments/*",
+ "Microsoft.MachineLearningServices/workspaces/jobs/*",
+ "Microsoft.MachineLearningServices/workspaces/models/*"
], "NotActions": [ "Microsoft.MachineLearningServices/workspaces/delete",
Allows you to assign a role to a service principal and use that to automate your
"Microsoft.MachineLearningServices/workspaces/environments/read", "Microsoft.MachineLearningServices/workspaces/metadata/secrets/read", "Microsoft.MachineLearningServices/workspaces/modules/read",
- "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
- "Microsoft.MachineLearningServices/workspaces/datasets/registered/read",
+ "Microsoft.MachineLearningServices/workspaces/components/read",
+ "Microsoft.MachineLearningServices/workspaces/datasets/*/read",
"Microsoft.MachineLearningServices/workspaces/datastores/read", "Microsoft.MachineLearningServices/workspaces/environments/write",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
"Microsoft.MachineLearningServices/workspaces/experiments/runs/write",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/submit/action",
+ "Microsoft.MachineLearningServices/workspaces/experiments/jobs/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/jobs/write",
"Microsoft.MachineLearningServices/workspaces/metadata/artifacts/write", "Microsoft.MachineLearningServices/workspaces/metadata/snapshots/write",
+ "Microsoft.MachineLearningServices/workspaces/metadata/codes/*/write",
"Microsoft.MachineLearningServices/workspaces/environments/build/action",
- "Microsoft.MachineLearningServices/workspaces/experiments/runs/submit/action"
], "NotActions": [ "Microsoft.MachineLearningServices/workspaces/computes/write",
Allows you to review and reject the labeled dataset and view labeling insights.
"Microsoft.MachineLearningServices/workspaces/labeling/labels/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/write", "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/update/action",
"Microsoft.MachineLearningServices/workspaces/labeling/projects/read", "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read" ],
A vendor account manager can help manage all the vendor roles and perform any la
"Microsoft.MachineLearningServices/workspaces/experiments/runs/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/update/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/approve_unapprove/action",
"Microsoft.MachineLearningServices/workspaces/labeling/projects/read", "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read", "Microsoft.MachineLearningServices/workspaces/labeling/export/action",
A customer quality assurance role can view project dashboards, preview datasets,
"Microsoft.MachineLearningServices/workspaces/experiments/runs/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/approve_unapprove/action",
"Microsoft.MachineLearningServices/workspaces/labeling/projects/read", "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read", "Microsoft.MachineLearningServices/workspaces/labeling/export/action",
A vendor quality assurance role can perform a customer quality assurance role, b
"Microsoft.MachineLearningServices/workspaces/read", "Microsoft.MachineLearningServices/workspaces/experiments/runs/read", "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/update/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/approve_unapprove/action",
"Microsoft.MachineLearningServices/workspaces/labeling/projects/read", "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read", "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
However, the following steps are performed only for `forecasting` task types:
* Create features based on time series identifiers to enable fixed effects across different series * Create time-based features to assist in learning seasonal patterns * Encode categorical variables to numeric quantities
+* Detect non-stationary time series and automatically difference them to mitigate the impact of unit roots.
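AutoML's detection and differencing happen inside the service; as a minimal stand-alone sketch of why differencing helps, first-differencing a simulated random walk (a series with a unit root) recovers a stationary series:

```python
import random

random.seed(0)
# 500 i.i.d. shocks: a stationary series by construction.
shocks = [random.gauss(0, 1) for _ in range(500)]

# Their cumulative sum is a random walk, which has a unit root:
# its variance grows over time, so the series is non-stationary.
walk = []
total = 0.0
for s in shocks:
    total += s
    walk.append(total)

# First differencing undoes the cumulative sum and recovers the
# stationary shocks (up to the dropped first observation).
differenced = [b - a for a, b in zip(walk, walk[1:])]
```

Each differenced value `walk[i+1] - walk[i]` is exactly the shock that produced that step, which is the sense in which differencing removes the unit root.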
To view the full list of possible engineered features generated from time series data, see [TimeIndexFeaturizer Class](/python/api/azureml-automl-runtime/azureml.automl.runtime.featurizer.transformer.timeseries.time_index_featurizer).
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
# Manage Azure Machine Learning workspaces using Azure CLI [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Title: Manage workspaces in portal or Python SDK
+ Title: Manage workspaces in portal or Python SDK (v2)
-description: Learn how to manage Azure Machine Learning workspaces in the Azure portal or with the SDK for Python.
+description: Learn how to manage Azure Machine Learning workspaces in the Azure portal or with the SDK for Python (v2).
Previously updated : 03/08/2022 Last updated : 09/21/2022
-# Manage Azure Machine Learning workspaces in the portal or with the Python SDK
+# Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
-In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the Azure portal or the [SDK for Python](/python/api/overview/azure/ml/).
-As your needs change or requirements for automation increase you can also manage workspaces [using the CLI](v1/reference-azure-machine-learning-cli.md), or [via the VS Code extension](how-to-setup-vs-code.md).
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](v1/how-to-manage-workspace.md)
+> * [v2 (preview)](how-to-manage-workspace.md)
+
+In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the [Azure portal](https://portal.azure.com) or the [SDK for Python](/python/api/overview/azure/ml/).
+
+As your needs change or requirements for automation increase you can also manage workspaces [using the CLI](how-to-manage-workspace-cli.md), or [via the VS Code extension](how-to-setup-vs-code.md).
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* If using the Python SDK, [install the SDK](/python/api/overview/azure/ml/install).
+* If using the Python SDK:
+ 1. [Install the SDK v2](https://aka.ms/sdk-v2-install).
+ 1. Provide your subscription details
+
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=subscription_id)]
+
+ 1. Get a handle to the subscription. `ml_client` will be used in all the Python code in this article.
+
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ml_client)]
+
+ * (Optional) If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use into the `DefaultAzureCredential`. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
+
+ ```python
+ DefaultAzureCredential(interactive_browser_tenant_id="<TENANT_ID>")
+ ```
+
+ * (Optional) If you're working on a [sovereign cloud](reference-machine-learning-cloud-parity.md), specify the sovereign cloud to authenticate with into the `DefaultAzureCredential`.
+
+ ```python
+ from azure.identity import AzureAuthorityHosts
+ DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
+ ```
## Limitations [!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)]
-* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR does not currently support unicode characters in resource group names, use a resource group that does not contain these characters.
+* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters.
-* Azure Machine Learning does not support hierarchical namespace (Azure Data Lake Storage Gen2 feature) for the workspace's default storage account.
+* Azure Machine Learning doesn't support hierarchical namespace (Azure Data Lake Storage Gen2 feature) for the workspace's default storage account.
[!INCLUDE [application-insight](../../includes/machine-learning-application-insight.md)]
You can create a workspace [directly in Azure Machine Learning studio](./quickst
# [Python SDK](#tab/python) * **Default specification.** By default, dependent resources and the resource group will be created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`.
- ```python
- from azureml.core import Workspace
-
- ws = Workspace.create(name='myworkspace',
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=True,
- location='eastus2'
- )
- ```
- Set `create_resource_group` to False if you have an existing Azure resource group that you want to use for the workspace.
-
-* <a name="create-multi-tenant"></a>**Multiple tenants.** If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
-
- ```python
- from azureml.core.authentication import InteractiveLoginAuthentication
- from azureml.core import Workspace
-
- interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id")
- ws = Workspace.create(name='myworkspace',
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=True,
- location='eastus2',
- auth=interactive_auth
- )
- ```
-
-* **[Sovereign cloud](reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
-
- ```python
- from azureml.core.authentication import InteractiveLoginAuthentication
- from azureml.core import Workspace
-
- interactive_auth = InteractiveLoginAuthentication(cloud="<cloud name>") # for example, cloud="AzureUSGovernment"
- ws = Workspace.create(name='myworkspace',
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=True,
- location='eastus2',
- auth=interactive_auth
- )
- ```
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_workspace_name)]
* **Use existing Azure resources**. You can also create a workspace that uses existing Azure resources with the Azure resource ID format. Find the specific Azure resource IDs in the Azure portal or with the SDK. This example assumes that the resource group, storage account, key vault, App Insights, and container registry already exist.
- ```python
- import os
- from azureml.core import Workspace
- from azureml.core.authentication import ServicePrincipalAuthentication
-
- service_principal_password = os.environ.get("AZUREML_PASSWORD")
-
- service_principal_auth = ServicePrincipalAuthentication(
- tenant_id="<tenant-id>",
- username="<application-id>",
- password=service_principal_password)
-
- auth=service_principal_auth,
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=False,
- location='eastus2',
- friendly_name='My workspace',
- storage_account='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.storage/storageaccounts/mystorageaccount',
- key_vault='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.keyvault/vaults/mykeyvault',
- app_insights='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.insights/components/myappinsights',
- container_registry='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.containerregistry/registries/mycontainerregistry',
- exist_ok=False)
- ```
-
-For more information, see [Workspace SDK reference](/python/api/azureml-core/azureml.core.workspace.workspace).
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_ex_workspace_name)]
+
+For more information, see [Workspace SDK reference](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace).
If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
If you have problems in accessing your subscription, see [Set up authentication
1. In the upper-left corner of Azure portal, select **+ Create a resource**.
- ![Create a new resource](./media/how-to-manage-workspace/create-workspace.gif)
+ :::image type="content" source="media/how-to-manage-workspace/create-workspace.gif" alt-text="Screenshot showing how to create a workspace in the Azure portal.":::
1. Use the search bar to find **Machine Learning**.
If you have problems in accessing your subscription, see [Set up authentication
| Storage account | The default storage account for the workspace. By default, a new one is created. | | Key Vault | The Azure Key Vault used by the workspace. By default, a new one is created. | | Application Insights | The application insights instance for the workspace. By default, a new one is created. |
- | Container Registry | The Azure Container Registry for the workspace. By default, a new one is _not_ initially created for the workspace. Instead, it is created once you need it when creating a Docker image during training or deployment. |
+ | Container Registry | The Azure Container Registry for the workspace. By default, a new one isn't initially created for the workspace. Instead, it's created once you need it when creating a Docker image during training or deployment. |
:::image type="content" source="media/how-to-manage-workspace/create-workspace-form.png" alt-text="Configure your workspace.":::
If you have problems in accessing your subscription, see [Set up authentication
-- ### Networking > [!IMPORTANT]
If you have problems in accessing your subscription, see [Set up authentication
# [Python SDK](#tab/python)
-The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network.
+
+[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=basic_private_link_workspace_name)]
+
+This class requires an existing virtual network.
# [Portal](#tab/azure-portal)
-1. The default network configuration is to use a __Public endpoint__, which is accessible on the public internet. To limit access to your workspace to an Azure Virtual Network you have created, you can instead select __Private endpoint__ as the __Connectivity method__, and then use __+ Add__ to configure the endpoint.
+1. The default network configuration is to use a __Public endpoint__, which is accessible on the public internet. To limit access to your workspace to an Azure Virtual Network you've created, you can instead select __Private endpoint__ as the __Connectivity method__, and then use __+ Add__ to configure the endpoint.
:::image type="content" source="media/how-to-manage-workspace/select-private-endpoint.png" alt-text="Private endpoint selection":::
The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/pyth
:::image type="content" source="media/how-to-manage-workspace/create-private-endpoint.png" alt-text="Private endpoint creation":::
-1. When you are finished configuring networking, you can select __Review + Create__, or advance to the optional __Advanced__ configuration.
+1. When you're finished configuring networking, you can select __Review + Create__, or advance to the optional __Advanced__ configuration.
-- ### Advanced By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys.
To limit the data that Microsoft collects on your workspace, select __High busin
> [!IMPORTANT] > Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
-#### Use your own key
+#### Use your own data encryption key
-You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription. For more information, see [Customer-managed keys](concept-customer-managed keys.md).
+You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
Use the following steps to provide your own key:
Use the following steps to provide your own key:
# [Python SDK](#tab/python)
-Use `cmk_keyvault` and `resource_cmk_uri` to specify the customer managed key.
```python
-from azureml.core import Workspace
- ws = Workspace.create(name='myworkspace',
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=True,
- location='eastus2'
- cmk_keyvault='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.keyvault/vaults/<keyvault-name>',
- resource_cmk_uri='<key-identifier>'
- )
+from azure.ai.ml.entities import Workspace, CustomerManagedKey
+
+# specify the workspace details
+ws = Workspace(
+ name="my_workspace",
+ location="eastus",
+ display_name="My workspace",
+ description="This example shows how to create a workspace",
+ customer_managed_key=CustomerManagedKey(
+ key_vault="/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP>/providers/microsoft.keyvault/vaults/<VAULT_NAME>",
+ key_uri="<KEY-IDENTIFIER>",
+ ),
+ tags=dict(purpose="demo"),
+)
+
+ml_client.workspaces.begin_create(ws)
``` # [Portal](#tab/azure-portal)
from azureml.core import Workspace
### Download a configuration file
-If you will be creating a [compute instance](quickstart-create-resources.md), skip this step. The compute instance has already created a copy of this file for you.
-
-# [Python SDK](#tab/python)
-
-If you plan to use code on your local environment that references this workspace (`ws`), write the configuration file:
-
-```python
-ws.write_config()
-```
-
-# [Portal](#tab/azure-portal)
+If you'll be running your code on a [compute instance](quickstart-create-resources.md), skip this step. The compute instance will create and store a copy of this file for you.
-If you plan to use code on your local environment that references this workspace, select **Download config.json** from the **Overview** section of the workspace.
+If you plan to use code on your local environment that references this workspace, download the file:
+1. Select your workspace in [Azure Machine Learning studio](https://ml.azure.com).
+1. At the top right, select the workspace name, then select **Download config.json**
![Download config.json](./media/how-to-manage-workspace/configure.png)

--

Place the file into the directory structure with your Python scripts or Jupyter Notebooks. It can be in the same directory, a subdirectory named *.azureml*, or in a parent directory. When you create a compute instance, this file is added to the correct directory on the VM for you.

## Connect to a workspace
-In your Python code, you create a workspace object to connect to your workspace. This code will read the contents of the configuration file to find your workspace. You will get a prompt to sign in if you are not already authenticated.
-
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-```
-
-* <a name="connect-multi-tenant"></a>**Multiple tenants.** If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
-
- ```python
- from azureml.core.authentication import InteractiveLoginAuthentication
- from azureml.core import Workspace
-
- interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id")
- ws = Workspace.from_config(auth=interactive_auth)
- ```
+When running machine learning tasks using the SDK, you need an `MLClient` object that specifies the connection to your workspace. You can create an `MLClient` object from parameters, or with a configuration file.
-* **[Sovereign cloud](reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
- [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+* **With a configuration file:** This code will read the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
```python
- from azureml.core.authentication import InteractiveLoginAuthentication
- from azureml.core import Workspace
+ from azure.ai.ml import MLClient
- interactive_auth = InteractiveLoginAuthentication(cloud="<cloud name>") # for example, cloud="AzureUSGovernment"
- ws = Workspace.from_config(auth=interactive_auth)
+ # read the config from the current directory
+ ws_from_config = MLClient.from_config()
```
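The configuration-file lookup convention (same directory, a `.azureml` subdirectory, or a parent directory) can be sketched in plain Python. This is an illustration of the search order described above, not the SDK's actual implementation, and the config.json keys shown are illustrative:

```python
import json
import tempfile
from pathlib import Path
from typing import Optional

def find_config(start: Path) -> Optional[Path]:
    """Walk from `start` upward, checking each directory and its
    .azureml subdirectory for a config.json file."""
    for directory in [start, *start.parents]:
        for candidate in (directory / "config.json",
                          directory / ".azureml" / "config.json"):
            if candidate.is_file():
                return candidate
    return None

# Demo: create a config in an ancestor's .azureml folder, then
# find it from a nested working directory.
root = Path(tempfile.mkdtemp())
(root / ".azureml").mkdir()
(root / ".azureml" / "config.json").write_text(
    json.dumps({"workspace_name": "myworkspace"}))
nested = root / "project" / "notebooks"
nested.mkdir(parents=True)

found = find_config(nested)
print(found is not None)  # True
```

Because the search walks upward through parent directories, one config.json at a project root serves every notebook and script beneath it.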
+* **From parameters**: There's no need to have a config.json file available if you use this approach.
+ [!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws)]
+ If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
-## <a name="view"></a>Find a workspace
+## Find a workspace
+
+See a list of all the workspaces you can use.
+You can also search for a workspace inside the studio. See [Search for Azure Machine Learning assets (preview)](how-to-search-assets.md).
-See a list of all the workspaces you can use.
# [Python SDK](#tab/python)
-Find your subscriptions in the [Subscriptions page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Copy the ID and use it in the code below to see all workspaces available for that subscription.
+[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=my_ml_client)]
+[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws_name)]
-```python
-from azureml.core import Workspace
+To get details of a specific workspace:
-Workspace.list('<subscription-id>')
-```
+[!notebook-python[](~/azureml-examples-main/sdk/resources/workspace/workspace.ipynb?name=ws_location)]
-The Workspace.list(..) method does not return the full workspace object. It includes only basic information about existing workspaces in the subscription. To get a full object for specific workspace, use Workspace.get(..).
# [Portal](#tab/azure-portal)
If you accidentally deleted your workspace, you may still be able to retrieve yo
# [Python SDK](#tab/python) -
-Delete the workspace `ws`:
```python
-ws.delete(delete_dependent_resources=False, no_wait=False)
+ml_client.workspaces.begin_delete(name=ws_basic.name, delete_dependent_resources=True)
```
-The default action is not to delete resources associated with the workspace, that is, container registry, storage account, key vault, and application insights. Set `delete_dependent_resources` to True to delete these resources as well.
+The default action isn't to delete resources associated with the workspace, that is, container registry, storage account, key vault, and application insights. Set `delete_dependent_resources` to True to delete these resources as well.
# [Portal](#tab/azure-portal)
The Azure Machine Learning workspace uses Azure Container Registry (ACR) for som
## Examples
-Examples of creating a workspace:
-* Use Azure portal to [create a workspace and compute instance](quickstart-create-resources.md)
+Examples in this article come from [workspace.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/resources/workspace/workspace.ipynb).
## Next steps
To learn more about planning a workspace for your organization's requirements, s
* If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
-* To find a workspace, see [Search for Azure Machine Learning assets (preview)](how-to-search-assets.md).
-* If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
For examples of creating the workspace with a customer-managed key, see the foll
| Creation method | Article |
| -- | -- |
| CLI | [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace) |
-| Azure portal/</br>Python SDK | [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-key) |
+| Azure portal/</br>Python SDK | [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-data-encryption-key) |
| Azure Resource Manager</br>template | [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace) |
| REST API | [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys) |
This process allows you to encrypt both the Data and the OS Disk of the deployed
* [Customer-managed keys with Azure Machine Learning](concept-customer-managed-keys.md)
* [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace)
-* [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-key) |
+* [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-data-encryption-key)
* [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace)
* [Create, run, and delete Azure ML resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys)
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-machine-learning-pipelines.md
See the list of all your pipelines and their run details in the studio:
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
-1. [View your workspace](../how-to-manage-workspace.md#view).
+1. [View your workspace](../how-to-manage-workspace.md#find-a-workspace).
1. On the left, select **Pipelines** to see all your pipeline runs. ![list of machine learning pipelines](../media/how-to-create-your-first-pipeline/pipelines.png)
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-pipelines.md
You can also run a published pipeline from the studio:
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
-1. [View your workspace](../how-to-manage-workspace.md#view).
+1. [View your workspace](../how-to-manage-workspace.md#find-a-workspace).
1. On the left, select **Endpoints**.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
# Manage Azure Machine Learning workspaces using Azure CLI extension v1 [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
> * [v1](how-to-manage-workspace-cli.md)
> * [v2 (current version)](../how-to-manage-workspace-cli.md)
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
+
+ Title: Manage workspaces in portal or Python SDK (v1)
+
+description: Learn how to manage Azure Machine Learning workspaces in the Azure portal or with the SDK for Python (v1).
+++++ Last updated : 03/08/2022++++
+# Manage Azure Machine Learning workspaces with the Python SDK (v1)
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v1](how-to-manage-workspace.md)
+> * [v2 (preview)](../how-to-manage-workspace.md)
+
+In this article, you create, view, and delete [**Azure Machine Learning workspaces**](../concept-workspace.md) for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md), using the [SDK for Python](/python/api/overview/azure/ml/).
+
+As your needs change or requirements for automation increase, you can also manage workspaces [using the CLI](reference-azure-machine-learning-cli.md), or [via the VS Code extension](../how-to-setup-vs-code.md).
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* If using the Python SDK, [install the SDK](/python/api/overview/azure/ml/install).
+
+## Limitations
++
+* By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support Unicode characters in resource group names, use a resource group that doesn't contain these characters.
+
+* Azure Machine Learning doesn't support hierarchical namespace (Azure Data Lake Storage Gen2 feature) for the workspace's default storage account.
++
+## Create a workspace
+
+You can create a workspace [directly in Azure Machine Learning studio](../quickstart-create-resources.md#create-the-workspace), with limited configuration options available. For more control over the options, use one of the following methods.
+
+* **Default specification.** By default, dependent resources and the resource group will be created automatically. This code creates a workspace named `myworkspace` and a resource group named `myresourcegroup` in `eastus2`.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core import Workspace
+
+ ws = Workspace.create(name='myworkspace',
+ subscription_id='<azure-subscription-id>',
+ resource_group='myresourcegroup',
+ create_resource_group=True,
+ location='eastus2'
+ )
+ ```
+ Set `create_resource_group` to False if you have an existing Azure resource group that you want to use for the workspace.
+
+* **Multiple tenants.** If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core.authentication import InteractiveLoginAuthentication
+ from azureml.core import Workspace
+
+ interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id")
+ ws = Workspace.create(name='myworkspace',
+ subscription_id='<azure-subscription-id>',
+ resource_group='myresourcegroup',
+ create_resource_group=True,
+ location='eastus2',
+ auth=interactive_auth
+ )
+ ```
+
+* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core.authentication import InteractiveLoginAuthentication
+ from azureml.core import Workspace
+
+ interactive_auth = InteractiveLoginAuthentication(cloud="<cloud name>") # for example, cloud="AzureUSGovernment"
+ ws = Workspace.create(name='myworkspace',
+ subscription_id='<azure-subscription-id>',
+ resource_group='myresourcegroup',
+ create_resource_group=True,
+ location='eastus2',
+ auth=interactive_auth
+ )
+ ```
+
+* **Use existing Azure resources**. You can also create a workspace that uses existing Azure resources with the Azure resource ID format. Find the specific Azure resource IDs in the Azure portal or with the SDK. This example assumes that the resource group, storage account, key vault, App Insights, and container registry already exist.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ import os
+ from azureml.core import Workspace
+ from azureml.core.authentication import ServicePrincipalAuthentication
+
+ service_principal_password = os.environ.get("AZUREML_PASSWORD")
+
+ service_principal_auth = ServicePrincipalAuthentication(
+ tenant_id="<tenant-id>",
+ username="<application-id>",
+ password=service_principal_password)
+
+ ws = Workspace.create(name='myworkspace',
+ auth=service_principal_auth,
+ subscription_id='<azure-subscription-id>',
+ resource_group='myresourcegroup',
+ create_resource_group=False,
+ location='eastus2',
+ friendly_name='My workspace',
+ storage_account='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.storage/storageaccounts/mystorageaccount',
+ key_vault='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.keyvault/vaults/mykeyvault',
+ app_insights='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.insights/components/myappinsights',
+ container_registry='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.containerregistry/registries/mycontainerregistry',
+ exist_ok=False)
+ ```
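The resource IDs passed above all follow the standard Azure Resource Manager pattern `subscriptions/<sub>/resourcegroups/<rg>/providers/<provider>/<type>/<name>`. As a sketch, a small helper (hypothetical, not part of the SDK) can build them consistently:

```python
# Hypothetical helper: build Azure resource IDs in the format that
# Workspace.create() accepts for storage_account, key_vault, and so on.
# Not part of the azureml SDK.
def arm_resource_id(subscription_id, resource_group, provider, resource_type, name):
    return (f"subscriptions/{subscription_id}"
            f"/resourcegroups/{resource_group}"
            f"/providers/{provider}/{resource_type}/{name}")

storage_account = arm_resource_id(
    "<azure-subscription-id>", "myresourcegroup",
    "microsoft.storage", "storageaccounts", "mystorageaccount")
```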
+
+For more information, see [Workspace SDK reference](/python/api/azureml-core/azureml.core.workspace.workspace).
+
+If you have problems accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
++
+### Networking
+
+> [!IMPORTANT]
+> For more information on using a private endpoint and virtual network with your workspace, see [Network isolation and privacy](how-to-network-security-overview.md).
++
+The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network.
++
+### Advanced
+
+By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys.
+
+To limit the data that Microsoft collects on your workspace, select __High business impact workspace__ in the portal, or set `hbi_workspace=True` in Python. For more information on this setting, see [Encryption at rest](../concept-data-encryption.md#encryption-at-rest).
+
+> [!IMPORTANT]
+> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
+
+#### Use your own data encryption key
+
+You can provide your own key for data encryption. Doing so creates the Azure Cosmos DB instance that stores metadata in your Azure subscription. For more information, see [Customer-managed keys for Azure Machine Learning](../concept-customer-managed-keys.md).
+
+Use the following steps to provide your own key:
+
+> [!IMPORTANT]
+> Before following these steps, you must first perform the following actions:
+>
+> Follow the steps in [Configure customer-managed keys](../how-to-setup-customer-managed-keys.md) to:
+> * Register the Azure Cosmos DB provider
+> * Create and configure an Azure Key Vault
+> * Generate a key
+
+Use `cmk_keyvault` and `resource_cmk_uri` to specify the customer-managed key.
+
+```python
+from azureml.core import Workspace
+ ws = Workspace.create(name='myworkspace',
+ subscription_id='<azure-subscription-id>',
+ resource_group='myresourcegroup',
+ create_resource_group=True,
+ location='eastus2',
+ cmk_keyvault='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.keyvault/vaults/<keyvault-name>',
+ resource_cmk_uri='<key-identifier>'
+ )
+
+```
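The `resource_cmk_uri` value is the key identifier of the Key Vault key. As a sketch, it follows Key Vault's standard key identifier pattern (hypothetical helper, not part of the SDK):

```python
# Hypothetical helper: build a Key Vault key identifier URI, the value
# expected for resource_cmk_uri. Not part of the azureml SDK.
def key_identifier(vault_name, key_name, key_version):
    return f"https://{vault_name}.vault.azure.net/keys/{key_name}/{key_version}"

resource_cmk_uri = key_identifier("mykeyvault", "mykey", "<key-version>")
```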
+
+### Download a configuration file
+
+If you'll be using a [compute instance](../quickstart-create-resources.md) in your workspace to run your code, skip this step. The compute instance will create and store a copy of this file for you.
+
+If you plan to use code on your local environment that references this workspace (`ws`), write the configuration file:
+
+
+```python
+ws.write_config()
+```
+
+Place the file into the directory structure with your Python scripts or Jupyter Notebooks. It can be in the same directory, a subdirectory named *.azureml*, or in a parent directory. When you create a compute instance, this file is added to the correct directory on the VM for you.
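The configuration file that `ws.write_config()` produces is a small JSON document. As a sketch of its shape (values here are placeholders for your own subscription, resource group, and workspace):

```python
import json
from pathlib import Path

# Shape of the config.json that ws.write_config() produces; placeholder values.
config = {
    "subscription_id": "<azure-subscription-id>",
    "resource_group": "myresourcegroup",
    "workspace_name": "myworkspace",
}

# Workspace.from_config() searches the working directory, a .azureml
# subdirectory, and parent directories for this file.
Path(".azureml").mkdir(exist_ok=True)
Path(".azureml", "config.json").write_text(json.dumps(config, indent=4))
```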
+
+## Connect to a workspace
+
+In your Python code, you create a workspace object to connect to your workspace. This code will read the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
+
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+```
+
+* **Multiple tenants.** If you have multiple accounts, add the tenant ID of the Azure Active Directory you wish to use. Find your tenant ID from the [Azure portal](https://portal.azure.com) under **Azure Active Directory, External Identities**.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core.authentication import InteractiveLoginAuthentication
+ from azureml.core import Workspace
+
+ interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id")
+ ws = Workspace.from_config(auth=interactive_auth)
+ ```
+
+* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
+
+ [!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
+
+ ```python
+ from azureml.core.authentication import InteractiveLoginAuthentication
+ from azureml.core import Workspace
+
+ interactive_auth = InteractiveLoginAuthentication(cloud="<cloud name>") # for example, cloud="AzureUSGovernment"
+ ws = Workspace.from_config(auth=interactive_auth)
+ ```
+
+If you have problems accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](../how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
+
+## Find a workspace
+
+See a list of all the workspaces you can use.
+
+Find your subscriptions in the [Subscriptions page in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Copy the ID and use it in the code below to see all workspaces available for that subscription.
++
+```python
+from azureml.core import Workspace
+
+Workspace.list('<subscription-id>')
+```
+
+The `Workspace.list(..)` method doesn't return the full workspace object. It includes only basic information about existing workspaces in the subscription. To get a full object for a specific workspace, use `Workspace.get(..)`.
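`Workspace.list` returns a dictionary mapping each workspace name to a list of workspace objects, since a name can repeat across resource groups. As a sketch of handling that return value (plain strings stand in for the workspace objects the real call returns):

```python
# Plain-dict stand-in for the mapping Workspace.list('<subscription-id>')
# returns: workspace name -> list of workspace objects.
workspaces = {
    "myworkspace": ["workspace-object"],
    "sharedname": ["workspace-object-rg1", "workspace-object-rg2"],
}

# Look up a workspace by name; an empty list means no match.
matches = workspaces.get("myworkspace", [])
if len(matches) == 1:
    # Unambiguous name; with real objects, follow up with Workspace.get()
    # to retrieve the full workspace.
    ws = matches[0]
```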
++
+## Delete a workspace
+
+When you no longer need a workspace, delete it.
++
+If you accidentally deleted your workspace, you may still be able to retrieve your notebooks. For details, see [Failover for business continuity and disaster recovery](../how-to-high-availability-machine-learning.md#workspace-deletion).
++
+Delete the workspace `ws`:
+
+
+```python
+ws.delete(delete_dependent_resources=False, no_wait=False)
+```
+
+The default action isn't to delete resources associated with the workspace, that is, container registry, storage account, key vault, and application insights. Set `delete_dependent_resources` to True to delete these resources as well.
++
+## Clean up resources
++
+## Troubleshooting
+
+* **Supported browsers in Azure Machine Learning studio**: We recommend that you use the most up-to-date browser that's compatible with your operating system. The following browsers are supported:
+ * Microsoft Edge (The new Microsoft Edge, latest version. Not Microsoft Edge legacy)
+ * Safari (latest version, Mac only)
+ * Chrome (latest version)
+ * Firefox (latest version)
+
+* **Azure portal**:
+ * If you go directly to your workspace from a share link from the SDK or the Azure portal, you can't view the standard **Overview** page that has subscription information in the extension. In this scenario, you also can't switch to another workspace. To view another workspace, go directly to [Azure Machine Learning studio](https://ml.azure.com) and search for the workspace name.
+ * All assets (Data, Experiments, Computes, and so on) are available only in [Azure Machine Learning studio](https://ml.azure.com). They're *not* available from the Azure portal.
+ * Attempting to export a template for a workspace from the Azure portal may return an error similar to the following text: `Could not get resource of the type <type>. Resources of this type will not be exported.` As a workaround, use one of the templates provided at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) as the basis for your template.
+
+### Workspace diagnostics
++
+### Resource provider errors
+
+
+
+### Deleting the Azure Container Registry
+
+The Azure Machine Learning workspace uses Azure Container Registry (ACR) for some operations. It will automatically create an ACR instance when it first needs one.
+++
+## Next steps
+
+Once you have a workspace, learn how to [Train and deploy a model](tutorial-train-deploy-notebook.md).
+
+To learn more about planning a workspace for your organization's requirements, see [Organize and set up Azure Machine Learning](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-resource-organization).
+
+* If you need to move a workspace to another Azure subscription, see [How to move a workspace](../how-to-move-workspace.md).
+
+* To find a workspace, see [Search for Azure Machine Learning assets (preview)](../how-to-search-assets.md).
+
+For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](../concept-vulnerability-management.md).
marketplace Azure App Review Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-review-feedback.md
description: Handle feedback for your Azure application offer from the Microsoft
Previously updated : 07/01/2021 Last updated : 9/22/2022
This article explains how to access feedback from the Microsoft Azure Marketplace review team in [Azure DevOps](https://azure.microsoft.com/services/devops/). If critical issues are found in your Azure application offer during the **Microsoft review** step, you can sign into this system to view detailed information about these issues (review feedback). After you fix all issues, you must resubmit your offer to continue to publish it on Azure Marketplace. The following diagram illustrates how this feedback process relates to the publishing process.
-![Review feedback process](media/azure-app/review-feedback-process.png)
+You can [download our TTK](/azure/azure-resource-manager/templates/test-toolkit) and test your offers locally before you submit them, to help ensure that they'll pass once you go live.
+ Typically, review issues are referenced as a pull request (PR). Each PR is linked to an online Azure DevOps item, which contains details about the issue. The following image displays an example of the Partner Center experience if issues are found during reviews.
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
You can create a project in many geographies in the public cloud.
**Geography** | **Metadata storage location**
--- | ---
Africa | South Africa or North Africa
-Asia Pacific | East Asia or Southeast Asia
+Asia Pacific | East Asia
Australia | Australia East or Australia Southeast
Brazil | Brazil South
Canada | Canada Central or Canada East
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
You can disable HA on a server after you create it. Billing stops immediately.
You need to be able to mitigate downtime for your application even when you're not using HA. Service downtime, like scheduled patches, minor version upgrades, or customer-initiated operations like scaling of compute can be performed during scheduled maintenance windows. To mitigate application impact for Azure-initiated maintenance tasks, you can schedule them on a day of the week and time that minimizes the impact on the application.</br> - **Can I use a read replica for an HA-enabled server?**</br>
-Read replicas aren't supported for HA servers. This feature is on our roadmap, and we're working to make it available soon.</br>
+Yes, read replicas are supported for HA servers.</br>
- **Can I use Data-in Replication for HA servers?**</br> Data-in Replication isn't supported for HA servers. But Data-in Replication for HA servers is on our roadmap and will be available soon. For now, if you want to use Data-in Replication for migration, you can follow these steps:
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
Azure Database for MySQL Flexible Server provides the **Replication lag in secon
If you see increased replication lag, refer to [troubleshooting replication latency](./../howto-troubleshoot-replication-latency.md) to troubleshoot and understand possible causes.
+>[!IMPORTANT]
+>Read Replica on an HA server uses storage-based replication technology, which no longer uses the 'SLAVE_IO_RUNNING' metric available in MySQL's 'SHOW SLAVE STATUS' command. This metric will always be displayed as "No" and isn't indicative of replication status.
+ ## Stop replication You can stop replication between a source and a replica. After replication is stopped between a source server and a read replica, the replica becomes a standalone server. The data in the standalone server is the data that was available on the replica at the time the stop replication command was started. The standalone server doesn't catch up with the source server.
If GTID is enabled on a source server (`gtid_mode` = ON), newly created replicas
| Scenario | Limitation/Consideration |
|:-|:-|
-| Replica on server with HA enabled | Not supported |
| Replica on server in Burstable Pricing Tier | Not supported |
| Cross region read replication | Not supported |
| Pricing | The cost of running the replica server is based on the region where the replica server is running |
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
In this article, you will learn how to create and manage read replicas in the Az
> [!Note]
-> * Replica is not supported on high availability enabled server.
->
> * If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID based replication. To learn more refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) ## Azure CLI
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
Last updated 06/17/2021
In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL flexible server using the Azure portal. > [!Note]
->
-> * Replica is not supported on high availability enabled server.
->
+>
> * If GTID is enabled on a primary server (`gtid_mode` = ON), newly created replicas will also have GTID enabled and use GTID based replication. To learn more refer to [Global transaction identifier (GTID)](concepts-read-replicas.md#global-transaction-identifier-gtid) ## Prerequisites
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Last updated 02/10/2022
This sample CLI script performs restart, start and stop operations on an Azure Database for MySQL - Flexible Server. > [!IMPORTANT]
-> When you **Stop** the server it remains in that state for the next 7 days in a stretch. If you do not manually **Start** it during this time, the server will automatically be started at the end of 7 days. You can chose to **Stop** it again if you are not using the server.
+> When you **Stop** the server, it remains in that state for the next 30 days. If you do not manually **Start** it during this time, the server will automatically be started at the end of 30 days. You can choose to **Stop** it again if you are not using the server.
During the time server is stopped, no management operations can be performed on the server. In order to change any configuration settings on the server, you will need to start the server.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## September 2022
+- **Read replica for HA enabled Azure Database for MySQL - Flexible Server (General Availability)**
+
+ The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate the source server to up to 10 replicas. This functionality is now extended to support HA-enabled servers within the same region. [Learn more](concepts-read-replicas.md)
+++
+- **Azure Active Directory authentication for Azure Database for MySQL – Flexible Server (Public Preview)**
+
+ You can now authenticate to Azure Database for MySQL - Flexible Server using Microsoft Azure Active Directory (Azure AD) identities. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. [Learn More](concepts-azure-ad-authentication.md)
++ - **Customer managed keys data encryption – Azure Database for MySQL – Flexible Server (Preview)**
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
Azure Database for MySQL users can only use the predefined certificate to connec
Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for MySQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your MySQL servers.
-The new certificate is rolled out and in effect as of February 15, 2021 (02/15/2021).
-
-#### What change was performed on February 15, 2021 (02/15/2021)?
-
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers don't need to change anything and there's no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
- #### Do I need to make any changes on my client to maintain connectivity?
-> [!NOTE]
-> If you are using PHP driver with [enableRedirect](./how-to-redirection.md) kindly follow the steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate) to avoid connection failures.
-
-No change is required on client side. If you followed steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
+If you followed steps mentioned under [Create a combined CA certificate](#create-a-combined-ca-certificate) below, you can continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **To maintain connectivity, we recommend that you retain the BaltimoreCyberTrustRoot in your combined CA certificate until further notice.**
###### Create a combined CA certificate
To avoid interruption of your application's availability as a result of certif
> [!NOTE] > Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
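Creating the combined CA certificate is a concatenation of the two PEM files. As a minimal Python sketch, assuming both certificates have been downloaded to the working directory under the names below:

```python
from pathlib import Path

# Concatenate root CA certificate files into one combined PEM file.
# The input file names are assumptions based on the certificate names above.
def combine_ca_certs(*cert_paths, out="combined-ca.pem"):
    text = "".join(Path(p).read_text() for p in cert_paths)
    Path(out).write_text(text)
    return out

# Usage (assumes both certificates were downloaded first):
# combine_ca_certs("BaltimoreCyberTrustRoot.crt.pem", "DigiCertGlobalRootG2.crt.pem")
```

Point your client's SSL CA option (for example, the MySQL client's `--ssl-ca` flag) at the combined file when connecting.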
-#### Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-
-We evaluated the customer readiness for this change and realized that many customers were looking for extra lead time to manage this change. To provide more lead time to customers for readiness, we decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, providing sufficient lead time to the customers and end users.
-
-Our recommendation to users is to use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
- #### What if we removed the BaltimoreCyberTrustRoot certificate? You'll start to encounter connectivity errors while connecting to your Azure Database for MySQL server. You'll need to [configure SSL](how-to-configure-ssl.md) with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md
Azure Database for MySQL is a relational database service in the Microsoft cloud
This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## September 2022
+
+Clients' devices using SSL to connect to Azure Database for MySQL – Single Server instances must have their CA certificates updated. To address compliance requirements, starting October 2022 the CA certificates will be changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2.
+To avoid interruption of your application's availability as a result of certificates being unexpectedly revoked, or to update a certificate that has been revoked, use the steps explained in the [article](./concepts-certificate-rotation.md#create-a-combined-ca-certificate) to maintain connectivity.
+Use the steps mentioned to [create a combined certificate](./concepts-certificate-rotation.md#create-a-combined-ca-certificate) and connect to your server, but don't remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+ ## May 2022 Enabled the ability to change the server parameter `innodb_ft_server_stopword_table` from the Portal/CLI.
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
description: Learn about the upcoming changes of root certificate changes that w
--++ Previously updated : 06/24/2022 Last updated : 09/20/2022 # Understanding the changes in the Root CA change for Azure Database for PostgreSQL Single server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-Azure Database for PostgreSQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
+Azure Database for PostgreSQL Single Server is planning the root certificate change starting **October 2022 (10/2022)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
## Why root certificate update is required?
-Azure database for PostgreSQL users can only use the predefined certificate to connect to their PostgreSQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports of multiple certificates issued by CA vendors to be non-compliant.
+Historically, Azure Database for PostgreSQL users could only use the predefined certificate to connect to their PostgreSQL server, which is located [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem). However, the [Certificate Authority (CA) Browser forum](https://cabforum.org/) recently published reports that multiple certificates issued by CA vendors are non-compliant.
As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your PostgreSQL servers.
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+The new certificate will be rolled out and in effect starting October 2022 (10/2022).
-## What change was performed on February 15, 2021 (02/15/2021)?
+## What change will be performed starting October 2022 (10/2022)?
-On February 15, 2021, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was replaced with a **compliant version** of the same [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) to ensure existing customers do not need to change anything and there is no impact to their connections to the server. During this change, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) was **not replaced** with [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and that change is deferred to allow more time for customers to make the change.
+Starting October 2022, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) will be replaced with a **compliant version** known as the [DigiCertGlobalRootG2 root certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). If your applications use **verify-ca** or **verify-full** as the value of the [**sslmode** parameter](https://www.postgresql.org/docs/current/libpq-ssl.html) in the database client connection, you'll need to follow the directions below to add the new certificate to your certificate store to maintain connectivity.
## Do I need to make any changes on my client to maintain connectivity?
-There is no change required on client side. if you followed our previous recommendation below, you will still be able to continue to connect as long as **BaltimoreCyberTrustRoot certificate is not removed** from the combined CA certificate. **We recommend to not remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice to maintain connectivity.**
+There are no code or application changes required on the client side. If you follow our previous recommendation below, you can continue to connect as long as the **BaltimoreCyberTrustRoot certificate isn't removed** from the combined CA certificate. **To maintain connectivity, we recommend that you don't remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice.**
### Previous Recommendation * Download BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA from links below: * https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem * https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
+* Optionally, to prevent future disruption, it is also recommended to add the following roots to the trusted store:
+ * [DigiCert Global Root G3](https://www.digicert.com/kb/digicert-root-certificates.htm) (thumbprint: 7e04de896a3e666d00e687d33ffad93be83d349e)
+ * [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) (thumbprint: 73a5e64a3bff8316ff0edccc618a906e4eae4d74)
+ * [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) (thumbprint: 999a64c37ff47d9fab95f14769891460eec4c3c5)
* Generate a combined CA certificate store with both **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** certificates included. * For Java (PostgreSQL JDBC) users using DefaultJavaSSLFactory, execute:
There is no change required on client side. if you followed our previous recomme
* System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); * System.setProperty("javax.net.ssl.trustStorePassword","password");
- * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate.
+ * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates don't exist, import the missing certificate.
![Azure Database for PostgreSQL .net cert](media/overview/netconnecter-cert.png)
- * For .NET (Npgsql) users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates do not exist, create the missing certificate file.
+ * For .NET (Npgsql) users on Linux using SSL_CERT_DIR, make sure **BaltimoreCyberTrustRoot** and **DigiCertGlobalRootG2** both exist in the directory indicated by SSL_CERT_DIR. If any certificates don't exist, create the missing certificate file.
* For other PostgreSQL client users, you can merge the two CA certificate files using the format below
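As a minimal sketch of the merge step: the file names match the DigiCert downloads listed earlier, but the certificate bodies below are placeholders standing in for the real downloaded PEM content.

```shell
# Placeholder PEM files standing in for the two downloaded root certificates;
# in practice, download the real files from the DigiCert URLs listed above.
printf -- '-----BEGIN CERTIFICATE-----\nBALTIMORE-PLACEHOLDER\n-----END CERTIFICATE-----\n' \
  > BaltimoreCyberTrustRoot.crt.pem
printf -- '-----BEGIN CERTIFICATE-----\nDIGICERT-G2-PLACEHOLDER\n-----END CERTIFICATE-----\n' \
  > DigiCertGlobalRootG2.crt.pem

# Merging is a plain concatenation: both PEM blocks end up in one CA file.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-ca.pem

# The combined file should now contain two certificate blocks.
grep -c 'BEGIN CERTIFICATE' combined-ca.pem   # prints 2
```

The combined file can then be passed to the client's CA option, for example libpq's `sslrootcert=combined-ca.pem` (the server host in any such connection string is hypothetical).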
There is no change required on client side. if you followed our previous recomme
* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. > [!NOTE]
-> Please do not drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
-
-## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-
-We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
-
-Our recommendations to users is, use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+> Please don't drop or alter the **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it's safe to drop the Baltimore certificate.
## What if we removed the BaltimoreCyberTrustRoot certificate?
You will start to see connectivity errors while connecting to your Azure Database fo
### 1. If I am not using SSL/TLS, do I still need to update the root CA?
-No actions required if you are not using SSL/TLS.
+No actions required if you aren't using SSL/TLS.
### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?
-No, you do not need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
+No, you don't need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server.
### 3. How do I know if I'm using SSL/TLS with root certificate verification? You can identify whether your connections verify the root certificate by reviewing your connection string. - If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate. - If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates.-- If your connection string does not specify sslmode, you do not need to update certificates.
+- If your connection string doesn't specify sslmode, you don't need to update certificates.
If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in PostgreSQL documentation.
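The connection-string check above can be scripted. This is a small sketch (the connection string is hypothetical) that flags only the `verify-ca` and `verify-full` modes as needing the certificate update, since those are the only modes that validate the root certificate:

```shell
# Hypothetical connection string, used only for illustration.
conn='host=mydemoserver.postgres.database.azure.com dbname=postgres sslmode=verify-full'

# Only verify-ca and verify-full verify the root certificate,
# so only those modes require the root CA update.
case "$conn" in
  *sslmode=verify-ca*|*sslmode=verify-full*)
    echo "certificate update required" ;;
  *)
    echo "no certificate update needed" ;;
esac
# prints: certificate update required
```

With `sslmode=require` (or no `sslmode` at all) the same check prints "no certificate update needed", matching the guidance above.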
For connector using Self-hosted Integration Runtime where you explicitly include
### 7. Do I need to plan a database server maintenance downtime for this change?
-No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change.
+No. Since the change here is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
-### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
-
-For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
+### 8. If I create a new server after October 2022 (10/2022), will I be impacted?
+For servers created after October 2022 (10/2022), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) together with the new [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) root certificate in your database client SSL certificate store for your applications to connect using SSL.
### 9. How often does Microsoft update their certificates or what is the expiry policy?
To verify if you are using SSL connection to connect to the server refer [SSL ve
### 12. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?
-No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
+No. There's no action needed if your certificate file already has the **DigiCertGlobalRootG2**.
### 13. What if you are using the Docker image of PgBouncer sidecar provided by Microsoft?-
-A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published to below [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
+A new Docker image that supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) certificates is published [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (latest tag). You can pull this new image to avoid any interruption in connectivity starting October 2022.
### 14. What if I have further questions?
+If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, create a [support request](https://learn.microsoft.com/azure/azure-portal/supportability/how-to-create-azure-support-request):
+* For *Issue type*, select *Technical*.
+* For *Subscription*, select your *subscription*.
+* For *Service*, select *My Services*, then select *Azure Database for PostgreSQL – Single Server*.
+* For *Problem type*, select *Security*.
+* For *Problem subtype*, select *Azure Encryption and Infrastructure Double Encryption*.
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com)
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
Most language client libraries used to connect to PostgreSQL server are external
| Python | [psycopg](https://www.psycopg.org/) | DB API 2.0-compliant | [Download](https://sourceforge.net/projects/adodbapi/) | | PHP | [php-pgsql](https://secure.php.net/manual/en/book.pgsql.php) | Database extension | [Install](https://secure.php.net/manual/en/pgsql.installation.php) | | Node.js | [Pg npm package](https://www.npmjs.com/package/pg) | Pure JavaScript non-blocking client | [Install](https://www.npmjs.com/package/pg) |
-| Java | [JDBC](https://jdbc.postgresql.org/) | Type 4 JDBC driver | [Download](https://jdbc.postgresql.org/download.html)  |
+| Java | [JDBC](https://jdbc.postgresql.org/) | Type 4 JDBC driver | [Download](https://jdbc.postgresql.org/download/)  |
| Ruby | [Pg gem](https://deveiate.org/code/pg/) | Ruby Interface | [Download](https://rubygems.org/downloads/pg-0.20.0.gem) | | Go | [Package pq](https://godoc.org/github.com/lib/pq) | Pure Go postgres driver | [Install](https://github.com/lib/pq/blob/master/README.md) | | C\#/ .NET | [Npgsql](https://www.npgsql.org/) | ADO.NET Data Provider | [Download](https://dotnet.microsoft.com/download) |
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
Previously updated : 09/21/2022 Last updated : 09/22/2022
Use either of the following deployment checklists during the setup, or for troub
2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected. 3. **Allow public client flows** is enabled.
-1. If delegated authentication is used, in the Power BI Azure AD tenant validate the following Power BI admin user settings:
+1. If delegated authentication is used, in the Power BI Azure AD tenant, validate the following Power BI admin user settings:
1. The user is assigned to the Power BI administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user. 3. If the user is recently created, sign in with the user at least once, to make sure that the password is reset successfully, and the user can successfully initiate the session.
To create and run a new scan by using the self-hosted integration runtime, perfo
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot of the instance of Azure Key Vault.":::
-1. Enter a name for the secret. For **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
+1. Enter a name for the secret. For **Value**, type the newly created secret for the App registration. Select **Create** to complete.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot that shows how to generate a secret in Azure Key Vault.":::
+
+2. Under **Certificates & secrets**, create a new secret and save it securely for next steps.
+
+3. In Azure portal, navigate to your Azure key vault.
+
+4. Select **Settings** > **Secrets** and select **+ Generate/Import**.
+
 :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot that shows how to navigate to Azure Key Vault.":::
+
+5. Enter a name for the secret and for **Value**, type the newly created secret for the App registration. Select **Create** to complete.
+
 :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret-spn.png" alt-text="Screenshot that shows how to generate an Azure Key Vault secret for SPN.":::
1. If your key vault isn't connected to Microsoft Purview yet, you need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
To create and run a new scan by using the self-hosted integration runtime, perfo
- **Tenant ID**: Your Power BI tenant ID - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-authentication.png" alt-text="Screenshot of the new credential menu, showing Power BI credential for SPN with all required values supplied.":::
+ 1. Select **Test connection** before continuing to the next steps. If the test fails, select **View Report** to see the detailed status and troubleshoot the problem:
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 09/21/2022 Last updated : 09/22/2022
In Azure Active Directory Tenant, where Power BI tenant is located:
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-group-member.png" alt-text="Screenshot of how to add the catalog's managed instance to group.":::
- - If you are using **delegated authentication** or **service principal** as authentication method, add your **service princial** to this security group. Select **Members**, then select **+ Add members**.
+ - If you are using **delegated authentication** or **service principal** as authentication method, add your **service principal** to this security group. Select **Members**, then select **+ Add members**.
5. Search for your Microsoft Purview managed identity or service principal and select it.
For more information about Microsoft Purview network settings, see [Use private
To create and run a new scan, do the following:
-1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of Client ID(App ID).
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. Take note of the Client ID (App ID).
 :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
To create and run a new scan, do the following:
1. Under **Advanced settings**, enable **Allow Public client flows**.
-2. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+2. Under **Certificates & secrets**, create a new secret and save it securely for next steps.
-1. Navigate to **Sources**.
+3. In Azure portal, navigate to your Azure key vault.
-1. Select the registered Power BI source.
+4. Select **Settings** > **Secrets** and select **+ Generate/Import**.
-1. Select **+ New scan**.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot that shows how to navigate to Azure Key Vault.":::
-1. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+5. Enter a name for the secret and for **Value**, type the newly created secret for the App registration. Select **Create** to complete.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret-spn.png" alt-text="Screenshot that shows how to generate an Azure Key Vault secret for SPN.":::
+
+6. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+
+7. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+
+8. Navigate to **Sources**.
+
+9. Select the registered Power BI source.
+
+10. Select **+ New scan**.
+
+11. Give your scan a name. Then select the option to include or exclude the personal workspaces.
>[!Note] > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of Power BI source.
-1. Select your self-hosted integration runtime from the drop-down list.
+12. Select your self-hosted integration runtime from the drop-down list.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR for same tenant.":::
-1. For the **Credential**, select **service principal** and select **+ New** to create a new credential.
+13. For the **Credential**, select **service principal** and select **+ New** to create a new credential.
-1. Create a new credential and provide required parameters:
+14. Create a new credential and provide required parameters:
- **Name**: Provide a unique name for credential - **Authentication method**: Service principal - **Tenant ID**: Your Power BI tenant ID - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
-
-1. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-authentication.png" alt-text="Screenshot of the new credential menu, showing Power BI credential for SPN with all required values supplied.":::
+
+15. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication required. 2. Assets (+ lineage) - Failed status means the Microsoft Purview - Power BI authorization has failed. Make sure the Microsoft Purview managed identity is added to the security group associated in Power BI admin portal. 3. Detailed metadata (Enhanced) - Failed status means the Power BI admin portal is disabled for the following setting - **Enhance admin APIs responses with detailed metadata** :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
-1. Set up a scan trigger. Your options are **Recurring**, and **Once**.
+16. Set up a scan trigger. Your options are **Recurring**, and **Once**.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Microsoft Purview scan scheduler.":::
-1. On **Review new scan**, select **Save and run** to launch your scan.
+17. On **Review new scan**, select **Save and run** to launch your scan.
### Create scan for same-tenant using self-hosted IR with delegated authentication
To create and run a new scan, do the following:
- **Client ID**: Use Service Principal Client ID (App ID) you created earlier - **User name**: Provide the username of Power BI Administrator you created earlier - **Password**: Select the appropriate Key vault connection and the **Secret name** where the Power BI account password was saved earlier.+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-delegated-authentication.png" alt-text="Screenshot of the new credential menu, showing Power BI credential with all required values supplied.":::
-1. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
+2. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication required. 2. Assets (+ lineage) - Failed status means the Microsoft Purview - Power BI authorization has failed. Make sure the Microsoft Purview managed identity is added to the security group associated in Power BI admin portal. 3. Detailed metadata (Enhanced) - Failed status means the Power BI admin portal is disabled for the following setting - **Enhance admin APIs responses with detailed metadata** :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
-1. Set up a scan trigger. Your options are **Recurring**, and **Once**.
+3. Set up a scan trigger. Your options are **Recurring**, and **Once**.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Microsoft Purview scan scheduler.":::
-1. On **Review new scan**, select **Save and run** to launch your scan.
+4. On **Review new scan**, select **Save and run** to launch your scan.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of Save and run Power BI source.":::
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| **Domain solution content** | | | | - [Apache Log4j Vulnerability Detection](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview | | - [Cybersecurity Maturity Model Certification (CMMC)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
-| - [IoT/OT Threat Monitoring with Defender for IoT](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
+| - [Microsoft Defender for IoT](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
| - [Maturity Model for Event Log Management M2131](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview | | - [Microsoft Insider Risk Management (IRM)](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview | | - [Microsoft Sentinel Deception](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Public Preview |
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
Over time, this list will change and grow, just as Azure does. Make sure to chec
|Service|Description| |--|--| |[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)| A cloud workload protection solution that provides security management and advanced threat protection across hybrid cloud workloads.|
+|[Microsoft Sentinel](../../sentinel/overview.md)| A scalable, cloud-native solution that delivers intelligent security analytics and threat intelligence across the enterprise.|
|[Azure Key Vault](../../key-vault/general/overview.md)| A secure secrets store for the passwords, connection strings, and other information you need to keep your apps working. | |[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md)|A monitoring service that collects telemetry and other data, and provides a query language and analytics engine to deliver operational insights for your apps and resources. Can be used alone or with other services such as Defender for Cloud. | |[Azure Dev/Test Labs](../../devtest-labs/devtest-lab-overview.md)|A service that helps developers and testers quickly create environments in Azure while minimizing waste and controlling cost. |
Over time, this list will change and grow, just as Azure does. Make sure to chec
| [Network&nbsp;Security&nbsp;Groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md)| A network-based access control feature using a 5-tuple to make allow or deny decisions. | | [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md)| A network device used as a VPN endpoint to allow cross-premises access to Azure Virtual Networks. | | [Azure Application Gateway](../../application-gateway/overview.md)|An advanced web application load balancer that can route based on URL and perform SSL-offloading. |
-|[Web application firewall](../../web-application-firewall/afds/afds-overview.md) (WAF)|A feature of Application Gateway that provides centralized protection of your web applications from common exploits and vulnerabilities|
+|[Web application firewall](../../web-application-firewall/overview.md) (WAF)|A feature that provides centralized protection of your web applications from common exploits and vulnerabilities|
| [Azure Load Balancer](../../load-balancer/load-balancer-overview.md)|A TCP/UDP application network load balancer. |
| [Azure ExpressRoute](../../expressroute/expressroute-introduction.md)| A dedicated WAN link between on-premises networks and Azure Virtual Networks. |
| [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)| A global DNS load balancer.|
Over time, this list will change and grow, just as Azure does. Make sure to chec
|[Azure DDoS protection](../../ddos-protection/ddos-protection-overview.md)|Combined with application design best practices, provides defense against DDoS attacks.|
|[Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)|Extends your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection.|
|[Azure Private Link](../../private-link/private-link-overview.md)|Provides private connectivity from a virtual network to Azure platform as a service (PaaS), customer-owned, or Microsoft partner services.|
+|[Azure Bastion](../../bastion/bastion-overview.md)|A service you deploy that lets you connect to a virtual machine using your browser and the Azure portal.|
+|[Azure Front Door](../../frontdoor/front-door-application-security.md)|Provides web application protection capability to safeguard your web applications from network attacks and common web vulnerabilities exploits like SQL Injection or Cross Site Scripting (XSS).|
++
sentinel Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-advanced-threat-monitoring.md
+
+ Title: Investigate and detect threats for IoT devices | Microsoft Docs
+description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Last updated : 09/18/2022
+# Tutorial: Investigate and detect threats for IoT devices
+
+The integration between Microsoft Defender for IoT and Microsoft Sentinel enables SOC teams to efficiently and effectively detect and respond to Operational Technology (OT) threats. Enhance your security capabilities with the **Microsoft Defender for IoT** solution, a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks. While Defender for IoT supports both Enterprise IoT and OT networks, the **Microsoft Defender for IoT** solution supports OT networks only.
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+>
+> * Install the **Microsoft Defender for IoT** solution in your Microsoft Sentinel workspace
+> * Learn how to investigate Defender for IoT alerts in Microsoft Sentinel incidents
+> * Learn about the analytics rules, workbooks, and playbooks deployed to your Microsoft Sentinel workspace with the **Microsoft Defender for IoT** solution
+
+## Prerequisites
+
+Before you start, make sure you have:
+
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](roles.md).
+
+- Completed [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md).
+
+## Install the Defender for IoT solution
+
+Microsoft Sentinel [solutions](sentinel-solutions.md) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+
+The **Microsoft Defender for IoT** solution integrates Defender for IoT data with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and OT-optimized playbooks for automated response and prevention capabilities.
+
+**To install the solution**:
+
+1. In Microsoft Sentinel, under **Content management**, select **Content hub** and then locate the **Microsoft Defender for IoT** solution.
+
+1. At the bottom right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the related security content that will be deployed.
+
+1. When you're done, select **Review + Create** to install the solution.
+
+For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md) and [Centrally discover and deploy out-of-the-box content and solutions](sentinel-solutions-deploy.md).
+
+## Detect threats out-of-the-box with Defender for IoT data
+
+The **Microsoft Defender for IoT** data connector includes a default *Microsoft Security* rule named **Create incidents based on Azure Defender for IOT alerts**, which automatically creates new incidents for any new Defender for IoT alerts detected.
+
+The **Microsoft Defender for IoT** solution includes a more detailed set of out-of-the-box analytics rules, which are built specifically for Defender for IoT data and fine-tune the incidents created in Microsoft Sentinel for relevant alerts.
+
+**To use out-of-the-box Defender for IoT alerts**:
+
+1. On the Microsoft Sentinel **Analytics** page, search for and disable the **Create incidents based on Azure Defender for IOT alerts** rule. This step prevents duplicate incidents from being created in Microsoft Sentinel for the same alerts.
+
+1. Search for and enable any of the following out-of-the-box analytics rules, installed with the **Microsoft Defender for IoT** solution:
+
+ | Rule Name | Description|
+ | - | -|
+ | **Illegal function codes for ICS/SCADA traffic** | Illegal function codes in supervisory control and data acquisition (SCADA) equipment may indicate one of the following: <br><br>- Improper application configuration, such as due to a firmware update or reinstallation. <br>- Malicious activity. For example, a cyber threat that attempts to use illegal values within a protocol to exploit a vulnerability in the programmable logic controller (PLC), such as a buffer overflow. |
+ | **Firmware update** | Unauthorized firmware updates may indicate malicious activity on the network, such as a cyber threat that attempts to manipulate PLC firmware to compromise PLC function. |
+ | **Unauthorized PLC changes** | Unauthorized changes to PLC ladder logic code may be one of the following: <br><br>- An indication of new functionality in the PLC. <br>- Improper configuration of an application, such as due to a firmware update or reinstallation. <br>- Malicious activity on the network, such as a cyber threat that attempts to manipulate PLC programming to compromise PLC function. |
    | **PLC insecure key state** | The new mode may indicate that the PLC is not secure. Leaving the PLC in an insecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. <br><br>If the PLC is compromised, devices and processes that interact with it may be impacted, which may affect overall system security and safety. |
+ | **PLC stop** | The PLC stop command may indicate an improper configuration of an application that has caused the PLC to stop functioning, or malicious activity on the network. For example, a cyber threat that attempts to manipulate PLC programming to affect the functionality of the network. |
    | **Suspicious malware found in the network** | Suspicious malware found on the network indicates an attempt to compromise production systems. |
+ | **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or reinstallation <br>- Malicious activity on the network for reconnaissance |
+ | **Internet connectivity** | An OT device communicating with internet addresses may indicate an improper application configuration, such as anti-virus software attempting to download updates from an external server, or malicious activity on the network. |
+ | **Unauthorized device in the SCADA network** | An unauthorized device on the network may be a legitimate, new device recently installed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Unauthorized DHCP configuration in the SCADA network** | An unauthorized DHCP configuration on the network may indicate a new, unauthorized device operating on the network. <br><br>This may be a legitimate, new device recently deployed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
    | **Excessive login attempts** | Excessive sign-in attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
+ | **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. |
+ | **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
+ | **No traffic on Sensor Detected** | A sensor that no longer detects network traffic indicates that the system may be insecure. |
+
+For more information, see:
+
+- [Detect threats out-of-the-box](detect-threats-built-in.md)
+- [Create custom analytics rules to detect threats](detect-threats-custom.md)
+
+> [!TIP]
+> You can also manually create and manage analytics rules in the Microsoft Sentinel **Analytics > Active rules** page. For example, you might use this option to use the out-of-the box analytics rules as templates for customized rules, or to configure analytics rules for scenarios not yet covered by the solution.
+>
+> For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
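Before enabling or tuning these rules, it can help to preview the raw Defender for IoT alert data they act on in Log Analytics. The following sketch assumes alerts are streaming into the `SecurityAlert` table, as set up in the connector tutorial; the `ProviderName` filter value is an assumption to confirm in your own workspace:

```kusto
// Sketch: preview recent Defender for IoT alerts before tuning analytics rules.
// The ProviderName value below is an assumption -- confirm it first by running
// "SecurityAlert | distinct ProviderName" in your workspace.
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProviderName == "IoTSecurity"
| project TimeGenerated, AlertName, AlertSeverity, CompromisedEntity
| order by TimeGenerated desc
```

The `project` line keeps only the columns useful for a quick triage view; drop it to see the full alert schema.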
+
+## Investigate Defender for IoT incidents
+
+After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel as you would other incidents.
+
+**To investigate Microsoft Defender for IoT incidents**:
+
+1. In Microsoft Sentinel, go to the **Incidents** page.
+
+1. Above the incident grid, select the **Product name** filter and clear the **Select all** option. Then, select **Microsoft Defender for IoT** to view only incidents triggered by Defender for IoT alerts. For example:
+
+ :::image type="content" source="media/iot-solution/filter-incidents-defender-for-iot.png" alt-text="Screenshot of filtering incidents by product name for Defender for IoT devices.":::
+
+1. Select a specific incident to begin your investigation.
+
+ In the incident details pane on the right, view details such as incident severity, a summary of the entities involved, any mapped MITRE ATT&CK tactics or techniques, and more.
+
+ :::image type="content" source="media/iot-solution/investigate-iot-incidents.png" alt-text="Screenshot of a Microsoft Defender for IoT incident in Microsoft Sentinel.":::
+
+ > [!TIP]
+ > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane.
+
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).
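As an alternative to filtering the **Incidents** grid in the portal, you can run an equivalent query over the `SecurityIncident` table in Log Analytics. In this sketch, the `alertProductNames` property inside `AdditionalData` is an assumption; inspect one incident's `AdditionalData` column to confirm the exact property name in your workspace:

```kusto
// Sketch: list open Microsoft Sentinel incidents whose alerts came from
// Defender for IoT. The alertProductNames property is an assumption --
// check AdditionalData on a sample incident to verify it.
SecurityIncident
| where Status != "Closed"
| where tostring(AdditionalData.alertProductNames) has "Defender for IoT"
| project CreatedTime, IncidentNumber, Title, Severity
| order by CreatedTime desc
```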
+
+### Investigate further with IoT device entities
+
+When investigating an incident in Microsoft Sentinel, in an incident details pane, select an IoT device entity from the **Entities** list to open its device entity page. You can identify an IoT device by the IoT device icon: :::image type="icon" source="media/iot-solution/iot-device-icon.png" border="false":::
+
+If you don't see your IoT device entity right away, select **View full details** under the entities listed to open the full incident page. In the **Entities** tab, select an IoT device to open its entity page. For example:
+
+ :::image type="content" source="media/iot-solution/incident-full-details-iot-device.png" alt-text="Screenshot of a full detail incident page.":::
+
+The IoT device entity page provides contextual device information, with basic device details and device owner contact information. The device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
++
+For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
+
+You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
++
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).
+
+## Visualize and monitor Defender for IoT data
+
+To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution.
+
+The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
+
+View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](get-visibility.md).
+
+The following table describes the workbooks included in the **Microsoft Defender for IoT** solution:
+
+|Workbook |Description |Logs |
+||||
+|**Overview** | Dashboard displaying a summary of key metrics for device inventory, threat detection and vulnerabilities. | Uses data from Azure Resource Graph (ARG) |
+|**Device Inventory** | Displays data such as: OT device name, type, IP address, MAC address, model, OS, serial number, vendor, protocols, open alerts, and CVEs and recommendations per device. Can be filtered by site, zone, and sensor. | Uses data from Azure Resource Graph (ARG) |
+|**Incidents** | Displays data such as: <br><br>- Incident Metrics, Topmost Incident, Incident over time, Incident by Protocol, Incident by Device Type, Incident by Vendor, and Incident by IP address.<br><br>- Incident by Severity, Incident Mean time to respond, Incident Mean time to resolve and Incident close reasons. | Uses data from the following log: SecurityAlert |
+|**Alerts** | Displays data such as: Alert Metrics, Top Alerts, Alert over time, Alert by Severity, Alert by Engine, Alert by Device Type, Alert by Vendor and Alert by IP address. | Uses data from Azure Resource Graph (ARG) |
+|**MITRE ATT&CK® for ICS** | Displays data such as: Tactic Count, Tactic Details, Tactic over time, Technique Count. | Uses data from the following log: SecurityAlert |
+|**Vulnerabilities** | Displays vulnerabilities and CVEs for vulnerable devices. Can be filtered by device site and CVE severity. | Uses data from Azure Resource Graph (ARG) |
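For the workbooks that read from the `SecurityAlert` log, you can reproduce or customize their metrics directly in Log Analytics. This sketch approximates an alert-by-severity trend similar to the **Alerts** workbook view (exact workbook queries may differ, and some views read from Azure Resource Graph instead):

```kusto
// Sketch: daily alert counts by severity over the last 30 days,
// approximating the "Alert by Severity" workbook metric.
SecurityAlert
| where TimeGenerated > ago(30d)
| summarize Alerts = count() by AlertSeverity, bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```

Rendering this as a time chart in Log Analytics gives a quick severity trend without opening the workbook.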
+
+## Automate response to Defender for IoT alerts
+
+Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
+
+The [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution includes out-of-the-box playbooks that provide the following functionality:
+
+- [Automatically close incidents](#automatically-close-incidents)
+- [Send email notifications by production line](#send-email-notifications-by-production-line)
+- [Create a new ServiceNow ticket](#create-a-new-servicenow-ticket)
+- [Update alert statuses in Defender for IoT](#update-alert-statuses-in-defender-for-iot)
+- [Automate workflows for incidents with active CVEs](#automate-workflows-for-incidents-with-active-cves)
+- [Send email to the IoT/OT device owner](#send-email-to-the-iotot-device-owner)
+- [Triage incidents involving highly important devices](#triage-incidents-involving-highly-important-devices)
+
+Before using the out-of-the-box playbooks, make sure to perform the prerequisite steps as listed [below](#playbook-prerequisites).
+
+For more information, see:
+
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+
+### Playbook prerequisites
+
+Before using the out-of-the-box playbooks, make sure you perform the following prerequisites, as needed for each playbook:
+
+- [Ensure valid playbook connections](#ensure-valid-playbook-connections)
+- [Add a required role to your subscription](#add-a-required-role-to-your-subscription)
+- [Connect your incidents, relevant analytics rules, and the playbook](#connect-your-incidents-relevant-analytics-rules-and-the-playbook)
+
+#### Ensure valid playbook connections
+
+This procedure helps ensure that each connection step in your playbook has a valid connection, and is required for all solution playbooks.
+
+**To ensure valid connections**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Logic app designer**. Expand each step in the logic app to check for invalid connections, which are indicated by an orange warning triangle. For example:
+
+ :::image type="content" source="media/iot-solution/connection-steps.png" alt-text="Screenshot of the default AD4IOT AutoAlertStatusSync playbook." lightbox="media/iot-solution/connection-steps.png":::
+
+ > [!IMPORTANT]
+ > Make sure to expand each step in the logic app. Invalid connections may be hiding inside other steps.
+
+1. Select **Save**.
+
+#### Add a required role to your subscription
+
+This procedure describes how to add a required role to the Azure subscription where the playbook is installed, and is required only for the following playbooks:
+
+- [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot)
+- [AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves)
+- [AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner)
+- [AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices)
+
+Required roles differ per playbook, but the steps remain the same.
+
+**To add a required role to your subscription**:
+
+1. In Microsoft Sentinel, open the playbook from **Automation** > **Active playbooks**.
+
+1. Select a playbook to open it as a Logic app.
+
+1. With the playbook opened as a Logic app, select **Identity > System assigned**, and then in the **Permissions** area, select the **Azure role assignments** button.
+
+1. In the **Azure role assignments** page, select **Add role assignment**.
+
+1. In the **Add role assignment** pane:
+
+ 1. Define the **Scope** as **Subscription**.
+
+ 1. From the dropdown, select the **Subscription** where your playbook is installed.
+
+    1. From the **Role** dropdown, select one of the following roles, depending on the playbook you're working with:
+
+ |Playbook name |Role |
+ |||
+ |[AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) |Security Admin |
+ |[AD4IoT-CVEAutoWorkflow](#automate-workflows-for-incidents-with-active-cves) |Reader |
+ |[AD4IoT-SendEmailtoIoTOwner](#send-email-to-the-iotot-device-owner) |Reader |
+ |[AD4IoT-AutoTriageIncident](#triage-incidents-involving-highly-important-devices) |Reader |
+
+1. When you're done, select **Save**.
+
+#### Connect your incidents, relevant analytics rules, and the playbook
+
+This procedure describes how to configure a Microsoft Sentinel analytics rule to automatically run your playbooks based on an incident trigger, and is required for all solution playbooks.
+
+**To add your analytics rule**:
+
+1. In Microsoft Sentinel, go to **Automation** > **Automation rules**.
+
+1. To create a new automation rule, select **Create** > **Automation rule**.
+
+1. In the **Trigger** field, select one of the following triggers, depending on the playbook you're working with:
+
+ - The [AD4IoT-AutoAlertStatusSync](#update-alert-statuses-in-defender-for-iot) playbook: Select the **When an incident is updated** trigger
+ - All other solution playbooks: Select the **When an incident is created** trigger
+
+1. In the **Conditions** area, select **If > Analytic rule name > Contains**, and then select the specific analytics rules relevant for Defender for IoT in your organization.
+
+ For example:
+
+ :::image type="content" source="media/iot-solution/automate-playbook.png" alt-text="Screenshot of a Defender for IoT alert status sync automation rule." lightbox="media/iot-solution/automate-playbook.png":::
+
+ You may be using out-of-the-box analytics rules, or you may have modified the out-of-the-box content, or created your own. For more information, see [Detect threats out-of-the-box with Defender for IoT data](#detect-threats-out-of-the-box-with-defender-for-iot-data).
+
+1. In the **Actions** area, select **Run playbook** > *playbook name*.
+
+1. Select **Run**.
+
+> [!TIP]
+> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](tutorial-respond-threats-playbook.md#run-a-playbook-on-demand).
+
+### Automatically close incidents
+
+**Playbook name**: AD4IoT-AutoCloseIncidents
+
+In some cases, maintenance activities generate alerts in Microsoft Sentinel that can distract a SOC team from handling the real problems. This playbook automatically closes incidents created from such alerts during a specified maintenance period, explicitly parsing the IoT device entity fields.
+
+To use this playbook:
+
+- Enter the relevant time period when the maintenance is expected to occur, and the IP addresses of any relevant assets, for example as listed in an Excel file.
+- Create a watchlist that includes all the asset IP addresses on which alerts should be handled automatically.
+
+### Send email notifications by production line
+
+**Playbook name**: AD4IoT-MailByProductionLine
+
+This playbook sends mail to notify specific stakeholders about alerts and events that occur in your environment.
+
+For example, when you have specific security teams assigned to specific product lines or geographic locations, you'll want that team to be notified about alerts that are relevant to their responsibilities.
+
+To use this playbook, create a watchlist that maps between the sensor names and the mailing addresses of each of the stakeholders you want to alert.
+
+### Create a new ServiceNow ticket
+
+**Playbook name**: AD4IoT-NewAssetServiceNowTicket
+
+Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming.
+
+This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
+
+### Update alert statuses in Defender for IoT
+
+**Playbook name**: AD4IoT-AutoAlertStatusSync
+
+This playbook updates alert statuses in Defender for IoT whenever a related alert in Microsoft Sentinel has a **Status** update.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
+
+### Automate workflows for incidents with active CVEs
+
+**Playbook name**: AD4IoT-CVEAutoWorkflow
+
+This playbook adds active CVEs into the incident comments of affected devices. An automated triage is performed if the CVE is critical, and an email notification is sent to the device owner, as defined on the site level in Defender for IoT.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](../defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Send email to the IoT/OT device owner
+
+**Playbook name**: AD4IoT-SendEmailtoIoTOwner
+
+This playbook sends an email with the incident details to the device owner as defined on the site level in Defender for IoT, so that they can start investigating, even responding directly from the automated email. Response options include:
+
+- **Yes this is expected**. Select this option to close the incident.
+
+- **No this is NOT expected**. Select this option to keep the incident active, increase the severity, and add a confirmation tag to the incident.
+
+The incident is automatically updated based on the response selected by the device owner.
+
+To add a device owner, edit the site owner on the **Sites and sensors** page in Defender for IoT. For more information, see [Site management options from the Azure portal](../defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Triage incidents involving highly important devices
+
+**Playbook name**: AD4IoT-AutoTriageIncident
+
+This playbook updates the incident severity according to the importance level of the devices involved.
+
+## Next steps
+
+For more information, see:
+
+- [Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md)
+- [Visualize collected data](get-visibility.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+- [Microsoft Defender for IoT documentation](../defender-for-iot/index.yml)
+- [Microsoft Defender for IoT solution](sentinel-solutions-catalog.md#microsoft)
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
Title: Integrate Microsoft Sentinel and Microsoft Defender for IoT | Microsoft Docs
-description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
+ Title: Connect Microsoft Defender for IoT with Microsoft Sentinel
+description: This tutorial describes how to integrate Microsoft Sentinel and Microsoft Defender for IoT with the Microsoft Sentinel data connector to secure your entire OT environment. Detect and respond to OT threats, including multistage attacks that may cross IT and OT boundaries.
Last updated 06/20/2022
-# Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT
+# Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel
-[Microsoft Defender for IoT](../defender-for-iot/index.yml) enables you to secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+[Microsoft Defender for IoT](../defender-for-iot/index.yml) enables you to secure your entire OT and Enterprise IoT environment, whether you need to protect existing devices or build security into new innovations.
Microsoft Sentinel and Microsoft Defender for IoT help to bridge the gap between IT and OT security challenges, and to empower SOC teams with out-of-the-box capabilities to efficiently and effectively detect and respond to OT threats. The integration between Microsoft Defender for IoT and Microsoft Sentinel helps organizations to quickly detect multistage attacks, which often cross IT and OT boundaries.
-In this tutorial, you:
+This connector allows you to stream Microsoft Defender for IoT data into Microsoft Sentinel, so you can view, analyze, and respond to Defender for IoT alerts, and the incidents they generate, in a broader organizational threat context.
+
+The Microsoft Sentinel integration is supported only for OT networks.
+
+In this tutorial, you will learn how to:
> [!div class="checklist"]
>
-> * Connect Microsoft Sentinel to Defender for IoT
-> * Use Log Analytics to query for Defender for IoT alerts
-> * Install the Microsoft Sentinel solution for Defender for IoT
-> * Learn about the analytics rules, workbooks, and playbooks deployed to your Microsoft Sentinel workspace with the Defender for IoT solution
-
+> * Connect Defender for IoT data to Microsoft Sentinel
+> * Use Log Analytics to query Defender for IoT alert data

## Prerequisites

Before you start, make sure you have the following requirements on your workspace:

-- **Read** and **Write** permissions on your Microsoft Sentinel workspace
-
-- **Contributor** permissions on the subscription you want to connect
-
-- <a name="enablehub"></a>Defender for IoT must be enabled on your relevant IoT Hub instances.
-
## Prerequisites Before you start, make sure you have the following requirements on your workspace: -- **Read** and **Write** permissions on your Microsoft Sentinel workspace--- **Contributor** permissions on the subscription you want to connect--- <a name="enablehub"></a>Defender for IoT must be enabled on your relevant IoT Hub instances.-
- Use the following procedure to verify or enable this setting if needed:
-
- 1. Go to the IoT Hub instance that you'd defined when onboarding your sensors in Defender for IoT.
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](roles.md).
- 1. Select **Defender for IoT > Settings > Data Collection**.
+- **Contributor** or **Owner** permissions on the subscription you want to connect to Microsoft Sentinel.
- 1. Under **Microsoft Defender for IoT**, select **Enable Microsoft Defender for IoT**.
-
-For more information, see [Permissions in Microsoft Sentinel](roles.md) and [Quickstart: Get started with Defender for IoT](../defender-for-iot/organizations/getting-started.md).
+- A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](../defender-for-iot/organizations/getting-started.md).
> [!IMPORTANT]
> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
Start by enabling the **Defender for IoT** data connector to stream all your Def
If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
- > [!TIP]
- > If you see an error message, make sure that you have [Defender for IoT enabled](#enablehub) on at least one IoT Hub instance within your selected subscription.
- >
- For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md).
-
## View Defender for IoT alerts
-
-View Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
+After you've connected a subscription to Microsoft Sentinel, you'll be able to view Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
1. In Microsoft Sentinel, select **Logs > AzureSecurityOfThings > SecurityAlert**, or search for **SecurityAlert**.
In Defender for IoT on the Azure portal and the sensor console, the **Last detec
For more information, see [View alerts on the Defender for IoT portal](../defender-for-iot/organizations/how-to-manage-cloud-alerts.md) and [View alerts on your sensor](../defender-for-iot/organizations/how-to-view-alerts.md).
-## Install the Defender for IoT solution
-
-The **IoT OT Threat Monitoring with Defender for IoT** solution is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-
-> [!TIP]
-> Microsoft Sentinel [solutions](sentinel-solutions.md) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process. For example, the **IoT OT Threat Monitoring with Defender for IoT** supports the integration with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and OT-optimized playbooks with automated response and prevention capabilities.
--
-**To install the solution**
-
-1. In Microsoft Sentinel, under **Content management**, select **Content hub** and then locate the **IoT OT Threat Monitoring with Defender for IoT** solution.
-
-1. At the bottom right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the related security content that will be deployed.
-
- When you're done, select **Review + Create** to install the solution.
-
-For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md) and [Centrally discover and deploy out-of-the-box content and solutions](sentinel-solutions-deploy.md).
-
-## Detect threats out-of-the-box with Defender for IoT data
-
-Incidents aren't created for alerts generated by Defender for IoT data by default.
-
-You can ensure that Microsoft Sentinel creates incidents for relevant alerts generated by Defender for IoT, either by using out-of-the-box analytics rules provided in the **IoT OT Threat Monitoring with Defender for IoT** solution, by configuring analytics rules manually, or by configuring your data connector to automatically create incidents for *all* alerts generated by Defender for IoT.
-
-For more information, see:
-
-- [Detect threats out-of-the-box](detect-threats-built-in.md)
-- [Create custom analytics rules to detect threats](detect-threats-custom.md)
-
-# [Use out-of-the-box analytics rules](#tab/use-out-of-the-box-analytics-rules-recommended)
-
-[Install the Defender for IoT solution](#install-the-defender-for-iot-solution) to get out-of-the-box analytics rules deployed to your workspace, built specifically for Defender for IoT data.
-
-The following table describes the out-of-the-box analytics rules provided in the [IoT OT Threat Monitoring with Defender for IoT](#install-the-defender-for-iot-solution) solution.
-
-> [!TIP]
-> When working with the following analytics rules, we recommend that you turn off the default *Microsoft Security* analytics rules for Defender for IoT.
->
-
-| Rule Name | Description|
-| - | -|
-| **Illegal function codes for ICS/SCADA traffic** | Illegal function codes in supervisory control and data acquisition (SCADA) equipment may indicate one of the following: <br><br>- Improper application configuration, such as due to a firmware update or reinstallation. <br>- Malicious activity. For example, a cyber threat that attempts to use illegal values within a protocol to exploit a vulnerability in the programmable logic controller (PLC), such as a buffer overflow. |
-| **Firmware update** | Unauthorized firmware updates may indicate malicious activity on the network, such as a cyber threat that attempts to manipulate PLC firmware to compromise PLC function. |
-| **Unauthorized PLC changes** | Unauthorized changes to PLC ladder logic code may be one of the following: <br><br>- An indication of new functionality in the PLC. <br>- Improper configuration of an application, such as due to a firmware update or reinstallation. <br>- Malicious activity on the network, such as a cyber threat that attempts to manipulate PLC programming to compromise PLC function. |
-| **PLC insecure key state** | The new mode may indicate that the PLC is not secure. Leaving the PLC in an insecure operating mode may allow adversaries to perform malicious activities on it, such as a program download. <br><br>If the PLC is compromised, devices and processes that interact with it may be impacted, which may affect overall system security and safety. |
-| **PLC stop** | The PLC stop command may indicate an improper configuration of an application that has caused the PLC to stop functioning, or malicious activity on the network. For example, a cyber threat that attempts to manipulate PLC programming to affect the functionality of the network. |
-| **Suspicious malware found in the network** | Suspicious malware found on the network indicates that malware is trying to compromise production. |
-| **Multiple scans in the network** | Multiple scans on the network can be an indication of one of the following: <br><br>- A new device on the network <br>- New functionality of an existing device <br>- Misconfiguration of an application, such as due to a firmware update or reinstallation <br>- Malicious activity on the network for reconnaissance |
-| **Internet connectivity** | An OT device communicating with internet addresses may indicate an improper application configuration, such as anti-virus software attempting to download updates from an external server, or malicious activity on the network. |
-| **Unauthorized device in the SCADA network** | An unauthorized device on the network may be a legitimate, new device recently installed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
-| **Unauthorized DHCP configuration in the SCADA network** | An unauthorized DHCP configuration on the network may indicate a new, unauthorized device operating on the network. <br><br>This may be a legitimate, new device recently deployed on the network, or an indication of unauthorized or even malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
-| **Excessive login attempts** | Excessive sign in attempts may indicate improper service configuration, human error, or malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
-| **High bandwidth in the network** | An unusually high bandwidth may be an indication of a new service/process on the network, such as backup, or an indication of malicious activity on the network, such as a cyber threat attempting to manipulate the SCADA network. |
-| **Denial of Service** | This alert detects attacks that would prevent the use or proper operation of the DCS system. |
-| **Unauthorized remote access to the network** | Unauthorized remote access to the network can compromise the target device. <br><br> This means that if another device on the network is compromised, the target devices can be accessed remotely, increasing the attack surface. |
-| **No traffic on Sensor Detected** | A sensor that no longer detects network traffic indicates that the system may be insecure. |
-
-# [Create and maintain analytics rules manually](#tab/create-and-maintain-analytics-rules-manually)
-
-Manually create and manage analytics rules in the Microsoft Sentinel **Analytics > Active rules** page. For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
-
-Use this option if you haven't yet installed the [IoT OT Threat Monitoring with Defender for IoT](#install-the-defender-for-iot-solution) solution, if you want to use the out-of-the-box analytics rules as templates for customized rules, or if you'd like to configure analytics rules for scenarios not covered by the solution.
-
-# [Configure the connector to create incidents for all alerts](#tab/configure-the-connector-to-create-incidents-for-all-alerts)
-
-You can configure the **Defender for IoT** data connector to automatically create incidents for *all* alerts generated by Defender for IoT.
-
-In the **Instructions** tab of the data connector page, scroll down to the **Create incidents** section and select **Enable**.
-
-> [!CAUTION]
-> This option may cause a large number of incidents to be created in your workspace.
->
---
-## Visualize and monitor Defender for IoT data
-
-To visualize and monitor your Defender for IoT data, use the workbooks deployed to your Microsoft Sentinel workspace as part of the [IoT OT Threat Monitoring with Defender for IoT](#install-the-defender-for-iot-solution) solution.
-
-The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-
-View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](get-visibility.md).
-
-The following table describes the workbooks included in the **IoT OT Threat Monitoring with Defender for IoT** solution:
-
-|Workbook |Description |Logs |
-||||
-|**Alerts** | Displays data such as: Alert Metrics, Topmost Alerts, Alert over time, Alert by Severity, Alert by Engine, Alert by Device Type, Alert by Vendor and Alert by IP address. | Uses data from the following log: SecurityAlert |
-|**Incidents** | Displays data such as: <br><br>- Incident Metrics, Topmost Incident, Incident over time, Incident by Protocol, Incident by Device Type, Incident by Vendor, and Incident by IP address.<br><br>- Incident by Severity, Incident Mean time to respond, Incident Mean time to resolve and Incident close reasons. | Uses data from the following log: SecurityAlert |
-|**MITRE ATT&CK® for ICS** | Displays data such as: Tactic Count, Tactic Details, Tactic over time, Technique Count. | Uses data from the following log: SecurityAlert |
-|**Device Inventory** | Displays data such as: OT device name, type, IP address, Mac address, Model, OS, Serial Number, Vendor, Protocols. | Uses data from the following log: SecurityAlert |
--
-## Automate response to Defender for IoT alerts
-
-Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
-
-The playbooks described in the following sections are deployed to your Microsoft Sentinel workspace as part of the [IoT OT Threat Monitoring with Defender for IoT](#install-the-defender-for-iot-solution) solution.
-
-For more information, see:
-
-- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
-- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
-
-### Automatically close incidents
-
-**Playbook name**: AD4IoT-AutoCloseIncidents
-
-In some cases, maintenance activities generate alerts in Microsoft Sentinel that can distract a SOC team from handling the real problems. This playbook automatically closes incidents created from such alerts during a specified maintenance period, explicitly parsing the IoT device entity fields.
-
-To use this playbook:
-
-- Enter the relevant time period when the maintenance is expected to occur, and the IP addresses of any relevant assets, such as those listed in an Excel file.
-- Create a watchlist that includes all the asset IP addresses on which alerts should be handled automatically.
-
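The auto-close decision described above can be sketched as follows. This is an illustrative Python sketch only, not the playbook's actual implementation; the watchlist contents, window values, and function name are assumptions:

```python
from datetime import datetime

# Hypothetical sketch: close an incident automatically only when the device IP
# is on the maintenance watchlist AND the alert fired inside the maintenance
# window. Values below are placeholders for illustration.
MAINTENANCE_WATCHLIST = {"10.0.0.5", "10.0.0.6"}   # asset IPs under maintenance
WINDOW_START = datetime(2022, 9, 23, 1, 0)
WINDOW_END = datetime(2022, 9, 23, 5, 0)

def should_auto_close(device_ip: str, alert_time: datetime) -> bool:
    """Return True when the incident can be closed automatically."""
    in_window = WINDOW_START <= alert_time <= WINDOW_END
    return device_ip in MAINTENANCE_WATCHLIST and in_window
```

For example, an alert from `10.0.0.5` at 02:00 during the window would be closed, while the same alert from an IP not on the watchlist would be left for the SOC team.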
-### Email notifications by production line
-
-**Playbook name**: AD4IoT-MailByProductionLine
-
-This playbook sends mail to notify specific stakeholders about alerts and events that occur in your environment.
-
-For example, when you have specific security teams assigned to specific product lines or geographic locations, you'll want that team to be notified about alerts that are relevant to their responsibilities.
-
-To use this playbook, create a watchlist that maps between the sensor names and the mailing addresses of each of the stakeholders you want to alert.
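Conceptually, the watchlist behaves like a lookup from sensor name to stakeholder addresses. The sketch below is illustrative only; the field names and addresses are assumptions, not the watchlist's real schema:

```python
# Hypothetical mapping from sensor name to stakeholder email addresses,
# standing in for the watchlist described above.
SENSOR_TO_STAKEHOLDERS = {
    "sensor-line-a": ["ot-team-a@example.com"],
    "sensor-line-b": ["ot-team-b@example.com", "plant-manager@example.com"],
}

def recipients_for_alert(sensor_name: str) -> list[str]:
    """Look up who should be notified about an alert from a given sensor."""
    return SENSOR_TO_STAKEHOLDERS.get(sensor_name, [])
```

An alert from an unmapped sensor returns an empty list, so no one is mailed unless the watchlist covers that sensor.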
-
-### Create a new ServiceNow ticket
-
-**Playbook name**: AD4IoT-NewAssetServiceNowTicket
-
-Typically, the entity authorized to program a PLC is the Engineering Workstation. Therefore, attackers might create new Engineering Workstations in order to create malicious PLC programming.
-
-This playbook opens a ticket in ServiceNow each time a new Engineering Workstation is detected, explicitly parsing the IoT device entity fields.
-
-### Update alert statuses in Defender for IoT
-
-**Playbook name**: AD4IoT-AutoAlertStatusSync
-
-This playbook updates alert statuses in Defender for IoT whenever a related alert in Microsoft Sentinel has a **Status** update.
-
-This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
-
-To use this playbook, make sure that you have the required role applied, valid connections where required, and an automation rule to connect incident triggers with the **AD4IoT-AutoAlertStatusSync** playbook:
-
-**To add the *Security Admin* role to the Azure subscription where the playbook is installed**:
-
-1. Open the **AD4IoT-AutoAlertStatusSync** playbook from the Microsoft Sentinel **Automation** page.
-
-1. With the playbook opened as a Logic app, select **Identity > System assigned**, and then in the **Permissions** area, select the **Azure role assignments** button.
-
-1. In the **Azure role assignments** page, select **Add role assignment**.
-
-1. In the **Add role assignment** pane:
-
- - Define the **Scope** as **Subscription**
- - From the **Subscription** dropdown, select the subscription where your playbook is installed.
- - From the **Role** dropdown, select the **Security Admin** role, and then select **Save**.
-
-**To ensure that you have valid connections for each of your connection steps in the playbook**:
-
-1. Open the **AD4IoT-AutoAlertStatusSync** playbook from the Microsoft Sentinel **Automation** page.
-
-1. With the playbook opened as a Logic app, select **Logic app designer**. If you have invalid connection details, you may have warning signs in both of the **Connections** steps. For example:
-
- :::image type="content" source="media/iot-solution/connection-steps.png" alt-text="Screenshot of the default AD4IOT AutoAlertStatusSync playbook." lightbox="media/iot-solution/connection-steps.png":::
-
-1. Select a **Connections** step to expand it and add a valid connection as needed.
-
-**To connect your incidents, relevant analytics rules, and the AD4IoT-AutoAlertStatusSync playbook**:
-
-Add a new Microsoft Sentinel analytics rule, defined as follows:
-- In the **Trigger** field, select **When an incident is updated**
-
-- In the **Conditions** area, select **If > Analytic rule name > Contains**, and then select the specific analytics rules relevant for Defender for IoT in your organization.
-
- You may be using out-of-the-box analytics rules, or you may have modified the out-of-the-box content, or created your own. For more information, see [Detect threats out-of-the-box with Defender for IoT data](#detect-threats-out-of-the-box-with-defender-for-iot-data).
--- In the **Actions** area, select **Run playbook > AD4IoT-AutoAlertStatusSync**.-
-For example:
+## Next steps
+[Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) to your Microsoft Sentinel workspace.
-## Next steps
+The **Microsoft Defender for IoT** solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks.
For more information, see:
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
- [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
-- [Microsoft Defender for IoT documentation](../defender-for-iot/index.yml)
-- [Microsoft Defender for IoT solution](sentinel-solutions-catalog.md#microsoft)
- [Microsoft Defender for IoT data connector](data-connectors-reference.md#microsoft-defender-for-iot)
+- [Microsoft Defender for IoT solution](sentinel-solutions-catalog.md#microsoft)
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
extractuseremail = <True/False>
apiretry = <True/False>
auditlogforcexal = <True/False>
auditlogforcelegacyfiles = <True/False>
+azure_resource_id = <Azure _ResourceId>
+# Used to force a specific resource group for the SAP tables in Log Analytics, useful for applying RBAC on SAP data
+# example - /subscriptions/1234568-qwer-qwer-qwer-123456789/resourcegroups/RESOURCE_GROUP_NAME/providers/microsoft.compute/virtualmachines/VIRTUAL_MACHINE_NAME
+# for more information - https://learn.microsoft.com/azure/azure-monitor/logs/log-standard-columns#_resourceid.
timechunk = <value> # Default timechunk value is 60 (minutes). For certain tables, the data connector retrieves data from the ABAP server using timechunks (collecting all events that occurred within a certain timestamp). On busy systems this may result in large datasets, so to reduce memory and CPU utilization footprint, consider configuring to a smaller value.
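The timechunk behavior described above amounts to splitting a collection window into fixed-size slices and querying each slice separately. The sketch below is an illustrative assumption of that behavior, not the connector's actual code:

```python
from datetime import datetime, timedelta

def timechunks(start: datetime, end: datetime, chunk_minutes: int = 60):
    """Yield (chunk_start, chunk_end) pairs covering [start, end),
    mirroring how a timechunk setting would slice a collection window."""
    step = timedelta(minutes=chunk_minutes)
    cursor = start
    while cursor < end:
        # The final chunk is clipped so it never extends past the window end.
        yield cursor, min(cursor + step, end)
        cursor += step
```

With the default of 60 minutes, a 2.5-hour window yields three chunks (the last one only 30 minutes long); a smaller `chunk_minutes` trades more queries for smaller per-query datasets.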
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries, workbooks, playbooks | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft|
|**Cybersecurity Maturity Model Certification (CMMC)** | [Analytics rules, workbook, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184) | Compliance | Microsoft|
|**Dev-0537 Detection and Hunting**|Workbook|Security - Threat Protection|Microsoft|
-| **IoT/OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft |
+| **Microsoft Defender for IoT** | [Analytics rules, playbooks, workbook](iot-advanced-threat-monitoring.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft |
|**Maturity Model for Event Log Management M2131** | [Analytics rules, hunting queries, playbooks, workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/modernize-log-management-with-the-maturity-model-for-event-log/ba-p/3072842) | Compliance | Microsoft|
|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), [workbook, analytics rules, hunting queries, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-microsoft-insider-risk/ba-p/2955786) |Security - Insider threat | Microsoft|
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft |
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
For more information, see the [Microsoft Security Response Center blog](https://
The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-catalog.md#microsoft) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
-For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](iot-solution.md).
+For more information, see [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](iot-advanced-threat-monitoring.md).
### Ingest GitHub logs into your Microsoft Sentinel workspace (Public preview)
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
We are excited to announce that 9.0 release of the Service Fabric runtime has st
- Windows Server 2022 is now supported as of the 9.0 CU2 release.
- Mirantis Container runtime support on Windows for Service Fabric containers
- The Microsoft Web Platform Installer (WebPI) used for installing Service Fabric SDK and Tools was retired on July 1, 2022.
+- Azure Service Fabric will block deployments that don't meet Silver or Gold durability requirements starting on 9/30/2022. A minimum of 5 VMs will be enforced with this change to help avoid data loss from VM-level infrastructure requests for production workloads. Enforcement for existing clusters will be rolled out in the coming months.
+- Azure Service Fabric node types with VMSS durability of Silver or Gold should always have Windows updates explicitly disabled to avoid unintended OS restarts caused by Windows updates, which can impact production workloads. You can do this by setting `"enableAutomaticUpdates": false` in the VMSS OSProfile. Consider enabling automatic VMSS image upgrades instead. Deployments of new clusters will start failing from 09/30/2022 if Windows updates are not disabled on the VMSS. Enforcement for existing clusters will be rolled out in the coming months.
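The setting above lives in the virtual machine scale set model. A minimal sketch of the relevant fragment is shown below; surrounding properties are omitted, and the exact placement in your template depends on how the scale set is defined:

```json
{
  "virtualMachineProfile": {
    "osProfile": {
      "windowsConfiguration": {
        "enableAutomaticUpdates": false
      }
    }
  }
}
```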
### Service Fabric 9.0 releases

| Release date | Release | More info |
We are excited to announce that 9.0 release of the Service Fabric runtime has st
| April 29, 2022 | [Azure Service Fabric 9.0](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-9-0-release/ba-p/3299108) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90.md)|
| June 06, 2022 | [Azure Service Fabric 9.0 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-first-refresh-release/ba-p/3469489) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU1.md)|
| July 14, 2022 | [Azure Service Fabric 9.0 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-second-refresh-release/ba-p/3575842) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU2.md)|
+| September 13, 2022 | [Azure Service Fabric 9.0 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-third-refresh-update-release/ba-p/3631367) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU3.md)|
## Service Fabric 8.2
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Security and privacy are among the top priorities for Azure and Azure Spring App
### How does Azure Spring Apps host my applications?
-Each service instance in Azure Spring Apps is backed by a fully dedicated Kubernetes cluster with multiple worker nodes. Azure Spring Apps manages the underlying Kubernetes cluster for you, including high availability, scalability, Kubernetes version upgrade, and so on.
+Each service instance in Azure Spring Apps is backed by Azure Kubernetes Service with multiple worker nodes. Azure Spring Apps manages the underlying Kubernetes cluster for you, including high availability, scalability, Kubernetes version upgrade, and so on.
Azure Spring Apps intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Apps distributes applications with 2 or more instances on different nodes.
spring-apps How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-built-in-persistent-storage.md
Title: How to use built-in persistent storage in Azure Spring Apps | Microsoft Docs
-description: How to use built-in persistent storage in Azure Spring Apps
+ Title: Use built-in persistent storage in Azure Spring Apps | Microsoft Docs
+description: Learn how to use built-in persistent storage in Azure Spring Apps
Azure Spring Apps provides two types of built-in storage for your application: persistent and temporary.
-By default, Azure Spring Apps provides temporary storage for each application instance. Temporary storage is limited to 5 GB per instance with the default mount path /tmp.
+By default, Azure Spring Apps provides temporary storage for each application instance. Temporary storage is limited to 5 GB per instance with */tmp* as the default mount path.
> [!WARNING]
> If you restart an application instance, the associated temporary storage is permanently deleted.
-Persistent storage is a file-share container managed by Azure and allocated per application. Data stored in persistent storage is shared by all instances of an application. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is /persistent.
+Persistent storage is a file-share container managed by Azure and allocated per application. Data stored in persistent storage is shared by all instances of an application. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is */persistent*.
> [!WARNING]
> If you disable an application's persistent storage, all of that storage is deallocated and all of the stored data is lost.

## Enable or disable built-in persistent storage
-You can modify the state of built-in persistent storage using the Azure portal or by using the Azure CLI.
+You can enable or disable built-in persistent storage using the Azure portal or Azure CLI.
#### [Portal](#tab/azure-portal)
-## Enable or disable built-in persistent storage with the portal
+Use the following steps to enable or disable built-in persistent storage using the Azure portal.
-The portal can be used to enable or disable built-in persistent storage.
+1. Go to your Azure Spring Apps instance in the Azure portal.
-1. From the **Home** page of your Azure portal, select **All Resources**.
+1. Select **Apps** to view apps for your service instance, and then select an app to display the app's **Overview** page.
- >![Locate the All Resources icon](media/portal-all-resources.jpg)
+ :::image type="content" source="media/how-to-built-in-persistent-storage/app-selected.png" lightbox="media/how-to-built-in-persistent-storage/app-selected.png" alt-text="Screenshot of Azure portal showing the Apps page.":::
-1. Select the Azure Spring Apps resource that needs persistent storage. In this example, the selected application is called **upspring**.
+1. On the **Overview** page, select **Configuration**.
- > ![Select your application](media/select-service.jpg)
+ :::image type="content" source="media/how-to-built-in-persistent-storage/select-configuration.png" lightbox="media/how-to-built-in-persistent-storage/select-configuration.png" alt-text="Screenshot of Azure portal showing details for an app.":::
-1. Under the **Settings** heading, select **Apps**.
+1. On the **Configuration** page, select **Persistent Storage**.
-1. Your Azure Spring Apps services appear in a table. Select the service that you want to add persistent storage to. In this example, the **gateway** service is selected.
+ :::image type="content" source="media/how-to-built-in-persistent-storage/select-persistent-storage.png" lightbox="media/how-to-built-in-persistent-storage/select-persistent-storage.png" alt-text="Screenshot of Azure portal showing the Configuration page.":::
- > ![Select your service](media/select-gateway.jpg)
+1. On the **Persistent Storage** tab, select **Enable** to enable persistent storage, or **Disable** to disable persistent storage.
-1. From the service's configuration page, select **Configuration**
+ :::image type="content" source="media/how-to-built-in-persistent-storage/enable-persistent-storage.png" lightbox="media/how-to-built-in-persistent-storage/enable-persistent-storage.png" alt-text="Screenshot of Azure portal showing the Persistent Storage tab.":::
-1. Select the **Persistent Storage** tab and select **Enable** to turn on persistent storage, or select **Disable** to turn off persistent storage.
-
- > ![Enable persistent storage](media/enable-persistent-storage.jpg)
-
-If persistent storage is enabled, its size and path are shown on the **Persistent Storage** tab.
+If persistent storage is enabled, the **Persistent Storage** tab displays the storage size and path.
#### [Azure CLI](#tab/azure-cli)
-## Use the Azure CLI to enable or disable built-in persistent storage
+ If necessary, install the Azure Spring Apps extension for the Azure CLI using this command: ```azurecli
az extension add --name spring
Other operations:
-* To create an app with built-in persistent storage enabled:
+- To create an app with built-in persistent storage enabled:
    ```azurecli
    az spring app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
    ```
-* To enable built-in persistent storage for an existing app:
+- To enable built-in persistent storage for an existing app:
    ```azurecli
    az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
    ```
-* To disable built-in persistent storage in an existing app:
+- To disable built-in persistent storage in an existing app:
```azurecli az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false
Other operations:
## Next steps
-* Learn about [application and service quotas](./quotas.md).
-* Learn how to [manually scale your application](./how-to-scale-manual.md).
+- [Quotas and service plans for Azure Spring Apps](./quotas.md)
+- [Scale an application in Azure Spring Apps](./how-to-scale-manual.md)
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
At a minimum, the agent image needs 20 GiB of local storage. The amount required
## Create the agent VM

1. Create a new VM to host the agent. Open **Hyper-V Manager**. In the **Actions** pane, select **New** and **Virtual Machine...** to launch the **New Virtual Machine Wizard**.
- :::image type="content" source="media/agent-deploy/agent-vm-create-sml.png" alt-text="Image showing how to launch the New Virtual Machine Wizard from within the Hyper-V Manager." lightbox="media/agent-deploy/agent-vm-create-lrg.png":::
+
+ :::image type="content" source="media/agent-deploy/agent-vm-create-sml.png" alt-text="Image showing how to launch the New Virtual Machine Wizard from within the Hyper-V Manager." lightbox="media/agent-deploy/agent-vm-create-lrg.png":::
1. Within the **Specify Name and Location** pane, specify values for the agent VM's **Name** and **Location** fields. The location should match the folder where the VHD is stored, if possible. Select **Next**.
- :::image type="content" source="media/agent-deploy/agent-name-select-sml.png" alt-text="Image showing the location of the Name and Location fields within the New Virtual Machine Wizard." lightbox="media/agent-deploy/agent-name-select-lrg.png":::
+
+ :::image type="content" source="media/agent-deploy/agent-name-select-sml.png" alt-text="Image showing the location of the Name and Location fields within the New Virtual Machine Wizard." lightbox="media/agent-deploy/agent-name-select-lrg.png":::
1. Within the **Specify Generation** pane, select the **Generation 1** option.
At a minimum, the agent image needs 20 GiB of local storage. The amount required
Only *Generation 1* VMs are supported. This Linux image won't boot as a *Generation 2* VM.

1. If you haven't already, [determine the amount of memory you'll need for your VM](#determine-required-resources-for-the-vm). Enter this amount in the **Assign Memory** pane, noting that you need to enter the value in MiB. 1 GiB = 1024 MiB. Using the **Dynamic Memory** feature is fine.
- :::image type="content" source="media/agent-deploy/agent-memory-allocate-sml.png" lightbox="media/agent-deploy/agent-memory-allocate-lrg.png" alt-text="Image showing the location of the Startup Memory field within the New Virtual Machine Wizard.":::
+
+ :::image type="content" source="media/agent-deploy/agent-memory-allocate-sml.png" lightbox="media/agent-deploy/agent-memory-allocate-lrg.png" alt-text="Image showing the location of the Startup Memory field within the New Virtual Machine Wizard.":::
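Since the **Assign Memory** pane expects a value in MiB, a one-line helper makes the conversion mentioned in the step above explicit:

```python
def gib_to_mib(gib):
    # Hyper-V's Assign Memory pane expects MiB; 1 GiB = 1024 MiB.
    return gib * 1024

print(gib_to_mib(8))  # → 8192
```

For example, a VM sized at 20 GiB of memory would be entered as 20480 MiB.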
1. Within the **Configure Networking** pane, select the **Connection** drop-down. From the list, choose the virtual switch that will provide the agent with internet connectivity and select **Next**. For more information, see the [Hyper-V virtual networking documentation](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-overview-windows-server).
- :::image type="content" source="media/agent-deploy/agent-networking-configure-sml.png" lightbox="media/agent-deploy/agent-networking-configure-lrg.png" alt-text="Image showing the location of the network Connection field within the New Virtual Machine Wizard.":::
+
+ :::image type="content" source="media/agent-deploy/agent-networking-configure-sml.png" lightbox="media/agent-deploy/agent-networking-configure-lrg.png" alt-text="Image showing the location of the network Connection field within the New Virtual Machine Wizard.":::
1. Within the **Connect Virtual Hard Disk** pane, select the **Use an existing Virtual Hard Disk** option. In the **Location** field, select **Browse** and navigate to the VHD file that was extracted in the previous steps. Select **Next**.
- :::image type="content" source="media/agent-deploy/agent-disk-connect-sml.png" lightbox="media/agent-deploy/agent-disk-connect-lrg.png" alt-text="Image showing the location of the Virtual Hard Disk Connection fields within the New Virtual Machine Wizard.":::
+
+ :::image type="content" source="media/agent-deploy/agent-disk-connect-sml.png" lightbox="media/agent-deploy/agent-disk-connect-lrg.png" alt-text="Image showing the location of the Virtual Hard Disk Connection fields within the New Virtual Machine Wizard.":::
1. Within the **Summary** pane, select **Finish** to create the agent VM.
- :::image type="content" source="media/agent-deploy/agent-configuration-details-sml.png" lightbox="media/agent-deploy/agent-configuration-details-lrg.png" alt-text="Image showing the user-assigned values in the Summary pane of the New Virtual Machine Wizard.":::
+
+ :::image type="content" source="media/agent-deploy/agent-configuration-details-sml.png" lightbox="media/agent-deploy/agent-configuration-details-lrg.png" alt-text="Image showing the user-assigned values in the Summary pane of the New Virtual Machine Wizard.":::
1. After the new agent is successfully created, it will appear in the **Virtual Machines** pane within the **Hyper-V Manager**.
- :::image type="content" source="media/agent-deploy/agent-created-sml.png" lightbox="media/agent-deploy/agent-created-lrg.png" alt-text="Image showing the agent VM deployed within the New Virtual Machine Wizard.":::
+
+ :::image type="content" source="media/agent-deploy/agent-created-sml.png" lightbox="media/agent-deploy/agent-created-lrg.png" alt-text="Image showing the agent VM deployed within the New Virtual Machine Wizard.":::
## Change the default password
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
To learn how to rehydrate an archived blob to an online tier, see [Rehydrate an
When you rehydrate a blob, you can set the priority for the rehydration operation via the optional *x-ms-rehydrate-priority* header on a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or [Copy Blob](/rest/api/storageservices/copy-blob) operation. Rehydration priority options include:

-- **Standard priority**: The rehydration request will be processed in the order it was received and may take up to 15 hours for objects under 10 GB in size.
+- **Standard priority**: The rehydration request will be processed in the order it was received and may take up to 15 hours to complete for objects under 10 GB in size.
- **High priority**: The rehydration request will be prioritized over standard priority requests and may complete in less than one hour for objects under 10 GB in size.

To check the rehydration priority while the rehydration operation is underway, call [Get Blob Properties](/rest/api/storageservices/get-blob-properties) to return the value of the `x-ms-rehydrate-priority` header. The rehydration priority property returns either *Standard* or *High*.
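As a sketch of the REST-level shape described above (a hand-built header dictionary rather than the SDK; the header names follow the Set Blob Tier reference linked above):

```python
def set_blob_tier_headers(tier, rehydrate_priority=None):
    """Build the headers a Set Blob Tier request might carry.

    Sketch only: real requests also need auth, version, and date headers.
    """
    headers = {"x-ms-access-tier": tier}
    if rehydrate_priority is not None:
        # The service accepts only these two priority values.
        if rehydrate_priority not in ("Standard", "High"):
            raise ValueError("rehydrate priority must be Standard or High")
        headers["x-ms-rehydrate-priority"] = rehydrate_priority
    return headers

print(set_blob_tier_headers("Hot", rehydrate_priority="High"))
```

Omitting the priority leaves the service default (Standard) in effect.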
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
You can get a container or blob URL by using the `url` property of the client ob
- ContainerClient.[url](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-url)
- BlobClient.[url](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-url)
-- BlockBlobClient.[url](/javascript/api/@azure/storage-blob/blockblobclien#@azure-storage-blob-blockblobclient-url)
+- BlockBlobClient.[url](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-url)
The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
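For the public Azure cloud, the `url` property of these clients resolves to a predictable shape. A minimal sketch with placeholder names (no SAS token or percent-encoding handling):

```python
def blob_url(account, container, blob):
    # Mirrors the value a BlobClient's `url` property resolves to for the
    # public Azure cloud; account/container/blob names are placeholders.
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

print(blob_url("mystorage", "photos", "cat.png"))
```

A ContainerClient's `url` is the same shape without the trailing blob segment.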
storage Storage Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-javascript.md
The following tables provide an overview of our samples repository and the scena
:::row:::
   :::column span="":::
- [Create a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js)
+ [Create a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/errorsAndResponses.js#L23)
   :::column-end:::
   :::column span="":::
      [Create a container using a shared key credential](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/snapshots.js#L23)
The following tables provide an overview of our samples repository and the scena
      [List containers by page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js#L34)
   :::column-end:::
   :::column span="":::
- [Delete a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js)
+ [Delete a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/errorsAndResponses.js#L132)
   :::column-end:::
:::row-end:::
The following tables provide an overview of our samples repository and the scena
:::row:::
   :::column span="":::
- [Create a blob](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js#L8)
+ [Create a blob service client](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js#L23)
:::column-end::: :::column span="":::
- [List blobs](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js#L22)
+ [Upload a blob](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/errorsAndResponses.js#L59)
   :::column-end:::
:::row-end:::
:::row:::
   :::column span="":::
- [Download a blob](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listContainers.js)
+ [Download a blob](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/errorsAndResponses.js#L90)
   :::column-end:::
   :::column span="":::
      [List blobs using an iterator](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/storage/storage-blob/samples/v12/javascript/listBlobsFlat.js#L41)
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
United Kingdom | UK South
The following table lists the supported operating systems for Azure VMs and Azure Arc-enabled servers. Before you enable update management center (preview), ensure that the target machines meet the operating system requirements.
->[!NOTE]
-> For Azure VMs, we currently support a combination of Offer, Publisher, and SKU of the VM image. Ensure you match all three to confirm support.
# [Azure VMs](#tab/azurevm-os)
-[Azure VMs](../virtual-machines/index.yml) are:
-
- | Publisher | Operating System | SKU |
- |-|-|-|
- | Canonical | UbuntuServer | 16.04-LTS, 18.04-LTS |
- | Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS |
- | Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-LTS |
- | Canonical | 0001-com-ubuntu-pro-bionic | pro-18_04-LTS |
- | Red Hat | RHEL | 7-RAW, 7-LVM, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 8, 8.1, 8.2, 8_3, 8-LVM |
- | Red Hat | RHEL-RAW | 8-RAW |
- | OpenLogic | CentOS | 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7_8, 7_9, 8.0, 8_1, 8_2, 8_3 |
- | OpenLogic | CentOS-LVM | 7-LVM, 8-LVM |
- | SUSE | SLES-12-SP5 | Gen1, Gen2 |
- | SUSE | SLES-15-SP2 | Gen1, Gen2 |
- | MicrosoftWindowsServer | WindowsServer | 2022-datacenter </br> 2022-datacenter-g2 </br> 2022-datacenter-azure-edition</br> 2022-datacenter-azure-edition-smalldisk </br> 2022-datacenter-core-g2 </br> |
- | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter</br> 2019-Datacenter-Core</br> 2019-datacenter-gensecond </br> 2019-Datacenter-smalldisk </br> 2019-Datacenter-with-Containers </br> 2019-datacenter-with-Containers </br> 2019-Datacenter-Server-Core |
- | MicrosoftWindowsServer | WindowsServer | 2016-Datacenter</br> 2016-datacenter-gensecond</br> 2016-Datacenter-smalldisk </br> 2016-Datacenter-Server-Core </br> 2016-Datacenter-Server-Containers |
- | MicrosoftWindowsServer | MicrosoftServerOperatingSystems-Previews | Windows-Server-2022-Azure-Edition-Preview, Windows-Server-2019-Azure-Edition-Preview |
- | MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
- | MicrosoftWindowsServer | WindowsServer | 2008-R2-SP1 |
- | MicrosoftVisualStudio | VisualStudio | VS-2017-ENT-Latest-WS2016 |
-
- >[!NOTE]
- > Custom images are currently not supported.
+>[!NOTE]
+> - For [Azure VMs](../virtual-machines/index.yml), we currently support a combination of Offer, Publisher, and SKU of the VM image. Ensure you match all three to confirm support.
+> - See the list of [supported OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images).
+> - Custom images are currently not supported.
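The all-three-must-match rule from the note can be sketched as a set lookup. The triples below are a small illustrative subset drawn from the former support table, not the full list:

```python
# Illustrative subset of supported (publisher, offer, sku) triples taken
# from the table this update replaces; consult the linked supported OS
# images list for the authoritative set.
SUPPORTED = {
    ("Canonical", "UbuntuServer", "18.04-LTS"),
    ("Canonical", "0001-com-ubuntu-server-focal", "20_04-LTS"),
    ("OpenLogic", "CentOS", "7_9"),
}

def is_supported(publisher, offer, sku):
    # All three fields must match; a partial match does not qualify.
    return (publisher, offer, sku) in SUPPORTED

print(is_supported("Canonical", "UbuntuServer", "18.04-LTS"))  # True
print(is_supported("Canonical", "UbuntuServer", "20_04-LTS"))  # False: SKU belongs to a different offer
```
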
# [Azure Arc-enabled servers](#tab/azurearc-os)
virtual-desktop Sandbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md
To publish Windows Sandbox to your host pool using PowerShell:
1. Connect to Azure using one of the following methods:

   - Open a PowerShell prompt on your local device. Run the `Connect-AzAccount` cmdlet to sign in to your Azure account. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- - Sign in to [the Azure portal](https://portal.azure.com/) and open [Azure Cloud Shell](https://github.com/MicrosoftDocs/azure-docs-pr/pull/cloud-shell/overview.md) with PowerShell as the shell type.
+ - Sign in to [the Azure portal](https://portal.azure.com/) and open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type.
2. Run the following cmdlet to get a list of all the Azure tenants your account has access to:
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Previously updated : 05/31/2022 Last updated : 09/22/2022
Azure offers trusted launch as a seamless way to improve the security of [genera
- Edv4-series, Edsv4-series
- Fsv2-series
- Lsv2-series
+- NCasT4_v3-series
+- NVadsA10 v5-series
**OS support**:
-- Redhat Enterprise Linux 8.3, 8.4, 8.5 LVM
+- Redhat Enterprise Linux 8.3, 8.4, 8.5, 8.6, 9.0 LVM
- SUSE Enterprise Linux 15 SP3
- Ubuntu Server 22.04 LTS
- Ubuntu Server 20.04 LTS
- Ubuntu Server 18.04 LTS
- Debian 11
- CentOS 8.3, 8.4
-- Oracle Linux 8.3 LVM
+- Oracle Linux 8.3, 8.4, 8.5, 8.6, 9.0 LVM
- CBL-Mariner
- Windows Server 2022
- Windows Server 2019
virtual-machines Create Vm Specialized Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/create-vm-specialized-portal.md
There are several ways to create a virtual machine (VM) in Azure:
- You can create a new VM from the VHD of a VM that has been deleted. For example, if you have an Azure VM that isn't working correctly, you can delete the VM and use its VHD to create a new VM. You can either reuse the same VHD or create a copy of the VHD by creating a snapshot and then creating a new managed disk from the snapshot. Although creating a snapshot takes a few more steps, it preserves the original VHD and provides you with a fallback.

-- Take a classic VM and use the VHD to create a new VM that uses the Resource Manager deployment model and managed disks. For the best results, **Stop** the classic VM in the Azure portal before creating the snapshot.
-
- You can create an Azure VM from an on-premises VHD by uploading the on-premises VHD and attaching it to a new VM. You use PowerShell or another tool to upload the VHD to a storage account, and then you create a managed disk from the VHD. For more information, see [Upload a specialized VHD](create-vm-specialized.md#option-2-upload-a-specialized-vhd).

> [!IMPORTANT]
>
-> When you use a specialized disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
-> Thus, don't use a specialized disk if you want to create multiple VMs. Instead, for larger deployments, [create an image](capture-image-resource.md) and then [use that image to create multiple VMs](create-vm-generalized-managed.md).
+> When you use a [specialized](shared-image-galleries.md#generalized-and-specialized-images) disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
+> Don't use a specialized disk if you want to create multiple VMs. Instead, for larger deployments, create an image and then use that image to create multiple VMs.
+> For more information, see [Store and share images in an Azure Compute Gallery](shared-image-galleries.md).
We recommend that you limit the number of concurrent deployments to 20 VMs from a single snapshot or VHD.
virtual-machines Using Visual Studio Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/using-visual-studio-vm.md
- Title: Using Visual Studio on an Azure virtual machine
-description: Using Visual Studio on an Azure virtual machine.
------ Previously updated : 11/17/2020-
-keywords: visualstudio
--
-# Visual Studio images on Azure
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Using Visual Studio in a preconfigured Azure virtual machine (VM) is a quick, easy way to go from nothing to an up-and-running development environment. System images with different Visual Studio configurations are available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/compute?filters=virtual-machine-images%3Bmicrosoft%3Bwindows&page=1&subcategories=application-infrastructure).
-
-New to Azure? [Create a free Azure account](https://azure.microsoft.com/free).
-
-> [!NOTE]
-> Not all subscriptions are eligible to deploy Windows 10 images. For more information see [Use Windows client in Azure for dev/test scenarios](./client-images.md)
-
-## What configurations and versions are available?
-Images for the most recent major versions, Visual Studio 2019, Visual Studio 2017 and Visual Studio 2015, can be found in the Azure Marketplace. For each released major version, you see the originally "released to web" (RTW) version and the latest updated versions. Each of these versions offers the Visual Studio Enterprise and the Visual Studio Community editions. These images are updated at least every month to include the latest Visual Studio and Windows updates. While the names of the images remain the same, each image's description includes the installed product version and the image's "as of" date.
-
-| Release version | Editions | Product version |
-|:--:|::|:--:|
-| [Visual Studio 2019: Latest (Version 16.8)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudio2019latest?tab=Overview) | Enterprise, Community | Version 16.8.0 |
-| Visual Studio 2019: RTW | Enterprise | Version 16.0.20 |
-| Visual Studio 2017: Latest (Version 15.9) | Enterprise, Community | Version 15.9.29 |
-| Visual Studio 2017: RTW | Enterprise, Community | Version 15.0.28 |
-| Visual Studio 2015: Latest (Update 3) | Enterprise, Community | Version 14.0.25431.01 |
-
-> [!NOTE]
-> In accordance with Microsoft servicing policy, the originally released (RTW) version of Visual Studio 2015 has expired for servicing. Visual Studio 2015 Update 3 is the only remaining version offered for the Visual Studio 2015 product line.
-
-For more information, see the [Visual Studio Servicing Policy](https://www.visualstudio.com/productinfo/vs-servicing-vs).
-
-## What features are installed?
-Each image contains the recommended feature set for that Visual Studio edition. Generally, the installation includes:
-
-* All available workloads, including each workload's recommended optional components. More details on the workloads, components, and SDKs included Visual Studio could be found in the [Visual Studio documentation](/visualstudio/install/workload-and-component-ids)
-* .NET 4.6.2 and .NET 4.7 SDKs, Targeting Packs, and Developer Tools
-* Visual F#
-* GitHub Extension for Visual Studio
-* LINQ to SQL Tools
-
-The command line used to install Visual Studio when building the images is as follows:
-
-```
- vs_enterprise.exe --allWorkloads --includeRecommended --passive ^
- add Microsoft.Net.Component.4.7.SDK ^
- add Microsoft.Net.Component.4.7.TargetingPack ^
- add Microsoft.Net.Component.4.6.2.SDK ^
- add Microsoft.Net.Component.4.6.2.TargetingPack ^
- add Microsoft.Net.ComponentGroup.4.7.DeveloperTools ^
- add Microsoft.VisualStudio.Component.FSharp ^
- add Component.GitHub.VisualStudio ^
- add Microsoft.VisualStudio.Component.LinqToSql
-```
-
-If the images don't include a Visual Studio feature that you require, provide feedback through the feedback tool in the upper-right corner of the page.
-
-## What size VM should I choose?
-Azure offers a full range of virtual machine sizes. Because Visual Studio is a powerful, multi-threaded application, you want a VM size that includes at least two processors and 7 GB of memory. We recommend the following VM sizes for the Visual Studio images:
-
- * Standard_D2_v3
- * Standard_D2s_v3
- * Standard_D4_v3
- * Standard_D4s_v3
- * Standard_D2_v2
- * Standard_D2S_v2
- * Standard_D3_v2
-
-For more information on the latest machine sizes, see [Sizes for Windows virtual machines in Azure](../sizes.md).
-
-With Azure, you can rebalance your initial choice by resizing the VM. You can either provision a new VM with a more appropriate size, or resize your existing VM to different underlying hardware. For more information, see [Resize a Windows VM](../resize-vm.md).
-
-## After the VM is running, what's next?
-Visual Studio follows the "bring your own license" model in Azure. As with an installation on proprietary hardware, one of the first steps is licensing your Visual Studio installation. To unlock Visual Studio, either:
-- Sign in with a Microsoft account that's associated with a Visual Studio subscription -- Unlock Visual Studio with the product key that came with your initial purchase-
-For more information, see [Sign in to Visual Studio](/visualstudio/ide/signing-in-to-visual-studio) and [How to unlock Visual Studio](/visualstudio/ide/how-to-unlock-visual-studio).
-
-## How do I save the development VM for future or team use?
-
-The spectrum of development environments is huge, and there's real cost associated with building out the more complex environments. Regardless of your environment's configuration, you can save, or capture, your configured VM as a "base image" for future use or for other members of your team. Then, when booting a new VM, you provision it from the base image rather than the Azure Marketplace image.
-
-A quick summary: Use the System Preparation tool (Sysprep) and shut down the running VM, and then capture *(Figure 1)* the VM as an image through the UI in the Azure portal. Azure saves the `.vhd` file that contains the image in the storage account of your choosing. The new image then shows up as an Image resource in your subscription's list of resources.
-
-<img src="media/using-visual-studio-vm/capture-vm.png" alt="Capture an image through the Azure portal UI"><center>*(Figure 1) Capture an image through the Azure portal UI.*</center>
-
-For more information, see [Create a managed image of a generalized VM in Azure](./capture-image-resource.md).
-
-> [!IMPORTANT]
-> Don't forget to use Sysprep to prepare the VM. If you miss that step, Azure can't provision a VM from the image.
-
-> [!NOTE]
-> You still incur some cost for storage of the images, but that incremental cost can be insignificant compared to the overhead costs to rebuild the VM from scratch for each team member who needs one. For instance, it costs a few dollars to create and store a 127-GB image for a month that's reusable by your entire team. However, these costs are insignificant compared to hours each employee invests to build out and validate a properly configured dev box for their individual use.
-
-Additionally, your development tasks or technologies might need more scale, like varieties of development configurations and multiple machine configurations. You can use Azure DevTest Labs to create _recipes_ that automate construction of your "golden image." You can also use DevTest Labs to manage policies for your team's running VMs. [Using Azure DevTest Labs for developers](../../devtest-labs/devtest-lab-developer-lab.md) is the best source for more information on DevTest Labs.
-
-## Next steps
-Now that you know about the preconfigured Visual Studio images, the next step is to create a new VM:
-
-* [Create a VM through the Azure portal](quick-create-portal.md)
-* [Windows Virtual Machines overview](overview.md)
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 08/29/2022 Last updated : 09/22/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs cluster start --all </code></pre>
- If building a cluster on **RHEL 8.X**, use the following commands:
+ If building a cluster on **RHEL 8.x**, use the following commands:
<pre><code>sudo pcs host auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster
sudo pcs cluster setup <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> totem token=30000
sudo pcs cluster start --all
The following items are prefixed with either **[A]** - applicable to all nodes,
The fencing device uses either a managed identity for Azure resource or service principal to authorize against Microsoft Azure. ### Using Managed Identity
-To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time.
+To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User-assigned managed identities should not be used with Pacemaker at this time. A fence device based on managed identity is supported on RHEL 7.9 and RHEL 8.x.
### Using Service Principal Follow these steps to create a service principal, if not using managed identity.
Follow these steps to create a service principal, if not using managed identity.
### **[1]** Create a custom role for the fence agent
-Neither managed identity nor service principal have permissions to access your Azure resources by default. You need to give the managed identity or service principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md) or [Azure CLI](../../../role-based-access-control/custom-roles-cli.md)
+Neither managed identity nor service principal has permissions to access your Azure resources by default. You need to give the managed identity or service principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md) or [Azure CLI](../../../role-based-access-control/custom-roles-cli.md)
Use the following content for the input file. You need to adapt the content to your subscriptions; that is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with the IDs of your subscriptions. If you only have one subscription, remove the second entry in AssignableScopes.
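A small sketch of adapting that input file programmatically. The role name and permission strings shown here are assumptions standing in for the article's actual input-file content, so take the real values from the guide itself:

```python
import json

def build_role_definition(subscription_ids):
    """Fill the fence-agent custom-role input file with subscription IDs.

    Role name and actions below are illustrative placeholders; only the
    AssignableScopes templating is the point of this sketch.
    """
    return {
        "Name": "Linux Fence Agent Role",
        "description": "Allows to power-off and start virtual machines",
        "assignableScopes": [f"/subscriptions/{s}" for s in subscription_ids],
        "actions": [
            "Microsoft.Compute/*/read",
            "Microsoft.Compute/virtualMachines/powerOff/action",
            "Microsoft.Compute/virtualMachines/start/action",
        ],
        "notActions": [],
    }

# One entry per subscription; drop the second ID if you only have one.
role = build_role_definition(["xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"])
print(json.dumps(role, indent=2))
```
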
op monitor interval=3600
#### [Service Principal](#tab/spn)
-For RHEL **7.X**, use the following command to configure the fence device:
+For RHEL **7.x**, use the following command to configure the fence device:
<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm login="<b>login ID</b>" passwd="<b>password</b>" \
resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \
<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_
op monitor interval=3600 </code></pre>
-For RHEL **8.X**, use the following command to configure the fence device:
+For RHEL **8.x**, use the following command to configure the fence device:
<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm username="<b>login ID</b>" password="<b>password</b>" \
resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \
<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
op monitor interval=3600
+If you are using a fencing device based on a service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) to learn how to convert to a managed identity configuration.
+ > [!TIP] > Only configure the `pcmk_delay_max` attribute in two node Pacemaker clusters. For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).
op monitor interval=3600
> [!TIP] > This section is only applicable, if it is desired to configure special fencing device `fence_kdump`.
-If there is a need to collect diagnostic information within the VM , it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` is not a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs.
+If there is a need to collect diagnostic information within the VM, it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` is not a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs.
> [!IMPORTANT] > Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations and respectively delays in the application resources failover.
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 08/30/2022 Last updated : 09/22/2022
This section applies only if you want to use a fencing device with an Azure fenc
This section applies only if you're using a fencing device that's based on an Azure fence agent. The fencing device uses either a managed identity or a service principal to authorize against Microsoft Azure. #### Using managed identity
-To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time.
+To create a managed identity (MSI), [create a system-assigned](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#system-assigned-managed-identity) managed identity for each VM in the cluster. If a system-assigned managed identity already exists, it will be used. User-assigned managed identities should not be used with Pacemaker at this time. A fencing device based on managed identity is supported on SLES 15 SP1 and later.
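A system-assigned managed identity can also be enabled from the command line; a minimal sketch (the resource group and VM names are placeholder assumptions):

```shell
# Placeholder names; run once per cluster VM.
az vm identity assign --resource-group myResourceGroup --name prod-cl1-0
az vm identity assign --resource-group myResourceGroup --name prod-cl1-1
```

If a system-assigned identity already exists on the VM, the command leaves it in place.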
#### Using service principal
Make sure to assign the custom role to the service principal at all VM (cluster
>[!IMPORTANT] > If using managed identity, the installed version of the *fence-agents* package must be fence-agents 4.5.2+git.1592573838.1eee0863 or later. Earlier versions will not work correctly with a managed identity configuration.
- > Currently only SLES 15 SP1 and older are supported for managed identity configuration.
+ > Currently only SLES 15 SP1 and newer are supported for managed identity configuration.
1. **[A]** Install the Azure Python SDK and Azure Identity Python module.
Make sure to assign the custom role to the service principal at all VM (cluster
sudo crm configure property stonith-timeout=900 </code></pre>
+ If you are using a fencing device based on a service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) to learn how to convert to a managed identity configuration.
+ > [!IMPORTANT] > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running.
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Previously updated : 04/20/2021- Last updated : 09/22/2022+ # Quickstart: Create a mesh network topology with Azure Virtual Network Manager using the Azure portal
Deploy a network manager instance with the defined scope and access you need.
1. Select **Review + create** and then select **Create** once validation has passed. ## Create virtual networks
-Create five virtual networks using the portal. This example creates virtual networks named VNetA, VNetB, VNetC and VNetD in the West US location. Each virtual network will have a tag of networkType used for dynamic membership. If you already have virtual networks you want create a mesh network with, you'll need to add tags listed below to your virtual networks and then you can skip to the next section.
+Create four virtual networks using the portal. This example creates virtual networks named VNetA, VNetB, VNetC, and VNetD in the West US location. Each virtual network will have a tag of networkType used for dynamic membership. If you have existing virtual networks for your mesh configuration, add the tags listed below to your virtual networks and skip to the next section.
1. From the **Home** screen, select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
Virtual Network Manager applies configurations to groups of VNets by placing the
1. You'll see the new network group added to the *Network Groups* page. :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
-1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *Static membership* or *Dynamic membership* with Azure Policy.
+1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *[Manually add membership](#manually-add-membership)* or *[Create policy to dynamically add members](#create-azure-policy-for-dynamic-membership)* with Azure Policy.
## Define membership for a mesh configuration
-Azure Virtual Network manager allows you two methods for adding membership to a network group. Static membership involves manually adding virtual networks, and dynamic membership involves using Azure Policy to dynamically add virtual networks based on conditions. Choose the option below for your mesh membership configuration:
-### Static membership option
-Using static membership, you'll manually add three VNets for your Mesh configuration to your Network Group using the steps below:
+Azure Virtual Network Manager offers two methods for adding members to a network group. You can manually add virtual networks, or use Azure Policy to dynamically add virtual networks based on conditions. Choose the option below for your mesh membership configuration:
+### Manually add membership
+In this task, you'll manually add three virtual networks for your mesh configuration to your network group using the steps below:
-1. From the list of network groups, select **myNetworkGroup** and select **Add** under *Static membership* on the *myNetworkGroup* page.
+1. From the list of network groups, select **myNetworkGroup** and select **Add virtual networks** under *Manually add members* on the *myNetworkGroup* page.
 :::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a virtual network button.":::
-1. On the *Add static members* page, select all three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the 3 virtual networks to the network group.
+1. On the *Manually add members* page, select the three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the three virtual networks to the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page."::: 1. On the **Network Group** page under *Settings*, select **Group Members** to view the membership of the group you manually selected. :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
-### Dynamic membership with Azure Policy
-Using [Azure Policy](concept-azure-policy-integration.md), you'll define a condition to dynamically add three VNets for your Mesh configuration to your Network Group using the steps below.
+### Create Azure Policy for dynamic membership
+Using [Azure Policy](concept-azure-policy-integration.md), you'll define a condition to dynamically add three virtual networks tagged as **Prod** to your network group using the steps below.
-1. From the list of network groups, select **myNetworkGroup**.
-
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-page.png" alt-text="Screenshot of the network groups page.":::
-
-1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.
+1. From the list of network groups, select **myNetworkGroup** and select **Create Azure Policy** under *Create policy to dynamically add members*.
:::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
Using [Azure Policy](concept-azure-policy-integration.md), you'll define a condi
| Criteria | | | Parameter | Select **Tags** from the drop-down.| | Operator | Select **Exists** from the drop-down.|
- | Condition | Enter **NetworkType** to dynamically add the three previously created virtual networks into this network group. |
+ | Condition | Enter **Prod** to dynamically add the three previously created virtual networks into this network group. |
-1. Select **Advanced (JSON) editor** to modify the JSON code.
-1. On line 5, replace **exists** with **equals** and set the value to **"Prod"** from **true**.
-1.
- :::image type="content" source="./media/create-virtual-network-manager-portal/json-advanced-editor.png" alt-text="Screenshot of Advanced (JSON) editor.":::
-
-1. Select **Save** to deploy the group membership.
+1. Select **Save** to deploy the group membership. It can take up to one minute for the policy to take effect and be added to your network group.
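As a sketch, the conditional statement defined above (Tags / Exists / **Prod**) corresponds to a policy rule along these lines; the exact JSON the portal generates may differ, and the network group resource ID is left as a placeholder:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
      { "field": "tags['Prod']", "exists": "true" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": { "networkGroupId": "<network-group-resource-id>" }
  }
}
```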
1. On the *Network Group* page under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy. :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png"::: ## Create a configuration Now that the network group is created and has the correct VNets, create a mesh network topology configuration. Replace <subscription_id> with your subscription and follow the steps below:
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
This section will help you create a network group containing the virtual network
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Add** to create a new network group.
+1. Select **Network Groups** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Basics* tab, enter a **Name** and a **Description** for the network group.
+1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/basics.png" alt-text="Screenshot of basics tab for add a network group.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. To add virtual network manually, select the **Static group members** tab. For more information, see [static members](concept-network-groups.md#static-membership).
+1. You'll see the new network group added to the *Network Groups* page.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/static-group.png" alt-text="Screenshot of static group members tab.":::
+1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *[Manually add membership](concept-network-groups.md#static-membership)* or *[Create policy to dynamically add members](concept-network-groups.md#dynamic-membership)*.
+## Define network group members
+Azure Virtual Network Manager offers two methods for adding members to a network group. You can manually add virtual networks, or use Azure Policy to dynamically add virtual networks based on conditions. Choose the option below for your hub-and-spoke membership configuration:
-1. To add virtual networks dynamically, select the **Conditional statements** tab. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+### Manually adding members
+To manually add the desired virtual networks for your hub-and-spoke configuration to your network group, follow the steps below:
- :::image type="content" source="./media/how-to-create-hub-and-spoke/conditional-statements.png" alt-text="Screenshot of conditional statements tab.":::
+1. From the list of network groups, select your network group and select **Add virtual networks** under *Manually add members* on the network group page.
-1. Once you're satisfied with the virtual networks selected for the network group, select **Review + create**. Then select **Create** once validation has passed.
-
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a virtual network.":::
+
+1. On the *Manually add members* page, select all the virtual networks and select **Add**.
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
+
+1. To review the network group membership manually added, select **Group Members** on the *Network Group* page under **Settings**.
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+
+### Dynamic membership with Azure Policy
+To dynamically add members using [Azure Policy](concept-azure-policy-integration.md), follow the steps below:
+
+1. From the list of network groups, select your network group and select **Create Azure Policy** under *Create policy to dynamically add members*.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
+
+1. On the **Create Azure Policy** page, create a conditional statement to populate your network group. You can choose different conditional parameters including *Name* and *Tags*.
+
+ :::image type="content" source="media/how-to-create-hub-and-spoke/create-azure-policy.png" alt-text="Screenshot of Create Azure Policy page with conditional parameters displayed.":::
+
+1. To review the network group membership based on the conditions defined in Azure Policy, select **Group Members** on the *Network Group* page under **Settings**.
## Create a hub and spoke connectivity configuration This section will guide you through how to create a hub-and-spoke configuration with the network group you created in the previous section.
This section will guide you through how to create a hub-and-spoke configuration
## Deploy the hub and spoke configuration
-To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual network are created.
+To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
virtual-network-manager How To Create Mesh Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network.md
In this article, you'll learn how to create a mesh network topology using Azure
* Created a [Azure Virtual Network Manager instance](create-virtual-network-manager-portal.md#create-virtual-network-manager). * Identify virtual networks you want to use in the mesh configuration or create new [virtual networks](../virtual-network/quick-create-portal.md).
-## Create a network group
+## <a name="group"></a> Create a network group
This section will help you create a network group containing the virtual networks you'll be using for the hub-and-spoke network topology. 1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
+1. Select **Network Groups** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Create a network group button.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Create a network group* page, enter a **Name** and a **Description** for the network group. Then select **Add** to create the network group.
+1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page."::: 1. You'll see the new network group added to the *Network Groups* page. :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
-1. From the list of network groups, select **myNetworkGroup** to manage the network group memberships.
+1. Once your network group is created, you'll add virtual networks as members. Choose one of the options: *[Manually add membership](concept-network-groups.md#static-membership)* or *[Create policy to dynamically add members](concept-network-groups.md#dynamic-membership)*.
- :::image type="content" source="media/how-to-create-mesh-network/manage-group-membership.png" alt-text="Screenshot of manage group memberships page.":::
+## Define network group members
+Azure Virtual Network Manager offers two methods for adding members to a network group. You can manually add virtual networks, or use Azure Policy to dynamically add virtual networks based on conditions. Choose the option below for your mesh membership configuration:
-1. To add a virtual network manually, select the **Add** button under *Static membership*, and select the virtual networks to add. Then select **Add** to save the static membership. For more information, see [static members](concept-network-groups.md#static-membership).
+### Manually adding members
+To manually add the desired virtual networks for your mesh configuration to your network group, follow the steps below:
+
+1. From the list of network groups, select your network group and select **Add virtual networks** under *Manually add members* on the network group page.
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a virtual network.":::
+
+1. On the *Manually add members* page, select all the virtual networks and select **Add**.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
-1. To add virtual networks dynamically, select the **Define** button under *Define dynamic membership*, and then enter the conditional statements for membership. Select **Save** to save the dynamic membership conditions. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+1. To review the network group membership manually added, select **Group Members** on the *Network Group* page under **Settings**.
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+
+### Dynamic membership with Azure Policy
+To dynamically add members using [Azure Policy](concept-azure-policy-integration.md), follow the steps below:
+
+1. From the list of network groups, select your network group and select **Create Azure Policy** under *Create policy to dynamically add members*.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
- :::image type="content" source="media/how-to-create-mesh-network/define-dynamic-members.png" alt-text="Screenshot of Define dynamic membership page.":::
+1. On the **Create Azure Policy** page, create a conditional statement to populate your network group. You can choose different conditional parameters including *Name* and *Tags*.
+
+ :::image type="content" source="media/how-to-create-hub-and-spoke/create-azure-policy.png" alt-text="Screenshot of Create Azure Policy page with conditional parameters displayed.":::
+1. To review the network group membership based on the conditions defined in Azure Policy, select **Group Members** on the *Network Group* page under **Settings**.
## Create a mesh connectivity configuration This section will guide you through how to create a mesh configuration with the network group you created in the previous section.
This section will guide you through how to create a mesh configuration with the
:::image type="content" source="media/how-to-create-mesh-network/add-connectivity-config.png" alt-text="Screenshot of Add a connectivity configuration page and options.":::
-1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then click **Select** to save.
+1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Select** to save.
1. Select **Review + create** and then **Create** to create the mesh connectivity configuration.
virtual-network Network Security Group How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-group-how-it-works.md
For outbound traffic, Azure processes the rules in a network security group asso
- **VM1**: The security rules in *NSG2* are processed. Unless you create a security rule that denies port 80 outbound to the internet, the traffic is allowed by the [AllowInternetOutbound](./network-security-groups-overview.md#allowinternetoutbound) default security rule in both *NSG1* and *NSG2*. If *NSG2* has a security rule that denies port 80, the traffic is denied, and never evaluated by *NSG1*. To deny port 80 from the virtual machine, either, or both of the network security groups must have a rule that denies port 80 to the internet. - **VM2**: All traffic is sent through the network interface to the subnet, since the network interface attached to *VM2* doesn't have a network security group associated to it. The rules in *NSG1* are processed.-- **VM3**: If *NSG2* has a security rule that denies port 80, the traffic is denied. If *NSG2* has a security rule that allows port 80, then port 80 is allowed outbound to the internet, since a network security group isn't associated to *Subnet2*.
+- **VM3**: If *NSG2* has a security rule that denies port 80, the traffic is denied. If not, the traffic is allowed by the [AllowInternetOutbound](./network-security-groups-overview.md#allowinternetoutbound) default security rule in *NSG2*, since a network security group isn't associated to *Subnet2*.
- **VM4**: All network traffic is allowed from *VM4,* because a network security group isn't associated to the network interface attached to the virtual machine, or to *Subnet3*.
You can easily view the aggregate rules applied to a network interface by viewin
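The aggregate (effective) rules for a network interface can also be retrieved from the command line; a minimal sketch with placeholder names:

```shell
# Placeholder resource group and NIC names; the attached VM must be running.
az network nic list-effective-nsg --resource-group myResourceGroup --name myVMNic
```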
* If you've never created a network security group, you can complete a quick [tutorial](tutorial-filter-network-traffic.md) to get some experience creating one. * If you're familiar with network security groups and need to manage them, see [Manage a network security group](manage-network-security-group.md). * If you're having communication problems and need to troubleshoot network security groups, see [Diagnose a virtual machine network traffic filter problem](diagnose-network-traffic-filter-problem.md).
-* Learn how to enable [network security group flow logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to analyze network traffic to and from resources that have an associated network security group.
+* Learn how to enable [network security group flow logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) to analyze network traffic to and from resources that have an associated network security group.
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
Azure Virtual WAN offers two types of connection profiles for User VPN clients:
## <a name="global"></a>Global profiles
-The global profile associated with a User VPN configuration points to a Global Traffic Manager. The Global Traffic Manager includes all active User VPN hubs that are using that User VPN configuration. However, you can choose to exclude hubs if necessary. A user connected to the global profile is directed to the hub that's closest to the user's geographic location. This is especially useful if you have users that travel between multiple locations frequently.
+The global profile associated with a User VPN configuration points to a Global Traffic Manager. The Global Traffic Manager includes all active User VPN hubs that are using that User VPN configuration. However, you can choose to exclude hubs from the Global Traffic Manager if necessary. A user connected to the global profile is directed to the hub that's closest to the user's geographic location. This is especially useful if you have users that travel between multiple locations frequently.
For example, a User VPN Configuration is associated with two different hubs for the same virtual WAN, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they'll connect to the closest Virtual WAN hub based on their location.
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
While Private Traffic includes both branch and Virtual Network address prefixes
* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or **Network Virtual Appliance** specified as part of the Routing Policy.
- In other words, when Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN will propagate a **default** route to all spokes and Gateways. In the case of a **Network Virtual Appliance** this routes will be learned and propagated through BGP via the vWAN Route Service and learned by the BGP speakers inside the **Network Virtual Appliance**
+ In other words, when an Internet Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN will advertise a **default** route to all spokes, Gateways, and Network Virtual Appliances (deployed in the hub or spoke). This includes the **Network Virtual Appliance** that is the next hop for the Internet Traffic Routing Policy.
* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource or Network Virtual Appliance resource that was specified in the Private Traffic Routing Policy.
While Private Traffic includes both branch and Virtual Network address prefixes
4. If you want to configure a Private Traffic Routing Policy and have branches or virtual networks using non-IANA RFC1918 Prefixes, select **Additional Prefixes** and specify the non-IANA RFC1918 prefix ranges in the text box that comes up. Select **Done**. > [!NOTE]
- > At this point in time, Routing Policies for **Network Virtual Appliances** do not allow you to edit the RFC1918 prefixes. Azure vWAN will be propagating the RFC 1918 space to all spokes and Gateways across, as well as to BGP speakers inside the ****Network Virtual Appliances**. Be mindful of the implications about the propagation of these prefixes into your environment and create the appropriate policies inside your **Network Virtual Appliance** to control routing behavior. Should it be desired to propagate more specific RFC 1918 spaces (i.e Spoke address space), those prefixes need to be added as well on the box below explicit.
+ > At this point in time, Routing Policies for **Network Virtual Appliances** do not allow you to edit the RFC1918 prefixes. Virtual WAN will propagate the RFC1918 aggregate prefixes to all spoke virtual networks and Gateways, as well as to the **Network Virtual Appliances**. Be mindful of the implications of propagating these prefixes into your environment, and create the appropriate policies inside your **Network Virtual Appliance** to control routing behavior.
:::image type="content" source="./media/routing-policies/private-prefixes-nva.png"alt-text="Screenshot showing how to configure additional private prefixes for NVA routing policies."lightbox="./media/routing-policies/private-prefixes-nva.png":::
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 09/14/2022 Last updated : 09/21/2022
web-application-firewall Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-template.md
Title: 'Quickstart: Create an Azure WAF v2 on Application Gateway - Azure Resource Manager template'
+ Title: 'Quickstart: Create an Azure WAF v2 by using an Azure Resource Manager template'
-description: Learn how to use an Azure Resource Manager quickstart template (ARM template) to create a Web Application Firewall v2 on Azure Application Gateway.
+description: Use a quickstart Azure Resource Manager template (ARM template) to create a Web Application Firewall v2 on Azure Application Gateway.
Previously updated : 09/16/2020 Last updated : 09/20/2022 -+
+# Customer intent: As a cloud administrator, I want to quickly deploy a Web Application Firewall v2 on Azure Application Gateway for production environments or to evaluate WAF v2 functionality.
-# Quickstart: Create an Azure WAF v2 on Application Gateway using an ARM template
+# Quickstart: Create an Azure Web Application Firewall v2 by using an ARM template
-In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Web Application Firewall v2 on Application Gateway.
+In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Web Application Firewall (WAF) v2 on Azure Application Gateway.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)] [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, you can select the **Deploy to Azure** button to open the template in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-wafv2%2Fazuredeploy.json)
+[![Deploy to Azure button.](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-wafv2%2Fazuredeploy.json)
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Review the template
-This template creates a simple Web Application Firewall v2 on Azure Application Gateway. This includes a public IP frontend IP address, HTTP settings, a rule with a basic listener on port 80, and a backend pool. A WAF policy with a custom rule is created to block traffic to the backend pool based on an IP address match type.
+This template creates a simple Web Application Firewall v2 on Azure Application Gateway. The template creates a public IP frontend IP address, HTTP settings, a rule with a basic listener on port 80, and a backend pool. A WAF policy with a custom rule blocks traffic to the backend pool based on an IP address match type.
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/).
+The template defines the following Azure resources:
+- [Microsoft.Network/applicationgateways](/azure/templates/microsoft.network/applicationgateways)
+- [Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies](/azure/templates/microsoft.network/ApplicationGatewayWebApplicationFirewallPolicies)
+- [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses), one for the application gateway and two for the virtual machines (VMs)
+- [Microsoft.Network/networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups)
+- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
+- [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines), two VMs
+- [Microsoft.Network/networkInterfaces](/azure/templates/microsoft.network/networkinterfaces), one for each VM
+- [Microsoft.Compute/virtualMachine/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions) to configure IIS and the web pages
-Multiple Azure resources are defined in the template:
+This template is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/).
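To illustrate how the application gateway's WAF policy and its custom rule fit together in an ARM template, here's a minimal sketch of the `ApplicationGatewayWebApplicationFirewallPolicies` resource. The resource name, API version, rule name, priority, and IP range are illustrative placeholders, not values taken from the quickstart template itself:

```json
{
  "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
  "apiVersion": "2021-08-01",
  "name": "WafPolicy01",
  "location": "[resourceGroup().location]",
  "properties": {
    "policySettings": {
      "state": "Enabled",
      "mode": "Prevention"
    },
    "customRules": [
      {
        "name": "BlockByClientIP",
        "priority": 100,
        "ruleType": "MatchRule",
        "action": "Block",
        "matchConditions": [
          {
            "matchVariables": [ { "variableName": "RemoteAddr" } ],
            "operator": "IPMatch",
            "matchValues": [ "203.0.113.0/24" ]
          }
        ]
      }
    ],
    "managedRules": {
      "managedRuleSets": [
        { "ruleSetType": "OWASP", "ruleSetVersion": "3.2" }
      ]
    }
  }
}
```

Custom rules are evaluated before the managed rule set, in `priority` order; the application gateway resource references this policy through its `firewallPolicy` property.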
-- [**Microsoft.Network/applicationgateways**](/azure/templates/microsoft.network/applicationgateways)
-- [**Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies**](/azure/templates/microsoft.network/ApplicationGatewayWebApplicationFirewallPolicies)
-- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses) : one for the application gateway, and two for the virtual machines.
-- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
-- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines) : two virtual machines
-- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces) : two for the virtual machines
-- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions) : to configure IIS and the web pages

## Deploy the template

Deploy the ARM template to Azure:
-1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an application gateway, the network infrastructure, and two virtual machines in the backend pool running IIS.
+1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates an application gateway, the network infrastructure, and two VMs in the backend pool running IIS.
- [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-wafv2%2Fazuredeploy.json)
+ [![Deploy to Azure button.](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-docs-wafv2%2Fazuredeploy.json)
-2. Select or create your resource group.
-3. Select **I agree to the terms and conditions stated above** and then select **Purchase**. The deployment can take 10 minutes or longer to complete.
+1. Select or create a resource group.
+1. Select **Review + create**, and when validation passes, select **Create**. The deployment can take 10 minutes or longer to complete.
## Validate the deployment
-Although IIS isn't required to create the application gateway, it's installed on the backend servers to verify if Azure successfully created a WAF v2 on the application gateway.
+Although IIS isn't required, the template installs IIS on the backend servers so you can verify that Azure successfully created a WAF v2 on the application gateway.
Use IIS to test the application gateway:
-1. Find the public IP address for the application gateway on its **Overview** page.
-
-   ![Record application gateway public IP address](../../application-gateway/media/application-gateway-create-gateway-portal/application-gateway-record-ag-address.png)
-
-   Or, you can select **All resources**, enter *myAGPublicIPAddress* in the search box, and then select it in the search results. Azure displays the public IP address on the **Overview** page.
-2. Copy the public IP address, and then paste it into the address bar of your browser to browse that IP address.
-3. Check the response. A **403 Forbidden** response verifies that the WAF was successfully created and is blocking connections to the backend pool.
-4. Change the custom rule to **Allow traffic**.
- Run the following Azure PowerShell script, replacing your resource group name:
+1. Copy the public IP address for the application gateway from its **Overview** page.
+
+ ![Screenshot that shows the application gateway public IP address.](../../application-gateway/media/application-gateway-create-gateway-portal/application-gateway-record-ag-address.png)
+
+ You can also search for *application gateways* in the Azure search box. The list of application gateways shows the public IP addresses in the **Public IP address** column.
+
+1. Paste the IP address into the address bar of your browser to browse that address.
+1. Check the response. A **403 Forbidden** response verifies that the WAF is successfully blocking connections to the backend pool.
+1. To change the custom rule to allow traffic, run the following Azure PowerShell script, replacing your resource group name:
+   ```azurepowershell
+   $rg = "<your resource group name>"
+   $AppGW = Get-AzApplicationGateway -Name myAppGateway -ResourceGroupName $rg

    Set-AzApplicationGateway -ApplicationGateway $AppGW
    ```
- Refresh your browser multiple times and you should see connections to both myVM1 and myVM2.
+1. Refresh your browser several times. You should see connections to both myVM1 and myVM2.
## Clean up resources
-When you no longer need the resources that you created with the application gateway, delete the resource group. This removes the application gateway and all the related resources.
+When you no longer need the resources you created in this quickstart, delete the resource group to remove the application gateway and all its related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:

```azurepowershell
Remove-AzResourceGroup -Name "<your resource group name>"
```
## Next steps

> [!div class="nextstepaction"]
-> [Tutorial: Create an application gateway with a Web Application Firewall using the Azure portal](application-gateway-web-application-firewall-portal.md)
+> [Tutorial: Create an application gateway with a Web Application Firewall by using the Azure portal](application-gateway-web-application-firewall-portal.md)