Updates from: 03/02/2023 02:12:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Oauth2 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-technical-profile.md
The following table lists the token endpoint metadata.
| `HttpBinding` | No | The expected HTTP binding to the token endpoint. Possible values: `GET` or `POST`. |
| `AccessTokenResponseFormat` | No | The format of the access token endpoint call. For example, Facebook requires an HTTP GET method, but the access token response is in JSON format. Possible values: `Default`, `Json`, and `JsonP`. |
| `ExtraParamsInAccessTokenEndpointResponse` | No | Contains the extra parameters that can be returned in the response from **AccessTokenEndpoint** by some identity providers. For example, the response from **AccessTokenEndpoint** contains an extra parameter such as `openid`, which is a mandatory parameter besides the access_token in a **ClaimsEndpoint** request query string. Multiple parameter names should be escaped and separated by the comma ',' delimiter. |
-|`token_endpoint_auth_method`| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview), `private_key_jwt` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
+|`token_endpoint_auth_method`| No| Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic`, `private_key_jwt`. For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). |
|`token_signing_algorithm`| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. Possible values: `RS256` (default) or `RS512`.|

### Configure HTTP binding method
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)"
description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)."
Previously updated : 10/31/2022
Last updated : 03/01/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md)
+## February 2023
+
+### Updated articles
+
+- [Azure Active Directory B2C code samples](integrate-with-app-code-samples.md)
+- [JSON claims transformations](json-transformations.md)
+- [Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md)
+- [Page layout versions](page-layout.md)
## January 2023

### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
- [What is Azure Active Directory B2C?](overview.md)
- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
-## November 2022
-
-### New articles
-
-- [Configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access](partner-akamai-secure-hybrid-access.md)
-
-### Updated articles
-
-- [Manage your Azure Active Directory B2C tenant](tenant-management-manage-administrator.md)
-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
-- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
-- [Roles and resource access control](roles-resource-access-control.md)
-- [Define an Azure Active Directory technical profile in an Azure Active Directory B2C custom policy](active-directory-technical-profile.md)
-
-## October 2022
-
-### New articles
-
-- [Edit Azure Active Directory B2C Identity Experience Framework (IEF) XML with Grit Visual IEF Editor](partner-grit-editor.md)
-- [Register apps in Azure Active Directory B2C](register-apps.md)
-
-### Updated articles
-
-- [Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md)
-- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
-- [Azure Active Directory B2C documentation landing page](index.yml)
-- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
-- [JSON claims transformations](json-transformations.md)
-
-## September 2022
-
-### New articles
-
-- [Tutorial: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C](partner-grit-iam.md)
-
-## August 2022
-
-### New articles
-
-- [Configure Azure Active Directory B2C with Deduce to combat identity fraud and create a trusted user experience](partner-deduce.md)
-
-### Updated articles
-
-- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)
-- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
-- [JSON claims transformations](json-transformations.md)
-- [Extensions app in Azure AD B2C](extensions-app.md)
-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
-- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)
-- [Azure Active Directory B2C: What's new](whats-new-docs.md)
-- [Page layout versions](page-layout.md)
-
-## July 2022
-
-### New articles
-
-- [Configure authentication in a sample React single-page application by using Azure Active Directory B2C](configure-authentication-sample-react-spa-app.md)
-- [Configure authentication options in a React application by using Azure Active Directory B2C](enable-authentication-react-spa-app-options.md)
-- [Enable authentication in your own React Application by using Azure Active Directory B2C](enable-authentication-react-spa-app.md)
-
-### Updated articles
-
-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
-- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
-- [Page layout versions](page-layout.md)
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
-- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md)
-- [Localization string IDs](localization-string-ids.md)
-
-## June 2022
-
-### New articles
-
-- [Configure authentication in an Azure Static Web App by using Azure AD B2C](configure-authentication-in-azure-static-app.md)
-- [Configure authentication in an Azure Web App configuration file by using Azure AD B2C](configure-authentication-in-azure-web-app-file-based.md)
-- [Configure authentication in an Azure Web App by using Azure AD B2C](configure-authentication-in-azure-web-app.md)
-- [Enable authentication options in an Azure Static Web App by using Azure AD B2C](enable-authentication-azure-static-app-options.md)
-- [Enable authentication in your own Python web application using Azure Active Directory B2C](enable-authentication-python-web-app.md)
-- [Set up OAuth 2.0 client credentials flow in Azure Active Directory B2C](client-credentials-grant-flow.md)
-- [Configure WhoIAM Rampart with Azure Active Directory B2C](partner-whoiam-rampart.md)
-
-### Updated articles
-
-- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md)
-- [Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C](implicit-flow-single-page-application.md)
-- [Set up OAuth 2.0 client credentials flow in Azure Active Directory B2C](client-credentials-grant-flow.md)
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)
-- [Configure TheAccessHub Admin Tool by using Azure Active Directory B2C](partner-n8identity.md)
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
-
-## May 2022
-
-### Updated articles
-
-- [Set redirect URLs to b2clogin.com for Azure Active Directory B2C](b2clogin.md)
-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
-- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
-- [UserJourneys](userjourneys.md)
-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
-
-## April 2022
-
-### New articles
-
-- [Tutorial: Configure Azure Web Application Firewall with Azure Active Directory B2C](partner-azure-web-application-firewall.md)
-- [Configure Asignio with Azure Active Directory B2C for multi-factor authentication](partner-asignio.md)
-- [Set up sign-up and sign-in with Mobile ID using Azure Active Directory B2C](identity-provider-mobile-id.md)
-- [Find help and open a support ticket for Azure Active Directory B2C](find-help-open-support-ticket.md)
-
-### Updated articles
-
-- [Configure authentication in a sample single-page application by using Azure AD B2C](configure-authentication-sample-spa-app.md)
-- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-- [Localization string IDs](localization-string-ids.md)
-- [Manage your Azure Active Directory B2C tenant](tenant-management-manage-administrator.md)
-- [Page layout versions](page-layout.md)
-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
-- [Azure Active Directory B2C: What's new](whats-new-docs.md)
-- [Application types that can be used in Active Directory B2C](application-types.md)
-- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
-- [Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C](quickstart-native-app-desktop.md)
-- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 10/26/2022 Last updated : 02/24/2023
The following table lists each setting that can be set to Microsoft managed and
| [Registration campaign](how-to-mfa-registration-campaign.md) | Disabled |
| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
+| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Disabled |
As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/).
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Combined registration supports the authentication methods and actions in the fol
| FIDO2 security keys*| Yes | No | Yes |

> [!NOTE]
+> <b>Alternate phone</b> can only be registered in *manage mode* on the [Security info](https://mysignins.microsoft.com/security-info) page and requires Voice calls to be enabled in the Authentication methods policy. <br />
> <b>Office phone</b> can only be registered in *Interrupt mode* if the user's *Business phone* property has been set. Office phone can be added by users in *Managed mode* from the [Security info](https://mysignins.microsoft.com/security-info) page without this requirement. <br />
-> <b>App passwords</b> are available only to users who have been enforced for Azure AD Multi-Factor Authentication. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br />
-> <b>FIDO2 security keys</b>, can only be added in *Managed mode only from the [Security info](https://mysignins.microsoft.com/security-info) page*
+> <b>App passwords</b> are available only to users who have been enforced for per-user MFA. App passwords are not available to users who are enabled for Azure AD Multi-Factor Authentication by a Conditional Access policy. <br />
+> <b>FIDO2 security keys</b> can only be added in *manage mode* on the [Security info](https://mysignins.microsoft.com/security-info) page.
Users can set one of the following options as the default multifactor authentication method.
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
+
+ Title: System-preferred multifactor authentication (MFA) - Azure Active Directory
+description: Learn how to use system-preferred multifactor authentication
+Last updated : 02/28/2023
+# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
+
+# System-preferred multifactor authentication - Authentication methods policy
+
+System-preferred multifactor authentication (MFA) prompts users to sign in by using the most secure method they registered. Administrators can enable system-preferred MFA to improve sign-in security and discourage less secure sign-in methods like SMS.
+
+For example, if a user registered both SMS and Microsoft Authenticator push notifications as methods for MFA, system-preferred MFA prompts the user to sign in by using the more secure push notification method. The user can still choose to sign in by using another method, but they're first prompted to try the most secure method they registered.
+
+System-preferred MFA is a Microsoft managed setting, which is a [tristate policy](#authentication-method-feature-configuration-properties). For preview, the **default** state is disabled. If you want to turn it on for all users or a group of users during preview, you need to explicitly change the Microsoft managed state to **enabled** by using Microsoft Graph API. Sometime after general availability, the Microsoft managed state for system-preferred MFA will change to **enabled**.
+
+After system-preferred MFA is enabled, the authentication system does all the work. Users don't need to set any authentication method as their default because the system always determines and presents the most secure method they registered.
+
+## Enable system-preferred MFA
+
+To enable system-preferred MFA in advance, you need to choose a single target group for the schema configuration, as shown in the [Request](#request) example.
+
+### Authentication method feature configuration properties
+
+By default, system-preferred MFA is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) and disabled during preview. After general availability, the Microsoft managed state default value will change to enable system-preferred MFA.
+
+| Property | Type | Description |
+|----------|------|-------------|
+| excludeTarget | featureTarget | A single entity that is excluded from this feature. <br>You can only exclude one group from system-preferred MFA, which can be a dynamic or nested group.|
+| includeTarget | featureTarget | A single entity that is included in this feature. <br>You can only include one group for system-preferred MFA, which can be a dynamic or nested group.|
+| State | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
+
+### Feature target properties
+
+System-preferred MFA can be enabled only for a single group, which can be a dynamic or nested group.
+
+| Property | Type | Description |
+|----------|------|-------------|
+| id | String | ID of the entity targeted. |
+| targetType | featureTargetType | The kind of entity targeted, such as group, role, or administrative unit. The possible values are: 'group', 'administrativeUnit', 'role', 'unknownFutureValue'. |
+
+Use the following API endpoint to enable **systemCredentialPreferences** and include or exclude groups:
+
+```
+https://graph.microsoft.com/beta/authenticationMethodsPolicy
+```
+
+>[!NOTE]
+>In Graph Explorer, you need to consent to the **Policy.ReadWrite.AuthenticationMethod** permission.
+
+### Request
+
+The following example excludes a sample target group and includes all users. For more information, see [Update authenticationMethodsPolicy](/graph/api/authenticationmethodspolicy-update?view=graph-rest-beta).
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy
+Content-Type: application/json
+
+{
+ "systemCredentialPreferences": {
+ "state": "enabled",
+ "excludeTargets": [
+ {
+ "id": "d1411007-6fcf-4b4c-8d70-1da1857ed33c",
+ "targetType": "group"
+ }
+ ],
+ "includeTargets": [
+ {
+ "id": "all_users",
+ "targetType": "group"
+ }
+ ]
+ }
+}
+```
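For illustration only, the same request body can also be generated from a script. The following is a minimal sketch, assuming you already hold an access token granted the **Policy.ReadWrite.AuthenticationMethod** permission; the group ID is the sample value from the request above, and the helper name is hypothetical:

```python
import json

# Microsoft Graph beta endpoint for the Authentication methods policy.
GRAPH_URL = "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy"

def build_system_preferred_payload(state, include_group_ids, exclude_group_ids):
    """Build the systemCredentialPreferences body shown in the request above."""
    def target(group_id):
        return {"id": group_id, "targetType": "group"}
    return {
        "systemCredentialPreferences": {
            "state": state,
            "excludeTargets": [target(g) for g in exclude_group_ids],
            "includeTargets": [target(g) for g in include_group_ids],
        }
    }

payload = build_system_preferred_payload(
    state="enabled",
    include_group_ids=["all_users"],
    exclude_group_ids=["d1411007-6fcf-4b4c-8d70-1da1857ed33c"],
)
print(json.dumps(payload, indent=2))

# Sending it would look like the following (not executed here; requires a
# bearer token with the Policy.ReadWrite.AuthenticationMethod permission):
#   import requests
#   requests.patch(GRAPH_URL, json=payload,
#                  headers={"Authorization": f"Bearer {access_token}"})
```

Building the body in one place makes it easy to validate the single include/exclude group constraint before issuing the PATCH.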
+
+## Known issues
+
+- [FIDO2 security key isn't supported on iOS mobile devices](../develop/support-fido2-authentication.md#mobile). This issue might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on iOS devices.
+
+## Common questions
+
+### How does system-preferred MFA determine the most secure method?
+
+When a user signs in, the authentication process checks which authentication methods are registered for the user. The user is prompted to sign in with the most secure method according to the following order. The order of authentication methods is dynamic. It's updated as the security landscape changes, and as better authentication methods emerge.
+
+1. Temporary Access Pass
+1. Certificate-based authentication
+1. FIDO2 security key
+1. Microsoft Authenticator notification
+1. Companion app notification
+1. Microsoft Authenticator time-based one-time password (TOTP)
+1. Companion app TOTP
+1. Hardware token based TOTP
+1. Software token based TOTP
+1. SMS over mobile
+1. OnewayVoiceMobileOTP
+1. OnewayVoiceAlternateMobileOTP
+1. OnewayVoiceOfficeOTP
+1. TwowayVoiceMobile
+1. TwowayVoiceAlternateMobile
+1. TwowayVoiceOffice
+1. TwowaySMSOverMobile
+
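The preference order above is effectively a ranking: among the methods a user has registered, the one that appears earliest in the list wins. The following sketch illustrates the idea; the method labels are illustrative placeholders rather than official Graph API values, and the real order is dynamic and may change.

```python
# Illustrative labels, ordered most secure first (following the list above).
PREFERENCE_ORDER = [
    "TemporaryAccessPass",
    "CertificateBasedAuthentication",
    "Fido2SecurityKey",
    "AuthenticatorPush",
    "CompanionAppPush",
    "AuthenticatorTOTP",
    "CompanionAppTOTP",
    "HardwareTOTP",
    "SoftwareTOTP",
    "SmsOverMobile",
    "OnewayVoiceMobileOTP",
    "OnewayVoiceAlternateMobileOTP",
    "OnewayVoiceOfficeOTP",
    "TwowayVoiceMobile",
    "TwowayVoiceAlternateMobile",
    "TwowayVoiceOffice",
    "TwowaySMSOverMobile",
]

def most_secure_method(registered_methods):
    """Return the registered method that ranks highest, or None if none registered."""
    return min(registered_methods, key=PREFERENCE_ORDER.index, default=None)

# A user with SMS and Authenticator push registered is prompted for push first.
print(most_secure_method({"SmsOverMobile", "AuthenticatorPush"}))  # AuthenticatorPush
```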
+### How does system-preferred MFA affect AD FS or NPS extension?
+
+System-preferred MFA doesn't affect users who sign in by using Active Directory Federation Services (AD FS) or Network Policy Server (NPS) extension. Those users don't see any change to their sign-in experience.
+
+### What if the most secure MFA method isn't available?
+
+If the most secure method the user registered isn't available, they can sign in with another method. After sign-in, they're redirected to their Security info page to remove the registration of the authentication method that isn't available.
+
+For example, let's say an end user misplaces their FIDO2 security key. When they try to sign in without their security key, they can click **I can't use my security key right now** and continue to sign in by using another method, like a time-based one-time password (TOTP). After sign-in, their Security info page appears and they need to remove their FIDO2 security key registration. They can register the method again later if they find their FIDO2 security key.
+
+### What happens for users who aren't specified in the Authentication methods policy but enabled in the legacy MFA tenant-wide policy?
+
+System-preferred MFA also applies to users who are enabled for MFA in the legacy MFA policy.
+
+## Next steps
+
+* [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+* [How to run a registration campaign to set up Microsoft Authenticator](how-to-mfa-registration-campaign.md)
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
The following name indicates that this policy is the first of four policies to e
### Block countries from which you never expect a sign-in.
-Azure active directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are based in smaller geographic locations.**Be sure to exempt your emergency access accounts from this policy**.
+Azure active directory allows you to create [named locations](location-condition.md). Create the list of countries that are allowed, and then create a network block policy with these "allowed countries" as an exclusion. This is less overhead for customers who are based in smaller geographic locations. **Be sure to exempt your emergency access accounts from this policy**.
## Deploy Conditional Access policy
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
# OAuth 2.0 and OpenID Connect (OIDC) in the Microsoft identity platform
-Knowing about OAuth or OpenID Connect (OIDC) at the protocol level is not required to use the Microsoft identity platform. However, you will encounter protocol terms and concepts as you use the identity platform to add authentication to your apps. As you work with the Azure portal, our documentation, and authentication libraries, knowing some fundamentals can assist your integration and overall experience.
+Knowing about OAuth or OpenID Connect (OIDC) at the protocol level isn't required to use the Microsoft identity platform. However, you'll encounter protocol terms and concepts as you use the identity platform to add authentication to your apps. As you work with the Azure portal, our documentation, and authentication libraries, knowing some fundamentals can assist your integration and overall experience.
## Roles in OAuth 2.0
-Four parties are usually involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. These exchanges are often called *authentication flows* or *auth flows*.
+Four parties are generally involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. These exchanges are often called *authentication flows* or *auth flows*.
![Diagram showing the OAuth 2.0 roles](./media/active-directory-v2-flows/protocols-roles.svg)
Next, learn about the OAuth 2.0 authentication flows used by each application ty
* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
* [Microsoft Authentication Library (MSAL)](msal-overview.md)
-**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and much easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, we have protocol reference:
+**We strongly advise against crafting your own library or raw HTTP calls to execute authentication flows.** A [Microsoft Authentication Library](reference-v2-libraries.md) is safer and easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, we have protocol reference:
* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications * [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons * [On-behalf-of (OBO) flow](v2-oauth2-on-behalf-of-flow.md) - Web APIs that call another web API on a user's behalf
-* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign-out, and single sign-on (SSO)
+* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign out, and single sign-on (SSO)
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | &#8226; MSAL Java <br/> &#8226; Azure AD Boot Starter | Authorization code | > | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft 
Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | MSAL Java | Authorization code | > | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) <br/> &#8226; [Web app that sign in users](https://github.com/Azure-Samples/ms-identity-node) | MSAL Node | Authorization code |
-> | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | MSAL Python | Authorization code |
+> | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | MSAL Python | Authorization code |
> | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | Authorization code | > | Ruby | Graph Training <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | OmniAuth OAuth2 | Authorization code |
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 02/01/2023 Last updated : 03/01/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## February 2023
+
+### New articles
+
+- [Frequently asked questions about workload identities license plans](workload-identities-faqs.md)
+
+### Updated articles
+
+- [Configure the role claim issued in the SAML token](active-directory-enterprise-app-role-management.md)
+- [Microsoft identity platform and the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md)
+- [Overview of shared device mode](msal-shared-devices.md)
+- [Run automated integration tests](test-automate-integration-testing.md)
+- [Tutorial: Sign in users and call Microsoft Graph in Windows Presentation Foundation (WPF) desktop app](tutorial-v2-windows-desktop.md)
+ ## January 2023 ### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app](tutorial-blazor-webassembly.md)
- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
-
-## November 2022
-
-### New articles
-
-- [How to configure app instance property lock for your applications (Preview)](howto-configure-app-instance-property-locks.md)
-
-### Updated articles
-
-- [Configure SSO on macOS and iOS](single-sign-on-macos-ios.md)
-- [Developer guide to Conditional Access authentication context](developer-guide-conditional-access-authentication-context.md)
-- [Get a token from the token cache using MSAL.NET](msal-net-acquire-token-silently.md)
-- [How and why applications are added to Azure AD](active-directory-how-applications-are-added.md)
-- [How to migrate a Node.js app from ADAL to MSAL](msal-node-migration.md)
-- [Initialize client applications using MSAL.NET](msal-net-initializing-client-applications.md)
-- [Logging in MSAL.NET](msal-logging-dotnet.md)
-- [Logging in MSAL for Java](msal-logging-java.md)
-- [Migrate applications to the Microsoft Authentication Library (MSAL)](msal-migration.md)
-- [Shared device mode for iOS devices](msal-ios-shared-devices.md)
-- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
-- [Tutorial: Use shared-device mode in your Android application](tutorial-v2-shared-device-mode.md)
active-directory Groups Write Back Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-write-back-portal.md
To understand the behavior of No writeback in the portal, check the properties o
By default, the **Group writeback state** of groups is set to **No writeback**. This means:
- **Microsoft 365 groups**: if the group ```IsEnabled = null``` and ```onPremisesGroupType = null```, to ensure backwards compatibility with older versions of group writeback, the group is written back to your on-premises Active Directory as a distribution group.
-- **Azure AD security groups**: if the group ```IsEnabled = null``` and ```onPremisesGroupType = null``` then the group is not written back to your on-premises Active Directory.
+- **Azure AD security groups**: if the group ```IsEnabled = null``` and ```onPremisesGroupType = null``` then the group isn't written back to your on-premises Active Directory.
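The default behavior above can be sketched as a small decision helper. This is only an illustration: the function name and return strings are hypothetical, but the null-handling rules mirror the defaults described here.

```python
from typing import Optional

def writeback_behavior(group_type: str,
                       is_enabled: Optional[bool],
                       on_premises_group_type: Optional[str]) -> str:
    """Illustrative: default writeback behavior when Group writeback
    state is left at No writeback (both properties null)."""
    if is_enabled is None and on_premises_group_type is None:
        if group_type == "Microsoft 365":
            # Written back as a distribution group for backwards compatibility.
            return "written back as distribution group"
        # Azure AD security groups with both properties null are skipped.
        return "not written back"
    # Anything else carries an explicit writeback configuration.
    return "explicit writeback configuration"

print(writeback_behavior("Microsoft 365", None, None))  # written back as distribution group
print(writeback_behavior("Security", None, None))       # not written back
```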
## Show writeback columns
CloudGroup2 True
Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and use the following endpoint ```https://graph.microsoft.com/beta/groups/{Group_ID}```.
-Replace the Group_ID with a cloud group id, and then click on Run query.
+Replace the Group_ID with a cloud group ID, and then select **Run query**.
In the **Response Preview**, scroll to the end to see the part of the JSON file. ```JSON
In the **Response Preview**, scroll to the end to see the part of the JSON file.
- Check out the groups REST API documentation for the [preview writeback property on the settings template](/graph/api/resources/group?view=graph-rest-beta&preserve-view=true).
- For more about group writeback operations, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback.md).
-- For more information about the writebackConfiguration resource, read [writebackConfiguration resource type](/graph/api/resources/writebackconfiguration?view=graph-rest-beta).
+- For more information about the writebackConfiguration resource, read [writebackConfiguration resource type](/graph/api/resources/writebackconfiguration?view=graph-rest-beta&preserve-view=true).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 10/04/2022 Last updated : 03/01/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## February 2023
+
+### Updated articles
+
+- [Email one-time passcode authentication](one-time-passcode.md)
+- [Secure your API used by an API connector in Azure AD External Identities self-service sign-up user flows](self-service-sign-up-secure-api-connector.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
+
## January 2023

### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md)
- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
-- [Auditing and reporting a B2B collaboration user](auditing-and-reporting.md)
-
-## November 2022
-
-### Updated articles
-
-- [Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users](bulk-invite-powershell.md)
-- [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md)
-- [Reset redemption status for a guest user](reset-redemption-status.md)
-- [Language customization in Azure Active Directory](user-flow-customize-language.md)
-- [B2B collaboration overview](what-is-b2b.md)
-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
-- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
+- [Auditing and reporting a B2B collaboration user](auditing-and-reporting.md)
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md
Previously updated : 11/21/2022 Last updated : 03/01/2023
This article covers how to customize the company branding for sign-in experience
An updated experience for adding company branding is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. Check out the updated documentation on [how to customize branding](how-to-customize-branding.md).
-## License requirements
+## Role and license requirements
Adding custom branding requires one of the following licenses:
Adding custom branding requires one of the following licenses:
Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. For more information about licensing and editions, see [Sign up for Azure AD Premium](active-directory-get-started-premium.md).
-## Customize the default sign-in experience
+The **Global Administrator** role is required to customize company branding.
+
+## Before you begin
You can customize the sign-in experience when users sign in to your organization's tenant-specific apps, such as `https://outlook.com/woodgrove.com`, or when passing a domain variable, such as `https://passwordreset.microsoftonline.com/?whr=woodgrove.com`.
Custom branding appears after users sign in. Users that start the sign-in proces
**Images have different image and file size requirements.** Take note of the requirements for each option. You may need to use a photo editor to create the right-sized images. The preferred image type for all images is PNG, but JPG is accepted.
+**Use Microsoft Graph with Azure AD company branding.** Company branding can be viewed and managed using Microsoft Graph on the `/beta` endpoint and the `organizationalBranding` resource type. For more information, see the [organizational branding API documentation](/graph/api/resources/organizationalbranding?view=graph-rest-beta&preserve-view=true).
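As a quick illustration of reading branding through Microsoft Graph, the sketch below only builds the request (it doesn't send it); the organization ID and access token are placeholders, and the `/organization/{id}/branding` path follows the linked beta API documentation.

```python
import urllib.request

tenant_id = "00000000-0000-0000-0000-000000000000"  # placeholder organization ID
access_token = "<access-token>"                     # placeholder bearer token

# Build (but don't send) the GET request for the organizationalBranding resource.
req = urllib.request.Request(
    f"https://graph.microsoft.com/beta/organization/{tenant_id}/branding",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(req.full_url)
```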
+
+## How to configure company branding
+
1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory.
2. Go to **Azure Active Directory** > **Company branding** > **Configure**.
Custom branding appears after users sign in. Users that start the sign-in proces
   - **Language** The language for your first customized branding configuration is based on your default locale and can't be changed. Once a default sign-in experience is created, you can add language-specific customized branding.
- - **Sign-in page background image** Select a PNG or JPG image file to appear as the background for your sign-in pages. The image will be anchored to the center of the browser, and will scale to the size of the viewable space.
+ - **Sign-in page background image** Select a PNG or JPG image file to appear as the background for your sign-in pages. The image is anchored to the center of the browser, and scales to the size of the viewable space.
We recommended using images without a strong subject focus. An opaque white box appears in the center of the screen, which could cover any part of the image depending on the dimensions of the viewable space.
Custom branding appears after users sign in. Users that start the sign-in proces
> [!IMPORTANT]
> Hyperlinks that are added to the sign-in page text render as text in native environments, such as desktop and mobile applications.
-- **Advanced settings**
+ - **Advanced settings**
![Configure company branding page, with advanced settings completed](media/customize-branding/legacy-customize-branding-configure-advanced.png)
- - **Sign-in page background color** Specify the hexadecimal color (#FFFFFF) that will appear in place of your background image in low-bandwidth connection situations. We recommend using the primary color of your banner logo or your organization color.
+ - **Sign-in page background color** Specify the hexadecimal color (#FFFFFF) that appears in place of your background image in low-bandwidth connection situations. We recommend using the primary color of your banner logo or your organization color.
- - **Square logo image** Select a PNG or JPG image of your organization's logo to appear during the setup process for new Windows 10 Enterprise devices. This image is only used for Windows authentication and appears only on tenants that are using [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) for deployment or for password entry pages in other Windows 10 experiences. In some cases, it may also appear in the consent dialog.
+ - **Square logo image** Select a PNG or JPG image of your organization's logo to appear during the setup process for new Windows 10 Enterprise devices. This image is only used for Windows authentication and only appears on tenants that are using [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) for deployment or password entry pages in other Windows 10 experiences. In some cases, it may also appear in the consent dialog.
We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small.
- - **Square logo image, dark theme** Same as the square logo image above. This logo image takes the place of the square logo image when used with a dark background, such as with Windows 10 Azure AD joined screens during the out-of-box experience (OOBE). If your logo looks good on white, dark blue, and black backgrounds, you don't need to add this image.
+ - **Square logo image, dark theme** Same as the square logo image. This logo image takes the place of the square logo image when used with a dark background, such as with Windows 10 Azure AD joined screens during the out-of-box experience (OOBE). If your logo looks good on white, dark blue, and black backgrounds, you don't need to add this image.
>[!IMPORTANT] > Transparent logos are supported with the square logo image. The color palette used in the transparent logo could conflict with backgrounds (such as, white, light grey, dark grey, and black backgrounds) used within Microsoft 365 apps and services that consume the square logo image. Solid color backgrounds may need to be used to ensure the square image logo is rendered correctly in all situations.
Custom branding appears after users sign in. Users that start the sign-in proces
This process creates your first custom branding configuration, and it becomes the default for your tenant. The default custom branding configuration serves as a fallback option for all language-specific branding configurations. The configuration can't be removed after you create it.

>[!IMPORTANT]
- >To add more corporate branding configurations to your tenant, you must choose **New language** on the **Contoso - Company branding** page. This opens the **Configure company branding** page, where you can follow the same steps as above.
+ >To add more corporate branding configurations to your tenant, you must choose **New language** on the **Contoso - Company branding** page. This opens the **Configure company branding** page, where you can follow the previous steps.
## Customize the sign-in experience by browser language
To create an inclusive experience for all of your users, you can customize the s
2. Select **Azure Active Directory** > **Company branding** > **+ New language**.
-The process for customizing the experience is the same as the [Default sign-in experience](#customize-the-default-sign-in-experience), except you select a **Language** from the dropdown list.
+The process for customizing the experience is the same as the main [configure company branding](#how-to-configure-company-branding) process, except you select a **Language** from the dropdown list.
We recommend adding **Sign-in page text** in the selected language.

## Edit custom branding
-If custom branding has been added to your tenant, you can edit the details already provided. Refer to the details and descriptions of each setting in the [Add custom branding](#customize-the-default-sign-in-experience) section of this article.
+If custom branding has been added to your tenant, you can edit the details already provided. Refer to the details and descriptions of each setting in the [How to configure company branding](#how-to-configure-company-branding) section of this article.
1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory.
If custom branding has been added to your tenant, you can edit the details alrea
## Next steps

- [Add your organization's privacy info on Azure AD](./active-directory-properties-area.md)
-- [Learn more about Conditional Access](../conditional-access/overview.md)
+- [Learn more about Conditional Access](../conditional-access/overview.md)
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
Previously updated : 01/31/2023 Last updated : 03/01/2023
When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users.
-The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding will appear in your sign-in pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS.
+The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS.
-The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Instructions for the legacy company branding customization process can be found in the [Customize branding](customize-branding.md) article.
+> [!NOTE]
+> Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article.<br><br>The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
## User experience

You can customize the sign-in pages when users access your organization's tenant-specific apps. For Microsoft and SaaS applications (multi-tenant apps) such as <https://myapps.microsoft.com> or <https://outlook.com>, the customized sign-in page appears only after the user types their **Email** or **Phone** and selects **Next**.
-Some of the Microsoft applications support the home realm discovery `whr` query string parameter, or a domain variable. With the home realm discovery and domain parameter, the customized sign-in page will appear immediately in the first step.
+Some of the Microsoft applications support the home realm discovery `whr` query string parameter, or a domain variable. With the home realm discovery and domain parameter, the customized sign-in page appears immediately in the first step.
In the following examples replace the contoso.com with your own tenant name, or verified domain name:
In the following examples replace the contoso.com with your own tenant name, or
- For my app portal `https://myapps.microsoft.com/?whr=contoso.com`
- Self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com`
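The examples above can be built programmatically. The helper below is only a sketch: the function name is hypothetical, and `contoso.com` is the placeholder domain used throughout this section.

```python
from urllib.parse import urlencode

def with_home_realm(base_url: str, domain: str) -> str:
    """Append the home realm discovery (whr) query string parameter."""
    return f"{base_url}/?{urlencode({'whr': domain})}"

print(with_home_realm("https://myapps.microsoft.com", "contoso.com"))
# https://myapps.microsoft.com/?whr=contoso.com
```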
-## License requirements
+## Role and license requirements
Adding custom branding requires one of the following licenses:
For more information about licensing and editions, see the [Sign up for Azure AD
Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China.
+The **Global Administrator** role is required to customize company branding.
+
## Before you begin

**All branding elements are optional. Default settings will remain, if left unchanged.** For example, if you specify a banner logo but no background image, the sign-in page shows your logo with a default background image from the destination site such as Microsoft 365. Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or guests authenticate using a personal Microsoft account, the sign-in page won't reflect the branding of your organization.

**Images have different image and file size requirements.** Take note of the image requirements for each option. You may need to use a photo editor to create the right size images. The preferred image type for all images is PNG, but JPG is accepted.
+**Use Microsoft Graph with Azure AD company branding.** Company branding can be viewed and managed using Microsoft Graph on the `/beta` endpoint and the `organizationalBranding` resource type. For more information, see the [organizational branding API documentation](/graph/api/resources/organizationalbranding?view=graph-rest-beta&preserve-view=true).
+
+## How to navigate the company branding process
+
1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory.
2. Go to **Azure Active Directory** > **Company branding** > **Customize**.
- - If you currently have a customized sign-in experience, you'll see an **Edit** button.
+ - If you currently have a customized sign-in experience, the **Edit** button is available.
![Custom branding landing page with 'Company branding' highlighted in the side menu and 'Configure' button highlighted in the center of the page](media/how-to-customize-branding/customize-branding-getting-started.png)
The sign-in experience process is grouped into sections. At the end of each sect
- **Favicon**: Select a PNG or JPG of your logo that appears in the web browser tab.
-- **Background image**: Select a PNG or JPG to display as the main image on your sign-in page. This image will scale and crop according to the window size, but may be partially blocked by the sign-in prompt.
+- **Background image**: Select a PNG or JPG to display as the main image on your sign-in page. This image scales and crops according to the window size, but may be partially blocked by the sign-in prompt.
- **Page background color**: If the background image isn't able to load because of a slower connection, your selected background color appears instead.
If you haven't enabled the footer, go to the **Layout** section and select **Sho
- **Self-service password reset**:
  - Show self-service password reset (SSPR): Select the checkbox to turn on SSPR.
- - Common URL: Enter the destination URL for where your users will reset their passwords. This URL appears on the username and password collection screens.
+ - Common URL: Enter the destination URL for where your users reset their passwords. This URL appears on the username and password collection screens.
  - Username collection display text: Replace the default text with your own custom username collection text.
  - Password collection display text: Replace the default text with your own custom password collection text.
active-directory Security Operations Consumer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-consumer-accounts.md
Title: Azure Active Directory security operations for consumer accounts description: Guidance to establish baselines and how to monitor and alert on potential security issues with consumer accounts. -+ Previously updated : 07/15/2021 Last updated : 02/28/2023
# Azure Active Directory security operations for consumer accounts
-Activities associated with consumer identities is another critical area for your organization to protect and monitor. This article is for Azure AD B2C tenants and provides guidance for monitoring consumer account activities. The activities are organized by:
+Consumer identity activities are an important area for your organization to protect and monitor. This article is for Azure Active Directory B2C (Azure AD B2C) tenants and has guidance for monitoring consumer account activities. The activities are:
-* Consumer account activities
-* Privileged account activities
-* Application activities
-* Infrastructure activities
+* Consumer account
+* Privileged account
+* Application
+* Infrastructure
-If you have not yet read the [Azure Active Directory (Azure AD) security operations overview](security-operations-introduction.md), we recommend you do so before proceeding.
+## Before you begin
-## Define a baseline
-
-To discover anomalous behavior, you first must define what normal and expected behavior is. Defining what expected behavior for your organization is, helps you determine when unexpected behavior occurs. The definition also helps to reduce the noise level of false positives when monitoring and alerting.
-
-Once you define what you expect, you perform baseline monitoring to validate your expectations. With that information, you can monitor the logs for anything that falls outside of tolerances you define.
+Before using the guidance in this article, we recommend you read the [Azure AD security operations guide](security-operations-introduction.md).
-Use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources for accounts created outside of normal processes. The following are suggestions to help you think about and define what normal is for your organization.
-
-* **Consumer account creation** – evaluate the following:
+## Define a baseline
- * Strategy and principles for tools and processes used for creating and managing consumer accounts. For example, are there standard attributes, formats that are applied to consumer account attributes.
+To discover anomalous behavior, define normal and expected behavior. Defining expected behavior for your organization helps you discover unexpected behavior. Use the definition to help reduce false positives, during monitoring and alerting.
- * Approved sources for account creation. For example, onboarding custom policies, customer provisioning or migration tool.
+With expected behavior defined, perform baseline monitoring to validate expectations. Then, monitor logs for what falls outside tolerance.
- * Alert strategy for accounts created outside of approved sources. Is there a controlled list of organizations your organization collaborates with?
+For accounts created outside normal processes, use the Azure AD Audit Logs, Azure AD Sign-in Logs, and directory attributes as your data sources. The following suggestions can help you define normal.
- * Strategy and alert parameters for accounts created, modified, or disabled by an account that isn't an approved consumer account administrator.
+### Consumer account creation
- * Monitoring and alert strategy for consumer accounts missing standard attributes, such as customer number or not following organizational naming conventions.
+Evaluate the following list:
- * Strategy, principles, and process for account deletion and retention.
+* Strategy and principles for tools and processes to create and manage consumer accounts
+ * For example, standard attributes and formats applied to consumer account attributes
+* Approved sources for account creation
+ * For example, onboarding custom policies, customer provisioning or migration tool
+* Alert strategy for accounts created outside approved sources
+ * Create a controlled list of organizations your organization collaborates with
+* Strategy and alert parameters for accounts created, modified, or disabled by an unapproved consumer account administrator
+* Monitoring and alert strategy for consumer accounts missing standard attributes, such as customer number, or not following organizational naming conventions
+* Strategy, principles, and process for account deletion and retention
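One way to act on the baseline principles above is a simple threshold check over exported audit records. This sketch is illustrative only: the record shape, field names, and threshold value are assumptions, not an official log schema.

```python
from collections import Counter

def over_baseline(audit_records, threshold=50):
    """Illustrative: return activity types whose successful,
    CPIM-initiated event count exceeds the baseline threshold."""
    counts = Counter(
        record["activity"]
        for record in audit_records
        if record.get("status") == "success"
        and record.get("initiatedBy") == "CPIM Service"
    )
    return {activity: n for activity, n in counts.items() if n > threshold}

# 60 successful "Add user" events from the CPIM Service trips a threshold of 50.
sample = [{"activity": "Add user", "status": "success", "initiatedBy": "CPIM Service"}] * 60
print(over_baseline(sample))  # {'Add user': 60}
```

Tune the threshold against your own baseline monitoring to limit false positives.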
## Where to look
-The log files you use for investigation and monitoring are:
-
-* [Azure AD Audit logs](../reports-monitoring/concept-audit-logs.md)
-
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
-
-* [Risky Users log](../identity-protection/howto-identity-protection-investigate-risk.md)
-
-* [UserRiskEvents log](../identity-protection/howto-identity-protection-investigate-risk.md)
-
-From the Azure portal, you can view the Azure AD Audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Azure AD logs with other tools that allow for greater automation of monitoring and alerting:
+Use log files to investigate and monitor. See the following articles for more information:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md)
+* [Sign-in logs in Azure AD (preview)](../reports-monitoring/concept-all-sign-ins.md)
+* [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)
-* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
+### Audit logs and automation tools
-* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+From the Azure portal, you can view Azure AD Audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. Use the Azure portal to integrate Azure AD logs with other tools to automate monitoring and alerting:
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Azure AD logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
+* **Microsoft Sentinel** – security analytics with security information and event management (SIEM) capabilities
+ * [What is Microsoft Sentinel?](../../sentinel/overview.md)
+* **Sigma rules** - an open standard for writing rules and templates that automated management tools can use to parse log files. If there are Sigma templates for our recommended search criteria, we added a link to the Sigma repo. Microsoft doesn't write, test, or manage Sigma templates. The repo and templates are created, and collected, by the IT security community.
+ * [SigmaHR/sigma](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)
+* **Azure Monitor** – automated monitoring and alerting of various conditions. Create or use workbooks to combine data from different sources.
+ * [Azure Monitor overview](../../azure-monitor/overview.md)
+* **Azure Event Hubs integrated with a SIEM** - integrate Azure AD logs with SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic with Azure Event Hubs
+ * [Azure Event Hubs-A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md)
+ * [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* **Microsoft Defender for Cloud Apps** – discover and manage apps, govern across apps and resources, and check cloud app compliance
+ * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps)
+* **Identity Protection** - detect risk on workload identities across sign-in behavior and offline indicators of compromise
+ * [Securing workload identities with Identity Protection](../identity-protection/concept-workload-identity-risk.md)
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
-
-* **[Securing workload identities with Identity Protection Preview](..//identity-protection/concept-workload-identity-risk.md)** - Used to detect risk on workload identities across sign-in behavior and offline indicators of compromise.
-
- The remainder of this article describes what we recommend you monitor and alert on, and is organized by the type of threat. Where there are specific pre-built solutions we link to them or provide samples following the table. Otherwise, you can build alerts using the preceding tools.
Use the remainder of this article for recommendations on what to monitor and alert on. The following tables are organized by threat type. Links to pre-built solutions or samples follow each table; otherwise, build alerts using the preceding tools.
## Consumer accounts
| What to monitor | Risk level | Where | Filter / subfilter | Notes |
| - | - | - | - | - |
| Large number of account creations or deletions | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) = CPIM Service<br>-and-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) = CPIM Service | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. |
| Accounts created and deleted by non-approved users or processes | Medium | Azure AD Audit logs | Initiated by (actor) – USER PRINCIPAL NAME<br>-and-<br>Activity: Add user<br>Status = success<br>Initiated by (actor) != CPIM Service<br>-and/or-<br>Activity: Delete user<br>Status = success<br>Initiated by (actor) != CPIM Service | If the actors are non-approved users, configure to send an alert. |
| Accounts assigned to a privileged role | High | Azure AD Audit logs | Activity: Add user<br>Status = success<br>Initiated by (actor) == CPIM Service<br>-and-<br>Activity: Add member to role<br>Status = success | If the account is assigned to an Azure AD role, Azure role, or privileged group membership, alert and prioritize the investigation. |
| Failed sign-in attempts | Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code 50126 - Error validating credentials due to invalid username or password.<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. |
| Smart lock-out events | Medium - if isolated incident<br>High - if many accounts are experiencing the same pattern or a VIP | Azure AD Sign-ins log | Status = failed<br>-and-<br>Sign-in error code = 50053 – IdsLocked<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. |
| Failed authentications from countries or regions you don't operate from | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Location = \<unapproved location><br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | Monitor entries not equal to the city names you provide. |
| Increased failed authentications of any type | Medium | Azure AD Sign-ins log | Status = failed<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if failures increase by 10% or greater. |
| Account disabled/blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50057, The user account is disabled. | This scenario could indicate someone trying to gain access to an account after they left an organization. Although the account is blocked, it's important to log and alert on this activity. |
| Measurable increase of successful sign-ins | Low | Azure AD Sign-ins log | Status = Success<br>-and-<br>Application == "CPIM PowerShell Client"<br>-or-<br>Application == "ProxyIdentityExperienceFramework" | If you don't have a threshold, monitor and alert if successful authentications increase by 10% or greater. |
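Several rows above rely on a baseline threshold plus a "10% or greater" increase rule. As a minimal sketch of that alerting logic (the function name and sample counts are hypothetical, not part of any Microsoft tooling):

```python
# Hypothetical sketch: alert when today's failed sign-in count exceeds the
# baseline (mean of prior days) by 10% or more, per the table's guidance.
from statistics import mean

def should_alert(history: list[int], today: int, threshold: float = 0.10) -> bool:
    """Return True when today's count exceeds the baseline by `threshold`."""
    baseline = mean(history)
    return today >= baseline * (1 + threshold)

# Example: failed sign-ins per day over the last week, then a spike today.
weekly_failures = [40, 42, 38, 41, 39, 40, 40]
print(should_alert(weekly_failures, 55))  # spike above baseline -> True
print(should_alert(weekly_failures, 41))  # normal variation -> False
```

In practice you would feed this from a Log Analytics export and tune the threshold to your tenant's normal behavior.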
## Privileged accounts
| What to monitor | Risk level | Where | Filter / subfilter | Notes |
| - | - | - | - | - |
| Sign-in failure, bad password threshold | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. |
| Failure because of Conditional Access requirement | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker is trying to get into the account. |
| Interrupt | High, medium | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | The event can indicate an attacker has the account password, but can't pass the MFA challenge. |
| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors. Limit false alerts. |
| Account disabled or blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = User UPN<br>-and-<br>error code = 50057 | The event could indicate someone trying to gain account access after they've left the organization. Although the account is blocked, log and alert on this activity. |
| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Sign-ins > Authentication details<br>Result details = MFA denied, fraud code entered | A privileged user indicates they haven't instigated the MFA prompt, which could mean an attacker has the account password. |
| MFA fraud alert or block | High | Azure AD Sign-ins log/Azure Log Analytics | Activity type = Fraud reported - User is blocked for MFA, or fraud reported - No action taken, based on fraud report tenant-level settings | A privileged user indicated no instigation of the MFA prompt. The scenario can indicate an attacker has the account password. |
| Privileged account sign-ins outside of expected controls | High | Azure AD Sign-ins log | Status = Failure<br>UserPrincipalName = \<Admin account><br>Location = \<unapproved location><br>IP address = \<unapproved IP><br>Device info = \<unapproved browser, operating system> | Monitor and alert on entries you defined as unapproved. |
| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = Success<br>-and-<br>Location =<br>-and-<br>Time = Outside of working hours | Monitor and alert if sign-ins occur outside expected times. Find the normal working pattern for each privileged account, and alert if there are unplanned changes outside normal working times. Sign-ins outside normal working hours could indicate compromise or possible insider threat. |
| Password change | High | Azure AD Audit logs | Activity actor = Admin/self-service<br>-and-<br>Target = User<br>-and-<br>Status = Success or failure | Alert on any admin account password changes, especially for global admins, user admins, subscription admins, and emergency access accounts. Write a query for privileged accounts. |
| Changes to authentication methods | High | Azure AD Audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to maintain access. |
| Identity Provider updated by non-approved actors | High | Azure AD Audit logs | Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to maintain access. |
| Identity Provider deleted by non-approved actors | High | Azure AD Access Reviews | Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | The change could indicate an attacker adding an auth method to the account to maintain access. |
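The "outside of expected controls" and "outside of normal sign-in times" rows amount to filtering sign-in records against an approved list. A minimal illustration under a hypothetical record shape (the field names, locations, and working hours below are examples, not a real log schema):

```python
# Hypothetical sketch: flag privileged sign-ins outside approved locations
# or working hours, mirroring the table's guidance.
from datetime import datetime

APPROVED_LOCATIONS = {"Redmond", "Dublin"}
WORK_HOURS = range(8, 18)  # 08:00-17:59 local time

def is_suspicious(signin: dict) -> bool:
    """True when the sign-in is from an unapproved location or off-hours."""
    when = datetime.fromisoformat(signin["time"])
    return (signin["location"] not in APPROVED_LOCATIONS
            or when.hour not in WORK_HOURS)

signins = [
    {"user": "admin@contoso.com", "location": "Redmond", "time": "2023-03-01T10:30:00"},
    {"user": "admin@contoso.com", "location": "Unknown", "time": "2023-03-01T03:15:00"},
]
flagged = [s for s in signins if is_suspicious(s)]
print([s["location"] for s in flagged])  # -> ['Unknown']
```

The per-account "normal working pattern" the table recommends would replace the single `WORK_HOURS` constant in a real deployment.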
## Applications
| What to monitor | Risk level | Where | Filter / subfilter | Notes |
| - | - | - | - | - |
| Added credentials to applications | High | Azure AD Audit logs | Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application-Certificates and secrets management<br>-and-<br>Activity: Update Service principal/Update Application | Alert when credentials are added outside normal business hours or workflows, are of types not used in your environment, or are added to a non-SAML flow supporting service principal. |
| App assigned to an Azure role-based access control (RBAC) role, or Azure AD role | High to medium | Azure AD Audit logs | Type: service principal<br>Activity: "Add member to role"<br>-or-<br>"Add eligible member to role"<br>-or-<br>"Add scoped member to role" | N/A |
| App granted highly privileged permissions, such as permissions with ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.*) | High | Azure AD Audit logs | N/A | Apps granted broad permissions such as ".All" (Directory.ReadWrite.All) or wide-ranging permissions (Mail.*) |
| Administrator granting application permissions (app roles), or highly privileged delegated permissions | High | Microsoft 365 portal | "Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>"Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>-and-<br>DelegatedPermissionGrant.Scope includes high-privilege permissions | Alert when a global, application, or cloud application administrator consents to an application. Especially look for consent outside normal activity and change procedures. |
| Application is granted permissions for Microsoft Graph, Exchange, SharePoint, or Azure AD | High | Azure AD Audit logs | "Add delegated permission grant"<br>-or-<br>"Add app role assignment to service principal"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph or Exchange Online) | Use the alert in the preceding row. |
| Highly privileged delegated permissions granted on behalf of all users | High | Azure AD Audit logs | "Add delegated permission grant"<br>-where-<br>Target(s) identifies an API with sensitive data (such as Microsoft Graph)<br>DelegatedPermissionGrant.Scope includes high-privilege permissions<br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals" | Use the alert in the preceding row. |
| Applications that are using the ROPC authentication flow | Medium | Azure AD Sign-ins log | Status = Success<br>Authentication Protocol - ROPC | A high level of trust is placed in this application because the credentials can be cached or stored. If possible, move to a more secure authentication flow. Use ROPC only in automated application testing, if ever. |
| Dangling URI | High | Azure AD Logs and Application Registration | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | For example, look for dangling URIs pointing to a domain name that no longer exists, or one you don't own. |
| Redirect URI configuration changes | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Success – Property Name AppAddress | Look for URIs not using HTTPS, URIs with wildcards at the end of the URL or domain, URIs that are **not** unique to the application, and URIs that point to a domain you don't control. |
| Changes to AppID URI | High | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>Activity: Update Service principal | Look for AppID URI modifications, such as adding, modifying, or removing the URI. |
| Changes to application ownership | Medium | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Add owner to application | Look for instances of users added as application owners outside normal change management activities. |
| Changes to sign-out URL | Low | Azure AD logs | Service-Core Directory<br>Category-ApplicationManagement<br>Activity: Update Application<br>-and-<br>Activity: Update service principal | Look for modifications to a sign-out URL. Blank entries, or entries to non-existent locations, would stop a user from terminating a session. |
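The broad-permission rows above hinge on spotting scopes that end in `.All` or start with `Mail.`. A minimal sketch of that check (the grant records and helper name are hypothetical, not a Microsoft Graph API):

```python
# Hypothetical sketch: flag permission grants that include broad scopes,
# such as ".All" suffixes or Mail.* permissions, per the table's guidance.
BROAD_SUFFIXES = (".All",)
BROAD_PREFIXES = ("Mail.",)

def is_high_privilege(scope: str) -> bool:
    """True when any space-delimited permission in the scope is broad."""
    return any(p.endswith(BROAD_SUFFIXES) or p.startswith(BROAD_PREFIXES)
               for p in scope.split())

grants = {
    "app-1": "User.Read openid",
    "app-2": "Directory.ReadWrite.All User.Read",
    "app-3": "Mail.ReadWrite offline_access",
}
flagged = sorted(app for app, scope in grants.items() if is_high_privilege(scope))
print(flagged)  # -> ['app-2', 'app-3']
```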
## Infrastructure
| What to monitor | Risk level | Where | Filter / subfilter | Notes |
| - | - | - | - | - |
| New Conditional Access policy created by non-approved actors | High | Azure AD Audit logs | Activity: Add conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access? |
| Conditional Access policy removed by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access? |
| Conditional Access policy updated by non-approved actors | High | Azure AD Audit logs | Activity: Update conditional access policy<br>Category: Policy<br>Initiated by (actor): User Principal Name | Monitor and alert on Conditional Access changes. Is Initiated by (actor) approved to make changes to Conditional Access?<br>Review Modified Properties and compare the old vs. new value. |
| B2C custom policy created by non-approved actors | High | Azure AD Audit logs | Activity: Create custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on custom policy changes. Is Initiated by (actor) approved to make changes to custom policies? |
| B2C custom policy updated by non-approved actors | High | Azure AD Audit logs | Activity: Get custom policies<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on custom policy changes. Is Initiated by (actor) approved to make changes to custom policies? |
| B2C custom policy deleted by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete custom policy<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on custom policy changes. Is Initiated by (actor) approved to make changes to custom policies? |
| User flow created by non-approved actors | High | Azure AD Audit logs | Activity: Create user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Is Initiated by (actor) approved to make changes to user flows? |
| User flow updated by non-approved actors | High | Azure AD Audit logs | Activity: Update user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Is Initiated by (actor) approved to make changes to user flows? |
| User flow deleted by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete user flow<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on user flow changes. Is Initiated by (actor) approved to make changes to user flows? |
| API connectors created by non-approved actors | Medium | Azure AD Audit logs | Activity: Create API connector<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert on API connector changes. Is Initiated by (actor) approved to make changes to API connectors? |
| API connectors updated by non-approved actors | Medium | Azure AD Audit logs | Activity: Update API connector<br>Category: ResourceManagement<br>Target: User Principal Name: ResourceManagement | Monitor and alert on API connector changes. Is Initiated by (actor) approved to make changes to API connectors? |
| API connectors deleted by non-approved actors | Medium | Azure AD Audit logs | Activity: Delete API connector<br>Category: ResourceManagement<br>Target: User Principal Name: ResourceManagement | Monitor and alert on API connector changes. Is Initiated by (actor) approved to make changes to API connectors? |
+| Identity provider (IdP) created by non-approved actors | High |Azure AD Audit logs | Activity: Create identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert IdP changes. Initiated by (actor): approved to make changes to IdP configuration? |
+| IdP updated by non-approved actors | High | Azure AD Audit logs| Activity: Update identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert IdP changes. Initiated by (actor): approved to make changes to IdP configuration? |
+| IdP deleted by non-approved actors | Medium | Azure AD Audit logs| Activity: Delete identity provider<br>Category: ResourceManagement<br>Target: User Principal Name | Monitor and alert IdP changes. Initiated by (actor): approved to make changes to IdP configuration? |
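The detections in this table can be prototyped against exported audit logs before alert rules are wired up. As a minimal sketch (the sample records and the `auditlogs.json` file name are hypothetical; the field names follow the Microsoft Graph `directoryAudits` resource), the following filters for the user-flow change activities listed above:

```shell
# Hypothetical exported audit records (file name and values are illustrative);
# field names follow the Microsoft Graph directoryAudits resource.
cat > auditlogs.json <<'EOF'
{"activityDisplayName": "Update user flow", "category": "ResourceManagement", "initiatedBy": "admin@contoso.com"}
{"activityDisplayName": "Update user", "category": "UserManagement", "initiatedBy": "user@contoso.com"}
EOF

# Surface only the user-flow create/update/delete events called out in the table.
grep -E '"activityDisplayName": "(Create|Update|Delete) user flow"' auditlogs.json
```

The same pattern extends to the custom-policy, API-connector, and IdP activities; in production these filters would normally live in Log Analytics alert rules rather than ad hoc scripts.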
## Next steps
-See these security operations guide articles:
-
-[Azure AD security operations overview](security-operations-introduction.md)
-
-[Security operations for user accounts](security-operations-user-accounts.md)
-
-[Security operations for privileged accounts](security-operations-privileged-accounts.md)
-
-[Security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
-
-[Security operations for applications](security-operations-applications.md)
-
-[Security operations for devices](security-operations-devices.md)
+To learn more, see the following security operations articles:
-[Security operations for infrastructure](security-operations-infrastructure.md)
+* [Azure AD security operations guide](security-operations-introduction.md)
+* [Azure AD security operations for user accounts](security-operations-user-accounts.md)
+* [Security operations for privileged accounts in Azure AD](security-operations-privileged-accounts.md)
+* [Azure AD security operations for Privileged Identity Management](security-operations-privileged-identity-management.md)
+* [Azure AD security operations guide for applications](security-operations-applications.md)
+* [Azure AD security operations for devices](security-operations-devices.md)
+* [Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
The following document provides an overview of a workflow created using Lifecycl
## Permissions and Roles
-For a full list of supported delegate and application permissions required to use Lifecycle Workflows, see: [Lifecycle workflows permissions](/graph/permissions-reference#lifecycle-workflows-permissions).
+For a full list of supported delegated and application permissions required to use Lifecycle Workflows, see: [Lifecycle workflows permissions](/graph/permissions-reference#lifecycle-workflows-permissions).
For delegated scenarios, the admin needs one of the following [Azure AD roles](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#available-roles):
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
- * Azure AD Connect support all mainstream supported SQL Server versions up to SQL Server 2019. Please refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
+ * Azure AD Connect supports all mainstream supported SQL Server versions up to SQL Server 2019. Refer to the [SQL Server lifecycle article](/lifecycle/products/?products=sql-server) to verify the support status of your SQL Server version. SQL Server 2012 is no longer supported. Azure SQL Database *isn't supported* as a database. This includes both Azure SQL Database and Azure SQL Managed Instance.
* You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*. * You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*.
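The `_CI_`/`_CS_` naming convention makes the collation requirement easy to verify. In this sketch the collation value is hard-coded for illustration; on a real server you would retrieve it with `SELECT SERVERPROPERTY('Collation')`:

```shell
# Example collation value (an assumption for illustration); on a real server,
# query it with: SELECT SERVERPROPERTY('Collation')
collation="SQL_Latin1_General_CP1_CI_AS"

# _CI_ in the name marks a case-insensitive collation (supported);
# _CS_ marks a case-sensitive one, which Azure AD Connect doesn't support.
case "$collation" in
  *_CI_*) echo "supported: case-insensitive collation" ;;
  *_CS_*) echo "not supported: case-sensitive collation" ;;
  *)      echo "unrecognized collation naming; check manually" ;;
esac
```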
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
Your current cloud service session is not immediately affected by a synchronized
A user must enter their corporate credentials a second time to authenticate to Azure AD, regardless of whether they're signed in to their corporate network. This pattern can be minimized, however, if the user selects the Keep me signed in (KMSI) check box at sign-in. This selection sets a session cookie that bypasses authentication for 180 days. KMSI behavior can be enabled or disabled by the Azure AD administrator. In addition, you can reduce password prompts by configuring [Azure AD join](../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../devices/concept-azure-ad-join-hybrid.md), which automatically signs users in when they are on their corporate devices connected to your corporate network.
+### Additional advantages
+
+- Generally, password hash synchronization is simpler to implement than a federation service. It doesn't require any additional servers, and eliminates dependence on a highly available federation service to authenticate users.
+- Password hash synchronization can also be enabled in addition to federation. It may be used as a fallback if your federation service experiences an outage.
+ > [!NOTE] > Password sync is only supported for the object type user in Active Directory. It is not supported for the iNetOrgPerson object type.
To support temporary passwords in Azure AD for synchronized users, you can enabl
> Forcing a user to change their password on next logon requires a password change at the same time. Azure AD Connect will not pick up the force password change flag by itself; it is supplemental to the detected password change that occurs during password hash sync. > > If the user has the option "Password never expires" set in Active Directory (AD), the force password change flag will not be set in Active Directory (AD), so the user will not be prompted to change the password during the next sign-in.
+>
+> A new user created in Active Directory with the "User must change password at next logon" flag set is always provisioned in Azure AD with the "Force change password on next sign-in" password policy, regardless of whether the *ForcePasswordChangeOnLogOn* feature is true or false. This is internal Azure AD logic, because the new user is provisioned without a password; the *ForcePasswordChangeOnLogOn* feature only affects admin password reset scenarios.
> [!CAUTION] > You should only use this feature when SSPR and Password Writeback are enabled on the tenant. This is so that if a user changes their password via SSPR, it will be synchronized to Active Directory.
If your organization uses the accountExpires attribute as part of user account m
### Overwrite synchronized passwords
-An administrator can manually reset your password by using Windows PowerShell.
+An administrator can manually reset your password directly in Azure AD by using Windows PowerShell (unless the user is in a federated domain).
In this case, the new password overrides your synchronized password, and all password policies defined in the cloud are applied to the new password.
If you change your on-premises password again, the new password is synchronized
The synchronization of a password has no impact on the Azure user who is signed in. Your current cloud service session is not immediately affected by a synchronized password change that occurs while you're signed in to a cloud service. KMSI extends the duration of this difference. When the cloud service requires you to authenticate again, you need to provide your new password.
-### Additional advantages
--- Generally, password hash synchronization is simpler to implement than a federation service. It doesn't require any additional servers, and eliminates dependence on a highly available federation service to authenticate users.-- Password hash synchronization can also be enabled in addition to federation. It may be used as a fallback if your federation service experiences an outage.- ## Password hash sync process for Azure AD Domain Services If you use Azure AD Domain Services to provide legacy authentication for applications and services that need to use Kerberos, LDAP, or NTLM, some additional processes are part of the password hash synchronization flow. Azure AD Connect uses the additional following process to synchronize password hashes to Azure AD for use in Azure AD Domain
active-directory Usertesting Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/usertesting-tutorial.md
- Title: 'Tutorial: Azure AD SSO integration with UserTesting'
-description: Learn how to configure single sign-on between Azure Active Directory and UserTesting.
-------- Previously updated : 11/21/2022----
-# Tutorial: Azure AD SSO integration with UserTesting
-
-In this tutorial, you'll learn how to integrate UserTesting with Azure Active Directory (Azure AD). When you integrate UserTesting with Azure AD, you can:
-
-* Control in Azure AD who has access to UserTesting.
-* Enable your users to be automatically signed-in to UserTesting with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* UserTesting single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* UserTesting supports **SP and IDP** initiated SSO.
-
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-
-## Add UserTesting from the gallery
-
-To configure the integration of UserTesting into Azure AD, you need to add UserTesting from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **UserTesting** in the search box.
-1. Select **UserTesting** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-   Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for UserTesting
-
-Configure and test Azure AD SSO with UserTesting using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UserTesting.
-
-To configure and test Azure AD SSO with UserTesting, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure UserTesting SSO](#configure-usertesting-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create UserTesting test user](#create-usertesting-test-user)** - to have a counterpart of B.Simon in UserTesting that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **UserTesting** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode, perform the following steps:
-
- a. In the **Identifier** textbox, type the URL:
- `https://www.okta.com/saml2/service-provider/sposbpqioaxlalylvzsc`
-
- b. In the **Reply URL** textbox, type the URL:
- `https://auth.usertesting.com/sso/saml2/0oa1mi3sggbs692Nc0h8`
-
- c. In the **Sign on URL** textbox, type the URL:
- `https://app.usertesting.com/users/sso_sign_in`
-
- d. In the **Relay State** textbox, type the URL:
- `https://app.usertesting.com/sessions/from_idp`
-
-1. Your UserTesting application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but UserTesting expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration.
-
- ![image](common/default-attributes.png)
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-1. On the **Set up UserTesting** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
-   1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UserTesting.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **UserTesting**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure UserTesting SSO
-
-To configure single sign-on on the **UserTesting** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [UserTesting support team](mailto:support@usertesting.com). They configure this setting so that the SAML SSO connection is set properly on both sides. [Learn how](https://help.usertesting.com/hc/en-us/articles/360001764852-Single-Sign-On-SSO-Setup-Instructions).
-
-### Create UserTesting test user
-
-In this section, you create a user called Britta Simon in UserTesting. Work with [UserTesting support team](mailto:support@usertesting.com) to add the users in the UserTesting platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in the Azure portal. This redirects to the UserTesting sign-on URL where you can initiate the login flow.
-
-* Go to UserTesting Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the UserTesting for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the UserTesting tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the UserTesting for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Next steps
-
-Once you configure UserTesting you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Last updated 02/03/2023
# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)
-AKS supports upgrading the images on a node so your cluster is up to date with the newest operating system (OS) and runtime updates. AKS regularly provides new node OS images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features and to maintain security. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
-
-The latest AKS node image information can be found by visiting the [AKS release tracker][release-tracker].
+AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [Autoupgrade][auto-upgrade] channel which is used for Kubernetes version upgrades.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Why use node OS auto-upgrade
-Node OS auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS.
+This channel is exclusively meant to control node OS security updates. You can use this channel to disable [unattended upgrades][unattended-upgrades]. You can schedule maintenance without worrying about [Kured][kured] for security patches, provided you choose either the `SecurityPatch` or `NodeImage` options for `nodeOSUpgradeChannel`. By using this channel, you can run node image upgrades in tandem with Kubernetes version auto-upgrade channels like `Stable` and `Rapid`.
## Prerequisites
az provider register --namespace Microsoft.ContainerService
## Limitations
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.
+If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default. You can't change the node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`.
+
+The `nodeOSUpgradeChannel` isn't supported on Mariner and Windows OS node pools.
## Using node OS auto-upgrade
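As a sketch of enabling the channel (the resource group and cluster names are placeholders, and the preview feature registration from the prerequisites is assumed), setting the node OS auto-upgrade channel on an existing cluster looks like:

```shell
# Placeholder resource names; substitute your own.
# SecurityPatch applies OS security patches without a full node image roll;
# NodeImage instead rolls nodes to the latest node image.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-os-upgrade-channel SecurityPatch
```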
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
[az-feature-show]: /cli/azure/feature#az-feature-show [upgrade-aks-cluster]: upgrade-cluster.md [unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
+[Autoupgrade]: auto-upgrade-cluster.md
+[kured]: node-updates-kured.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Title: Concepts - Security in Azure Kubernetes Services (AKS)
description: Learn about security in Azure Kubernetes Service (AKS), including master and node communication, network policies, and Kubernetes secrets. Previously updated : 02/22/2023 Last updated : 02/28/2023
Container security protects the entire end-to-end pipeline from build to the app
The Secure Supply Chain includes the build environment and registry.
-Kubernetes includes security components, such as *pod security standards* and *Secrets*. Meanwhile, Azure includes components like Active Directory, Microsoft Defender for Containers, Azure Policy, Azure Key Vault, network security groups and orchestrated cluster upgrades. AKS combines these security components to:
+Kubernetes includes security components, such as *pod security standards* and *Secrets*. Azure includes components like Active Directory, Microsoft Defender for Containers, Azure Policy, Azure Key Vault, network security groups and orchestrated cluster upgrades. AKS combines these security components to:
-* Provide a complete Authentication and Authorization story.
-* Leverage AKS Built-in Azure Policy to secure your applications.
+* Provide a complete authentication and authorization story.
+* Apply AKS Built-in Azure Policy to secure your applications.
* End-to-End insight from build through your application with Microsoft Defender for Containers. * Keep your AKS cluster running the latest OS security updates and Kubernetes releases. * Provide secure pod traffic and access to sensitive credentials.
This article introduces the core concepts that secure your applications in AKS.
## Build Security
-As the entry point for the Supply Chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that will break development. It is about looking at the "Vendor Status" to segment based on vulnerabilities that are actionable by the development teams. Also leverage "Grace Periods" to allow developers time to remediate identified issues.
+As the entry point for the Supply Chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that breaks development. It's about looking at the **Vendor Status** to segment based on vulnerabilities that are actionable by the development teams. Also use **Grace Periods** to allow developers time to remediate identified issues.
## Registry Security
-Assessing the vulnerability state of the image in the Registry will detect drift and will also catch images that didn't come from your build environment. Use [Notary V2](https://github.com/notaryproject/notaryproject) to attach signatures to your images to ensure deployments are coming from a trusted location.
+Assessing the vulnerability state of the image in the Registry detects drift and also catches images that didn't come from your build environment. Use [Notary V2](https://github.com/notaryproject/notaryproject) to attach signatures to your images to ensure deployments are coming from a trusted location.
## Cluster security
-In AKS, the Kubernetes master components are part of the managed service provided, managed, and maintained by Microsoft. Each AKS cluster has its own single-tenanted, dedicated Kubernetes master to provide the API Server, Scheduler, etc.
+In AKS, the Kubernetes master components are part of the managed service provided, managed, and maintained by Microsoft. Each AKS cluster has its own single-tenanted, dedicated Kubernetes master to provide the API Server, Scheduler, etc. For information on how Microsoft manages security vulnerabilities and details about releasing security updates for managed parts of an AKS cluster, see [Vulnerability management for Azure Kubernetes Service][microsoft-vulnerability-management-aks].
By default, the Kubernetes API server uses a public IP address and a fully qualified domain name (FQDN). You can limit access to the API server endpoint using [authorized IP ranges][authorized-ip-ranges]. You can also create a fully [private cluster][private-clusters] to limit API server access to your virtual network.
You can control access to the API server using Kubernetes role-based access cont
## Node security AKS nodes are Azure virtual machines (VMs) that you manage and maintain.+ * Linux nodes run optimized versions of Ubuntu or Mariner. * Windows Server nodes run an optimized Windows Server 2019 release using the `containerd` or Docker container runtime.
When an AKS cluster is created or scaled up, the nodes are automatically deploye
> [!NOTE] > AKS clusters using:
-> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as its container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more details, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
+> * Kubernetes version 1.19 and greater for Linux node pools use `containerd` as its container runtime. Using `containerd` with Windows Server 2019 node pools is currently in preview. For more information, see [Add a Windows Server node pool with `containerd`][aks-add-np-containerd].
> * Kubernetes prior to v1.19 for Linux node pools use Docker as its container runtime. For Windows Server 2019 node pools, Docker is the default container runtime.
-### Node security patches
-
-#### Linux nodes
-Each evening, Linux nodes in AKS get security patches through their distro security update channel. This behavior is automatically configured as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes are not automatically rebooted if a security patch or kernel update requires it. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][aks-kured].
-
-Nightly updates apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade].
-
-For AKS clusters on auto upgrade channel "node-image" will not pull security updates through unattended upgrade. They will get security updates through the weekly node image upgrade.
-
-#### Windows Server nodes
-
-For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. Schedule Windows Server node pool upgrades in your AKS cluster around the regular Windows Update release cycle and your own validation process. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
+For more information about the security upgrade process for Linux and Windows worker nodes, see [Security patching nodes][aks-vulnerability-management-nodes].
### Node authorization+ Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets to protect against East-West attacks. Node authorization is enabled by default on AKS 1.24 + clusters. ### Node deployment+ Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. ### Node storage+ To provide storage, the nodes use Azure Managed Disks. For most VM node sizes, Azure Managed Disks are Premium disks backed by high-performance SSDs. The data stored on managed disks is automatically encrypted at rest within the Azure platform. To improve redundancy, Azure Managed Disks are securely replicated within the Azure datacenter. ### Hostile multi-tenant workloads
To start the upgrade process, specify one of the [listed available Kubernetes ve
During the upgrade process, AKS nodes are individually cordoned from the cluster to prevent new pods from being scheduled on them. The nodes are then drained and upgraded as follows:
-1. A new node is deployed into the node pool.
+1. A new node is deployed into the node pool.
* This node runs the latest OS image and patches. 1. One of the existing nodes is identified for upgrade. 1. Pods on the identified node are gracefully terminated and scheduled on the other nodes in the node pool.
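The cordon-and-drain flow above is triggered by a standard upgrade command. As a sketch (the resource names and version number are placeholders; pick a version from the `get-upgrades` output), checking for and starting an upgrade looks like:

```shell
# Placeholder names; list the versions this cluster can move to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Start the rolling upgrade; AKS cordons, drains, and replaces nodes one at a time.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.25.5
```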
For connectivity and security with on-premises networks, you can deploy your AKS
To filter virtual network traffic flow, Azure uses network security group rules. These rules define the source and destination IP ranges, ports, and protocols allowed or denied access to resources. Default rules are created to allow TLS traffic to the Kubernetes API server. You create services with load balancers, port mappings, or ingress routes. AKS automatically modifies the network security group for traffic flow.
-If you provide your own subnet for your AKS cluster (whether using Azure CNI or Kubenet), **do not** modify the NIC-level network security group managed by AKS. Instead, create more subnet-level network security groups to modify the flow of traffic. Make sure they don't interfere with necessary traffic managing the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
+If you provide your own subnet for your AKS cluster (whether using Azure CNI or Kubenet), **do not** modify the NIC-level network security group managed by AKS. Instead, create more subnet-level network security groups to modify the flow of traffic. Verify they don't interfere with necessary traffic managing the cluster, such as load balancer access, communication with the control plane, and [egress][aks-limit-egress-traffic].
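As a sketch of the recommended approach, a subnet-level network security group rule can be added with the Azure CLI instead of touching the NIC-level group. All resource names below are placeholders:

```azurecli-interactive
# Add a rule to a subnet-level NSG rather than modifying the AKS-managed
# NIC-level NSG (placeholder resource group and NSG names).
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name mySubnetNsg \
    --name AllowHttpsInbound \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 443
```

Make sure any rule you add still permits the cluster-management traffic listed above.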
### Kubernetes network policy
To limit network traffic between pods in your cluster, AKS offers support for [K
## Application Security
-To protect pods running on AKS leverage [Microsoft Defender for Containers][microsoft-defender-for-containers] to detect and restrict cyber attacks against your applications running in your pods. Run continual scanning to detect drift in the vulnerability state of your application and implement a "blue/green/canary" process to patch and replace the vulnerable images.
-
+To protect pods running on AKS, consider [Microsoft Defender for Containers][microsoft-defender-for-containers] to detect and restrict cyber attacks against your applications running in your pods. Run continual scanning to detect drift in the vulnerability state of your application and implement a "blue/green/canary" process to patch and replace the vulnerable images.
## Kubernetes Secrets
-With a Kubernetes *Secret*, you inject sensitive data into pods, such as access credentials or keys.
-1. Create a Secret using the Kubernetes API.
-1. Define your pod or deployment and request a specific Secret.
+With a Kubernetes *Secret*, you inject sensitive data into pods, such as access credentials or keys.
+
+1. Create a Secret using the Kubernetes API.
+1. Define your pod or deployment and request a specific Secret.
* Secrets are only provided to nodes with a scheduled pod that requires them.
- * The Secret is stored in *tmpfs*, not written to disk.
-1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs.
- * Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.
+ * The Secret is stored in *tmpfs*, not written to disk.
+1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's *tmpfs*.
+ * Secrets are stored within a given namespace and are only accessible from pods within the same namespace.
-Using Secrets reduces the sensitive information defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret.
+Using Secrets reduces the sensitive information defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret.
> [!NOTE]
> The raw secret manifest files contain the secret data in base64 format (see the [official documentation][secret-risks] for more details). Treat these files as sensitive information, and never commit them to source control.
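To illustrate that note, the following sketch shows that Secret data is only base64-encoded, not encrypted. The Secret name and value are hypothetical:

```console
# A Secret is typically created through the Kubernetes API, for example:
#   kubectl create secret generic db-credentials --from-literal=password='p@ssw0rd'
# The data stored in the manifest is only base64-encoded, not encrypted:
encoded=$(printf '%s' 'p@ssw0rd' | base64)
echo "$encoded"                            # cEBzc3cwcmQ=
printf '%s' "$encoded" | base64 --decode   # p@ssw0rd
```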
-Kubernetes secrets are stored in etcd, a distributed key-value store. Etcd store is fully managed by AKS and [data is encrypted at rest within the Azure platform][encryption-atrest].
+Kubernetes secrets are stored in etcd, a distributed key-value store. AKS fully manages the etcd store, and [data is encrypted at rest within the Azure platform][encryption-atrest].
## Next steps
For more information on core Kubernetes and AKS concepts, see:
[private-clusters]: private-clusters.md [network-policy]: use-network-policies.md [node-image-upgrade]: node-image-upgrade.md
+[microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md
+[aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
+
+ Title: Vulnerability management for Azure Kubernetes Service
+
+description: Learn how Microsoft manages security vulnerabilities for Azure Kubernetes Service (AKS) clusters.
+ Last updated : 02/24/2023
+
++
+# Vulnerability management for Azure Kubernetes Service (AKS)
+
+Vulnerability management involves detecting, assessing, mitigating, and reporting on any security vulnerabilities that exist in an organization's systems and software. Vulnerability management is a shared responsibility between you and Microsoft.
+
+This article describes how Microsoft manages security vulnerabilities and security updates (also referred to as patches), for Azure Kubernetes Service (AKS) clusters.
+
+## How vulnerabilities are discovered
+
+Microsoft identifies and patches vulnerabilities and missing security updates for the following components:
+
+- AKS Container Images
+
+- Ubuntu operating system 18.04 and 22.04 worker nodes: Canonical provides Microsoft with OS builds that have all available security updates applied.
+
+- Windows Server 2022 OS worker nodes: The Windows Server operating system is patched on the second Tuesday of every month. SLAs follow the Windows Server support contract and the severity of the vulnerability.
+
+- Mariner OS Nodes: Mariner provides AKS with OS builds that have all available security updates applied.
+
+## AKS Container Images
+
+While the [Cloud Native Computing Foundation][cloud-native-computing-foundation] (CNCF) owns and maintains most of the code running in AKS, the Azure Container Upstream team is responsible for building the open-source packages deployed on AKS. That responsibility includes complete ownership of the build, scan, sign, validate, and hotfix process, as well as control over the binaries in container images. Owning the build of these packages enables Microsoft both to establish a software supply chain over the binaries and to patch the software as needed.
+
+Microsoft has invested in engineers (the Azure Container Upstream team) and infrastructure in the broader Kubernetes ecosystem to help build the future of cloud-native compute in the wider CNCF community. A notable example is the donation of engineering time to help manage Kubernetes releases. This work not only ensures the quality of every Kubernetes release for the world, but has also enabled AKS, for several years, to quickly get new Kubernetes releases into production, in some cases months ahead of other cloud providers. Microsoft collaborates with other industry partners in the Kubernetes security organization. For example, the Security Response Committee (SRC) receives, prioritizes, and patches embargoed security vulnerabilities before they're announced to the public. This commitment ensures Kubernetes is secure for everyone, and enables AKS to patch and respond to vulnerabilities faster to keep our customers safe. In addition to Kubernetes, Microsoft has signed up to receive pre-release notifications of software vulnerabilities for products such as Envoy, container runtimes, and many other open-source projects.
+
+Microsoft scans container images using static analysis to discover vulnerabilities and missing updates in Kubernetes and Microsoft-managed containers. If fixes are available, the scanner automatically begins the update and release process.
+
+In addition to automated scanning, Microsoft discovers and updates vulnerabilities unknown to scanners in the following ways:
+
+* Microsoft performs its own audits, penetration testing, and vulnerability discovery across all AKS platforms. Specialized teams inside Microsoft and trusted third-party security vendors conduct their own attack research.
+
+* Microsoft actively engages with the security research community through multiple vulnerability reward programs. A dedicated [Microsoft Azure Bounty program][azure-bounty-program-overview] provides significant bounties for the best cloud vulnerability found each year.
+
+* Microsoft collaborates with other industry and open source software partners who share vulnerabilities, security research, and updates before the public release of the vulnerability. The goal of this collaboration is to update large pieces of Internet infrastructure before the vulnerability is announced to the public. In some cases, Microsoft contributes vulnerabilities found to this community.
+
+* Microsoft's security collaboration happens on many levels. Sometimes it occurs formally through programs where organizations sign up to receive pre-release notifications about software vulnerabilities for products such as Kubernetes and Docker. Collaboration also happens informally due to our engagement with many open source projects such as the Linux kernel, container runtimes, virtualization technology, and others.
+
+## Worker Nodes
+
+### Linux nodes
+
+Each evening, Linux nodes in AKS receive security patches through their distribution security update channel. This behavior is configured automatically as the nodes are deployed in an AKS cluster. To minimize disruption and potential impact to running workloads, nodes aren't automatically rebooted if a security patch or kernel update requires one. For more information about how to handle node reboots, see [Apply security and kernel updates to nodes in AKS][apply-security-kernel-updates-to-aks-nodes].
+
+Nightly, we apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic assessment performed every night, but remains unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][aks-node-image-upgrade].
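A hedged sketch of checking for and applying a node image upgrade with the Azure CLI follows; resource group, cluster, and node pool names are placeholders:

```azurecli-interactive
# Check whether a newer node image is available for a node pool (placeholder names).
az aks nodepool get-upgrades \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name nodepool1

# Upgrade only the node image, leaving the Kubernetes version unchanged.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --node-image-only
```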
+
+For AKS clusters using the *node-image* auto upgrade channel, nodes don't pull security updates through the unattended upgrade process. They receive security updates through the weekly node image upgrade.
+
+### Windows Server nodes
+
+For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. Schedule Windows Server node pool upgrades in your AKS cluster around the regular Windows Update release cycle and your own update management process. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][upgrade-node-pool-in-aks].
+
+## How vulnerabilities are classified
+
+Microsoft makes large investments in security hardening the entire stack, including the OS, container, Kubernetes, and network layers, in addition to setting good defaults, security-hardened configurations, and managed components. Combined, these efforts help reduce the impact and likelihood of vulnerabilities.
+
+The AKS team classifies vulnerabilities according to the Kubernetes vulnerability scoring system. Classifications consider many factors, including AKS configuration and security hardening. As a result of this approach, and the investments AKS makes in security, AKS vulnerability classifications might differ from other classification sources.
+
+The following table describes vulnerability severity categories:
+
+|Severity |Description |
+|||
+|Critical |A vulnerability easily exploitable in all clusters by an unauthenticated remote attacker that leads to full system compromise. |
+|High |A vulnerability easily exploitable for many clusters that leads to loss of confidentiality, integrity, or availability. |
+|Medium |A vulnerability exploitable for some clusters where loss of confidentiality, integrity, or availability is limited by common configurations, difficulty of the exploit itself, required access, or user interaction. |
+|Low |All other vulnerabilities. Exploitation is unlikely or consequences of exploitation are limited. |
+
+## How vulnerabilities are updated
+
+AKS patches CVEs that have a *vendor fix* every week. CVEs without a fix can't be remediated until a *vendor fix* is available. The fixed container images are cached in the next corresponding Virtual Hard Disk (VHD) build, which also contains the updated Ubuntu/Mariner/Windows patched CVEs. As long as you're running the updated VHD, you shouldn't be running any container image CVEs with a vendor fix that is over 30 days old.
+
+For the OS-based vulnerabilities in the VHD, AKS uses **Unattended Update** by default, so any security updates are applied to the existing VHDs daily. If **Unattended Update** is disabled, it's a recommended best practice to apply a node image update on a regular cadence to ensure the latest OS and image security updates are applied.
+
+## Update release timelines
+
+Microsoft's goal is to mitigate detected vulnerabilities within a time period appropriate for the risks they represent. The [Microsoft Azure FedRAMP High][microsoft-azure-fedramp-high] Provisional Authorization to Operate (P-ATO) includes AKS in audit scope, and AKS has been authorized. The FedRAMP Continuous Monitoring Strategy Guide and the FedRAMP Low, Moderate, and High Security Control baselines require remediation of known vulnerabilities within a specific time period according to their severity level, as specified in FedRAMP RA-5d.
+
+## How vulnerabilities and updates are communicated
+
+In general, Microsoft doesn't broadly communicate the release of new patch versions for AKS. However, Microsoft constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, Microsoft [notifies you to upgrade to the newly available patch][aks-cve-feed].
+
+## Security Reporting
+
+You can report a security issue to the Microsoft Security Response Center (MSRC), by [creating a vulnerability report][mrc-create-report].
+
+If you prefer to submit a report without logging in to the tool, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key by downloading it from the [Microsoft Security Response Center PGP Key page][msrc-pgp-key-page].
+
+You should receive a response within 24 hours. If for some reason you don't, follow up with an email to ensure we received your original message. For more information, go to the [Microsoft Security Response Center][microsoft-security-response-center].
+
+Include the following requested information (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+ * Type of issue (for example, buffer overflow, SQL injection, cross-site scripting, etc.)
+ * Full paths of source file(s) related to the manifestation of the issue
+ * The location of the affected source code (tag/branch/commit or direct URL)
+ * Any special configuration required to reproduce the issue
+ * Step-by-step instructions to reproduce the issue
+ * Proof-of-concept or exploit code (if possible)
+ * Impact of the issue, including how an attacker might exploit the issue.
+
+This information helps us triage your reported security issue quicker.
+
+If you're reporting for a bug bounty, more complete reports can contribute to a higher bounty award. For more information about our active programs, see [Microsoft Bug Bounty Program][microsoft-bug-bounty-program-overview].
+
+### Policy
+
+Microsoft follows the principle of [Coordinated Vulnerability Disclosure][coordinated-vulnerability-disclosure].
+
+## Next steps
+
+See the overview about [Upgrading Azure Kubernetes Service clusters and node pools][upgrade-aks-clusters-nodes].
+
+<!-- LINKS - internal -->
+[upgrade-aks-clusters-nodes]: upgrade.md
+[microsoft-azure-fedramp-high]: /azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-government-services-by-audit-scope
+[apply-security-kernel-updates-to-aks-nodes]: node-updates-kured.md
+[aks-node-image-upgrade]: node-image-upgrade.md
+[upgrade-node-pool-in-aks]: use-multiple-node-pools.md#upgrade-a-node-pool
+
+<!-- LINKS - external -->
+[microsoft-bug-bounty-program-overview]: https://aka.ms/opensource/security/bounty
+[coordinated-vulnerability-disclosure]: https://aka.ms/opensource/security/cvd
+[kubernetes-security-response-committee]: https://github.com/kubernetes/committee-security-response
+[cloud-native-computing-foundation]: https://www.cncf.io/
+[aks-cve-feed]: https://github.com/Azure/AKS/issues?q=is%3Aissue+is%3Aopen+cve
+[mrc-create-report]: https://aka.ms/opensource/security/create-report
+[msrc-pgp-key-page]: https://aka.ms/opensource/security/pgpkey
+[microsoft-security-response-center]: https://aka.ms/opensource/security/msrc
+[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
Title: Reset the credentials for a cluster-
-description: Learn how update or reset the service principal or Azure AD Application credentials for an Azure Kubernetes Service (AKS) cluster.
+ Title: Update or rotate the credentials for an Azure Kubernetes Service (AKS) cluster
+description: Learn how update or rotate the service principal or Azure AD Application credentials for an Azure Kubernetes Service (AKS) cluster.
Previously updated : 03/11/2019- Last updated : 03/01/2023
-# Update or rotate the credentials for Azure Kubernetes Service (AKS)
-
-AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. This article details how to update these credentials for an AKS cluster.
+# Update or rotate the credentials for an Azure Kubernetes Service (AKS) cluster
-You may also have [integrated your AKS cluster with Azure Active Directory (Azure AD)][aad-integration], and use it as an authentication provider for your cluster. In that case you will have 2 more identities created for your cluster, the Azure AD Server App and the Azure AD Client App, you may also reset those credentials.
-
-Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities are easier to manage than service principals and do not require updates or rotations. For more information, see [Use managed identities](use-managed-identity.md).
+AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. AKS clusters [integrated with Azure Active Directory (Azure AD)][aad-integration] as an authentication provider have two more identities: the Azure AD Server App and the Azure AD Client App. This article details how to update the service principal and Azure AD credentials for an AKS cluster.
> [!NOTE]
-> - When you use the `az aks create` command to generate the service principal automatically, the service principal credentials are written to the file `~/.azure/aksServicePrincipal.json` on the machine used to run the command
-> - If you don't specify a service principal with Azure CLI commands, the default service principal located at `~/.azure/aksServicePrincipal.json` is used
+> Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities don't require updates or rotations. For more information, see [Use managed identities](use-managed-identity.md).
## Before you begin
-You need the Azure CLI version 2.0.65 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need the Azure CLI version 2.0.65 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Update or create a new service principal for your AKS cluster

When you want to update the credentials for an AKS cluster, you can choose to either:

* Update the credentials for the existing service principal.
-* Create a new service principal and update the cluster to use these new credentials.
+* Create a new service principal and update the cluster to use these new credentials.
> [!WARNING]
> If you choose to create a *new* service principal, wait around 30 minutes for the service principal permission to propagate across all regions. Updating a large AKS cluster to use these credentials may take a long time to complete.

### Check the expiration date of your service principal
-To check the expiration date of your service principal, use the [az ad sp credential list][az-ad-sp-credential-list] command. The following example gets the service principal ID for the cluster named *myAKSCluster* in the *myResourceGroup* resource group using the [az aks show][az-aks-show] command. The service principal ID is set as a variable named *SP_ID* for use with the [az ad sp credential list][az-ad-sp-credential-list] command.
+To check the expiration date of your service principal, use the [`az ad app credential list`][az-ad-app-credential-list] command. The following example gets the service principal ID for the cluster named *myAKSCluster* in the *myResourceGroup* resource group using the [`az aks show`][az-aks-show] command. The service principal ID is set as a variable named *SP_ID*.
```azurecli
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
az ad app credential list --id "$SP_ID" --query "[].endDateTime" -o tsv
```
-### Reset the existing service principal credential
+### Reset the existing service principal credentials
-To update the credentials for the existing service principal, get the service principal ID of your cluster using the [az aks show][az-aks-show] command. The following example gets the ID for the cluster named *myAKSCluster* in the *myResourceGroup* resource group. The service principal ID is set as a variable named *SP_ID* for use in additional command. These commands use Bash syntax.
+To update the credentials for an existing service principal, get the service principal ID of your cluster using the [`az aks show`][az-aks-show] command. The following example gets the ID for the cluster named *myAKSCluster* in the *myResourceGroup* resource group. The variable named *SP_ID* stores the service principal ID used in the next step. These commands use the Bash command language.
> [!WARNING] > When you reset your cluster credentials on an AKS cluster that uses Azure Virtual Machine Scale Sets, a [node image upgrade][node-image-upgrade] is performed to update your nodes with the new credential information.
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
--query servicePrincipalProfile.clientId -o tsv) ```
-With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable.
+Use the variable *SP_ID* containing the service principal ID to reset the credentials using the [`az ad app credential reset`][az-ad-app-credential-reset] command. The following example enables the Azure platform to generate a new secure secret for the service principal and store it as a variable named *SP_SECRET*.
```azurecli-interactive
SP_SECRET=$(az ad app credential reset --id "$SP_ID" --query password -o tsv)
```
-Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
+Next, [update the AKS cluster with the service principal credentials][update-cluster-service-principal-credentials]. This step is necessary for the service principal changes to take effect on the AKS cluster.
### Create a new service principal
-If you chose to update the existing service principal credentials in the previous section, skip this step. Continue to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials).
+> [!NOTE]
+> If you updated the existing service principal credentials in the previous section, skip this section and instead [update the AKS cluster with service principal credentials][update-cluster-service-principal-credentials].
-To create a service principal and then update the AKS cluster to use these new credentials, use the [az ad sp create-for-rbac][az-ad-sp-create] command.
+To create a service principal and update the AKS cluster to use the new credential, use the [`az ad sp create-for-rbac`][az-ad-sp-create] command.
```azurecli-interactive
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/mySubscriptionID
```
-The output is similar to the following example. Make a note of your own `appId` and `password`. These values are used in the next step.
+The output is similar to the following example output. Make a note of your own `appId` and `password` to use in the next step.
```json
{
The output is similar to the following example. Make a note of your own `appId`
}
```
-Now define variables for the service principal ID and client secret using the output from your own [az ad sp create-for-rbac][az-ad-sp-create] command, as shown in the following example. The *SP_ID* is your *appId*, and the *SP_SECRET* is your *password*:
+Define variables for the service principal ID and client secret using your output from running the [`az ad sp create-for-rbac`][az-ad-sp-create] command. The *SP_ID* is the *appId*, and the *SP_SECRET* is your *password*.
```console
SP_ID=7d837646-b1f3-443d-874c-fd83c7c739c5
SP_SECRET=a5ce83c9-9186-426d-9183-614597c7f2f7
```
-Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
+Next, [update the AKS cluster with the new service principal credentials][update-cluster-service-principal-credentials] so the change takes effect on the AKS cluster.
-## Update AKS cluster with new service principal credentials
+## Update AKS cluster with service principal credentials
-> [!IMPORTANT]
-> For large clusters, updating the AKS cluster with a new service principal may take a long time to complete. Consider reviewing and customizing the [node surge upgrade settings][node-surge-upgrade] to minimize disruption during cluster updates and upgrades.
+>[!IMPORTANT]
+>For large clusters, updating your AKS cluster with a new service principal may take a long time to complete. Consider reviewing and customizing the [node surge upgrade settings][node-surge-upgrade] to minimize disruption during the update. For small and midsize clusters, it takes several minutes for the new credentials to update in the cluster.
-Regardless of whether you chose to update the credentials for the existing service principal or create a service principal, you now update the AKS cluster with your new credentials using the [az aks update-credentials][az-aks-update-credentials] command. The variables for the *--service-principal* and *--client-secret* are used:
+Update the AKS cluster with your new or existing credentials by running the [`az aks update-credentials`][az-aks-update-credentials] command.
```azurecli-interactive
az aks update-credentials \
az aks update-credentials \
--name myAKSCluster \ --reset-service-principal \ --service-principal "$SP_ID" \
- --client-secret "${SP_SECRET:Q}"
+ --client-secret "${SP_SECRET}"
```
-> [!NOTE]
-> `${SP_SECRET:Q}` escapes any special characters in `SP_SECRET`, which can cause the command to fail. The above example works for Azure Cloud Shell and zsh terminals. For BASH terminals, use `${SP_SECRET@Q}`.
-
-For small and midsize clusters, it takes a few moments for the service principal credentials to be updated in the AKS.
+## Update AKS cluster with new Azure AD application credentials
-## Update AKS Cluster with new Azure AD Application credentials
-
-You may create new Azure AD Server and Client applications by following the [Azure AD integration steps][create-aad-app]. Or reset your existing Azure AD Applications following the [same method as for service principal reset](#reset-the-existing-service-principal-credential). After that you just need to update your cluster Azure AD Application credentials using the same [az aks update-credentials][az-aks-update-credentials] command but using the *--reset-aad* variables.
+You can create new Azure AD server and client applications by following the [Azure AD integration steps][create-aad-app], or reset your existing Azure AD applications following the [same method as for service principal reset][reset-existing-service-principal-credentials]. After that, you need to update your cluster Azure AD application credentials using the [`az aks update-credentials`][az-aks-update-credentials] command with the *--reset-aad* variables.
```azurecli-interactive
az aks update-credentials \
az aks update-credentials \
    --aad-client-app-id <CLIENT APPLICATION ID>
```

## Next steps
-In this article, the service principal for the AKS cluster itself and the Azure AD Integration Applications were updated. For more information on how to manage identity for workloads within a cluster, see [Best practices for authentication and authorization in AKS][best-practices-identity].
+In this article, you learned how to update or rotate service principal and Azure AD application credentials. For more information on how to manage identity for workloads within an AKS cluster, see [Best practices for authentication and authorization in AKS][best-practices-identity].
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
In this article, the service principal for the AKS cluster itself and the Azure
[aad-integration]: ./azure-ad-integration-cli.md [create-aad-app]: ./azure-ad-integration-cli.md#create-azure-ad-server-component [az-ad-sp-create]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
-[az-ad-sp-credential-list]: /cli/azure/ad/sp/credential#az_ad_sp_credential_list
-[az-ad-sp-credential-reset]: /cli/azure/ad/sp/credential#az_ad_sp_credential_reset
+[az-ad-app-credential-list]: /cli/azure/ad/app/credential#az_ad_app_credential_list
+[az-ad-app-credential-reset]: /cli/azure/ad/app/credential#az_ad_app_credential_reset
[node-image-upgrade]: ./node-image-upgrade.md [node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
+[update-cluster-service-principal-credentials]: #update-aks-cluster-with-service-principal-credentials
+[reset-existing-service-principal-credentials]: #reset-the-existing-service-principal-credentials
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
For more information what cluster operations may trigger specific upgrade events
[ts-ip-limit]: /troubleshoot/azure/azure-kubernetes/error-code-publicipcountlimitreached [ts-quota-exceeded]: /troubleshoot/azure/azure-kubernetes/error-code-quotaexceeded [ts-subnet-full]: /troubleshoot/azure/azure-kubernetes/error-code-subnetisfull-upgrade
-[node-security-patches]: ./concepts-security.md#node-security-patches
+[node-security-patches]: ./concepts-vulnerability-management.md#worker-nodes
[node-updates-kured]: ./node-updates-kured.md
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
Mariner currently has the following limitations:
[mariner-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux [mariner-capabilities]: https://microsoft.github.io/CBL-Mariner/docs/#key-capabilities-of-cbl-mariner-linux [mariner-cluster-config]: cluster-configuration.md
-[mariner-node-pool]: use-multiple-node-pools.md
-[ubuntu-to-mariner]: use-multiple-node-pools.md
+[mariner-node-pool]: use-multiple-node-pools.md#add-a-mariner-node-pool
+[ubuntu-to-mariner]: use-multiple-node-pools.md#migrate-ubuntu-nodes-to-mariner
[auto-upgrade-aks]: auto-upgrade-cluster.md [kured]: node-updates-kured.md
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 02/22/2023 Last updated : 03/01/2023
App Service Environment v3 is available in the following regions:
| Switzerland North | ✅ | ✅ | ✅ | | Switzerland West | ✅ | | ✅ | | UAE Central | | | ✅ |
-| UAE North | ✅ | | ✅ |
+| UAE North | ✅ | ✅ | ✅ |
| UK South | ✅ | ✅ | ✅ | | UK West | ✅ | | ✅ | | West Central US | ✅ | | ✅ |
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
+
+ Title: Name resolution in App Service
+description: Overview of how name resolution (DNS) works for your app in Azure App Service.
++ Last updated : 03/01/2023+++
+# Name resolution (DNS) in App Service
+
+Your app uses DNS when making calls to dependent resources. Resources could be Azure services such as Key Vault, Storage, or Azure SQL, but they could also be web APIs that your app depends on. When you make a call to, for example, *myservice.com*, you're using DNS to resolve the name to an IP address. This article describes how App Service handles name resolution and how it determines which DNS servers to use. The article also describes settings you can use to configure DNS resolution.
+
+## How name resolution works in App Service
+
+If you aren't integrating your app with a virtual network and you haven't configured custom DNS, your app uses [Azure DNS](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution). If you integrate your app with a virtual network, your app uses the DNS configuration of the virtual network. The default for a virtual network is also to use Azure DNS. Through the virtual network, it's also possible to link to [Azure DNS private zones](../dns/private-dns-overview.md) and use that for private endpoint resolution or private domain name resolution.
+
+If you configured your virtual network with a list of custom DNS servers, name resolution uses these servers. If your virtual network is using custom DNS servers and you're using private endpoints, read [this article](../private-link/private-endpoint-dns.md) carefully. You also need to ensure that your custom DNS servers can resolve any public DNS records used by your app. Your DNS configuration needs to either forward requests to a public DNS server, include a public DNS server like Azure DNS in the list of custom DNS servers, or specify an alternative server at the app level.
+
+When your app needs to resolve a domain name using DNS, the app sends a name resolution request to all configured DNS servers. If the first server in the list returns a response within the timeout limit, the result is returned immediately. If not, the app waits for the remaining servers to respond within the timeout period and evaluates the responses in the order you've configured the servers. If none of the servers respond within the timeout and you've configured retry, the app repeats the process.
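The evaluation order described above can be sketched as a small model. This is illustrative only, not App Service code; the function and type names are hypothetical:

```python
from collections import namedtuple

# Hypothetical model of the behavior described above, not App Service code.
Response = namedtuple("Response", ["ip", "latency"])

def resolve(servers, responses, timeout, retries=1):
    """Prefer the earliest-configured server that answered within the timeout;
    repeat the whole round if retries are configured."""
    for _ in range(retries):
        for server in servers:  # evaluate in configured order
            reply = responses.get(server)
            if reply is not None and reply.latency <= timeout:
                return reply.ip
    return None  # no server answered in time

# The second server answers faster, but the first configured server wins
# as long as it responds within the timeout.
servers = ["10.0.0.4", "1.1.1.1"]
responses = {
    "10.0.0.4": Response("203.0.113.7", 2.5),
    "1.1.1.1": Response("198.51.100.9", 0.2),
}
print(resolve(servers, responses, timeout=3))  # → 203.0.113.7
```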
+
+## Configuring DNS servers
+
+You can override the DNS configuration for an individual app by specifying the `dnsServers` property in the `dnsConfiguration` site property object. You can specify up to five custom DNS servers. You can configure custom DNS servers using the Azure CLI:
+
+```azurecli-interactive
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.169.16','1.1.1.1']"
+```
+
+You can still use the existing `WEBSITE_DNS_SERVER` app setting, and you can add custom DNS servers with either setting. If you want to add multiple DNS servers using the app setting, you must separate the servers by commas with no blank spaces added.
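For example, a sketch of configuring the `WEBSITE_DNS_SERVER` app setting with the Azure CLI. The server addresses are placeholders; note the comma-separated value has no spaces:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITE_DNS_SERVER="168.63.169.16,1.1.1.1"
```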
+
+Using the app setting `WEBSITE_DNS_ALT_SERVER`, you append a DNS server to the end of the list of configured DNS servers. You use this setting to configure a fallback to the custom DNS servers from the virtual network.
+
+## Configure name resolution behavior
+
+If you require fine-grained control over name resolution, App Service allows you to modify the default behavior. You can modify retry attempts, retry timeout, and cache timeout. Changing behavior, like disabling or lowering the cache duration, may affect performance.
+
+|Property name|Default value|Allowed values|Description|
+|-|-|-|-|
+|dnsRetryAttemptCount|1|1-5|Defines the number of attempts to resolve where one means no retries|
+|dnsMaxCacheTimeout|30|0-60|Cache timeout defined in seconds. Setting the cache timeout to zero disables caching|
+|dnsRetryAttemptTimeout|3|1-30|Timeout before retrying or failing. Timeout also defines the time to wait for secondary server results if the primary doesn't respond|
+
+>[!NOTE]
+> * Changing name resolution behavior is not supported on Windows Container apps
+> * To enable DNS caching on Web App for Containers and Linux-based apps you must add the app setting `WEBSITE_ENABLE_DNS_CACHE`
+
+Configure the name resolution behavior by using these CLI commands:
+
+```azurecli-interactive
+az resource update --resource-group <group-name> --name <app-name> --set properties.dnsConfiguration.dnsMaxCacheTimeout=[0-60] --resource-type "Microsoft.Web/sites"
+az resource update --resource-group <group-name> --name <app-name> --set properties.dnsConfiguration.dnsRetryAttemptCount=[1-5] --resource-type "Microsoft.Web/sites"
+az resource update --resource-group <group-name> --name <app-name> --set properties.dnsConfiguration.dnsRetryAttemptTimeout=[1-30] --resource-type "Microsoft.Web/sites"
+```
+
+Validate the settings by using this CLI command:
+
+```azurecli-interactive
+az resource show --resource-group <group-name> --name <app-name> --query properties.dnsConfiguration --resource-type "Microsoft.Web/sites"
+```
+
+## Next steps
+
+- [Configure virtual network integration](./configure-vnet-integration-enable.md)
+- [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md)
+- [General networking overview](./networking-features.md)
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
In your application code, you use the usual logging facilities to send log messa
System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened"); ``` -- By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)
+ By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)
+- Python applications can use the [OpenCensus package](/azure/azure-monitor/app/opencensus-python) to send logs to the application diagnostics log.
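As a minimal illustration, a Python app can also rely on standard console logging, since App Service captures console output into the application diagnostics log. The logger name and format below are arbitrary choices, not App Service requirements:

```python
import logging
import sys

# App Service captures anything written to the console (stdout/stderr),
# so a plain StreamHandler is enough to land in the application log.
logger = logging.getLogger("app")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("If you're seeing this, something bad happened")
```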
+ ## Stream logs
-Before you stream logs in real time, enable the log type that you want. Any information written to the conosle output or files ending in .txt, .log, or .htm that are stored in the */home/LogFiles* directory (D:\home\LogFiles) is streamed by App Service.
+Before you stream logs in real time, enable the log type that you want. Any information written to the console output or files ending in .txt, .log, or .htm that are stored in the */home/LogFiles* directory (D:\home\LogFiles) is streamed by App Service.
> [!NOTE] > Some types of logging buffer write to the log file, which can result in out of order events in the stream. For example, an application log entry that occurs when a user visits a page may be displayed in the stream before the corresponding HTTP log entry for the page request.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL'
description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux. ms.devlang: python Previously updated : 10/07/2022 Last updated : 02/28/2023
In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/
**To complete this tutorial, you'll need:**
-* An Azure account with an active subscription exists. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
-To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) install locally. Then, download or clone the app:
+To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, download or clone the app:
### [Flask](#tab/flask)
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-python-postgres-tutorial**. 1. *Region* &rarr; Any Azure region near you. 1. *Name* &rarr; **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
- 1. *Runtime stack* &rarr; **Python 3.9**.
+ 1. *Runtime stack* &rarr; **Python 3.10**.
+ 1. *Database* &rarr; **PostgreSQL - Flexible Server** is selected by default as the database engine. The server name and database name are also set by default to appropriate values.
1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
- 1. **PostgreSQL - Flexible Server** is selected by default as the database engine.
1. Select **Review + create**. 1. After validation completes, select **Create**. :::column-end:::
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::column-end::: :::row-end:::
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
- ## 2. Verify connection settings The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings).
The creation wizard generated the connectivity variables for you already as [app
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `DBNAME`, `DBHOST`, `DBUSER`, and `DBPASS` are present. They'll be injected into the runtime environment as environment variables.
- App settings are a good way to keep connection secrets out of your code repository.
+ **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable.
+ App settings are one way to keep connection secrets out of your code repository.
+ When you're ready to move your secrets to a more secure location,
+ here's an [article on storing in Azure Key Vault](/azure/key-vault/certificates/quick-create-python).
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png"::: :::column-end::: :::row-end:::
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
+Having issues? Check the [Troubleshooting guide](configure-language-python.md#troubleshooting).
+ ## 3. Deploy sample code
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
1. In **Organization**, select your account. 1. In **Repository**, select **msdocs-flask-postgresql-sample-app**. 1. In **Branch**, select **main**.
+ 1. Keep the default option selected to **Add a workflow**.
1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory. :::column-end::: :::column:::
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
--
-Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
+Having issues? Check the [Troubleshooting guide](configure-language-python.md#troubleshooting).
+ ## 4. Generate database schema ### [Flask](#tab/flask)
-With the PostgreSQL database protected by the virtual network, the easiest way to run Run [Flask database migrations](https://flask-migrate.readthedocs.io/en/latest/) is in an SSH session with the App Service container.
+With the PostgreSQL database protected by the virtual network, the easiest way to run [Flask database migrations](https://flask-migrate.readthedocs.io/en/latest/) is in an SSH session with the App Service container.
:::row::: :::column span="2":::
With the PostgreSQL database protected by the virtual network, the easiest way t
--
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
- ## 5. Browse to the app :::row:::
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
:::row-end::: :::row::: :::column span="2":::
- **Step 2.** Add a few tasks to the list.
- Congratulations, you're running a secure data-driven Flask app in Azure App Service, with connectivity to Azure Database for PostgreSQL.
+ **Step 2.** Add a few restaurants to the list.
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png"::: :::column-end::: :::row-end:::
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
- ## 6. Stream diagnostic logs Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below. ### [Flask](#tab/flask) ### [Django](#tab/django) --
Azure App Service captures all messages output to the console to help you diagno
**Step 1.** In the App Service page: 1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**.
+ 1. In the top menu, select **Save**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-1.png":::
Azure App Service captures all messages output to the console to help you diagno
:::column-end::: :::row-end:::
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
+Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python).
## 7. Clean up resources
When you're finished, you can delete all of the resources from your Azure subscr
:::column-end::: :::row-end:::
-Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
- ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost)
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
#### How much does this setup cost?
-Pricing for the create resources is as follows:
+Pricing for the created resources is as follows:
- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).-- The PostgreSQL flexible server is create in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). - The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
Pricing for the create resources is as follows:
#### How does local app development work with GitHub Actions?
-Take the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates push it to GitHub. For example:
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example:
```terminal git add .
For more information, see [Production settings for Django apps](configure-langua
#### I can't connect to the SSH session
-If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'DBNAME'`, it may mean that the environment variable is missing (you may have removed the app setting).
+If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'AZURE_POSTGRESQL_CONNECTIONSTRING'`, it may mean that the environment variable is missing (you may have removed the app setting).
#### I get an error when running database migrations
-If you encounter any errors related to connecting to the database, check if the app settings (`DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`) have been changed. Without those settings, the migrate command can't communicate with the database.
+If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database.
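To check the value from code, you can read the app setting from the environment. The parsing below assumes a libpq-style `key=value` string, and the fallback value is made up for illustration; the exact format of the generated connection string may differ, so treat this as a hypothetical sketch:

```python
import os

# Fallback value is invented for illustration; in App Service the app setting
# AZURE_POSTGRESQL_CONNECTIONSTRING is injected as an environment variable.
raw = os.environ.get(
    "AZURE_POSTGRESQL_CONNECTIONSTRING",
    "dbname=restaurants host=example.postgres.database.azure.com user=dbadmin password=secret",
)
# Split a libpq-style "key=value key=value" string into a dict (format assumed).
parts = dict(pair.split("=", 1) for pair in raw.split())
print(parts["host"])
```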
## Next steps
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 12/15/2022 Last updated : 02/28/2023 recommendations: false
recommendations: false
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction. * ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.+ * With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.+ * For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates.
-* The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
+
+* The response includes a ```docType``` property to indicate which of the composed models was used to analyze the document.
* For ```Custom neural``` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model. Model compose is best suited for scenarios where you have documents of different types being submitted for analysis.
+* Pricing is the same whether you're using a composed model or selecting a specific model. One model analyzes each document. With composed models, the system performs a classification to check which of the composed custom models should be invoked and invokes the single best model for the document.
+ ## Compose model limits > [!NOTE]
With composed models, you can assign multiple custom models to a composed model
|**Custom neural models v3.0 (preview)**|Not Supported|Not Supported|Supported|Not Supported| |**Custom Neural models v3.0 (GA)**|Not Supported|Not Supported|Not Supported|Supported|
-* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition will ensure that the v2.1 model can be composed with other models.
+* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models.
-* Models composed with v2.1 of the API will continue to be supported, requiring no updates.
+* Models composed with v2.1 of the API continue to be supported, requiring no updates.
* The limit for maximum number of custom models that can be composed is 100.
The following resources are supported by Form Recognizer **v3.0** :
::: moniker range="form-recog-2.1.0"
-The following resources are supported by Form Recognizer v2.1:
+Form Recognizer v2.1 supports the following resources:
| Feature | Resources | |-|-|
Learn to create and compose custom models:
> [!div class="nextstepaction"] > [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
Previously updated : 01/18/2023 Last updated : 02/28/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
recommendations: false
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]
-This how-to guide will walk you through the process of enabling secure connections for your Form Recognizer resource. You can secure the following connections:
+This how-to guide walks you through the process of enabling secure connections for your Form Recognizer resource. You can secure the following connections:
* Communication between a client application within a Virtual Network (VNET) and your Form Recognizer Resource.
This how-to guide will walk you through the process of enabling secure connectio
* Communication between your Form Recognizer resource and a storage account (needed when training a custom model).
-You'll be setting up your environment to secure the resources:
+ You're setting up your environment to secure the resources:
:::image type="content" source="media/managed-identities/secure-config.png" alt-text="Screenshot of secure configuration with managed identity and private endpoints."::: ## Prerequisites
-To get started, you'll need:
+To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/)ΓÇöif you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. Create containers to store and organize your blob data within your storage account.
-* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. You'll create a virtual network to deploy your application resources to train models and analyze documents.
+* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. Create a virtual network to deploy your application resources to train models and analyze documents.
* An **Azure data science VM** for [**Windows**](../../machine-learning/data-science-virtual-machine/provision-vm.md) or [**Linux/Ubuntu**](../../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md) to optionally deploy a data science VM in the virtual network to test the secure connections being established.
Configure each of the resources to ensure that the resources can communicate wit
* Configure the Form Recognizer Studio to use the newly created Form Recognizer resource by accessing the settings page and selecting the resource.
-* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request will successfully complete.
+* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request completes successfully.
* Add a training dataset to a container in the Storage account you created.
Configure each of the resources to ensure that the resources can communicate wit
* Select the container with the training dataset you uploaded in the previous step. Ensure that if the training dataset is within a folder, the folder path is set appropriately.
-* If you have the required permissions, the Studio will set the CORS setting required to access the storage account. If you don't have the permissions, you'll need to ensure that the CORS settings are configured on the Storage account before you can proceed.
+* If you have the required permissions, the Studio sets the CORS setting required to access the storage account. If you don't have the permissions, you need to ensure that the CORS settings are configured on the Storage account before you can proceed.
* Validate that the Studio is configured to access your training data. If you can see your documents in the labeling experience, all the required connections have been established.
You now have a working implementation of all the components needed to build a Fo
:::image type="content" source="media/managed-identities/default-config.png" alt-text="Screenshot of default security configuration.":::
-Next, you'll complete the following steps:
+Next, complete the following steps:
* Setup managed identity on the Form Recognizer resource.
Start configuring secure communications by navigating to the **Networking** tab
## Enable access to storage from Form Recognizer
-To ensure that the Form Recognizer resource can access the training dataset, you'll need to add a role assignment for the managed identity that was created earlier.
+To ensure that the Form Recognizer resource can access the training dataset, you need to add a role assignment for your [managed identity](#setup-managed-identity-for-form-recognizer).
1. Staying on the storage account window in the Azure portal, navigate to the **Access Control (IAM)** tab in the left navigation bar.
To ensure that the Form Recognizer resource can access the training dataset, you
:::image type="content" source="media/managed-identities/v2-stg-role-assign-role.png" alt-text="Screenshot of add role assignment window.":::
-1. On the **Role** tab, search for and select the**Storage Blob Reader** permission and select **Next**.
+1. On the **Role** tab, search for and select the **Storage Blob Data Reader** permission and select **Next**.
:::image type="content" source="media/managed-identities/v2-stg-role-assignment.png" alt-text="Screenshot of choose a role tab.":::
Great! You've configured your Form Recognizer resource to use a managed identity
## Configure private endpoints for access from VNETs
-When you connect to resources from a virtual network, adding private endpoints will ensure both the storage account and the Form Recognizer resource are accessible from the virtual network.
+When you connect to resources from a virtual network, adding private endpoints ensures that both the storage account and the Form Recognizer resource are accessible from the virtual network.
-Next, you'll configure the virtual network to ensure only resources within the virtual network or traffic router through the network will have access to the Form Recognizer resource and the storage account.
+Next, configure the virtual network to ensure that only resources within the virtual network, or traffic routed through the network, have access to the Form Recognizer resource and the storage account.
### Enable your virtual network and private endpoints
### Configure your private endpoint
-1. Navigate to the **Private endpoint connections** tab and select the **+ Private endpoint**. You'll be
-navigated to the **Create a private endpoint** dialog page.
+1. Navigate to the **Private endpoint connections** tab and select **+ Private endpoint**. The **Create a private endpoint** dialog page opens.
1. On the **Create private endpoint** dialog page, select the following options:
That's it! You can now configure secure access for your Form Recognizer resource
## Next steps

> [!div class="nextstepaction"]
-> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
+> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md
Title: Configure Azure Automation Start/Stop VMs during off-hours
description: This article tells how to configure the Start/Stop VMs during off-hours feature to support different use cases or scenarios. Previously updated : 02/23/2023 Last updated : 02/28/2023
# Configure Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md) which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to:
automation Automation Solution Vm Management Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-logs.md
Title: Query logs from Azure Automation Start/Stop VMs during off-hours
description: This article tells how to use Azure Monitor to query log data generated by Start/Stop VMs during off-hours. Previously updated : 11/29/2022 Last updated : 02/28/2023
# Query logs from Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
-The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
Azure Automation forwards two types of records to the linked Log Analytics workspace: job logs and job streams. This article reviews the data available for [query](../azure-monitor/logs/log-query-overview.md) in Azure Monitor.
automation Automation Solution Vm Management Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md
Title: Remove Azure Automation Start/Stop VMs during off-hours overview
description: This article describes how to remove the Start/Stop VMs during off-hours feature and unlink an Automation account from the Log Analytics workspace. Previously updated : 02/23/2023 Last updated : 02/28/2023
# Remove Start/Stop VMs during off-hours from Automation account > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
Title: Azure Automation Start/Stop VMs during off-hours overview
description: This article describes the Start/Stop VMs during off-hours feature, which starts or stops VMs on a schedule and proactively monitors them from Azure Monitor Logs. Previously updated : 01/04/2023 Last updated : 02/28/2023
# Start/Stop VMs during off-hours overview > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
# Supported regions for linked Log Analytics workspace > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md) which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported to link them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled.
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
Title: Troubleshoot Azure Automation Start/Stop VMs during off-hours issues
description: This article tells how to troubleshoot and resolve issues arising during the use of the Start/Stop VMs during off-hours feature. Previously updated : 11/29/2022 Last updated : 02/28/2023
# Troubleshoot Start/Stop VMs during off-hours issues > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared on 31 March 2023.
This article provides information on troubleshooting and resolving issues that arise when you deploy the Azure Automation Start/Stop VMs during off-hours feature on your VMs.
azure-app-configuration Quickstart Python Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-python-provider.md
The Python App Configuration provider is a library in preview running on top of
```python
from azure.appconfiguration.provider import (
- AzureAppConfigurationProvider,
+ load_provider,
    SettingSelector
)
import os
connection_string = os.environ.get("AZURE_APPCONFIG_CONNECTION_STRING")

# Connect to Azure App Configuration using a connection string.
- config = AzureAppConfigurationProvider.load(
- connection_string=connection_string)
+ config = load_provider(connection_string=connection_string)
# Find the key "message" and print its value.
print(config["message"])
# Connect to Azure App Configuration using a connection string and trimmed key prefixes.
trimmed = {"test."}
- config = AzureAppConfigurationProvider.load(
- connection_string=connection_string, trimmed_key_prefixes=trimmed)
+ config = load_provider(connection_string=connection_string, trim_prefixes=trimmed)
# From the keys with trimmed prefixes, find a key with "message" and print its value.
print(config["message"])

# Connect to Azure App Configuration using SettingSelector.
- selects = {SettingSelector("message*", "\0")}
- config = AzureAppConfigurationProvider.load(
- connection_string=connection_string, selects=selects)
+ selects = {SettingSelector(key_filter="message*", label_filter="\0")}
+ config = load_provider(connection_string=connection_string, selects=selects)
# Print True or False to indicate if "message" is found in Azure App Configuration.
print("message found: " + str("message" in config))
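Conceptually, the `trim_prefixes` option strips a matching prefix from each returned key. A minimal plain-Python sketch of that behavior (this only models the idea; it is not the provider's implementation):

```python
def trim_key_prefixes(settings, prefixes):
    """Return a copy of settings with any matching prefix removed from each key."""
    trimmed = {}
    for key, value in settings.items():
        for prefix in prefixes:
            if key.startswith(prefix):
                # Strip the first matching prefix, leaving the rest of the key.
                key = key[len(prefix):]
                break
        trimmed[key] = value
    return trimmed

settings = {"test.message": "Hello", "color": "blue"}
print(trim_key_prefixes(settings, {"test."}))  # {'message': 'Hello', 'color': 'blue'}
```

This is why, after trimming `test.`, the configuration key `test.message` is looked up simply as `message`.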
azure-cache-for-redis Cache Best Practices Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-connection.md
Previously updated : 08/03/2022 Last updated : 02/27/2023
Test your system's resiliency to connection breaks using a [reboot](cache-admini
## TCP settings for Linux-hosted client applications
-The default TCP settings in some Linux versions can cause Redis server connections to fail for 13 minutes or more. The default settings can prevent the client application from detecting closed connections and restoring them automatically if the connection was not closed gracefully.
+The default TCP settings in some Linux versions can cause Redis server connections to fail for 13 minutes or more. The default settings can prevent the client application from detecting closed connections and restoring them automatically if the connection wasn't closed gracefully.
-The failure to reestablish a connection can happen occur in situations where the network connection is disrupted or the Redis server goes offline for unplanned maintenance.
+The failure to reestablish a connection can happen in situations where the network connection is disrupted or the Redis server goes offline for unplanned maintenance.
We recommend these TCP settings:
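The digest elides the settings table itself. As an illustration of how such TCP settings are applied on Linux (the parameter and value shown here are an example, not the article's authoritative list; confirm against the full article before applying):

```
# /etc/sysctl.conf -- example only.
# Lowering tcp_retries2 shortens how long a broken connection can linger
# before the kernel reports it to the application.
net.ipv4.tcp_retries2 = 5
```

Apply persisted settings with `sudo sysctl -p`, or set one at runtime with `sudo sysctl -w net.ipv4.tcp_retries2=5`.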
Apply design patterns for resiliency. For more information, see [How do I make m
## Idle timeout
-Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that allows client libraries to send Redis `PING` commands to a Redis server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
+Azure Cache for Redis has a 10-minute timeout for idle connections. The 10-minute timeout allows the server to automatically clean up leaky connections or connections orphaned by a client application. Most Redis client libraries have a built-in capability to send `heartbeat` or `keepalive` commands periodically to prevent connections from being closed even if there are no requests from the client application.
+
+If there's any risk of your connections being idle for 10 minutes, configure the `keepalive` interval to a value less than 10 minutes. If your application is using a client library that doesn't have native support for `keepalive` functionality, you can implement it in your application by periodically sending a `PING` command.
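For client libraries without built-in keepalive support, the periodic-`PING` approach can be sketched like this (the `client` object is a stand-in for your Redis client, such as redis-py's `Redis`; the class name and interval are illustrative):

```python
import threading

class KeepAlive:
    """Periodically send PING so the connection never sits idle for 10 minutes."""

    def __init__(self, client, interval_seconds=300):
        self.client = client               # any object exposing ping()
        self.interval = interval_seconds   # keep well under the 10-minute idle timeout
        self._timer = None

    def start(self):
        self.client.ping()                 # the keepalive itself
        self._timer = threading.Timer(self.interval, self.start)
        self._timer.daemon = True          # don't block interpreter shutdown
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
```

`KeepAlive(redis_client).start()` then sends a `PING` every five minutes; call `stop()` during application shutdown.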
## Next steps

- [Best practices for development](cache-best-practices-development.md)
- [Azure Cache for Redis development FAQ](cache-development-faq.yml)
-- [Failover and patching](cache-failover.md)
+- [Failover and patching](cache-failover.md)
azure-cache-for-redis Cache Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-python-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Python' description: In this quickstart, you learn how to create a Python App that uses Azure Cache for Redis. + Previously updated : 11/05/2019+ Last updated : 02/15/2023 ms.devlang: python
-#Customer intent: As a Python developer new to Azure Cache for Redis, I want to create a new Python app that uses Azure Cache for Redis.
+ # Quickstart: Use Azure Cache for Redis in Python
If you want to skip straight to the code, see the [Python quickstart](https://gi
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- [Python 2 or 3](https://www.python.org/downloads/)
+- Python 3
+ - For macOS or Linux, download from [python.org](https://www.python.org/downloads/).
+ - For Windows 11, use the [Windows Store](https://www.microsoft.com/en-us/p/python-3/9nblggh083nz?activetab=pivot:overviewtab).
## Create an Azure Cache for Redis instance

[!INCLUDE [redis-cache-create](includes/redis-cache-create.md)]
## Install redis-py
-[Redis-py](https://github.com/andymccurdy/redis-py) is a Python interface to Azure Cache for Redis. Use the Python packages tool, *pip*, to install the *redis-py* package from a command prompt.
+[Redis-py](https://pypi.org/project/redis/) is a Python interface to Azure Cache for Redis. Use the Python packages tool, `pip`, to install the `redis-py` package from a command prompt.
-The following example used *pip3* for Python 3 to install *redis-py* on Windows 10 from an Administrator command prompt.
+The following example used `pip3` for Python 3 to install `redis-py` on Windows 11 from an Administrator command prompt.
-![Install the redis-py Python interface to Azure Cache for Redis](./media/cache-python-get-started/cache-python-install-redis-py.png)
## Read and write to the cache
-Run Python from the command line and test your cache by using the following code. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form *\<DNS name>.redis.cache.windows.net*.
+Run Python from the command line and test your cache by using the following code. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
```python
>>> import redis
b'bar'
```

> [!IMPORTANT]
-> For Azure Cache for Redis version 3.0 or higher, TLS/SSL certificate check is enforced. ssl_ca_certs must be explicitly set when connecting to Azure Cache for Redis. For RedHat Linux, ssl_ca_certs are in the */etc/pki/tls/certs/ca-bundle.crt* certificate module.
+> For Azure Cache for Redis version 3.0 or higher, TLS/SSL certificate check is enforced. `ssl_ca_certs` must be explicitly set when connecting to Azure Cache for Redis. For RedHat Linux, `ssl_ca_certs` are in the `/etc/pki/tls/certs/ca-bundle.crt` certificate module.
## Create a Python sample app
-Create a new text file, add the following script, and save the file as *PythonApplication1.py*. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form *\<DNS name>.redis.cache.windows.net*.
+Create a new text file, add the following script, and save the file as `PythonApplication1.py`. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
```python
import redis
print("GET Message returned : " + result.decode("utf-8"))
result = r.client_list()
print("CLIENT LIST returned : ")
for c in result:
- print("id : " + c['id'] + ", addr : " + c['addr'])
+ print(f"id : {c['id']}, addr : {c['addr']}")
```
-Run *PythonApplication1.py* with Python. You should see results like the following example:
+Run `PythonApplication1.py` with Python. You should see results like the following example:
-![Run Python script to test cache access](./media/cache-python-get-started/cache-python-completed.png)
## Clean up resources
If you're finished with the Azure resource group and resources you created in th
To delete the resource group and its Azure Cache for Redis instance:

1. From the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
+
1. In the **Filter by name** text box, enter the name of the resource group that contains your cache instance, and then select it from the search results.
+
1. On your resource group page, select **Delete resource group**.
+
1. Type the resource group name, and then select **Delete**.
- ![Delete your resource group for Azure Cache for Redis](./media/cache-python-get-started/delete-your-resource-group-for-azure-cache-for-redis.png)
+ :::image type="content" source="./media/cache-python-get-started/delete-your-resource-group-for-azure-cache-for-redis.png" alt-text="Screenshot of the Azure portal showing how to delete the resource group for Azure Cache for Redis.":::
## Next steps
-> [!div class="nextstepaction"]
-> [Create a simple ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
+- [Create a simple ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable entities are currently not supported in Java.
Clients can enqueue *operations* for (also known as "signaling") an entity function using the [entity client binding](durable-functions-bindings.md#entity-client).
-# [C# (InProc)](#tab/csharp-isolated)
+# [C# (InProc)](#tab/csharp-inproc)
```csharp [FunctionName("EventHubTriggerCSharp")]
public static async Task Run(
> [!NOTE]
> Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to signaling, clients can also query for the state of an entity function using [type-safe methods](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) on the orchestration client binding.
-# [C# (Isolated)](#tab/csharp-inproc)
+# [C# (Isolated)](#tab/csharp-isolated)
Durable entities are currently not supported in the .NET-isolated worker.
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
The Azure Storage provider is the default storage provider and doesn't require a
The `connectionName` property in host.json is a reference to environment configuration which specifies how the app should connect to Azure Storage. It may specify:

- The name of an application setting containing a connection string. To obtain a connection string, follow the steps shown at [Manage storage account access keys](../../storage/common/storage-account-keys-manage.md).
-- The name of a shared prefix for multiple application settings, together defining an [identity-based connection](#identity-based-connections-preview).
+- The name of a shared prefix for multiple application settings, together defining an [identity-based connection](#identity-based-connections).
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact match is used. If no value is specified in host.json, the default value is "AzureWebJobsStorage".
-##### Identity-based connections (preview)
+##### Identity-based connections
If you are using [version 2.7.0 or higher of the extension](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) and the Azure storage provider, instead of using a connection string with a secret, you can have the app use an [Azure Active Directory identity](../../active-directory/fundamentals/active-directory-whatis.md). To do this, you would define settings under a common prefix which maps to the `connectionName` property in the trigger and binding configuration.
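As a sketch of that shape (the prefix `MyStorageConnection` and the account placeholder are illustrative, not values from this article): if host.json sets `"connectionName": "MyStorageConnection"`, the app settings defining the identity-based connection could look like:

```
MyStorageConnection__blobServiceUri=https://<account>.blob.core.windows.net
MyStorageConnection__queueServiceUri=https://<account>.queue.core.windows.net
MyStorageConnection__tableServiceUri=https://<account>.table.core.windows.net
```

No account key appears in these settings; the app authenticates with its Azure Active Directory identity instead.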
azure-functions Azfd0005 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0005.md
+
+ Title: "AZFD0005: External startup exception"
+
+description: "AZFD0005: External startup exception"
+++ Last updated : 02/14/2023++
+# AZFD0005: External startup exception
+
+This event occurs when an external startup class throws an exception during function app initialization.
+
+| | Value |
+|-|-|
+| **Event ID** |AZFD0005|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Event description
+
+An external startup class is a class registered with the `FunctionsStartupAttribute`. It is often used to register services or configuration sources for dependency injection. For more information on the feature, see [use dependency injection in .NET Azure Functions](../../functions-dotnet-dependency-injection.md).
+
+If a call to either of the `Configure()` methods on the `FunctionsStartup` class throws an exception, it will cause the function app initialization to fail. The functions host will catch this exception, log the error, and retry initialization. This retry helps to recover from transient errors, but permanent errors may be logged many times as the host retries.
+
+If `APPLICATIONINSIGHTS_CONNECTION_STRING` is set as an app setting, the error will also be logged as an `exception` in Application Insights.
+
+## How to resolve the event
+
+Every occurrence of this event is unique to the application. Investigate the error message and stack trace to see what may need to be done to prevent this error in the future. For example, a timeout may need to be retried, or an error calling an external service may need to be handled.
+
+## When to suppress the event
+
+This event shouldn't be suppressed.
azure-functions Functions Add Output Binding Storage Queue Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md
For more information on the details of bindings, see [Azure Functions triggers a
With the queue binding defined, you can now update your function to receive the `msg` output parameter and write messages to the queue.

::: zone pivot="programming-language-python"
::: zone-end
::: zone pivot="programming-language-javascript"
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Now, you can add the storage output binding to your project.
## Add an output binding
-In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the *function.json* file. The way you define these attributes depends on the language of your function app.
+
+In Functions, each type of binding requires a `direction`, `type`, and a unique `name`. The way you define these attributes depends on the language of your function app.
[!INCLUDE [functions-add-output-binding-json](../../includes/functions-add-output-binding-json.md)]
::: zone-end
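For reference, a queue output binding in *function.json* typically combines those attributes like this (the `queueName` and `connection` values below are placeholders for this sketch):

```json
{
  "type": "queue",
  "direction": "out",
  "name": "msg",
  "queueName": "outqueue",
  "connection": "AzureWebJobsStorage"
}
```

The `name` value (`msg` here) is the identifier your function code uses to write to the binding.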
+In Functions, each type of binding requires a `direction`, `type`, and a unique `name`. The way you define these attributes depends on your Python programming model.
+
+
+
::: zone pivot="programming-language-csharp"
[!INCLUDE [functions-add-storage-binding-csharp-library](../../includes/functions-add-storage-binding-csharp-library.md)]
After the binding is defined, you can use the `name` of the binding to access it
::: zone pivot="programming-language-python"
::: zone-end
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Maps Creator service is a suite of web services that developers can use to creat
Maps Creator provides the following
-* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from a converted Drawing package data. For information about Drawing package requirements, see Drawing package requirements.
+* [Dataset service][Dataset service]. Use the Dataset service to create a dataset from a converted drawing package. For information about drawing package requirements, see Drawing package requirements.
-* [Conversion service][Conversion service]. Use the Conversion service to convert a DWG design file into Drawing package data for indoor maps.
+* [Conversion service][Conversion service]. Use the Conversion service to convert a DWG design file into drawing package data for indoor maps.
* [Tileset service][Tileset]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Creator services create, store, and use various data types that are defined and
- Feature stateset - Routeset
-## Upload a Drawing package
+## Upload a drawing package
-Creator collects indoor map data by converting an uploaded Drawing package. The Drawing package represents a constructed or remodeled facility. For information about Drawing package requirements, see [Drawing package requirements](drawing-requirements.md).
+Creator collects indoor map data by converting an uploaded drawing package. The drawing package represents a constructed or remodeled facility. For information about drawing package requirements, see [Drawing package requirements](drawing-requirements.md).
-Use the [Azure Maps Data Upload API](/rest/api/maps/data-v2/update) to upload a Drawing package. After the Drawing packing is uploaded, the Data Upload API returns a user data identifier (`udid`). The `udid` can then be used to convert the uploaded package into indoor map data.
+Use the [Azure Maps Data Upload API](/rest/api/maps/data-v2/update) to upload a drawing package. After the drawing package is uploaded, the Data Upload API returns a user data identifier (`udid`). The `udid` can then be used to convert the uploaded package into indoor map data.
-## Convert a Drawing package
+## Convert a drawing package
-The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts an uploaded Drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types:
+The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types:
-- Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer](drawing-error-visualizer.md) stand-alone web application. You can use the Drawing Error Visualizer to inspect [Drawing package warnings and errors](drawing-conversion-error-codes.md) that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package.
+- Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer](drawing-error-visualizer.md) stand-alone web application. You can use the Drawing Error Visualizer to inspect [drawing package warnings and errors](drawing-conversion-error-codes.md) that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package.
- Warnings: If any warnings are detected, the conversion succeeds. However, we recommend that you review and resolve all warnings. A warning means that part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes. For more information, see [Drawing package warnings and errors](drawing-conversion-error-codes.md).
Use the Tileset service to create a vector-based representation of a dataset. Ap
### Datasets
-A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted Drawing package. After you create a dataset with the [Dataset service](/rest/api/maps/v2/dataset), you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets).
+A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service](/rest/api/maps/v2/dataset), you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets).
At any time, developers can use the [Dataset service](/rest/api/maps/v2/dataset) to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service](/rest/api/maps/v2/dataset). For an example of how to update a dataset, see [Data maintenance](#data-maintenance).
As you begin to develop solutions for indoor maps, you can discover ways to inte
The following example shows how to update a dataset, create a new tileset, and delete an old tileset:
-1. Follow steps in the [Upload a Drawing package](#upload-a-drawing-package) and [Convert a Drawing package](#convert-a-drawing-package) sections to upload and convert the new Drawing package.
+1. Follow steps in the [Upload a drawing package](#upload-a-drawing-package) and [Convert a drawing package](#convert-a-drawing-package) sections to upload and convert the new drawing package.
2. Use the [Dataset Create API](/rest/api/maps/v2/dataset/create) to append the converted data to the existing dataset.
3. Use the [Tileset Create API](/rest/api/maps/v2/tileset/create) to generate a new tileset out of the updated dataset.
4. Save the new **tilesetId** for the next step.
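The API calls in the update flow above can be sketched as URL builders. The query-parameter names are assumptions drawn from the URL patterns shown elsewhere in these articles, and the IDs and key are placeholders; verify them against the Dataset and Tileset Create references.

```python
# Illustrative URL builders for the dataset-update flow: append the new
# conversion to an existing dataset, then create a new tileset from it.
# Parameter names are assumptions, not taken from the API reference.
BASE = "https://us.atlas.microsoft.com"

def append_to_dataset_url(conversion_id: str, dataset_id: str, key: str) -> str:
    # Dataset Create with an existing datasetId appends the converted data
    return (f"{BASE}/datasets?api-version=2.0&conversionId={conversion_id}"
            f"&datasetId={dataset_id}&subscription-key={key}")

def create_tileset_url(dataset_id: str, key: str) -> str:
    # Tileset Create generates a new tileset from the updated dataset
    return (f"{BASE}/tilesets?api-version=2.0&datasetId={dataset_id}"
            f"&subscription-key={key}")
```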
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
# Drawing conversion errors and warnings
-The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) lets you convert uploaded Drawing packages into map data. Drawing packages must adhere to the [Drawing package requirements](drawing-requirements.md). If one or more requirements aren't met, then the Conversion service will return errors or warnings. This article lists the conversion error and warning codes, with recommendations on how to resolve them. It also provides some examples of drawings that can cause the Conversion service to return these codes.
+The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) lets you convert uploaded drawing packages into map data. Drawing packages must adhere to the [Drawing package requirements](drawing-requirements.md). If one or more requirements aren't met, then the Conversion service will return errors or warnings. This article lists the conversion error and warning codes, with recommendations on how to resolve them. It also provides some examples of drawings that can cause the Conversion service to return these codes.
The Conversion service will succeed even if there are conversion warnings. However, it's recommended that you review and resolve all warnings. A warning means part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes.
The image below shows an unsupported entity type as a multi-line text object on
#### *How to fix unsupportedFeatureRepresentation*
-Ensure that your DWG files contain only the supported entity types. Supported types are listed under the [Drawing files requirements section in the Drawing package requirements article](drawing-requirements.md#drawing-package-requirements).
+Ensure that your DWG files contain only the supported entity types. Supported types are listed under the [Drawing files requirements](drawing-requirements.md#drawing-package-requirements) section in the drawing package requirements article.
### **automaticRepairPerformed**
An **invalidUserData** error occurs when the Conversion service is unable to rea
#### *Example scenario for invalidUserData*
-You attempted to upload a Drawing package with an incorrect `udid` parameter.
+You attempted to upload a drawing package with an incorrect `udid` parameter.
#### *How to fix invalidUserData*

To fix an **invalidUserData** error, verify that:

* You've provided a correct `udid` for the uploaded package.
-* Azure Maps Creator has been enabled for the Azure Maps account you used for uploading the Drawing package.
-* The API request to the Conversion service contains the subscription key to the Azure Maps account you used for uploading the Drawing package.
+* Azure Maps Creator has been enabled for the Azure Maps account you used for uploading the drawing package.
+* The API request to the Conversion service contains the subscription key to the Azure Maps account you used for uploading the drawing package.
### **dwgError**
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
This tutorial uses the [Postman](https://www.postman.com/) application, but you
## Download
-1. Upload your Drawing package to the Azure Maps Creator service to obtain a `udid` for the uploaded package. For steps on how to upload a package, see [Upload a drawing package](tutorial-creator-indoor-maps.md#upload-a-drawing-package).
+1. Upload your drawing package to the Azure Maps Creator service to obtain a `udid` for the uploaded package. For steps on how to upload a package, see [Upload a drawing package](tutorial-creator-indoor-maps.md#upload-a-drawing-package).
-2. Now that the Drawing package is uploaded, we'll use `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package](tutorial-creator-indoor-maps.md#convert-a-drawing-package).
+2. Now that the drawing package is uploaded, we'll use the `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package](tutorial-creator-indoor-maps.md#convert-a-drawing-package).
> [!NOTE]
> If your conversion process succeeds, you will not receive a link to the Error Visualizer tool.
The _ConversionWarningsAndErrors.json_ file has been placed at the root of the
:::image type="content" source="./media/drawing-errors-visualizer/loading-data.gif" alt-text="Drawing Error Visualizer App - Drag and drop to load data":::
-Once the _ConversionWarningsAndErrors.json_ file loads, you'll see a list of your Drawing package errors and warnings. Each error or warning is specified by the layer, level, and a detailed message. To view detailed information about an error or warning, click on the **Details** link. An intractable section will then appear below the list. You may now navigate to each error to learn more details on how to resolve the error.
+Once the _ConversionWarningsAndErrors.json_ file loads, you'll see a list of your drawing package errors and warnings. Each error or warning is specified by the layer, level, and a detailed message. To view detailed information about an error or warning, select the **Details** link. An interactive section then appears below the list. You can navigate to each error to learn more about how to resolve it.
:::image type="content" source="./media/drawing-errors-visualizer/errors.png" alt-text="Drawing Error Visualizer App - Errors and Warnings":::
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Title: Drawing package guide for Microsoft Azure Maps Creator
-description: Learn how to prepare a Drawing package for the Azure Maps Conversion service
+description: Learn how to prepare a drawing package for the Azure Maps Conversion service
Last updated 01/31/2023
-# Conversion Drawing package guide
+# Conversion drawing package guide
This guide shows you how to prepare your drawing package for the [Azure Maps Conversion service], using specific CAD commands to correctly prepare your DWG files and manifest file.
The wall layer is meant to represent the physical extents of a facility such as
## Step 3: Prepare the manifest
-The Drawing package Manifest is a JSON file. The Manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Some examples of this information could be the specific information each DWG layer contains, or the geographical location of the facility.
+The drawing package manifest is a JSON file. The manifest tells the Azure Maps Conversion service how to read the facility DWG files and metadata. Examples of this information include the specific information each DWG layer contains, and the geographical location of the facility.
To achieve a successful conversion, all "required" properties must be defined. A sample manifest file can be found inside the [sample drawing package]. This guide doesn't cover properties supported by the manifest. For more information about manifest properties, see [Manifest File Properties].
To achieve a successful conversion, all "required" properties must be define
The building level specifies which DWG file to use for which level. A level must have a level name and an ordinal that describes the vertical order of each level. Every facility must have an ordinal 0, which is the ground floor of the facility. An ordinal 0 must be provided even if the drawings occupy only a few floors of a facility. For example, floors 15-17 can be defined as ordinals 0-2, respectively.
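The ordinal rules above can be expressed as a small check. This is an illustrative sketch, not part of the Conversion service; the `levelName`/`ordinal` keys follow the manifest's building-level fields.

```python
# Illustrative validation of buildingLevels ordinals: every ordinal must be
# unique, and ordinal 0 (the ground floor) must be present.
def validate_ordinals(levels):
    ordinals = sorted(lvl["ordinal"] for lvl in levels)
    if 0 not in ordinals:
        raise ValueError("every facility must define an ordinal 0 level")
    if len(set(ordinals)) != len(ordinals):
        raise ValueError("ordinals must be unique per level")
    return ordinals

# Floors 15-17 mapped to ordinals 0-2, as in the example above
print(validate_ordinals([
    {"levelName": "floor15", "ordinal": 0},
    {"levelName": "floor16", "ordinal": 1},
    {"levelName": "floor17", "ordinal": 2},
]))  # → [0, 1, 2]
```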
-The following example is taken from the [sample drawing package]. The facility has three levels: basement, ground, and level 2. The filename contains the full file name and path of the file relative to the manifest file within the .zip Drawing package.
+The following example is taken from the [sample drawing package]. The facility has three levels: basement, ground, and level 2. The filename contains the full file name and path of the file relative to the manifest file within the .zip drawing package.
```json
    "buildingLevels": {
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator
-description: Learn about the Drawing package requirements to convert your facility design files to map data
+description: Learn about the drawing package requirements to convert your facility design files to map data
Last updated 02/17/2023
# Drawing package requirements
-You can convert uploaded Drawing packages into map data by using the [Azure Maps Conversion service](/rest/api/maps/v2/conversion). This article describes the Drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can convert uploaded drawing packages into map data by using the [Azure Maps Conversion service](/rest/api/maps/v2/conversion). This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the sample [Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
-For a guide on how to prepare your Drawing package, see [Conversion Drawing Package Guide](drawing-package-guide.md).
+For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide](drawing-package-guide.md).
## Prerequisites
-The Drawing package includes drawings saved in DWG format, which is the native file format for Autodesk's AutoCAD® software.
+The drawing package includes drawings saved in DWG format, which is the native file format for Autodesk's AutoCAD® software.
-You can choose any CAD software to produce the drawings in the Drawing package.
+You can choose any CAD software to produce the drawings in the drawing package.
-The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts the Drawing package into map data. The Conversion service works with the AutoCAD DWG file format `AC1032`.
+The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts the drawing package into map data. The Conversion service works with the AutoCAD DWG file format `AC1032`.
## Glossary of terms
For easy reference, here are some terms and definitions that are important as yo
## Drawing package structure
-A Drawing package is a .zip archive that contains the following files:
+A drawing package is a .zip archive that contains the following files:
- DWG files in AutoCAD DWG file format.
-- A _manifest.json_ file that describes the DWG files in the Drawing package.
+- A _manifest.json_ file that describes the DWG files in the drawing package.
-The Drawing package must be zipped into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the package, but the manifest file must live at the root directory of the zipped package. The next sections detail the requirements for the DWG files, manifest file, and the content of these files. To view a sample package, you can download the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+The drawing package must be zipped into a single archive file, with the .zip extension. The DWG files can be organized in any way inside the package, but the manifest file must live at the root directory of the zipped package. The next sections detail the requirements for the DWG files, manifest file, and the content of these files. To view a sample package, you can download the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
## DWG file conversion process
The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) does the follo
## DWG file requirements
-A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels will have three DWG files in the Drawing package.
+A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels will have three DWG files in the drawing package.
Each DWG file must adhere to the following requirements:

- The DWG file must define the _Exterior_ and _Unit_ layers. It can optionally define the following layers: _Wall_, _Door_, _UnitLabel_, _Zone_, and _ZoneLabel_.
- The DWG file can't contain features from multiple levels.
- The DWG file can't contain features from multiple facilities.
-- The DWG must reference the same measurement system and unit of measurement as other DWG files in the Drawing package.
+- The DWG must reference the same measurement system and unit of measurement as other DWG files in the drawing package.
## DWG layer requirements
No matter how many entity drawings are in the exterior layer, the [resulting fac
If the layer contains multiple overlapping PolyLines, the PolyLines are dissolved into a single Level feature. Instead, if the layer contains multiple non-overlapping PolyLines, the resulting Level feature has a multi-polygonal representation.
-You can see an example of the Exterior layer as the outline layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the Exterior layer as the outline layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### Unit layer
The Units layer should adhere to the following requirements:
Name a unit by creating a text object in the UnitLabel layer, and then place the object inside the bounds of the unit. For more information, see the [UnitLabel layer](#unitlabel-layer).
-You can see an example of the Units layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the Units layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### Wall layer
The DWG file for each level can contain a layer that defines the physical extent
- Walls must be drawn as Polygon, PolyLine (closed), Circle, or Ellipse (closed).
- The wall layer or layers should only contain geometry that's interpreted as building structure.
-You can see an example of the Walls layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the Walls layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### Door layer
The DWG file for each level can contain a Zone layer that defines the physical e
Name a zone by creating a text object in the ZoneLabel layer, and placing the text object inside the bounds of the zone. For more information, see [ZoneLabel layer](#zonelabel-layer).
-You can see an example of the Zone layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the Zone layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### UnitLabel layer
The DWG file for each level can contain a UnitLabel layer. The UnitLabel layer a
- Unit labels must fall entirely inside the bounds of their unit.
- Units must not contain multiple text entities in the UnitLabel layer.
-You can see an example of the UnitLabel layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the UnitLabel layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
### ZoneLabel layer
The DWG file for each level can contain a ZoneLabel layer. This layer adds a nam
- Zone labels must fall inside the bounds of their zone.
- Zones must not contain multiple text entities in the ZoneLabel layer.
-You can see an example of the ZoneLabel layer in the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+You can see an example of the ZoneLabel layer in the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
## Manifest file requirements
The `zoneProperties` object contains a JSON array of zone properties.
|zoneNameSubtitle| string | false |Subtitle of the zone. |
|zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
-### Sample Drawing package manifest
+### Sample drawing package manifest
-Below is the manifest file for the sample Drawing package. Go to the [Sample Drawing package for Azure Maps Creator](https://github.com/Azure-Samples/am-creator-indoor-data-examples) on GitHub to download the entire package.
+Below is the manifest file for the sample drawing package. Go to the [Sample drawing package for Azure Maps Creator](https://github.com/Azure-Samples/am-creator-indoor-data-examples) on GitHub to download the entire package.
#### Manifest file
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
For more information on the GeoJSON package, see the [Geojson zip package requir
### Upload the GeoJSON package
-Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the Drawing package to Azure Maps Creator account.
+Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the drawing package to your Azure Maps Creator account.
The Data Upload API is a long running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md).
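The long-running operation pattern above can be consumed with a polling loop like the following sketch. The `status` field and its `Succeeded`/`Failed` terminal values are assumptions, not taken from the API reference; confirm them against the long-running-operation documentation.

```python
# Hedged polling sketch for a Creator long-running operation: GET the status
# URL until the operation reports a terminal status (assumed field names).
import json
import time
import urllib.request

def is_terminal(status: str) -> bool:
    # Assumed terminal values; verify against the API documentation
    return status in ("Succeeded", "Failed")

def poll(status_url: str, interval_s: float = 2.0, max_tries: int = 30) -> dict:
    for _ in range(max_tries):
        with urllib.request.urlopen(status_url) as resp:
            body = json.load(resp)
        if is_terminal(body.get("status", "")):
            return body
        time.sleep(interval_s)
    raise TimeoutError("operation did not reach a terminal status")
```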
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&
A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.

> [!IMPORTANT]
-> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted Drawing package.
+> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted drawing package.
To create a dataset:
https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio
| Identifier | Description |
|--|-|
-| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a Drawing package][conversion]. |
+| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a drawing package][conversion]. |
| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. |

## Geojson zip package requirements
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
You can use Azure Maps Creator [Feature State service](/rest/api/maps/v2/feature
1. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)
2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
3. [Create a Creator resource](how-to-manage-creator.md)
-4. Download the [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+4. Download the [sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
5. [Create an indoor map](tutorial-creator-indoor-maps.md) to obtain a `tilesetId` and `statesetId`.
6. Build a web application by following the steps in [How to use the Indoor Map module](how-to-use-indoor-module.md).
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial describes how to create indoor maps for use in Microsoft Azure Map
> [!div class="checklist"] >
-> * Upload your indoor map Drawing package.
-> * Convert your Drawing package into map data.
+> * Upload your indoor map drawing package.
+> * Convert your drawing package into map data.
> * Create a dataset from your map data.
> * Create a tileset from the data in your dataset.
> * Get the default map configuration ID from your tileset.
In the next tutorials in the Creator series you'll learn to:
1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
3. [Create a Creator resource](how-to-manage-creator.md).
-4. Download the [Sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip).
+4. Download the [Sample drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip).
This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
This tutorial uses the [Postman](https://www.postman.com/) application, but you
> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
> * In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-## Upload a Drawing package
+## Upload a drawing package
-Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the Drawing package to Azure Maps resources.
+Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the drawing package to Azure Maps resources.
The Data Upload API is a long running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md).
-To upload the Drawing package:
+To upload the drawing package:
1. In the Postman app, select **New**.
To upload the Drawing package:
10. Select the **binary** radio button.
-11. Select **Select File**, and then select a Drawing package.
+11. Select **Select File**, and then select a drawing package.
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, this is used to select the Drawing package to import into Creator.":::
+    :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, which is used to select the drawing package to import into Creator.":::
12. Select **Send**.
13. In the response window, select the **Headers** tab.
-14. Copy the value of the **Operation-Location** key. The Operation-Location key is also known as the `status URL` and is required to check the status of the Drawing package upload, which is explained in the next section.
+14. Copy the value of the **Operation-Location** key. The Operation-Location key is also known as the `status URL` and is required to check the status of the drawing package upload, which is explained in the next section.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="A screenshot of Postman showing the header tab in the response window, with the Operation Location key highlighted.":::
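For scripting the status check described above, the operation ID can be pulled out of the Operation-Location (status URL) value. This helper is a hypothetical convenience; the URL shape is taken from the examples in these articles.

```python
# Extract the operationId from a status URL of the form
# https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0
from urllib.parse import urlparse

def operation_id(status_url: str) -> str:
    return urlparse(status_url).path.rstrip("/").rsplit("/", 1)[-1]

print(operation_id(
    "https://us.atlas.microsoft.com/mapData/operations/abc123?api-version=2.0"
))  # → abc123
```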
-### Check the Drawing package upload status
+### Check the drawing package upload status
To check the status of the drawing package and retrieve its unique ID (`udid`):
To check the status of the drawing package and retrieve its unique ID (`udid`):
:::image type="content" source="./media/tutorial-creator-indoor-maps/resource-location-url.png" alt-text="A screenshot of Postman showing the resource location URL in the responses header.":::
-### (Optional) Retrieve Drawing package metadata
+### (Optional) Retrieve drawing package metadata
-You can retrieve metadata from the Drawing package resource. The metadata contains information like the resource location URL, creation date, updated date, size, and upload status.
+You can retrieve metadata from the drawing package resource. The metadata contains information like the resource location URL, creation date, updated date, size, and upload status.
To retrieve content metadata:
To retrieve content metadata:
}
```
-## Convert a Drawing package
+## Convert a drawing package
-Now that the Drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API](/rest/api/maps/v2/conversion) uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation](creator-long-running-operation-v2.md) article.
+Now that the drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API](/rest/api/maps/v2/conversion) uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation](creator-long-running-operation-v2.md) article.
To convert a drawing package:
To convert a drawing package:
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="A screenshot of Postman showing the URL value of the operation location key in the responses header.":::
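The conversion request in the steps above can be sketched as a URL builder. Note the `outputOntology=facility-2.0` parameter is included as an assumption; confirm the required parameters against the Conversion service reference before relying on this.

```python
# Illustrative Conversion API request URL for a previously uploaded package.
# The outputOntology value is an assumption, not confirmed by this article.
def conversion_url(udid: str, key: str, geography: str = "us") -> str:
    return (f"https://{geography}.atlas.microsoft.com/conversions"
            f"?api-version=2.0&udid={udid}"
            f"&outputOntology=facility-2.0&subscription-key={key}")
```

POST to this URL and, as with the upload, read the `Operation-Location` response header to get the status URL for the next section.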
-### Check the Drawing package conversion status
+### Check the drawing package conversion status
-After the conversion operation completes, it returns a `conversionId`. We can access the `conversionId` by checking the status of the Drawing package conversion process. The `conversionId` can then be used to access the converted data.
+After the conversion operation completes, it returns a `conversionId`. We can access the `conversionId` by checking the status of the drawing package conversion process. The `conversionId` can then be used to access the converted data.
To check the status of the conversion process and retrieve the `conversionId`:
To check the status of the conversion process and retrieve the `conversionId`:
4. Select the **GET** HTTP method:
-5. Enter the `status URL` you copied in [Convert a Drawing package](#convert-a-drawing-package). The request should look like the following URL:
+5. Enter the `status URL` you copied in [Convert a drawing package](#convert-a-drawing-package). The request should look like the following URL:
```http
https://us.atlas.microsoft.com/conversions/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
To check the status of the conversion process and retrieve the `conversionId`:
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="A screenshot of Postman highlighting the conversion ID value that appears in the resource location key in the responses header.":::
-The sample Drawing package should be converted without errors or warnings. However, if you receive errors or warnings from your own Drawing package, the JSON response includes a link to the [Drawing error visualizer](drawing-error-visualizer.md). You can use the Drawing Error visualizer to inspect the details of errors and warnings. To receive recommendations to resolve conversion errors and warnings, see [Drawing conversion errors and warnings](drawing-conversion-error-codes.md).
+The sample drawing package should be converted without errors or warnings. However, if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing Error Visualizer](drawing-error-visualizer.md). You can use the Drawing Error Visualizer to inspect the details of errors and warnings. To receive recommendations to resolve conversion errors and warnings, see [Drawing conversion errors and warnings](drawing-conversion-error-codes.md).
The following JSON fragment displays a sample conversion warning:
## Create a dataset
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API](/rest/api/maps/v2/dataset/create). The Dataset Create API takes the `conversionId` for the converted Drawing package and returns a `datasetId` of the created dataset.
+A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API](/rest/api/maps/v2/dataset/create). The Dataset Create API takes the `conversionId` for the converted drawing package and returns a `datasetId` of the created dataset.
To create a dataset:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{conversionId`} with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):
+5. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{conversionId`} with the `conversionId` obtained in [Check drawing package conversion status](#check-the-drawing-package-conversion-status)):
```http
https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key}
```
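As a small sketch, the Dataset Create request URL from step 5 can be assembled programmatically. The `conversionId` and subscription key values here are placeholders, not real credentials:

```python
# Build the Dataset Create request URL (sent with the POST method).
# The conversionId comes from the earlier conversion-status check.
def build_dataset_create_url(conversion_id: str, subscription_key: str) -> str:
    return (
        "https://us.atlas.microsoft.com/datasets"
        f"?api-version=2.0&conversionId={conversion_id}"
        f"&subscription-key={subscription_key}"
    )
```

The response to this POST request returns the `datasetId` of the created dataset.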
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure App Service performance | Microsoft Docs description: Application performance monitoring for Azure App Service. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/15/2022 Last updated : 03/01/2023
There are two ways to enable monitoring for applications hosted on App Service:
This approach is much more customizable, but it requires the following: the SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), or [Python](./opencensus-python.md), or a standalone agent for [Java](./opentelemetry-enable.md?tabs=java). This method also means you must manage updates to the latest version of the packages yourself.
- If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you'll need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
+ If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you need to use this method. To learn more, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
-If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will be emitting the telemetry. This practice is to prevent duplicate data from being sent.
+If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings are honored, while in Java only the auto-instrumentation emits telemetry. This behavior prevents duplicate data from being sent.
> [!NOTE]
> Snapshot Debugger and Profiler are only available in .NET and .NET Core.
+## Release notes
+
+This section contains the release notes for Azure Web Apps Extension for runtime instrumentation with Application Insights.
+
+To find which version of the extension you're currently using, go to `https://<yoursitename>.scm.azurewebsites.net/ApplicationInsights`.
+
+### Release notes
+
+#### 2.8.44
+
+- .NET/.NET Core: Upgraded to [ApplicationInsights .NET SDK to 2.20.1](https://github.com/microsoft/ApplicationInsights-dotnet/tree/autoinstrumentation/2.20.1).
+
+#### 2.8.43
+
+- Separated the .NET/.NET Core, Java, and Node.js packages into different App Service Windows site extensions.
+
+#### 2.8.42
+
+- JAVA extension: Upgraded to [Java Agent 3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0) from 2.5.1.
+- Node.js extension: Updated AI SDK to [2.1.8](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.8) from 2.1.7. Added support for User and System assigned Azure AD Managed Identities.
+- .NET Core: Added self-contained deployments and .NET 6.0 support using [.NET Startup Hook](https://github.com/dotnet/runtime/blob/main/docs/design/features/host-startup-hook.md).
+
+#### 2.8.41
+
+- Node.js extension: Updated AI SDK to [2.1.7](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.7) from 2.1.3.
+- .NET Core: Removed out-of-support version (2.1). Supported versions are 3.1 and 5.0.
+
+#### 2.8.40
+
+- JAVA extension: Upgraded to [Java Agent 3.1.1 (GA)](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.1) from 3.0.2.
+- Node.js extension: Updated AI SDK to [2.1.3](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.3) from 1.8.8.
+
+#### 2.8.39
+
+- .NET Core: Added .NET Core 5.0 support.
+
+#### 2.8.38
+
+- JAVA extension: upgraded to [Java Agent 3.0.2 (GA)](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.0.2) from 2.5.1.
+- Node.js extension: Updated AI SDK to [1.8.8](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/1.8.8) from 1.8.7.
+- .NET Core: Removed out-of-support versions (2.0, 2.2, 3.0). Supported versions are 2.1 and 3.1.
+
+#### 2.8.37
+
+- AppSvc Windows extension: Made .NET Core work with any version of System.Diagnostics.DiagnosticSource.dll.
+
+#### 2.8.36
+
+- AppSvc Windows extension: Enabled Inter-op with AI SDK in .NET Core.
+
+#### 2.8.35
+
+- AppSvc Windows extension: Added .NET Core 3.1 support.
+
+#### 2.8.33
+
+- .NET, .NET Core, Java, and Node.js agents and the Windows extension: Support for sovereign clouds. Connection strings can be used to send data to sovereign clouds.
+
+#### 2.8.31
+
+- ASP.NET Core agent: Fixed an issue with the Application Insights SDK. If the runtime loads the incorrect version of `System.Diagnostics.DiagnosticSource.dll`, the codeless extension no longer crashes the application and instead backs off. To fix the issue, remove `System.Diagnostics.DiagnosticSource.dll` from the bin folder or use the older version of the extension by setting `ApplicationInsightsAgent_EXTENSIONVERSION=2.8.24`. Otherwise, application monitoring isn't enabled.
+
+#### 2.8.26
+
+- ASP.NET Core agent: Fixed issue related to updated Application Insights SDK. The agent doesn't try to load `AiHostingStartup` if the ApplicationInsights.dll is already present in the bin folder. It resolves issues related to reflection via Assembly\<AiHostingStartup\>.GetTypes().
+- Known issues: Exception `System.IO.FileLoadException: Could not load file or assembly 'System.Diagnostics.DiagnosticSource, Version=4.0.4.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'` could be thrown if another version of `DiagnosticSource` dll is loaded. It could happen, for example, if `System.Diagnostics.DiagnosticSource.dll` is present in the publish folder. As mitigation, use the previous version of extension by setting app settings in app
+
+#### 2.8.24
+
+- Repackaged version of 2.8.21.
+
+#### 2.8.23
+
+- Added ASP.NET Core 3.0 codeless monitoring support.
+- Updated ASP.NET Core SDK to [2.8.0](https://github.com/microsoft/ApplicationInsights-aspnetcore/releases/tag/2.8.0) for runtime versions 2.1, 2.2 and 3.0. Apps targeting .NET Core 2.0 continue to use 2.1.1 of the SDK.
+
+#### 2.8.14
+
+- Updated ASP.NET Core SDK version from 2.3.0 to the latest (2.6.1) for apps targeting .NET Core 2.1, 2.2. Apps targeting .NET Core 2.0 continue to use 2.1.1 of the SDK.
+
+#### 2.8.12
+
+- Support for ASP.NET Core 2.2 apps.
+- Fixed a bug in ASP.NET Core extension causing injection of SDK even when the application is already instrumented with the SDK. For 2.1 and 2.2 apps, the presence of ApplicationInsights.dll in the application folder now causes the extension to back off. For 2.0 apps, the extension backs off only if ApplicationInsights is enabled with a `UseApplicationInsights()` call.
+
+- Permanent fix for incomplete HTML response for ASP.NET Core apps. This fix is now extended to work for .NET Core 2.2 apps.
+
+- Added support to turn off JavaScript injection for ASP.NET Core apps (`APPINSIGHTS_JAVASCRIPT_ENABLED=false` app setting). For ASP.NET Core, JavaScript injection is on by default ("opt-out" mode) unless explicitly turned off. (This default retains current behavior.)
+
+- Fixed ASP.NET Core extension bug that caused injection even if ikey wasn't present.
+- Fixed a bug in the SDK version prefix logic that caused an incorrect SDK version in telemetry.
+
+- Added SDK version prefix for ASP.NET Core apps to identify how telemetry was collected.
+- Fixed the SCM ApplicationInsights page to correctly show the version of the pre-installed extension.
+
+#### 2.8.10
+
+- Fix for incomplete HTML response for ASP.NET Core apps.
+
+## Next steps
+
+Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md), or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
- Title: Release Notes for Azure web app extension - Application Insights
-description: Releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights.
- Previously updated : 11/15/2022---
-# Release notes for Azure Web App extension for Application Insights
-
-This article contains the releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights. This is applicable only for pre-installed extensions.
-
-Learn more about [Azure Web App Extension for Application Insights](azure-web-apps.md)).
-
-## Frequently asked questions
--- How to find which version of the extension I am currently on?
- - Go to `https://<yoursitename>.scm.azurewebsites.net/ApplicationInsights`. Visit the step by step troubleshooting guide for extension/agent based monitoring for [ASP.NET Core](./azure-web-apps-net-core.md#troubleshooting), [ASP.NET](./azure-web-apps-net.md#troubleshooting), [Java](./azure-web-apps-java.md#troubleshooting), or [Node.js](./azure-web-apps-nodejs.md#troubleshooting) ) for more information.
--- What if I'm using private extensions?
- - Uninstall private site extensions since it's no longer supported.
-
-## Release notes
-
-### 2.8.44
--- .NET/.NET Core: Upgraded to [ApplicationInsights .NET SDK to 2.20.1-redfield](https://github.com/microsoft/ApplicationInsights-dotnet/tree/autoinstrumentation/2.20.1).-
-### 2.8.43
--- Separate .NET/.NET Core, Java and Node.js package into different App Service Windows Site Extension. -
-### 2.8.42
--- JAVA extension: Upgraded to [Java Agent 3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0) from 2.5.1.-- Node.js extension: Updated AI SDK to [2.1.8](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.8) from 2.1.7. Added support for User and System assigned AAD Managed Identities.-- .NET Core: Added self-contained deployments and .NET 6.0 support using [.NET Startup Hook](https://github.com/dotnet/runtime/blob/main/docs/design/features/host-startup-hook.md).-
-### 2.8.41
--- Node.js extension: Updated AI SDK to [2.1.7](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.7) from 2.1.3.-- .NET Core: Removed out-of-support version (2.1). Supported versions are 3.1 and 5.0.-
-### 2.8.40
--- JAVA extension: Upgraded to [Java Agent 3.1.1 (GA)](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.1) from 3.0.2.-- Node.js extension: Updated AI SDK to [2.1.3](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/2.1.3) from 1.8.8.-
-### 2.8.39
--- .NET Core: Added .NET Core 5.0 support.-
-### 2.8.38
--- JAVA extension: upgraded to [Java Agent 3.0.2 (GA)](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.0.2) from 2.5.1.-- Node.js extension: Updated AI SDK to [1.8.8](https://github.com/microsoft/ApplicationInsights-node.js/releases/tag/1.8.8) from 1.8.7.-- .NET Core: Removed out-of-support versions (2.0, 2.2, 3.0). Supported versions are 2.1 and 3.1.-
-### 2.8.37
--- AppSvc Windows extension: Made .NET Core work with any version of System.Diagnostics.DiagnosticSource.dll.-
-### 2.8.36
--- AppSvc Windows extension: Enabled Inter-op with AI SDK in .NET Core.-
-### 2.8.35
--- AppSvc Windows extension: Added .NET Core 3.1 support.-
-### 2.8.33
--- .NET, .NET core, Java, and Node.js agents and the Windows Extension: Support for sovereign clouds. Connections strings can be used to send data to sovereign clouds.-
-### 2.8.31
--- ASP.NET Core agent: Fixed issue related to one of the updated Application Insights SDK's references (see known issues for 2.8.26). If the incorrect version of `System.Diagnostics.DiagnosticSource.dll` is already loaded by runtime, the codeless extension now won't crash the application and backs off. For customers affected by that issue, it's advised to remove the `System.Diagnostics.DiagnosticSource.dll` from the bin folder or use the older version of the extension by setting "ApplicationInsightsAgent_EXTENSIONVERSION=2.8.24"; otherwise, application monitoring isn't enabled.-
-### 2.8.26
--- ASP.NET Core agent: Fixed issue related to updated Application Insights SDK. The agent won't try to load `AiHostingStartup` if the ApplicationInsights.dll is already present in the bin folder. This resolves issues related to reflection via Assembly\<AiHostingStartup\>.GetTypes().-- Known issues: Exception `System.IO.FileLoadException: Could not load file or assembly 'System.Diagnostics.DiagnosticSource, Version=4.0.4.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'` could be thrown if another version of `DiagnosticSource` dll is loaded. This could happen, for example, if `System.Diagnostics.DiagnosticSource.dll` is present in the publish folder. As mitigation, use the previous version of extension by setting app settings in app -
-### 2.8.24
--- Repackaged version of 2.8.21.-
-### 2.8.23
--- Added ASP.NET Core 3.0 codeless monitoring support.-- Updated ASP.NET Core SDK to [2.8.0](https://github.com/microsoft/ApplicationInsights-aspnetcore/releases/tag/2.8.0) for runtime versions 2.1, 2.2 and 3.0. Apps targeting .NET Core 2.0 continue to use 2.1.1 of the SDK.-
-### 2.8.14
--- Updated ASP.NET Core SDK version from 2.3.0 to the latest (2.6.1) for apps targeting .NET Core 2.1, 2.2. Apps targeting .NET Core 2.0 continue to use 2.1.1 of the SDK.-
-### 2.8.12
--- Support for ASP.NET Core 2.2 apps.-- Fixed a bug in ASP.NET Core extension causing injection of SDK even when the application is already instrumented with the SDK. For 2.1 and 2.2 apps, the presence of ApplicationInsights.dll in the application folder now causes the extension to back off. For 2.0 apps, the extension backs off only if ApplicationInsights is enabled with a `UseApplicationInsights()` call.--- Permanent fix for incomplete HTML response for ASP.NET Core apps. This fix is now extended to work for .NET Core 2.2 apps.--- Added support to turn off JavaScript injection for ASP.NET Core apps (`APPINSIGHTS_JAVASCRIPT_ENABLED=false appsetting`). For ASP.NET core, the JavaScript injection is in "Opt-Out" mode by default, unless explicitly turned off. (The default setting is done to retain current behavior.)--- Fixed ASP.NET Core extension bug that caused injection even if ikey was not present.-- Fixed a bug in the SDK version prefix logic that caused an incorrect SDK version in telemetry.--- Added SDK version prefix for ASP.NET Core apps to identify how telemetry was collected.-- Fixed SCM- ApplicationInsights page to correctly show the version of the pre-installed extension.-
-### 2.8.10
--- Fix for incomplete HTML response for ASP.NET Core apps.-
-## Next steps
--- Visit the [Application Monitoring for Azure App Service documentation](azure-web-apps.md) for more information on how to configuring monitoring for Azure App Services.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description | |:|:|
-| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data you don't need. |
+| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
+| Modify settings for collection of metric data | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
| Limit Prometheus metrics collected | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. | | Configure Basic Logs | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 02/01/2023 Last updated : 03/01/2023
> [!NOTE]
> This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 02/01/2023.
+Date list was last updated: 03/01/2023.
-Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
+Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
This article is a complete list of all platform (that is, automatically collected) metrics currently available with the consolidated metric pipeline in Azure Monitor. Metrics changed or added after the date at the top of this article might not yet appear in the list. To query for and access the list of metrics programmatically, use the [2018-01-01 api-version](/rest/api/monitor/metricdefinitions). Other metrics not in this list might be available in the portal or through legacy APIs.
This latest update adds a new column and reorders the metrics to be alphabetical
|ActionIdOccurrences |Yes |Action Occurences |Count |Total |Number of times each action appears. |ActionId, Mode, RunId | |ActionNamespacesPerEvent |Yes |Action Namespaces Per Event |Count |Average |Average number of action namespaces per event. |Mode, RunId | |ActionsPerEvent |Yes |Actions Per Event |Count |Average |Number of actions per event. |Mode, RunId |
-|AdaFineTunedTokenTransaction |Yes |Processed Ada FineTuned Inference Tokens |Count |Total |Number of Inference Tokens Processed on an Ada FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|AdaFineTunedTrainingHours |Yes |Processed Ada FineTuned Training Hours |Count |Total |Number of Training Hours Processed on an Ada FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
-|AdaTokenTransaction |Yes |Processed Ada Inference Tokens |Count |Total |Number of Inference Tokens Processed on an Ada Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|AdaFineTunedTokenTransaction |Yes |Processed Ada FineTuned Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on an Ada FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|AdaFineTunedTrainingHours |Yes |Processed Ada FineTuned Training Hours (deprecated) |Count |Total |Number of Training Hours Processed on an Ada FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
+|AdaTokenTransaction |Yes |Processed Ada Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on an Ada Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|AudioSecondsTranscribed |Yes |Audio Seconds Transcribed |Count |Total |Number of seconds transcribed |ApiName, FeatureName, UsageChannel, Region | |AudioSecondsTranslated |Yes |Audio Seconds Translated |Count |Total |Number of seconds translated |ApiName, FeatureName, UsageChannel, Region |
-|BabbageFineTunedTokenTransaction |Yes |Processed Babbage FineFuned Inference Tokens |Count |Total |Number of Inference Tokens processed on a Babbage FineFuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|BabbageFineTunedTrainingHours |Yes |Processed Babbage FineTuned Training Hours |Count |Total |Number of Training Hours Processed on a Babbage FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
-|BabbageTokenTransaction |Yes |Processed Babbage Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Babbage Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|BabbageFineTunedTokenTransaction |Yes |Processed Babbage FineFuned Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens processed on a Babbage FineFuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|BabbageFineTunedTrainingHours |Yes |Processed Babbage FineTuned Training Hours (deprecated) |Count |Total |Number of Training Hours Processed on a Babbage FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
+|BabbageTokenTransaction |Yes |Processed Babbage Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Babbage Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|BaselineEstimatorOverallReward |Yes |Baseline Estimator Overall Reward |Count |Average |Baseline Estimator Overall Reward. |Mode, RunId |
|BaselineEstimatorSlotReward |Yes |Baseline Estimator Slot Reward |Count |Average |Baseline Estimator Reward by slot. |SlotId, SlotIndex, Mode, RunId |
+|BaselineRandomEstimatorOverallReward |Yes |Baseline Random Estimator Overall Reward |Count |Average |Baseline Random Estimator Overall Reward. |Mode, RunId |
|BaselineRandomEstimatorSlotReward |Yes |Baseline Random Estimator Slot Reward |Count |Average |Baseline Random Estimator Reward by slot. |SlotId, SlotIndex, Mode, RunId | |BaselineRandomEventCount |Yes |Baseline Random Event count |Count |Total |Estimation for baseline random event count. |Mode, RunId | |BaselineRandomReward |Yes |Baseline Random Reward |Count |Total |Estimation for baseline random reward. |Mode, RunId |
This latest update adds a new column and reorders the metrics to be alphabetical
|CharactersTrained |Yes |Characters Trained (Deprecated) |Count |Total |Total number of characters trained. |ApiName, OperationName, Region | |CharactersTranslated |Yes |Characters Translated (Deprecated) |Count |Total |Total number of characters in incoming text request. |ApiName, OperationName, Region | |ClientErrors |Yes |Client Errors |Count |Total |Number of calls with client side error (HTTP response code 4xx). |ApiName, OperationName, Region, RatelimitKey |
-|CodeCushman001FineTunedTokenTransaction |Yes |Processed Code-Cushman-001 FineTuned Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Code-Cushman-001 FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|CodeCushman001FineTunedTrainingHours |Yes |Processed Code-Cushman-001 FineTuned Traning Hours |Count |Total |Number of Training Hours Processed on a Code-Cushman-001 FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
-|CodeCushman001TokenTransaction |Yes |Processed Code-Cushman-001 Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Code-Cushman-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|CodeCushman001FineTunedTokenTransaction |Yes |Processed Code-Cushman-001 FineTuned Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Code-Cushman-001 FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|CodeCushman001FineTunedTrainingHours |Yes |Processed Code-Cushman-001 FineTuned Traning Hours (deprecated) |Count |Total |Number of Training Hours Processed on a Code-Cushman-001 FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
+|CodeCushman001TokenTransaction |Yes |Processed Code-Cushman-001 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Code-Cushman-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|ComputerVisionTransactions |Yes |Computer Vision Transactions |Count |Total |Number of Computer Vision Transactions |ApiName, FeatureName, UsageChannel, Region | |ContextFeatureIdOccurrences |Yes |Context Feature Occurrences |Count |Total |Number of times each context feature appears. |FeatureId, Mode, RunId | |ContextFeaturesPerEvent |Yes |Context Features Per Event |Count |Average |Number of context features per event. |Mode, RunId | |ContextNamespacesPerEvent |Yes |Context Namespaces Per Event |Count |Average |Number of context namespaces per event. |Mode, RunId |
-|CurieFineTunedTokenTransaction |Yes |Processed Curie FineTuned Inference Tokens |Count |Total |Number of Inference Tokens processed on a Curie FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|CurieFineTunedTrainingHours |Yes |Processed Curie FineTuned Training Hours |Count |Total |Number of Training Hours Processed on a Curie FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
-|CurieTokenTransaction |Yes |Processed Curie Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Curie Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|CurieFineTunedTokenTransaction |Yes |Processed Curie FineTuned Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens processed on a Curie FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|CurieFineTunedTrainingHours |Yes |Processed Curie FineTuned Training Hours (deprecated) |Count |Total |Number of Training Hours Processed on a Curie FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
+|CurieTokenTransaction |Yes |Processed Curie Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Curie Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|CustomVisionTrainingTime |Yes |Custom Vision Training Time |Seconds |Total |Custom Vision training time |ApiName, FeatureName, UsageChannel, Region | |CustomVisionTransactions |Yes |Custom Vision Transactions |Count |Total |Number of Custom Vision prediction transactions |ApiName, FeatureName, UsageChannel, Region | |DataIn |Yes |Data In |Bytes |Total |Size of incoming data in bytes. |ApiName, OperationName, Region | |DataOut |Yes |Data Out |Bytes |Total |Size of outgoing data in bytes. |ApiName, OperationName, Region |
-|DavinciFineTunedTokenTransaction |Yes |Processed Davinci FineTuned Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Davinci FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|DavinciFineTunedTrainingHours |Yes |Processed Davinci FineTuned Traning Hours |Count |Total |Number of Training Hours Processed on a Davinci FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
-|DavinciTokenTransaction |Yes |Processed Davinci Inference Tokens |Count |Total |Number of Inference Tokens Processed on a Davinci Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|DavinciFineTunedTokenTransaction |Yes |Processed Davinci FineTuned Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Davinci FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|DavinciFineTunedTrainingHours |Yes |Processed Davinci FineTuned Traning Hours (deprecated) |Count |Total |Number of Training Hours Processed on a Davinci FineTuned Model |ApiName, FeatureName, UsageChannel, Region |
+|DavinciTokenTransaction |Yes |Processed Davinci Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a Davinci Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|DocumentCharactersTranslated |Yes |Document Characters Translated |Count |Total |Number of characters in document translation request. |ApiName, FeatureName, UsageChannel, Region | |DocumentCustomCharactersTranslated |Yes |Document Custom Characters Translated |Count |Total |Number of characters in custom document translation request. |ApiName, FeatureName, UsageChannel, Region | |FaceImagesTrained |Yes |Face Images Trained |Count |Total |Number of images trained. 1,000 images trained per transaction. |ApiName, FeatureName, UsageChannel, Region |
This latest update adds a new column and reorders the metrics to be alphabetical
|NumberOfSlots |Yes |Slots |Count |Average |Number of slots per event. |Mode, RunId | |NumberofSpeakerProfiles |Yes |Number of Speaker Profiles |Count |Total |Number of speaker profiles enrolled. Prorated hourly. |ApiName, FeatureName, UsageChannel, Region | |ObservedRewards |Yes |Observed Rewards |Count |Total |Number of Observed Rewards. |Mode, RunId |
+|OnlineEstimatorOverallReward |Yes |Online Estimator Overall Reward |Count |Average |Online Estimator Overall Reward. |Mode, RunId |
|OnlineEstimatorSlotReward |Yes |Online Estimator Slot Reward |Count |Average |Online Estimator Reward by slot. |SlotId, SlotIndex, Mode, RunId | |OnlineEventCount |Yes |Online Event Count |Count |Total |Estimation for online event count. |Mode, RunId | |OnlineReward |Yes |Online Reward |Count |Total |Estimation for online reward. |Mode, RunId |
This latest update adds a new column and reorders the metrics to be alphabetical
|SpeechModelHostingHours |Yes |Speech Model Hosting Hours |Count |Total |Number of speech model hosting hours |ApiName, FeatureName, UsageChannel, Region | |SpeechSessionDuration |Yes |Speech Session Duration (Deprecated) |Seconds |Total |Total duration of speech session in seconds. |ApiName, OperationName, Region | |SuccessfulCalls |Yes |Successful Calls |Count |Total |Number of successful calls. |ApiName, OperationName, Region, RatelimitKey |
+|SuccessRate |No |Availability |Percent |Average |Availability percentage with the following calculation: (Total Calls - Server Errors)/Total Calls. Server Errors include any HTTP responses >=500. |ApiName, OperationName, Region, RatelimitKey |
|SynthesizedCharacters |Yes |Synthesized Characters |Count |Total |Number of Characters. |ApiName, FeatureName, UsageChannel, Region |
-|TextAda001TokenTransaction |Yes |Processed Text Ada 001 Inference Tokens |Count |Total |Number of Inference Tokens processed on a text-ada-001 model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|TextBabbage001TokenTransaction |Yes |Processed Text Babbage 001 Inference Tokens |Count |Total |Number of Inference Tokens processed on a text-babbage-001 model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TextAda001TokenTransaction |Yes |Processed Text Ada 001 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens processed on a text-ada-001 model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TextBabbage001TokenTransaction |Yes |Processed Text Babbage 001 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens processed on a text-babbage-001 model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|TextCharactersTranslated |Yes |Text Characters Translated |Count |Total |Number of characters in incoming text translation request. |ApiName, FeatureName, UsageChannel, Region |
-|TextCurie001TokenTransaction |Yes |Processed Text Curie 001 Inference Tokens |Count |Total |Number of Inference Tokens Processed on a text-curie-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TextCurie001TokenTransaction |Yes |Processed Text Curie 001 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a text-curie-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|TextCustomCharactersTranslated |Yes |Text Custom Characters Translated |Count |Total |Number of characters in incoming custom text translation request. |ApiName, FeatureName, UsageChannel, Region |
-|TextDavinci001TokenTransaction |Yes |Processed Text Davinci 001 Inference Tokens |Count |Total |Number of Inference Tokens Processed on a text-davinci-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
-|TextDavinci002TokenTransaction |Yes |Processed Text Davinci 002 Inference Tokens |Count |Total |Number of Inference Tokens Processed on a text-davinci-002 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TextDavinci001TokenTransaction |Yes |Processed Text Davinci 001 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a text-davinci-001 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
+|TextDavinci002TokenTransaction |Yes |Processed Text Davinci 002 Inference Tokens (deprecated) |Count |Total |Number of Inference Tokens Processed on a text-davinci-002 Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|TextTrainedCharacters |Yes |Text Trained Characters |Count |Total |Number of characters trained using text translation. |ApiName, FeatureName, UsageChannel, Region |
|TokenTransaction |Yes |Processed Inference Tokens |Count |Total |Number of Inference Tokens Processed on an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region |
|TotalCalls |Yes |Total Calls |Count |Total |Total number of calls. |ApiName, OperationName, Region, RatelimitKey |
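The SuccessRate row above defines Availability as (Total Calls - Server Errors)/Total Calls, counting any HTTP response >= 500 as a server error. A minimal sketch of that calculation (the helper function is hypothetical, not part of any Azure SDK):

```python
def availability_percent(total_calls: int, server_errors: int) -> float:
    """Availability as defined for the SuccessRate metric:
    (Total Calls - Server Errors) / Total Calls, as a percentage.
    Server errors are responses with HTTP status >= 500."""
    if total_calls == 0:
        # Assumption: a window with no traffic is reported as fully available.
        return 100.0
    return 100.0 * (total_calls - server_errors) / total_calls
```

For example, 200 total calls with 10 server errors yields an availability of 95 percent; client errors (4xx) do not reduce the value.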
|ActivityCancelledRuns |Yes |Cancelled activity runs metrics |Count |Total |Cancelled activity runs metrics |ActivityType, PipelineName, FailureType, Name |
|ActivityFailedRuns |Yes |Failed activity runs metrics |Count |Total |Failed activity runs metrics |ActivityType, PipelineName, FailureType, Name |
|ActivitySucceededRuns |Yes |Succeeded activity runs metrics |Count |Total |Succeeded activity runs metrics |ActivityType, PipelineName, FailureType, Name |
+|AirflowIntegrationRuntimeCeleryTaskTimeoutError |No |Airflow Integration Runtime Celery Task Timeout Error |Count |Total |Airflow Integration Runtime Celery Task Timeout Error |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeCollectDBDags |No |Airflow Integration Runtime Collect DB Dags |Milliseconds |Average |Airflow Integration Runtime Collect DB Dags |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeCpuPercentage |No |Airflow Integration Runtime Cpu Percentage |Percent |Average |Airflow Integration Runtime Cpu Percentage |IntegrationRuntimeName, ContainerName |
+|AirflowIntegrationRuntimeCpuUsage |Yes |Airflow Integration Runtime Cpu Usage |Millicores |Average |Airflow Integration Runtime Cpu Usage |IntegrationRuntimeName, ContainerName |
+|AirflowIntegrationRuntimeDagBagSize |No |Airflow Integration Runtime Dag Bag Size |Count |Total |Airflow Integration Runtime Dag Bag Size |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDagCallbackExceptions |No |Airflow Integration Runtime Dag Callback Exceptions |Count |Total |Airflow Integration Runtime Dag Callback Exceptions |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGFileRefreshError |No |Airflow Integration Runtime DAG File Refresh Error |Count |Total |Airflow Integration Runtime DAG File Refresh Error |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGProcessingImportErrors |No |Airflow Integration Runtime DAG Processing Import Errors |Count |Total |Airflow Integration Runtime DAG Processing Import Errors |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGProcessingLastDuration |No |Airflow Integration Runtime DAG Processing Last Duration |Milliseconds |Average |Airflow Integration Runtime DAG Processing Last Duration |IntegrationRuntimeName, DagFile |
+|AirflowIntegrationRuntimeDAGProcessingLastRunSecondsAgo |No |Airflow Integration Runtime DAG Processing Last Run Seconds Ago |Seconds |Average |Airflow Integration Runtime DAG Processing Last Run Seconds Ago |IntegrationRuntimeName, DagFile |
+|AirflowIntegrationRuntimeDAGProcessingManagerStalls |No |Airflow Integration Runtime DAG ProcessingManager Stalls |Count |Total |Airflow Integration Runtime DAG ProcessingManager Stalls |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGProcessingProcesses |No |Airflow Integration Runtime DAG Processing Processes |Count |Total |Airflow Integration Runtime DAG Processing Processes |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGProcessingProcessorTimeouts |No |Airflow Integration Runtime DAG Processing Processor Timeouts |Seconds |Average |Airflow Integration Runtime DAG Processing Processor Timeouts |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGProcessingTotalParseTime |No |Airflow Integration Runtime DAG Processing Total Parse Time |Seconds |Average |Airflow Integration Runtime DAG Processing Total Parse Time |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeDAGRunDependencyCheck |No |Airflow Integration Runtime DAG Run Dependency Check |Milliseconds |Average |Airflow Integration Runtime DAG Run Dependency Check |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeDAGRunDurationFailed |No |Airflow Integration Runtime DAG Run Duration Failed |Milliseconds |Average |Airflow Integration Runtime DAG Run Duration Failed |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeDAGRunDurationSuccess |No |Airflow Integration Runtime DAG Run Duration Success |Milliseconds |Average |Airflow Integration Runtime DAG Run Duration Success |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeDAGRunFirstTaskSchedulingDelay |No |Airflow Integration Runtime DAG Run First Task Scheduling Delay |Milliseconds |Average |Airflow Integration Runtime DAG Run First Task Scheduling Delay |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeDAGRunScheduleDelay |No |Airflow Integration Runtime DAG Run Schedule Delay |Milliseconds |Average |Airflow Integration Runtime DAG Run Schedule Delay |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeExecutorOpenSlots |No |Airflow Integration Runtime Executor Open Slots |Count |Total |Airflow Integration Runtime Executor Open Slots |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeExecutorQueuedTasks |No |Airflow Integration Runtime Executor Queued Tasks |Count |Total |Airflow Integration Runtime Executor Queued Tasks |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeExecutorRunningTasks |No |Airflow Integration Runtime Executor Running Tasks |Count |Total |Airflow Integration Runtime Executor Running Tasks |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeJobEnd |No |Airflow Integration Runtime Job End |Count |Total |Airflow Integration Runtime Job End |IntegrationRuntimeName, Job |
+|AirflowIntegrationRuntimeJobHeartbeatFailure |No |Airflow Integration Runtime Heartbeat Failure |Count |Total |Airflow Integration Runtime Heartbeat Failure |IntegrationRuntimeName, Job |
+|AirflowIntegrationRuntimeJobStart |No |Airflow Integration Runtime Job Start |Count |Total |Airflow Integration Runtime Job Start |IntegrationRuntimeName, Job |
+|AirflowIntegrationRuntimeMemoryPercentage |Yes |Airflow Integration Runtime Memory Percentage |Percent |Average |Airflow Integration Runtime Memory Percentage |IntegrationRuntimeName, ContainerName |
+|AirflowIntegrationRuntimeOperatorFailures |No |Airflow Integration Runtime Operator Failures |Count |Total |Airflow Integration Runtime Operator Failures |IntegrationRuntimeName, Operator |
+|AirflowIntegrationRuntimeOperatorSuccesses |No |Airflow Integration Runtime Operator Successes |Count |Total |Airflow Integration Runtime Operator Successes |IntegrationRuntimeName, Operator |
+|AirflowIntegrationRuntimePoolOpenSlots |No |Airflow Integration Runtime Pool Open Slots |Count |Total |Airflow Integration Runtime Pool Open Slots |IntegrationRuntimeName, Pool |
+|AirflowIntegrationRuntimePoolQueuedSlots |No |Airflow Integration Runtime Pool Queued Slots |Count |Total |Airflow Integration Runtime Pool Queued Slots |IntegrationRuntimeName, Pool |
+|AirflowIntegrationRuntimePoolRunningSlots |No |Airflow Integration Runtime Pool Running Slots |Count |Total |Airflow Integration Runtime Pool Running Slots |IntegrationRuntimeName, Pool |
+|AirflowIntegrationRuntimePoolStarvingTasks |No |Airflow Integration Runtime Pool Starving Tasks |Count |Total |Airflow Integration Runtime Pool Starving Tasks |IntegrationRuntimeName, Pool |
+|AirflowIntegrationRuntimeSchedulerCriticalSectionBusy |No |Airflow Integration Runtime Scheduler Critical Section Busy |Count |Total |Airflow Integration Runtime Scheduler Critical Section Busy |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerCriticalSectionDuration |No |Airflow Integration Runtime Scheduler Critical Section Duration |Milliseconds |Average |Airflow Integration Runtime Scheduler Critical Section Duration |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerFailedSLAEmailAttempts |No |Airflow Integration Runtime Scheduler Failed SLA Email Attempts |Count |Total |Airflow Integration Runtime Scheduler Failed SLA Email Attempts |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerHeartbeat |No |Airflow Integration Runtime Scheduler Heartbeats |Count |Total |Airflow Integration Runtime Scheduler Heartbeats |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerOrphanedTasksAdopted |No |Airflow Integration Runtime Scheduler Orphaned Tasks Adopted |Count |Total |Airflow Integration Runtime Scheduler Orphaned Tasks Adopted |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerOrphanedTasksCleared |No |Airflow Integration Runtime Scheduler Orphaned Tasks Cleared |Count |Total |Airflow Integration Runtime Scheduler Orphaned Tasks Cleared |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerTasksExecutable |No |Airflow Integration Runtime Scheduler Tasks Executable |Count |Total |Airflow Integration Runtime Scheduler Tasks Executable |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerTasksKilledExternally |No |Airflow Integration Runtime Scheduler Tasks Killed Externally |Count |Total |Airflow Integration Runtime Scheduler Tasks Killed Externally |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerTasksRunning |No |Airflow Integration Runtime Scheduler Tasks Running |Count |Total |Airflow Integration Runtime Scheduler Tasks Running |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeSchedulerTasksStarving |No |Airflow Integration Runtime Scheduler Tasks Starving |Count |Total |Airflow Integration Runtime Scheduler Tasks Starving |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeStartedTaskInstances |No |Airflow Integration Runtime Started Task Instances |Count |Total |Airflow Integration Runtime Started Task Instances |IntegrationRuntimeName, DagId, TaskId |
+|AirflowIntegrationRuntimeTaskInstanceCreatedUsingOperator |No |Airflow Integration Runtime Task Instance Created Using Operator |Count |Total |Airflow Integration Runtime Task Instance Created Using Operator |IntegrationRuntimeName, Operator |
+|AirflowIntegrationRuntimeTaskInstanceDuration |No |Airflow Integration Runtime Task Instance Duration |Milliseconds |Average |Airflow Integration Runtime Task Instance Duration |IntegrationRuntimeName, DagId, TaskID |
+|AirflowIntegrationRuntimeTaskInstanceFailures |No |Airflow Integration Runtime Task Instance Failures |Count |Total |Airflow Integration Runtime Task Instance Failures |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTaskInstanceFinished |No |Airflow Integration Runtime Task Instance Finished |Count |Total |Airflow Integration Runtime Task Instance Finished |IntegrationRuntimeName, DagId, TaskId, State |
+|AirflowIntegrationRuntimeTaskInstancePreviouslySucceeded |No |Airflow Integration Runtime Task Instance Previously Succeeded |Count |Total |Airflow Integration Runtime Task Instance Previously Succeeded |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTaskInstanceSuccesses |No |Airflow Integration Runtime Task Instance Successes |Count |Total |Airflow Integration Runtime Task Instance Successes |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTaskRemovedFromDAG |No |Airflow Integration Runtime Task Removed From DAG |Count |Total |Airflow Integration Runtime Task Removed From DAG |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeTaskRestoredToDAG |No |Airflow Integration Runtime Task Restored To DAG |Count |Total |Airflow Integration Runtime Task Restored To DAG |IntegrationRuntimeName, DagId |
+|AirflowIntegrationRuntimeTriggersBlockedMainThread |No |Airflow Integration Runtime Triggers Blocked Main Thread |Count |Total |Airflow Integration Runtime Triggers Blocked Main Thread |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTriggersFailed |No |Airflow Integration Runtime Triggers Failed |Count |Total |Airflow Integration Runtime Triggers Failed |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTriggersRunning |No |Airflow Integration Runtime Triggers Running |Count |Total |Airflow Integration Runtime Triggers Running |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeTriggersSucceeded |No |Airflow Integration Runtime Triggers Succeeded |Count |Total |Airflow Integration Runtime Triggers Succeeded |IntegrationRuntimeName |
+|AirflowIntegrationRuntimeZombiesKilled |No |Airflow Integration Runtime Zombie Tasks Killed |Count |Total |Airflow Integration Runtime Zombie Tasks Killed |IntegrationRuntimeName |
|FactorySizeInGbUnits |Yes |Total factory size (GB unit) |Count |Maximum |Total factory size (GB unit) |No Dimensions |
|IntegrationRuntimeAvailableMemory |Yes |Integration runtime available memory |Bytes |Average |Integration runtime available memory |IntegrationRuntimeName, NodeName |
|IntegrationRuntimeAvailableNodeNumber |Yes |Integration runtime available node count |Count |Average |Integration runtime available node count |IntegrationRuntimeName |
|cpu_credits_remaining |Yes |CPU Credits Remaining |Count |Maximum |CPU Credits Remaining |No Dimensions |
|cpu_percent |Yes |Host CPU Percent |Percent |Maximum |Host CPU Percent |No Dimensions |
|HA_IO_status |Yes |HA IO Status |Count |Maximum |Status for replication IO thread running |No Dimensions |
+|HA_replication_lag |Yes |HA Replication Lag |Seconds |Maximum |HA Replication lag in seconds |No Dimensions |
|HA_SQL_status |Yes |HA SQL Status |Count |Maximum |Status for replication SQL thread running |No Dimensions |
|Innodb_buffer_pool_pages_data |Yes |InnoDB Buffer Pool Pages Data |Count |Total |The number of pages in the InnoDB buffer pool containing data. |No Dimensions |
|Innodb_buffer_pool_pages_dirty |Yes |InnoDB Buffer Pool Pages Dirty |Count |Total |The current number of dirty pages in the InnoDB buffer pool. |No Dimensions |
|Innodb_buffer_pool_pages_free |Yes |InnoDB Buffer Pool Pages Free |Count |Total |The number of free pages in the InnoDB buffer pool. |No Dimensions |
|Innodb_buffer_pool_read_requests |Yes |InnoDB Buffer Pool Read Requests |Count |Total |The number of logical read requests. |No Dimensions |
|Innodb_buffer_pool_reads |Yes |InnoDB Buffer Pool Reads |Count |Total |The number of logical reads that InnoDB could not satisfy from the buffer pool, and had to read directly from disk. |No Dimensions |
-|io_consumption_percent |Yes |IO Percent |Percent |Maximum |IO Percent |No Dimensions |
+|io_consumption_percent |Yes |Storage IO Percent |Percent |Maximum |Storage I/O consumption percent |No Dimensions |
|memory_percent |Yes |Host Memory Percent |Percent |Maximum |Host Memory Percent |No Dimensions |
|network_bytes_egress |Yes |Host Network Out |Bytes |Total |Host Network egress in bytes |No Dimensions |
|network_bytes_ingress |Yes |Host Network In |Bytes |Total |Host Network ingress in bytes |No Dimensions |
|storage_io_count |Yes |IO Count |Count |Total |The number of I/O consumed. |No Dimensions |
|storage_limit |Yes |Storage Limit |Bytes |Maximum |Storage Limit |No Dimensions |
|storage_percent |Yes |Storage Percent |Percent |Maximum |Storage Percent |No Dimensions |
-|storage_throttle_count |Yes |Storage Throttle Count |Count |Maximum |Storage throttle count. |No Dimensions |
+|storage_throttle_count |Yes |Storage Throttle Count |Count |Maximum |Storage IO requests throttled in the selected time range. |No Dimensions |
|storage_used |Yes |Storage Used |Bytes |Maximum |Storage Used |No Dimensions |
|total_connections |Yes |Total Connections |Count |Total |Total Connections |No Dimensions |
|backup_storage_used |Yes |Backup Storage Used |Bytes |Average |Backup Storage Used |No Dimensions |
|blks_hit |Yes |Disk Blocks Hit (Preview) |Count |Total |Number of times disk blocks were found already in the buffer cache, so that a read was not necessary |DatabaseName |
|blks_read |Yes |Disk Blocks Read (Preview) |Count |Total |Number of disk blocks read in this database |DatabaseName |
+|client_connections_active |Yes |Active client connections (Preview) |Count |Maximum |Connections from clients which are associated with a PostgreSQL connection |DatabaseName |
+|client_connections_waiting |Yes |Waiting client connections (Preview) |Count |Maximum |Connections from clients that are waiting for a PostgreSQL connection to service them |DatabaseName |
|connections_failed |Yes |Failed Connections |Count |Total |Failed Connections |No Dimensions |
|connections_succeeded |Yes |Succeeded Connections |Count |Total |Succeeded Connections |No Dimensions |
|cpu_credits_consumed |Yes |CPU Credits Consumed |Count |Average |Total number of credits consumed by the database server |No Dimensions |
|n_mod_since_analyze_user_tables |Yes |Estimated Modifications User Tables (Preview) |Count |Maximum |Estimated number of rows modified since user only tables were last analyzed |DatabaseName |
|network_bytes_egress |Yes |Network Out |Bytes |Total |Network Out across active connections |No Dimensions |
|network_bytes_ingress |Yes |Network In |Bytes |Total |Network In across active connections |No Dimensions |
+|num_pools |Yes |Number of connection pools (Preview) |Count |Maximum |Total number of connection pools |DatabaseName |
|numbackends |Yes |Backends (Preview) |Count |Maximum |Number of backends connected to this database |DatabaseName |
|oldest_backend_time_sec |Yes |Oldest Backend (Preview) |Seconds |Maximum |The age in seconds of the oldest backend (irrespective of the state) |No Dimensions |
|oldest_backend_xmin |Yes |Oldest xmin (Preview) |Count |Maximum |The actual value of the oldest xmin. |No Dimensions |
|physical_replication_delay_in_seconds |Yes |Read Replica Lag (Preview) |Seconds |Maximum |Read Replica lag in seconds |No Dimensions |
|read_iops |Yes |Read IOPS |Count |Average |Number of data disk I/O read operations per second |No Dimensions |
|read_throughput |Yes |Read Throughput Bytes/Sec |Count |Average |Bytes read per second from the data disk during monitoring period |No Dimensions |
+|server_connections_active |Yes |Active server connections (Preview) |Count |Maximum |Connections to PostgreSQL that are in use by a client connection |DatabaseName |
+|server_connections_idle |Yes |Idle server connections (Preview) |Count |Maximum |Connections to PostgreSQL that are idle, ready to service a new client connection |DatabaseName |
|sessions_by_state |Yes |Sessions by State (Preview) |Count |Maximum |Overall state of the backends |State |
|sessions_by_wait_event_type |Yes |Sessions by WaitEventType (Preview) |Count |Maximum |Sessions by the type of event for which the backend is waiting |WaitEventType |
|storage_free |Yes |Storage Free |Bytes |Average |Storage Free |No Dimensions |
|tables_vacuumed_user_tables |Yes |User Tables Vacuumed (Preview) |Count |Maximum |Number of user only tables that have been vacuumed in this database |DatabaseName |
|temp_bytes |Yes |Temporary Files Size (Preview) |Bytes |Total |Total amount of data written to temporary files by queries in this database |DatabaseName |
|temp_files |Yes |Temporary Files (Preview) |Count |Total |Number of temporary files created by queries in this database |DatabaseName |
+|total_pooled_connections |Yes |Total pooled connections (Preview) |Count |Maximum |Current number of pooled connections |DatabaseName |
|tup_deleted |Yes |Tuples Deleted (Preview) |Count |Total |Number of rows deleted by queries in this database |DatabaseName |
|tup_fetched |Yes |Tuples Fetched (Preview) |Count |Total |Number of rows fetched by queries in this database |DatabaseName |
|tup_inserted |Yes |Tuples Inserted (Preview) |Count |Total |Number of rows inserted by queries in this database |DatabaseName |
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|---|---|---|---|---|---|---|
+|RowsDropped_Count |Yes |Rows Dropped |Count |Count |Number of rows dropped while running transformation. |InputStreamId |
+|RowsReceived_Count |Yes |Rows Received |Count |Count |Total number of rows received for transformation. |InputStreamId |
|TransformationErrors |Yes |Transformation Errors |Count |Count |The number of rows where execution of the KQL transformation resulted in an error, such as exceeding the KQL transformation service limit. |InputStreamId, ErrorType |
+|TransformationErrors_Count |Yes |Transformation Errors |Count |Count |The number of rows where execution of the KQL transformation resulted in an error, such as exceeding the KQL transformation service limit. |InputStreamId, ErrorType |
+|TransformationRuntime_DurationMs |Yes |Transformation Runtime Duration |Count |Count |Total time taken in milliseconds to transform a given set of records. |InputStreamId |
## Microsoft.IoTCentral/IoTApps
|RequestLatency_P90 |Yes |Request Latency P90 |Milliseconds |Average |The average P90 request latency aggregated by all request latency values collected over the selected time period |deployment |
|RequestLatency_P95 |Yes |Request Latency P95 |Milliseconds |Average |The average P95 request latency aggregated by all request latency values collected over the selected time period |deployment |
|RequestLatency_P99 |Yes |Request Latency P99 |Milliseconds |Average |The average P99 request latency aggregated by all request latency values collected over the selected time period |deployment |
-|RequestsPerMinute |No |Requests Per Minute |Count |Average |The number of requests sent to online endpoint within a minute |deployment, statusCode, statusCodeClass |
+|RequestsPerMinute |No |Requests Per Minute |Count |Average |The number of requests sent to online endpoint within a minute |deployment, statusCode, statusCodeClass, modelStatusCode |
## Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|---|---|---|---|---|---|---|
+|BgpPeerStatus |Yes |BGP Peer Status |Unspecified |Minimum |Operational state of the BGP peer. State is represented in numerical form. Idle : 1, Connect : 2, Active : 3, Opensent : 4, Openconfirm : 5, Established : 6 |FabricId, RegionName, IpAddress |
|CpuUtilizationMax |Yes |Cpu Utilization Max |Percent |Average |Max cpu utilization. The maximum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
|CpuUtilizationMin |Yes |Cpu Utilization Min |Percent |Average |Min cpu utilization. The minimum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
-|FanSpeed |Yes |Fan Speed |Count |Average |Current fan speed. |FabricId, RegionName, ComponentName |
+|FanSpeed |Yes |Fan Speed |Unspecified |Average |Current fan speed. |FabricId, RegionName, ComponentName |
|IfEthInCrcErrors |Yes |Ethernet Interface In CRC Errors |Count |Average |The total number of frames received that had a length (excluding framing bits, but including FCS octets) of between 64 and 1518 octets, inclusive, but had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error) |FabricId, RegionName, InterfaceName |
|IfEthInFragmentFrames |Yes |Ethernet Interface In Fragment Frames |Count |Average |The total number of frames received that were less than 64 octets in length (excluding framing bits but including FCS octets) and had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error). |FabricId, RegionName, InterfaceName |
|IfEthInJabberFrames |Yes |Ethernet Interface In Jabber Frames |Count |Average |Number of jabber frames received on the interface. Jabber frames are typically defined as oversize frames which also have a bad CRC. |FabricId, RegionName, InterfaceName |
This latest update adds a new column and reorders the metrics to be alphabetical
|IfInFcsErrors |Yes |Interface In FCS Errors |Count |Average |Number of received packets which had errors in the frame check sequence (FCS), i.e., framing errors. |FabricId, RegionName, InterfaceName |
|IfInMulticastPkts |Yes |Interface In Multicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were addressed to a multicast address at this sub-layer. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, RegionName, InterfaceName |
|IfInOctets |Yes |Interface In Octets |Count |Average |The total number of octets received on the interface, including framing characters. |FabricId, RegionName, InterfaceName |
+|IfInPkts |Yes |Interface In Pkts |Count |Average |The total number of packets received on the interface, including all unicast, multicast, broadcast, and bad packets. |FabricId, RegionName, InterfaceName |
|IfInUnicastPkts |Yes |Interface In Unicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were not addressed to a multicast or broadcast address at this sub-layer. |FabricId, RegionName, InterfaceName |
|IfOutBroadcastPkts |Yes |Interface Out Broadcast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, RegionName, InterfaceName |
|IfOutDiscards |Yes |Interface Out Discards |Count |Average |The number of outbound packets that were chosen to be discarded even though no errors had been detected to prevent their being transmitted. |FabricId, RegionName, InterfaceName |
|IfOutErrors |Yes |Interface Out Errors |Count |Average |For packet-oriented interfaces, the number of outbound packets that could not be transmitted because of errors. |FabricId, RegionName, InterfaceName |
|IfOutMulticastPkts |Yes |Interface Out Multicast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a multicast address at this sub-layer, including those that were discarded or not sent. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, RegionName, InterfaceName |
|IfOutOctets |Yes |Interface Out Octets |Count |Average |The total number of octets transmitted out of the interface, including framing characters. |FabricId, RegionName, InterfaceName |
+|IfOutPkts |Yes |Interface Out Pkts |Count |Average |The total number of packets transmitted out of the interface, including all unicast, multicast, broadcast, and bad packets. |FabricId, RegionName, InterfaceName |
|IfOutUnicastPkts |Yes |Interface Out Unicast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were not addressed to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, RegionName, InterfaceName |
+|InterfaceOperStatus |Yes |Interface Operational State |Unspecified |Minimum |The current operational state of the interface. State is represented in numerical form. Up: 0, Down: 1, Lower_layer_down: 2, Testing: 3, Unknown: 4, Dormant: 5, Not_present: 6. |FabricId, RegionName, InterfaceName |
+|LacpErrors |Yes |Lacp Errors |Count |Average |Number of LACPDU illegal packet errors. |FabricId, RegionName, InterfaceName |
|LacpInPkts |Yes |Lacp In Pkts |Count |Average |Number of LACPDUs received. |FabricId, RegionName, InterfaceName |
|LacpOutPkts |Yes |Lacp Out Pkts |Count |Average |Number of LACPDUs transmitted. |FabricId, RegionName, InterfaceName |
|LacpRxErrors |Yes |Lacp Rx Errors |Count |Average |Number of LACPDU receive packet errors. |FabricId, RegionName, InterfaceName |
+|LacpTxErrors |Yes |Lacp Tx Errors |Count |Average |Number of LACPDU transmit packet errors. |FabricId, RegionName, InterfaceName |
+|LacpUnknownErrors |Yes |Lacp Unknown Errors |Count |Average |Number of LACPDU unknown packet errors. |FabricId, RegionName, InterfaceName |
|LldpFrameIn |Yes |Lldp Frame In |Count |Average |The number of lldp frames received. |FabricId, RegionName, InterfaceName |
|LldpFrameOut |Yes |Lldp Frame Out |Count |Average |The number of frames transmitted out. |FabricId, RegionName, InterfaceName |
|LldpTlvUnknown |Yes |Lldp Tlv Unknown |Count |Average |The number of frames received with unknown TLV. |FabricId, RegionName, InterfaceName |
|MemoryAvailable |Yes |Memory Available |Bytes |Average |The available memory physically installed, or logically allocated to the component. |FabricId, RegionName, ComponentName |
|MemoryUtilized |Yes |Memory Utilized |Bytes |Average |The memory currently in use by processes running on the component, not considering reserved memory that is not available for use. |FabricId, RegionName, ComponentName |
+|PowerSupplyCapacity |Yes |Power Supply Maximum Power Capacity |Unspecified |Average |Maximum power capacity of the power supply (watts). |FabricId, RegionName, ComponentName |
+|PowerSupplyInputCurrent |Yes |Power Supply Input Current |Unspecified |Average |The input current draw of the power supply (amps). |FabricId, RegionName, ComponentName |
+|PowerSupplyInputVoltage |Yes |Power Supply Input Voltage |Unspecified |Average |Input voltage to the power supply (volts). |FabricId, RegionName, ComponentName |
+|PowerSupplyOutputCurrent |Yes |Power Supply Output Current |Unspecified |Average |The output current supplied by the power supply (amps) |FabricId, RegionName, ComponentName |
+|PowerSupplyOutputPower |Yes |Power Supply Output Power |Unspecified |Average |Output power supplied by the power supply (watts) |FabricId, RegionName, ComponentName |
+|PowerSupplyOutputVoltage |Yes |Power Supply Output Voltage |Unspecified |Average |Output voltage supplied by the power supply (volts). |FabricId, RegionName, ComponentName |
## Microsoft.Maps/accounts
This latest update adds a new column and reorders the metrics to be alphabetical
|EventsPerMinuteIngested |No |Events Per Minute Ingested |Count |Maximum |The number of events per minute recently received |StampColor |
|EventsPerMinuteIngestedLimit |No |Events Per Minute Ingested Limit |Count |Maximum |The maximum number of events per minute which can be received before events become throttled |StampColor |
|EventsPerMinuteIngestedPercentUtilization |No |Events Per Minute Ingested % Utilization |Percent |Average |The percentage of the current metric ingestion rate limit being utilized |StampColor |
+|SimpleSamplesStored |No |Simple Data Samples Stored |Count |Maximum |The total number of samples stored for simple sampling types (like sum, count). For Prometheus this is equivalent to the number of samples scraped and ingested. |StampColor |
## Microsoft.NetApp/netAppAccounts/capacityPools
This latest update adds a new column and reorders the metrics to be alphabetical
|EstimatedBilledCapacityUnits |No |Estimated Billed Capacity Units |Count |Average |Estimated capacity units that will be charged |No Dimensions |
|FailedRequests |Yes |Failed Requests |Count |Total |Count of failed requests that Application Gateway has served |BackendSettingsPool |
|FixedBillableCapacityUnits |No |Fixed Billable Capacity Units |Count |Average |Minimum capacity units that will be charged |No Dimensions |
-|GatewayUtilization |No |Gateway Utilization |Percent |Average |Denotes overall utilization of the Application Gateway resource. This is an aggregate report of all the underlying instances. In general, one should consider scaling out when the value exceeds 70%. However, the threshold could differ for different workloads and hence it is recommended to choose a limit that suits your requirements. |No Dimensions |
+|GatewayUtilization |No |Gateway Utilization |Percent |Average |Denotes the current utilization of the Application Gateway resource, aggregated across the gateway's running instances. As a recommendation, consider scaling out when the value exceeds 70%. However, the threshold can differ for different workloads, so choose a limit that suits your requirements. |No Dimensions |
|HealthyHostCount |Yes |Healthy Host Count |Count |Average |Number of healthy backend hosts |BackendSettingsPool |
|MatchedCount |Yes |Web Application Firewall Total Rule Distribution |Count |Total |Web Application Firewall Total Rule Distribution for the incoming traffic |RuleGroup, RuleId |
|NewConnectionsPerSecond |No |New connections per second |CountPerSecond |Average |New connections per second established with Application Gateway |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|ApplicationRuleHit |Yes |Application rules hit count |Count |Total |Number of times Application rules were hit |Status, Reason, Protocol |
|DataProcessed |Yes |Data processed |Bytes |Total |Total amount of data processed by this firewall |No Dimensions |
|FirewallHealth |Yes |Firewall health state |Percent |Average |Indicates the overall health of this firewall |Status, Reason |
+|FirewallLatencyPng |Yes |Latency Probe (Preview) |Milliseconds |Average |Estimate of the average latency of the Firewall as measured by latency probe |No Dimensions |
|NetworkRuleHit |Yes |Network rules hit count |Count |Total |Number of times Network rules were hit |Status, Reason, Protocol |
|SNATPortUtilization |Yes |SNAT port utilization |Percent |Average |Percentage of outbound SNAT ports currently in use |Protocol |
|Throughput |No |Throughput |BitsPerSecond |Average |Throughput processed by this firewall |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalAppDomainsUnloaded |Yes |Total App Domains Unloaded |Count |Average |The total number of AppDomains unloaded since the start of the application. |Instance |
+## NGINX.NGINXPLUS/nginxDeployments
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|nginx |Yes |nginx |Count |Total |The NGINX metric. |No Dimensions |
+
## Wandisco.Fusion/migrators
<!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)-->
+<!--Gen Date: Wed Mar 01 2023 10:07:05 GMT+0200 (Israel Standard Time)-->
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs.
Previously updated : 02/01/2023
Last updated : 03/01/2023
If you think something is missing, you can open a GitHub comment at the bottom o
|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
+## Microsoft.Chaos/experiments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ExperimentOrchestration |Experiment Orchestration Events |Yes |
+
## Microsoft.ClassicNetwork/networksecuritygroups
<!-- Data source : arm-->
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|accounts |Databricks Accounts |No |
-|accountsAccessControl |Databricks Accounts Access Control |Yes |
-|capsule8ContainerSecurityScanningReports |Databricks Capsule8 Container Security Scanning Reports |Yes |
-|clamAntiVirusReports |Databricks Clam AntiVirus Reports |Yes |
+|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
+|clamAVScan |Databricks Clam AV Scan |Yes |
|clusterLibraries |Databricks Cluster Libraries |Yes |
|clusters |Databricks Clusters |No |
|databrickssql |Databricks DatabricksSQL |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|mlflowExperiment |Databricks MLFlow Experiment |Yes |
|modelRegistry |Databricks Model Registry |Yes |
|notebook |Databricks Notebook |No |
-|partnerConnect |Databricks Partner Connect |Yes |
+|partnerHub |Databricks Partner Hub |Yes |
|RemoteHistoryService |Databricks Remote History Service |Yes |
|repos |Databricks Repos |Yes |
|secrets |Databricks Secrets |No |
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|ActivityRuns |Pipeline activity runs log |No |
+|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
+|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
+|AirflowTaskLogs |Airflow task execution logs |Yes |
+|AirflowWebLogs |Airflow web logs |Yes |
+|AirflowWorkerLogs |Airflow worker logs |Yes |
|PipelineRuns |Pipeline runs log |No |
|SandboxActivityRuns |Sandbox Activity runs log |Yes |
|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
|AuditLogs |Audit logs |Yes |
+|DiagnosticLogs |Diagnostic logs |Yes |
## Microsoft.HealthcareApis/workspaces/fhirservices
If you think something is missing, you can open a GitHub comment at the bottom o
|AppTraces |Traces |No |
-## Microsoft.Insights/datacollectionrules
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DCRErrorLogs |DCR Error Logs |Yes |
-
## microsoft.keyvault/managedhsms
<!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
|WorkflowRuntime |Workflow runtime diagnostic events |No |
+## Microsoft.MachineLearningServices/registries
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
+|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
+
## Microsoft.MachineLearningServices/workspaces
<!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
+## Microsoft.Network/networkManagers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
+
## Microsoft.Network/networksecuritygroups
<!-- Data source : arm-->
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
+|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
+|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
-|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. |Yes |
+|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. NOTE: To be deprecated in future. |Yes |
+|NspOutboundAttempt |Outbound attempted to same or different perimeter. |Yes |
|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
||||
+|Analytics |Analytics |Yes |
|Automation |Automation |Yes |
|DataConnectors |Data Collection - Connectors |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|AscWarningEvent |HPC Cache warning |Yes |
+## Microsoft.StorageMover/storageMovers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CopyLogsFailed |Copy logs - Failed |Yes |
+|JobRunLogs |Job run logs |Yes |
+
## Microsoft.StreamAnalytics/streamingjobs
<!-- Data source : arm-->
If you think something is missing, you can open a GitHub comment at the bottom o
* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
-<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)-->
+<!--Gen Date: Wed Mar 01 2023 10:07:05 GMT+0200 (Israel Standard Time)-->
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Title: Overview
description: This article describes the REST API, created to make the data collected by Azure Log Analytics easily available.
Previously updated : 11/27/2022
Last updated : 02/28/2023

# Azure Monitor Log Analytics API Overview
To try the API without writing any code, you can use:
- Your favorite client such as [Fiddler](https://www.telerik.com/fiddler) or [Postman](https://www.getpostman.com/) to manually generate queries with a user interface.
- [cURL](https://curl.haxx.se/) from the command line, and then pipe the output into [jsonlint](https://github.com/zaach/jsonlint) to get readable JSON.
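As a sketch of the cURL approach (the workspace ID and bearer token below are placeholders, and the Kusto query is only illustrative):

```shell
# Placeholders: substitute your own workspace ID and a valid Azure AD bearer token.
curl -s -X POST "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"query": "AzureActivity | take 5"}' \
  | python -m json.tool   # any JSON pretty-printer (such as jsonlint) works here
```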
-Instead of calling the REST API directly, you can also use the Azure Monitor Query SDK. The SDK contains idiomatic client libraries for the following ecosystems:
+Instead of calling the REST API directly, you can use the idiomatic Azure Monitor Query client libraries:
- [.NET](/dotnet/api/overview/azure/Monitor.Query-readme)
- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)
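For instance, a minimal query sketch with the Python `azure-monitor-query` client library might look like this (the workspace ID is a placeholder, and the query is only illustrative):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# <workspace-id> is a placeholder for your Log Analytics workspace ID.
client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AzureActivity | take 5",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```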
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
description: Learn the basics of Azure Monitor Logs, which is used for advanced
documentationcenter: ''
na
Previously updated : 11/08/2022
Last updated : 02/28/2023
The following table describes some of the ways that you can use Azure Monitor Lo
| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
| Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
| Retrieve | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| Import | Upload logs from a custom app via the [REST API](/azure/azure-monitor/logs/logs-ingestion-api-overview) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |

![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
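The Import row above can be sketched in Python with the `azure-monitor-ingestion` client library; the endpoint, rule ID, stream name, and log record below are placeholders for your own resources, not real values:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# All three values below are placeholders, not real resource names.
client = LogsIngestionClient(
    endpoint="https://<data-collection-endpoint>.ingest.monitor.azure.com",
    credential=DefaultAzureCredential(),
)
client.upload(
    rule_id="<dcr-immutable-id>",
    stream_name="Custom-MyTable_CL",
    logs=[{"TimeGenerated": "2023-03-01T00:00:00Z", "Message": "hello"}],
)
```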
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Previously updated : 11/08/2022
Last updated : 02/28/2023
Areas in Azure Monitor where you'll use queries include:
- [Azure Logic Apps](../logs/logicapp-flow-connector.md): Use the results of a log query in an automated workflow by using Logic Apps.
- [PowerShell](/powershell/module/az.operationalinsights/invoke-azoperationalinsightsquery): Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses `Invoke-AzOperationalInsightsQuery`.
- [Azure Monitor Logs API](/rest/api/loganalytics/): Retrieve log data from the workspace from any REST API client. The API request includes a query that's run against Azure Monitor to determine the data to retrieve.
-- **Azure Monitor Query SDK**: Retrieve log data from the workspace via an idiomatic client library for the following ecosystems:
+- **Azure Monitor Query client libraries**: Retrieve log data from the workspace via an idiomatic client library for the following ecosystems:
- [.NET](/dotnet/api/overview/azure/Monitor.Query-readme)
- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)
- [Java](/java/api/overview/azure/monitor-query-readme)
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na
Previously updated : 02/21/2023
Last updated : 02/28/2023

# Create an SMB volume for Azure NetApp Files
This article shows you how to create an SMB3 volume. For NFS volumes, see [Creat
* You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md).
* A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* The [SMB Continuous Availability](#continuous-availability) feature is currently in preview. You must submit a waitlist request before you can use this feature.
+* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it:
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
    > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
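For readers using the Azure CLI, the equivalent calls might look like the following sketch (feature names taken from the PowerShell steps above; requires an authenticated `az` session):

```shell
# Register both preview features (names as used in the PowerShell steps above)
az feature register --namespace Microsoft.NetApp --name ANFSmbNonBrowsable
az feature register --namespace Microsoft.NetApp --name ANFSMBAccessBasedEnumeration

# Check registration status; wait until the state reports "Registered"
az feature show --namespace Microsoft.NetApp --name ANFSmbNonBrowsable --query properties.state
az feature show --namespace Microsoft.NetApp --name ANFSMBAccessBasedEnumeration --query properties.state
```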
## Configure Active Directory connections
Before creating an SMB volume, you need to create an Active Directory connection
- It must start with an alphabetical character.
- It can contain only letters, numbers, or dashes (`-`).
- The length must not exceed 80 characters.
-
+ * <a name="smb3-encryption"></a>If you want to enable encryption for SMB3, select **Enable SMB3 Protocol Encryption**. This feature enables encryption for in-flight SMB3 data. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting.
- See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for additional information.
+ See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for additional information.
+
+ * <a name="access-based-enumeration"></a> If you want to enable access-based enumeration, select **Enable Access Based Enumeration**.
+
+ This feature will hide directories and files created under a share from users who do not have access permissions to the files or folders under the share. Users will still be able to view the share.
+
+ * <a name="non-browsable-share"></a> You can enable the **non-browsable-share** feature.
+
+ This feature prevents the Windows client from browsing the share. The share does not show up in the Windows File Browser or in the list of shares when you run the `net view \\server /all` command.
+
+ > [!IMPORTANT]
+ > Both the access-based enumeration and non-browsable shares features are currently in preview. If this is your first time using either, refer to the steps in [Before you begin](#before-you-begin) to register either feature.
* <a name="continuous-availability"></a>If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**. > [!IMPORTANT]
- > The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
+ > The SMB Continuous Availability feature is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
> You should enable Continuous Availability only for Citrix App Layering, SQL Server, and [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md). Using SMB Continuous Availability shares for workloads other than Citrix App Layering, SQL Server, and FSLogix user profile containers is *not* supported. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the **Security privilege users** field of Active Directory connections. See [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection). **Custom applications are not supported with SMB Continuous Availability.**
- <!-- [1/13/21] Commenting out command-based steps below, because the plan is to use form-based (URL) registration, similar to CRR feature registration -->
- <!--
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBCAShare
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- -->
-
- ![Screenshot that describes the Protocol tab of creating an SMB volume.](../media/azure-netapp-files/azure-netapp-files-protocol-smb.png)
+ :::image type="content" source="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png" alt-text="Screenshot showing the Protocol tab of creating an SMB volume." lightbox="../media/azure-netapp-files/azure-netapp-files-protocol-smb.png":::
5. Select **Review + Create** to review the volume details. Then select **Create** to create the SMB volume.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
na
Previously updated : 01/30/2023
Last updated : 02/21/2023

# Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes
The following information is passed to the server in the query:
* UID or username
* Requested attributes (`uid`, `uidNumber`, `gidNumber` for users, or `gidNumber` for groups)
1. If the user or group isn't found, the request fails, and access is denied.
-1. If the request is successful, then user and group attributes are [cached for future use](configure-ldap-extended-groups.md#considerations). This operation improves the performance of subsequent LDAP queries associated with the cached user or group attributes. It also reduces the load on the ADDS/AADDS LDAP server.
+1. If the request is successful, then user and group attributes are [cached for future use](configure-ldap-extended-groups.md#considerations). This operation improves the performance of subsequent LDAP queries associated with the cached user or group attributes. It also reduces the load on the ADDS/AADDS LDAP server.
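To illustrate the flow above, an `ldapsearch` query requesting the same attributes might look like this (the server URI, base DN, and username are placeholders, not values from this article):

```shell
# Placeholders throughout: adjust the server, base DN, and user to your AD DS environment.
ldapsearch -H ldap://dc1.contoso.com -x \
  -b "DC=contoso,DC=com" \
  "(uid=user1)" uid uidNumber gidNumber
```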
## Considerations
The following information is passed to the server in the query:
Then you need to restart the `rpcbind` service on your host or reboot the host.
-6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
+6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
The following information is passed to the server in the query:
* Specify nested **User DN** and **Group DN** in the format of `OU=subdirectory,OU=directory,DC=domain,DC=com`. * Specify **Group Membership Filter** in the format of `(gidNumber=*)`.
+ * If a user is a member of more than 256 groups, only 256 groups will be listed.
+ * Refer to [errors for LDAP volumes](troubleshoot-volumes.md#errors-for-ldap-volumes) if you run into errors.
![Screenshot that shows options related to LDAP Search Scope](../media/azure-netapp-files/ldap-search-scope.png)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na
Previously updated : 01/25/2023
Last updated : 03/01/2023

# Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
## <a name="requirements-for-active-directory-connections"></a>Requirements and considerations for Active Directory connections > [!IMPORTANT]
-> You must follow guidelines described in [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md) for Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (AAD DS) used with Azure NetApp Files.
+> You must follow guidelines described in [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md) for Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) used with Azure NetApp Files.
> In addition, before creating the AD connection, review [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md) to understand the impact of making changes to the AD connection configuration options after the AD connection has been created. Changes to the AD connection configuration options are disruptive to client access and some options cannot be changed at all. * An Azure NetApp Files account must be created in the region where the Azure NetApp Files volumes are deployed.
Several features of Azure NetApp Files require that you have an Active Directory
>[!NOTE]
>It is recommended that you configure a Secondary DNS server. See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your DNS server configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
- If you use Azure AD DS (AAD DS), you should use the IP addresses of the AAD DS domain controllers for Primary DNS and Secondary DNS respectively.
+ If you use Azure AD DS, you should use the IP addresses of the Azure AD DS domain controllers for Primary DNS and Secondary DNS respectively.
* **AD DNS Domain Name (required)** This is the fully qualified domain name of the AD DS that will be used with Azure NetApp Files (for example, `contoso.com`).

* **AD Site Name (required)** This is the AD DS site name that will be used by Azure NetApp Files for domain controller discovery.
- The default site name for both ADDS and AADDS is `Default-First-Site-Name`. Follow the [naming conventions for site names](/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou#site-names) if you want to rename the site name.
+ The default site name for both AD DS and Azure AD DS is `Default-First-Site-Name`. Follow the [naming conventions for site names](/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou#site-names) if you want to rename the site name.
>[!NOTE]
> See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your AD DS site design and configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
Several features of Azure NetApp Files require that you have an Active Directory
If no value is provided, Azure NetApp Files will use the `CN=Computers` container.
- If you're using Azure NetApp Files with Azure Active Directory Domain Services (AAD DS), the organizational unit path is `OU=AADDC Computers`
+ If you're using Azure NetApp Files with Azure Active Directory Domain Services (Azure AD DS), the organizational unit path is `OU=AADDC Computers`
:::image type="content" source="../media/azure-netapp-files/azure-netapp-files-join-active-directory.png" alt-text="Screenshot of the Join Active Directory input fields.":::
Several features of Azure NetApp Files require that you have an Active Directory
This option enables LDAP over TLS for secure communication between an Azure NetApp Files volume and the Active Directory LDAP server. You can enable LDAP over TLS for NFS, SMB, and dual-protocol volumes of Azure NetApp Files.

>[!NOTE]
- >LDAP over TLS must not be enabled if you're using Azure Active Directory Domain Services (AAD DS). AAD DS uses LDAPS (port 636) to secure LDAP traffic instead of LDAP over TLS (port 389).
+ >LDAP over TLS must not be enabled if you're using Azure Active Directory Domain Services (Azure AD DS). Azure AD DS uses LDAPS (port 636) to secure LDAP traffic instead of LDAP over TLS (port 389).
For more information, see [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md).
Several features of Azure NetApp Files require that you have an Active Directory
See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ * <a name="preferred-server-ldap"></a> **Preferred server for LDAP client**
+
+ The **Preferred server for LDAP client** option allows you to submit the IP addresses of up to two AD servers as a comma-separated list. Rather than sequentially contacting all of the discovered AD services for a domain, the LDAP client will contact the specified servers first.
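As an illustration of the expected input format, here is a small hypothetical validator (not part of any Azure SDK or API) for the comma-separated list of at most two IP addresses:

```python
import ipaddress

def parse_preferred_ldap_servers(value: str) -> list[str]:
    """Validate a comma-separated list of at most two AD server IP addresses."""
    servers = [s.strip() for s in value.split(",") if s.strip()]
    if len(servers) > 2:
        raise ValueError("At most two AD server IP addresses are allowed.")
    for server in servers:
        ipaddress.ip_address(server)  # raises ValueError on a malformed address
    return servers

print(parse_preferred_ldap_servers("10.0.0.4, 10.0.0.5"))  # → ['10.0.0.4', '10.0.0.5']
```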
+ * <a name="encrypted-smb-dc"></a> **Encrypted SMB connections to Domain Controller**

   **Encrypted SMB connections to Domain Controller** specifies whether encryption should be used for communication between an SMB server and domain controller. When enabled, only SMB3 will be used for encrypted domain controller connections.
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 02/23/2023 Last updated : 02/28/2023 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it:
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
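As a minimal sketch of the Azure CLI path (assuming you're signed in with `az login` and the target subscription is selected), the equivalents of the PowerShell registration and status commands above are:

```azurecli-interactive
# Register both preview features (feature names match the PowerShell step above).
az feature register --namespace Microsoft.NetApp --name ANFSmbNonBrowsable
az feature register --namespace Microsoft.NetApp --name ANFSMBAccessBasedEnumeration

# Display the registration state; wait until it reports "Registered".
az feature show --namespace Microsoft.NetApp --name ANFSmbNonBrowsable --query properties.state
az feature show --namespace Microsoft.NetApp --name ANFSMBAccessBasedEnumeration --query properties.state
```

These commands run against your Azure subscription, so they require an authenticated session and appropriate permissions.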
## Considerations
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
Additional configurations are required for Kerberos. Follow the instructions in [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md). +
+ * <a name="access-based-enumeration"></a> If you want to enable access-based enumeration, select **Enable Access Based Enumeration**.
+
+ This feature will hide directories and files created under a share from users who do not have access permissions. Users will still be able to view the share. You can only enable access-based enumeration if the dual-protocol volume uses NTFS security style.
+
+ * <a name="non-browsable-share"></a> You can enable the **non-browsable share** feature.
+
+ This feature prevents the Windows client from browsing the share. The share does not show up in the Windows File Browser or in the list of shares when you run the `net view \\server /all` command.
+
+ > [!IMPORTANT]
+ > The access-based enumeration and non-browsable shares features are currently in preview. If this is your first time using either, refer to the steps in [Before you begin](#before-you-begin) to register the features.
+ * Customize **Unix Permissions** as needed to specify change permissions for the mount path. The setting does not apply to the files under the mount path. The default setting is `0770`. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users. Registration requirement and considerations apply for setting **Unix Permissions**. Follow instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
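As a local, non-Azure illustration of what the default `0770` mode grants (standard Linux tools only; `mode_demo` is a throwaway directory name used for the demonstration):

```shell
# 0770: read, write, and execute for the owner and the group; no access for others.
mkdir -p mode_demo
chmod 0770 mode_demo
stat -c '%a' mode_demo   # prints 770
```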
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
na Previously updated : 02/23/2023 Last updated : 02/28/2023
This article describes requirements and considerations about [using the volume c
* There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume. * Cascading and fan in/out topologies aren't supported. * Configuring volume replication for source volumes created from snapshot isn't supported at this time.
-* After you set up cross-region replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until replication relationship and volume is deleted.
-* You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens.
-* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken.
-* You can't revert a source or destination volume of cross-region replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship.
+* After you set up cross-region replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You can't delete SnapMirror snapshots until replication relationship and volume is deleted.
+* You can't mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens.
+* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You can't delete manual snapshots for the destination volume until the replication relationship is broken.
+* You can revert a source or destination volume of a cross-region replication to a snapshot, provided the snapshot is newer than the most recent SnapMirror snapshot. Snapshots older than the SnapMirror snapshot can't be used for a volume revert operation. For more information, see [Revert a volume using snapshot revert](snapshots-revert-volume.md).
## Next steps * [Create volume replication](cross-region-replication-create-peering.md)
This article describes requirements and considerations about [using the volume c
* [Volume replication metrics](azure-netapp-files-metrics.md#replication) * [Delete volume replications or volumes](cross-region-replication-delete.md) * [Troubleshoot cross-region replication](troubleshoot-cross-region-replication.md)
+* [Revert a volume using snapshot revert using Azure NetApp Files](snapshots-revert-volume.md)
* [Test disaster recovery for Azure NetApp Files](test-disaster-recovery.md)
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 02/02/2023 Last updated : 02/28/2023 # SMB FAQs for Azure NetApp Files
Azure NetApp Files doesn't support using MMC to manage `Sessions` and `Open File
## How can I obtain the IP address of an SMB volume via the portal?
-Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** -> **mountTargets**.
+Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** > **mountTargets**.
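If you prefer the CLI over the portal, a sketch using hypothetical resource names (`myRG`, `myaccount`, `mypool`, `myvol`) that dumps the same mount-target information is:

```azurecli-interactive
# Show the volume's mount targets; the IP address field appears inside the
# mountTargets array (inspect the full output for the exact property name).
az netappfiles volume show \
  --resource-group myRG \
  --account-name myaccount \
  --pool-name mypool \
  --name myvol \
  --query mountTargets
```

This requires an authenticated `az login` session with access to the NetApp account.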
## Can an Azure NetApp Files SMB share act as a DFS Namespace (DFS-N) root?
-No. However, Azure NetApp Files SMB shares can serve as a DFS Namespace (DFS-N) folder target.
-To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Universal Naming Convention (UNC) mount path of the Azure NetApp Files SMB share by using the [DFS Add Folder Target](/windows-server/storage/dfs-namespaces/add-folder-targets#to-add-a-folder-target) procedure.
+No. However, Azure NetApp Files SMB shares can serve as a DFS Namespace (DFS-N) folder target.
+
+To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Universal Naming Convention (UNC) mount path of the Azure NetApp Files SMB share by using the [DFS Add Folder Target](/windows-server/storage/dfs-namespaces/add-folder-targets#to-add-a-folder-target) procedure.
+
+Also refer to [Use DFS-N and DFS Root Consolidation with Azure NetApp Files](use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files.md).
## Can the SMB share permissions be changed?
Azure NetApp Files supports modifying `SMB Shares` by using Microsoft Management
See [Modify SMB share permissions](azure-netapp-files-create-volumes-smb.md#modify-smb-share-permissions) for more information on this procedure.
+Azure NetApp Files also supports [access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) and [non-browsable shares](azure-netapp-files-create-volumes-smb.md#non-browsable-share) on SMB and dual-protocol volumes. You can enable these features during or after the creation of an SMB or dual-protocol volume.
+ ## Can I change the SMB share name after the SMB volume has been created? No. However, you can create a new SMB volume with the new share name from a snapshot of the SMB volume with the old share name.
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
For individual user quota using the NFS protocol, specify a value in the range of `0` to `4294967295`. * **Quota limit**:
- Specify the limit in the range of `4` to `1125899906842620`.
- Select `KiB`, `MiB`, `GiB`, or `TiB` from the pulldown.
+ Specify the limit in the range of `1` to `1125899906842620`.
+ Select `KiB`, `MiB`, `GiB`, or `TiB` from the pulldown. The minimum configurable quota limit is 4 KiB.
## Edit or delete quota rules
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Previously updated : 11/28/2022 Last updated : 02/21/2023 # Modify Active Directory connections for Azure NetApp Files
-Once you have [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. When modifying an Active Directory, not all configurations can be modified.
+Once you've [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. When you're modifying an Active Directory connection, not all configurations are modifiable.
## Modify Active Directory connections 1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
-1. In the **Edit Active Directory** window that appears, modify Active Directory connection configurations as needed. See [Options for Active Directory connections](#options-for-active-directory-connections) for an explanation of what fields can be modified.
+1. In the **Edit Active Directory** window that appears, modify Active Directory connection configurations as needed. For an explanation of what fields you can modify, see [Options for Active Directory connections](#options-for-active-directory-connections).
## Options for Active Directory connections
-|Field Name |What it is |Can it be modified? |Considerations & Impacts |Effect |
+|Field Name |What it is |Is it modifiable? |Considerations & Impacts |Effect |
|:-:|:--|:-:|:--|:--|
-| Primary DNS | Primary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution. |
+| Primary DNS | Primary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP is used for DNS resolution. |
| Secondary DNS | Secondary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution in case primary DNS fails. | | AD DNS Domain Name | The domain name of your Active Directory Domain Services that you want to join. | No | None | N/A |
-| AD Site Name | The site to which the domain controller discovery is limited. | Yes | This should match the site name in Active Directory Sites and Services. See footnote.* | Domain discovery will be limited to the new site name. If not specified, "Default-First-Site-Name" will be used. |
+| AD Site Name | The site to which the domain controller discovery is limited. | Yes | This should match the site name in Active Directory Sites and Services. See footnote.* | Domain discovery is limited to the new site name. If not specified, "Default-First-Site-Name" is used. |
| SMB Server (Computer Account) Prefix | Naming prefix for the computer account in Active Directory that Azure NetApp Files will use for the creation of new accounts. See footnote.* | Yes | Existing volumes need to be mounted again as the mount is changed for SMB shares and NFS Kerberos volumes.* | Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You'll need to remount existing SMB shares and NFS Kerberos volumes after renaming the SMB server prefix as the mount path will change. |
-| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server computer accounts will be created. `OU=second level`, `OU=first level`| No | If you are using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp Account. | Computer accounts will be placed under the OU specified. If not specified, the default of `OU=Computers` is used by default. |
-| AES Encryption | To take advantage of the strongest security with Kerberos-based communication, you can enable AES-256 and AES-128 encryption on the SMB server. | Yes | If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled, matching the capabilities enabled for your Active Directory. For example, if your Active Directory has only AES-128 enabled, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.* | Enable AES encryption for Active Directory Authentication |
-| LDAP Signing | This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controller. | Yes | LDAP signing to Require Signing in group policy* | This provides ways to increase the security for communication between LDAP clients and Active Directory domain controllers. |
-| Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. |
-| LDAP over TLS | If enabled, LDAP over TLS will be configured to support secure LDAP communication to active directory. | Yes | None | If LDAP over TLS is enabled and if the server root CA certificate is already present in the database, then LDAP traffic is secured using the CA certificate. If a new certificate is passed in, that certificate will be installed. |
+| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server computer accounts will be created. `OU=second level`, `OU=first level`| No | If you're using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp Account. | Computer accounts are placed under the specified OU. If not specified, the default of `OU=Computers` is used. |
+| AES Encryption | To take advantage of the strongest security with Kerberos-based communication, you can enable AES-256 and AES-128 encryption on the SMB server. | Yes | If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled, matching the capabilities enabled for your Active Directory. For example, if your Active Directory has only AES-128 enabled, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory doesn't have any Kerberos encryption capability, Azure NetApp Files uses DES by default.* | Enable AES encryption for Active Directory Authentication |
+| LDAP Signing | This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controller. | Yes | LDAP signing to Require Signing in group policy* | This option provides ways to increase the security for communication between LDAP clients and Active Directory domain controllers. |
+| Allow local NFS users with LDAP | If enabled, this option manages access for local users and LDAP users. | Yes | This option allows access to local users. It's not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option allows access to local users and LDAP users. If your configuration requires access for only LDAP users, you must disable this option. |
+| LDAP over TLS | If enabled, LDAP over TLS is configured to support secure LDAP communication to active directory. | Yes | None | If you've enabled LDAP over TLS and if the server root CA certificate is already present in the database, then the CA certificate secures LDAP traffic. If a new certificate is passed in, that certificate will be installed. |
| Server root CA Certificate | When LDAP over SSL/TLS is enabled, the LDAP client is required to have base64-encoded Active Directory Certificate Service's self-signed root CA certificate. | Yes | None* | LDAP traffic secured with new certificate only if LDAP over TLS is enabled |
-| Encrypted SMB connections to Domain Controller | This specifies whether encryption should be used for communication between SMB server and domain controller. See [Create Active Directory connections](create-active-directory-connections.md#encrypted-smb-dc) for more details on using this feature. | Yes | SMB, Kerberos, and LDAP enabled volume creation cannot be used if the domain controller does not support SMB3 | Only SMB3 will be used for encrypted domain controller connections. |
-| Backup policy users | You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. |
-| Administrators | Specify users or groups that will be given administrator privileges on the volume | Yes | None | User account will receive administrator privileges |
+| LDAP search scope | See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) | Yes | - | - |
+| Preferred server for LDAP client | You can designate up to two AD servers for the LDAP client to attempt to connect to first. See [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md#ad-ds-ldap-discover) | Yes | None* | Can help avoid a timeout when the LDAP client attempts to connect to the AD server. |
+| Encrypted SMB connections to Domain Controller | This option specifies whether encryption should be used for communication between SMB server and domain controller. See [Create Active Directory connections](create-active-directory-connections.md#encrypted-smb-dc) for more details on using this feature. | Yes | You can't create SMB, Kerberos, or LDAP enabled volumes if the domain controller doesn't support SMB3 | Only SMB3 is used for encrypted domain controller connections. |
+| Backup policy users | You can include more accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. For more information, see [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection). | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. |
+| Administrators | Specify users or groups that are granted administrator privileges on the volume | Yes | None | The user account receives administrator privileges |
| Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
-| Password | Password of the Active Directory domain administrator | Yes | None* <br></br> Password cannot exceed 64 characters. | Credential change to contact DC |
+| Password | Password of the Active Directory domain administrator | Yes | None* <br></br> Password can't exceed 64 characters. | Credential change to contact DC |
| Kerberos Realm: AD Server Name | The name of the Active Directory machine. This option is only used when creating a Kerberos volume. | Yes | None* | | | Kerberos Realm: KDC IP | Specifies the IP address of the Kerberos Distribution Center (KDC) server. KDC in Azure NetApp Files is an Active Directory server | Yes | None | A new KDC IP address will be used | | Region | The region where the Active Directory credentials are associated | No | None | N/A | | User DN | User domain name, which overrides the base DN for user lookups Nested userDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format.​ | Yes | None* | User search scope gets limited to User DN instead of base DN. | | Group DN | Group domain name. groupDN overrides the base DN for group lookups. Nested groupDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format.​ | Yes | None* | Group search scope gets limited to Group DN instead of base DN. | | Group Membership Filter | The custom LDAP search filter to be used when looking up group membership from LDAP server.​ `groupMembershipFilter` can be specified with the `(gidNumber=*)` format. | Yes | None* | Group membership filter will be used while querying group membership of a user from LDAP server. |
-| Security Privilege Users | You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the Security privilege users field. When you add the SQL Server installer's account to Security privilege users, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller. For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right).* | Allows non-administrator accounts to use SQL severs on top of ANF volumes. |
+| Security Privilege Users | You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the Security privilege users field. When you add the SQL Server installer's account to Security privilege users, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it can't contact the domain controller. See [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right) for more information about `SeSecurityPrivilege` and SQL Server.* | Allows non-administrator accounts to use SQL Server on top of ANF volumes. |
**\*There is no impact on a modified entry only if the modifications are entered correctly. If you enter data incorrectly, users and applications will lose access.** ## Next Steps * [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md)
-* [Configure ADDS LDAP with extended groups for NFS](configure-ldap-extended-groups.md)
-* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [Configure AD DS LDAP with extended groups for NFS](configure-ldap-extended-groups.md)
+* [Configure AD DS LDAP over TLS](configure-ldap-over-tls.md)
* [Create and manage Active Directory connections](create-active-directory-connections.md)
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
na Previously updated : 12/06/2022 Last updated : 02/22/2023
![Screenshot that shows the Restore New Volume menu.](../media/azure-netapp-files/azure-netapp-files-snapshot-restore-to-new-volume.png)
-3. In the Create a Volume window, provide information for the new volume:
- * **Name**
- Specify the name for the volume that you're creating.
-
- The name must be unique within a resource group. It must be at least three characters long. It can use any alphanumeric characters.
+3. In the **Create a Volume** page, provide information for the new volume.
- * **Quota**
- Specify the amount of logical storage that you want to allocate to the volume.
+ The new volume uses the same protocol that the snapshot uses.
+ For information about the fields in the Create a Volume page, see:
+ * [Create an NFS volume](azure-netapp-files-create-volumes.md)
+ * [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+ * [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+
+ By default, the new volume includes a reference to the snapshot that was used for the restore operation from the original volume from Step 2, referred to as the *base snapshot*. This base snapshot does *not* consume any additional space because of [how snapshots work](snapshots-introduction.md). If you don't want the new volume to contain this base snapshot, select **Delete base snapshot** during the new volume creation.
- ![Screenshot that shows the Create a Volume window.](../media/azure-netapp-files/snapshot-restore-new-volume.png)
+ :::image type="content" source="../media/azure-netapp-files/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot.":::
4. Select **Review+create**. Select **Create**.
- The new volume uses the same protocol that the snapshot uses.
- The new volume to which the snapshot is restored appears in the Volumes page.
- The snapshot used to create the new volume will also be present on the new volume.
+ The new volume restored from the snapshot appears on the Volumes page.
+ ## Next steps
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
na Previously updated : 03/18/2022 Last updated : 02/28/2023
The [snapshot](snapshots-introduction.md) revert functionality enables you to qu
You can find the Revert Volume option in the Snapshots menu of a volume. After you select a snapshot for reversion, Azure NetApp Files reverts the volume to the data and timestamps that it contained when the selected snapshot was taken.
+The revert functionality is also available in configurations with volume replication relationships.
+ > [!IMPORTANT] > Active filesystem data and snapshots that were taken after the selected snapshot will be lost. The snapshot revert operation will replace *all* the data in the targeted volume with the data in the selected snapshot. You should pay attention to the snapshot contents and creation date when you select a snapshot. You cannot undo the snapshot revert operation. ## Considerations * Reverting a volume using snapshot revert is not supported on [Azure NetApp Files volumes that have backups](backup-requirements-considerations.md). -
+* In configurations with a volume replication relationship, a SnapMirror snapshot is created to synchronize between the source and destination volumes. This snapshot is created in addition to any user-created snapshots. **When reverting a source volume with an active volume replication relationship, only snapshots that are more recent than this SnapMirror snapshot can be used in the revert operation.**
## Steps
-1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to use for the revert operation. Select **Revert volume**.
+1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to use for the revert operation. Select **Revert volume**.
![Screenshot that describes the right-click menu of a snapshot.](../media/azure-netapp-files/snapshot-right-click-menu.png)
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
na Previously updated : 08/05/2022 Last updated : 02/21/2023 # Troubleshoot volume errors for Azure NetApp Files
This article describes error messages and resolutions that can help you troubles
## Errors for LDAP volumes | Error conditions | Resolutions |
-|-|-|
+|-||
| Error when creating an SMB volume with ldapEnabled as true: <br> `Error Message: ldapEnabled option is only supported with NFS protocol volume.` | You can't create an SMB volume with LDAP enabled. <br> Create SMB volumes with LDAP disabled. |
| Error when updating the ldapEnabled parameter value for an existing volume: <br> `Error Message: ldapEnabled parameter is not allowed to update` | You can't modify the LDAP option setting after creating a volume. <br> Don't update the LDAP option setting on a created volume. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. |
| Error when creating an LDAP-enabled NFS volume: <br> `Could not query DNS server` <br> Sample error message: <br> `"log": time="2020-10-21 05:04:04.300" level=info msg=Res method=GET url=/v2/Volumes/070d0d72-d82c-c893-8ce3-17894e56cea3 x-correlation-id=9bb9e9fe-abb6-4eb5-a1e4-9e5fbb838813 x-request-id=c8032cb4-2453-05a9-6d61-31ca4a922d85 xresp="200: {\"created\":\"2020-10-21T05:02:55.000Z\",\"lifeCycleState\":\"error\",\"lifeCycleStateDetails\":\"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available.\",\"name\":\"smb1\",\"ownerId\":\"8c925a51-b913-11e9-b0de-9af5941b8ed0\",\"region\":\"westus2stage\",\"volumeId\":\"070d0d72-d82c-c893-8ce3-` | This error occurs because DNS is unreachable. <br> <ul><li> Check whether you've configured the correct site (site scoping) for Azure NetApp Files. </li><li> The reason that DNS is unreachable might be an incorrect DNS IP address or a networking issue. Check the DNS IP address entered in the AD connection to make sure that it's correct. </li><li> Make sure that the AD and the volume are in the same region and the same VNet. If they're in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> |
| Error when creating a volume from a snapshot: <br> `Aggregate does not exist` | Azure NetApp Files doesn't support provisioning a new, LDAP-enabled volume from a snapshot that belongs to an LDAP-disabled volume. <br> Try creating a new LDAP-disabled volume from the given snapshot. |
+| Only primary group IDs are seen even though the user also belongs to auxiliary groups. | This issue is caused by a query timeout: <br> - Use the [LDAP search scope option](configure-ldap-extended-groups.md). <br> - Use [preferred Active Directory servers for the LDAP client](create-active-directory-connections.md#preferred-server-ldap). |
+| `Error describing volume - Entry doesn't exist for username: <username>, please try with a valid username` | - Check whether the user is present on the LDAP server. <br> - Check whether the LDAP server is healthy. |
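One way to manually confirm that a username has an entry on the LDAP server is to run an LDAP search (for example, with `ldapsearch` or an LDAP library) using a properly escaped filter. The following Python sketch is not from this article; it assumes an AD-style schema keyed on `sAMAccountName` and shows only how such a filter can be built safely:

```python
def escape_ldap_filter(value: str) -> str:
    # Escape the special characters defined by RFC 4515 so that a
    # username can't alter the structure of the search filter.
    value = value.replace("\\", r"\5c")
    for ch, esc in (("*", r"\2a"), ("(", r"\28"), (")", r"\29"), ("\x00", r"\00")):
        value = value.replace(ch, esc)
    return value

def user_search_filter(username: str) -> str:
    # Filter you would pass to an LDAP search against the directory.
    return f"(&(objectClass=user)(sAMAccountName={escape_ldap_filter(username)}))"

print(user_search_filter("jdoe"))   # (&(objectClass=user)(sAMAccountName=jdoe))
print(user_search_filter("j*doe"))  # (&(objectClass=user)(sAMAccountName=j\2adoe))
```

If the search returns no entry for a valid user, that points at the directory contents or server health rather than the volume.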
## Errors for volume allocation
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Previously updated: 01/25/2023 Last updated: 02/21/2023

# Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Ensure that you meet the following requirements about the DNS configurations:
* Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers.
* Ensure that the PTR records for the AD DS domain controllers used by Azure NetApp Files have been created on the DNS servers.
* Azure NetApp Files supports standard and secure dynamic DNS updates. If you require secure dynamic DNS updates, ensure that secure updates are configured on the DNS servers.
-* If dynamic DNS updates are not used, you need to manually create an A record and a PTR record for the AD DS computer account(s) created in the AD DS **Organizational Unit** (specified in the Azure NetApp Files AD connection) to support Azure NetApp FIles LDAP Signing, LDAP over TLS, SMB, dual-protocol, or Kerberos NFSv4.1 volumes.
+* If dynamic DNS updates are not used, you need to manually create an A record and a PTR record for the AD DS computer account(s) created in the AD DS **Organizational Unit** (specified in the Azure NetApp Files AD connection) to support Azure NetApp Files LDAP Signing, LDAP over TLS, SMB, dual-protocol, or Kerberos NFSv4.1 volumes.
* For complex or large AD DS topologies, [DNS Policies or DNS subnet prioritization may be required to support LDAP enabled NFS volumes](#ad-ds-ldap-discover).

### Time source requirements
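When dynamic DNS updates aren't used, the manually created PTR record mentioned in the DNS requirements above must land in the correct reverse-lookup zone. A small Python sketch (not from this article, IPv4 assumed) that derives the reverse-pointer name for a given address:

```python
import ipaddress

def ptr_record_name(ip: str) -> str:
    # The PTR record for 10.0.0.4 lives at 4.0.0.10.in-addr.arpa.
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_record_name("10.0.0.4"))  # 4.0.0.10.in-addr.arpa
```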
Ensure that stale DNS records associated with the retired AD DS domain controller are removed from DNS.
A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS domain service (SRV) resource record for a list of all AD DS LDAP servers in the domain, not just the AD DS LDAP servers assigned to the AD DS site specified in the AD connection.
+In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/deploy/dns-policies-overview) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned.
+
+Alternatively, the AD DS LDAP server discovery process can be overridden by specifying up to two [preferred AD servers for the LDAP client](create-active-directory-connections.md#preferred-server-ldap).
+ > [!IMPORTANT]
-> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/deploy/dns-policies-overview) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
+> If Azure NetApp Files can't reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP-enabled volume fails.
### Consequences of incorrect or incomplete AD Site Name configuration
Azure NetApp Files SMB, dual-protocol, and NFSv4.1 Kerberos volumes support cross-region replication.
## Next steps

* [Create and manage Active Directory connections](create-active-directory-connections.md)
* [Modify Active Directory connections](modify-active-directory-connections.md)
-* [Enable ADDS LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+* [Enable AD DS LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
* [Errors for SMB and dual-protocol volumes](troubleshoot-volumes.md#errors-for-smb-and-dual-protocol-volumes)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated: 02/28/2023 Last updated: 03/01/2023

# What's new in Azure NetApp Files
-Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+
+## March 2023
+
+* [Active Directory support improvement](create-active-directory-connections.md#preferred-server-ldap) (Preview)
+
+ The Preferred server for LDAP client option allows you to submit the IP addresses of up to two Active Directory (AD) servers as a comma-separated list. Rather than sequentially contacting all of the discovered AD services for a domain, the LDAP client will contact the specified servers first.
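As a sketch of the input format described above (hypothetical client-side validation only, not the service's implementation): at most two IP addresses, comma-separated:

```python
import ipaddress

def parse_preferred_ldap_servers(value: str) -> list[str]:
    # Hypothetical validation: the option accepts the IP addresses of
    # up to two AD servers as a comma-separated list.
    servers = [s.strip() for s in value.split(",") if s.strip()]
    if len(servers) > 2:
        raise ValueError("at most two preferred AD servers are allowed")
    return [str(ipaddress.ip_address(s)) for s in servers]

print(parse_preferred_ldap_servers("10.0.0.4, 10.0.0.5"))  # ['10.0.0.4', '10.0.0.5']
```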
## February 2023
+* [Cross-region replication enhancement: snapshot revert on replication source volume](snapshots-revert-volume.md)
+
  When using cross-region replication, reverting a snapshot on a source or destination volume with an active replication configuration was initially not supported. Restoring the source volume from its latest local snapshot wasn't possible. Instead, you had to copy data back via the client using the `.snapshot` directory, use single-file snapshot restore, or break the replication to apply a volume revert. With this new feature, a snapshot revert on a replication source volume is possible, provided you select a snapshot that is newer than the latest SnapMirror snapshot. This capability enables data recovery (revert) from a snapshot while cross-region replication stays active, improving the data protection SLA.
+
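The eligibility rule above (only snapshots newer than the latest SnapMirror snapshot can be reverted to while replication stays active) can be sketched with a toy model; the snapshot names and timestamps are illustrative, not the service's API:

```python
from datetime import datetime

def revertible_snapshots(snapshots: dict[str, datetime],
                         latest_snapmirror: datetime) -> list[str]:
    # A revert is allowed only to a snapshot newer than the latest
    # SnapMirror (replication baseline) snapshot.
    return sorted(name for name, taken in snapshots.items()
                  if taken > latest_snapmirror)

snaps = {
    "hourly.0100": datetime(2023, 2, 21, 1, 0),
    "hourly.0300": datetime(2023, 2, 21, 3, 0),
}
print(revertible_snapshots(snaps, datetime(2023, 2, 21, 2, 0)))  # ['hourly.0300']
```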
+* [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) (Preview)
+
  Access-based enumeration (ABE) displays only the files and folders that a user has permission to access. If a user doesn't have Read (or equivalent) permissions for a folder, the Windows client hides the folder from the user's view. This new capability provides an additional layer of security by displaying only the files and folders a user has access to, hiding file and folder information the user has no access to. You can now enable ABE on Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) and [dual-protocol](create-volumes-dual-protocol.md#access-based-enumeration) (with NTFS security style) volumes.
+
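Conceptually, ABE filters a directory listing down to entries the user can read. A toy Python model of that filtering (illustrative only; real permission checks are NTFS ACL evaluations on the server):

```python
def visible_entries(entries: dict[str, set[str]], user_groups: set[str]) -> list[str]:
    # Hide any entry whose read-permitted principals don't intersect
    # the user's group memberships.
    return sorted(name for name, readers in entries.items() if readers & user_groups)

share = {
    "finance": {"FinanceUsers"},
    "public": {"Everyone"},
    "hr": {"HRUsers"},
}
print(visible_entries(share, {"Everyone", "FinanceUsers"}))  # ['finance', 'public']
```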
+* [Non-browsable shares](azure-netapp-files-create-volumes-smb.md#non-browsable-share) (Preview)
+
+ You can now configure Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#non-browsable-share) or [dual-protocol](create-volumes-dual-protocol.md#non-browsable-share) volumes as non-browsable. This new feature prevents the Windows client from browsing the share, and the share does not show up in the Windows File Explorer. This new capability provides an additional layer of security by not displaying shares that are configured as non-browsable. Users who have access to the share will maintain access.
+
+* Option to **delete base snapshot** when you [restore a snapshot to a new volume using Azure NetApp Files](snapshots-restore-new-volume.md)
+
  By default, the new volume includes a reference to the snapshot that was used for the restore operation, referred to as the *base snapshot*. If you don't want the new volume to contain this base snapshot, you can select the **Delete base snapshot** option during volume creation.
+ * The [Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md) features are now generally available (GA). You no longer need to register the features before using them.
Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments.
- AzAcSnap 7 is being released with the following fixes and improvements:
+ The AzAcSnap 7 release includes the following fixes and improvements:
* Shortening of snapshot names
* Restore (`-c restore`) improvements
* Test (`-c test`) improvements
* [Cross-zone replication](create-cross-zone-replication.md) (Preview)
- With Azure's push towards the use of availability zones (AZs) the need for storage-based data replication is equally increasing. Azure NetApp Files now supports [cross-zone replication](cross-zone-replication-introduction.md). With this new in-region replication capability - by combining it with the new availability zone volume placement feature - you can replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone to another in a fast and cost-effective way.
+ With Azure's push toward the use of availability zones (AZs), the need for storage-based data replication is equally increasing. Azure NetApp Files now supports [cross-zone replication](cross-zone-replication-introduction.md). By combining this new in-region replication capability with the new availability zone volume placement feature, you can replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone to another in a fast and cost-effective way.
  Cross-zone replication helps you protect your data from unforeseeable zone failures without the need for host-based data replication. Cross-zone replication minimizes the amount of data required to replicate across the zones, limiting the data transfers required, and shortens the replication time so you can achieve a smaller Recovery Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs and is therefore highly cost-effective.
* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview)
- With the Encrypted SMB connections to Active Directory Domain Controller capability you can now specify whether encryption should be used for communication between SMB server and domain controller in Active Directory connections. When enabled, only SMB3 will be used for encrypted domain controller connections.
+ With the Encrypted SMB connections to Active Directory Domain Controller capability, you can now specify whether to use encryption for communication between the SMB server and domain controller in Active Directory connections. When enabled, only SMB3 is used for encrypted domain controller connections.
## October 2022
* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) (Preview)
- [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files will enable you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this will provide more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
+ [Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files enables you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this capability provides more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
* [SMB Continuous Availability (CA) shares support for Citrix App Layering](enable-continuous-availability-existing-smb.md) (Preview)
- [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. App Layering can be used to provide dynamic access application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency to events of storage service maintenance, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix). Custom applications are not supported with SMB Continuous Availability.
+ [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. You can use App Layering to provide dynamic access application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency to events of storage service maintenance, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix). Azure NetApp Files doesn't support custom applications with SMB Continuous Availability.
## April 2022
* [Single-file snapshot restore](snapshots-restore-file-single.md) (Preview)
- Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach will drastically reduce RTO and network resource usage when restoring large files.
+ Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach drastically reduces RTO and network resource usage when restoring large files.
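The client-side self-restore described above reads a file's point-in-time copy from the hidden snapshot directory. A minimal sketch of the Linux/NFS path mapping (the snapshot name is illustrative):

```python
from pathlib import PurePosixPath

def snapshot_copy_path(volume_root: str, file_path: str, snapshot: str) -> str:
    # /mnt/vol1/data/db.bak under snapshot "daily.1" is read from
    # /mnt/vol1/.snapshot/daily.1/data/db.bak
    rel = PurePosixPath(file_path).relative_to(volume_root)
    return str(PurePosixPath(volume_root) / ".snapshot" / snapshot / rel)

print(snapshot_copy_path("/mnt/vol1", "/mnt/vol1/data/db.bak", "daily.1"))
```

Copying from that path still moves data over the network twice, which is why single-file snapshot restore on the service side is faster for large files.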
* Features that are now generally available (GA)
* [Standard network features](configure-network-features.md) (Preview)
- Azure NetApp Files now supports **Standard** network features for volumes that customers have been asking for since the inception. This capability has been made possible by innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through a variety of features for a seamless and consistent experience with security posture of all their workloads including Azure NetApp Files.
+ Azure NetApp Files now supports **Standard** network features for volumes, a capability that customers have been asking for since the service's inception. This capability is a result of innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through various features, for a seamless and consistent experience and security posture across all workloads, including Azure NetApp Files.
  You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets:

  * Increased IP limits for the VNets with Azure NetApp Files volumes, on par with VMs
* [Azure NetApp Files backup](backup-introduction.md) (Preview)
- Azure NetApp Files online snapshots are now enhanced with backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.
+ Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.
- Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. They can be restored to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
+ Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. You can restore them to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Recovery Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
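The incremental vaulting just described (copy only blocks changed since the previously vaulted snapshot) can be modeled with a toy block map; this is a conceptual sketch, not ONTAP's actual on-disk format:

```python
def changed_blocks(prev: dict[int, bytes], curr: dict[int, bytes]) -> dict[int, bytes]:
    # Copy only blocks that are new or differ from the previously
    # vaulted snapshot; unchanged blocks aren't re-transferred.
    return {i: data for i, data in curr.items() if prev.get(i) != data}

prev = {0: b"aaaa", 1: b"bbbb"}
curr = {0: b"aaaa", 1: b"BBBB", 2: b"cccc"}
print(sorted(changed_blocks(prev, curr)))  # [1, 2]
```

Because each vaulted snapshot is still represented in full (earlier blocks remain addressable in the vault), a restore reads one snapshot directly rather than replaying a chain of incrementals.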
For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md).

* [**Administrators**](create-active-directory-connections.md#create-an-active-directory-connection) option in Active Directory connections (Preview)
- The Active Directory connections page now includes an **Administrators** field. You can specify users or groups that will be given administrator privileges on the volume.
+ The Active Directory connections page now includes an **Administrators** field. You can specify users or groups that will have administrator privileges on the volume.
## August 2021
* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
- To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account could be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, a NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection will be visible in all NetApp accounts that are under the same subscription and same region.
+ To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account could be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, an NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection is visible in all NetApp accounts that are under the same subscription and same region.
## May 2021
* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (AD DS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications are not encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with Active Directory Domain Services (AD DS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between Azure NetApp Files and the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
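The difference between plain LDAP and LDAP over TLS shows up in the connection scheme and conventional port. A trivial sketch (the ports are the conventional defaults, not Azure NetApp Files configuration guidance):

```python
def ldap_uri(host: str, use_tls: bool) -> str:
    # LDAPS conventionally uses TCP 636; plain LDAP uses TCP 389,
    # where a simple bind sends credentials unencrypted.
    return f"ldaps://{host}:636" if use_tls else f"ldap://{host}:389"

print(ldap_uri("dc1.contoso.local", True))   # ldaps://dc1.contoso.local:636
print(ldap_uri("dc1.contoso.local", False))  # ldap://dc1.contoso.local:389
```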
* Support for throughput [metrics](azure-netapp-files-metrics.md)
This behavior change is a result of the following key requests indicated by many users:
- * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has now been corrected.
+ * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has been corrected.
 * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "run-away processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
 * Users want to see and maintain a direct correlation between volume size (quota) and performance. The previous behavior allowed for (implicit) over-subscription of a volume (capacity) and capacity pool auto-grow. As such, users could not make a direct correlation until the volume quota had been actively set or reset. This behavior has now been corrected.
* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
- [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. FSLogix solutions can also be used to create more portable computing sessions when you use physical devices. FSLogix can be used to provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
+ [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. You can also use FSLogix solutions to create more portable computing sessions when you use physical devices. FSLogix can provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
- You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might impact the client (CPU overhead for encrypting and decrypting messages). It might also impact storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
+ You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption can't access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might impact the client (CPU overhead for encrypting and decrypting messages). It might also impact storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
- SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. This feature provides significant performance improvements for SQL Server. It also provides scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
+ SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Azure NetApp Files doesn't currently support Linux SQL Server. This feature provides significant performance improvements for SQL Server. It also provides scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
* [Automatic resizing of a cross-region replication destination volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-cross-region-replication-destination-volume)
* [Snapshot revert](snapshots-revert-volume.md)
- The snapshot revert functionality enables you to quickly revert a volume to the state it was in when a particular snapshot was taken. In most cases, reverting a volume is much faster than restoring individual files from a snapshot to the active file system. It is also more space efficient compared to restoring a snapshot to a new volume.
+ The snapshot revert functionality enables you to quickly revert a volume to the state it was in when a particular snapshot was taken. In most cases, reverting a volume is faster than restoring individual files from a snapshot to the active file system. It is also more space efficient compared to restoring a snapshot to a new volume.
## September 2020

* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) (Preview)
- Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way. It helps you protect your data from unforeseeable regional failures. Azure NetApp Files cross-region replication leverages NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
+ Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way. It helps you protect your data from unforeseeable regional failures. Azure NetApp Files cross-region replication uses NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
Learn how to use Python with [Azure Resource Manager](overview.md) to manage you
* azure-mgmt-resource
* azure-mgmt-storage
+ If you have older versions of these packages already installed in your virtual environment, you may need to update them with `pip install --upgrade {package-name}`.
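As a quick sanity check before running the samples, you can verify that the packages are importable (a hedged sketch; the helper name `is_installed` is made up for this example, and the package names mirror the two listed above):

```python
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if the module can be located without fully importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (for example, `azure`) is missing.
        return False

# The two packages the samples in this article rely on.
for package in ("azure.mgmt.resource", "azure.mgmt.storage"):
    status = "installed" if is_installed(package) else "missing"
    print(f"{package}: {status}")
```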
+ * The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate.
* An environment variable with your Azure subscription ID. To get your Azure subscription ID, use:
for rg in rg_list:
    print(rg.name)
```
-To get one resource group, provide the name of the resource group.
+To get one resource group, use [ResourceManagementClient.resource_groups.get](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcegroupsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcegroupsoperations-get) and provide the name of the resource group.
```python
import os
For more information about how Azure Resource Manager orders the deletion of res
## Deploy resources
-You can deploy Azure resources by using Python classes or by deploying an Azure Resource Manager (ARM) template.
+You can deploy Azure resources by using Python classes or by deploying an Azure Resource Manager template (ARM template).
+
+### Deploy resources by using Python classes
-The following example creates a storage account. The name you provide for the storage account must be unique across Azure.
+The following example creates a storage account by using [StorageManagementClient.storage_accounts.begin_create](/python/api/azure-mgmt-storage/azure.mgmt.storage.v2022_09_01.operations.storageaccountsoperations#azure-mgmt-storage-v2022-09-01-operations-storageaccountsoperations-begin-create). The name for the storage account must be unique across Azure.
```python
import os
storage_account_result = storage_client.storage_accounts.begin_create(
)
```
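Storage account names must be globally unique, 3 to 24 characters long, and use only lowercase letters and digits. As a small illustrative sketch (the helper name and prefix are made up for this example, and real uniqueness is still validated by Azure), you could generate candidate names like this:

```python
import random
import string

def candidate_storage_name(prefix: str = "storsample") -> str:
    """Generate a candidate storage account name.

    This only enforces the documented character rules (3-24 chars,
    lowercase letters and digits); Azure still checks global uniqueness.
    """
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=8))
    return (prefix + suffix)[:24]

print(candidate_storage_name())
```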
-To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update).
+### Deploy resources by using an ARM template
+
+To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example requires a local template named `storage.json`.
```python
import os
rg_deployment_result = resource_client.deployments.begin_create_or_update(
)
```
-The following example shows the ARM template you're deploying:
+The following example shows the ARM template named `storage.json` that you're deploying:
```json
{
cloud-shell Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-powershell.md
For a detailed list of differences, see [PowerShell differences on non-Windows p
![Screenshot showing how to select the PowerShell environment for the Azure Cloud Shell.][08]
+## Register your subscription with Azure Cloud Shell
+
+Azure Cloud Shell needs access to manage resources. Access is provided through namespaces that must
+be registered in your subscription. Use the following commands to register the Microsoft.CloudShell
+resource provider namespace in your subscription:
+
+```azurepowershell-interactive
+Select-AzSubscription -SubscriptionId <SubscriptionId>
+Register-AzResourceProvider -ProviderNamespace Microsoft.CloudShell
+```
+
+> [!NOTE]
+> You only need to register the namespace once per subscription.
+ ## Run PowerShell commands

Run regular PowerShell commands in the Cloud Shell, such as:
cloud-shell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md
This document details how to use Bash in Azure Cloud Shell in the [Azure portal]
> [!TIP]
> You are automatically authenticated for Azure CLI in every session.
-### Select the Bash environment
+### Register your subscription with Azure Cloud Shell
-Check that the environment drop-down from the left-hand side of shell window says `Bash`.
+Azure Cloud Shell needs access to manage resources. Access is provided through namespaces that must
+be registered in your subscription. Use the following commands to register the Microsoft.CloudShell
+resource provider namespace in your subscription:
-![Screenshot showing how to select the Bash environment for the Azure Cloud Shell.][04]
+```azurecli-interactive
+az account set --subscription <Subscription Name or Id>
+az provider register --namespace Microsoft.CloudShell
+```
+
+> [!NOTE]
+> You only need to register the namespace once per subscription.
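Behind the scenes, provider registration is a management-plane REST call. The following Python sketch only builds the URL shape of that endpoint for illustration (no request is sent; the subscription ID is a placeholder, and the `api-version` value is an assumption that may differ from what the CLI actually uses):

```python
# Build the ARM endpoint used to register a resource provider namespace.
# Nothing is sent here; this only illustrates the URL shape.
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
namespace = "Microsoft.CloudShell"

register_url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/providers/{namespace}/register"
    "?api-version=2021-04-01"  # assumed version; check the ARM REST reference
)
print(register_url)
```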
### Set your subscription
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
Conversation transcription uses two types of inputs:
- **Multi-channel audio stream:** For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
- **User voice samples:** Conversation transcription needs user profiles in advance of the conversation for speaker identification. Collect audio recordings from each user, and then send the recordings to the [signature generation service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
+> [!NOTE]
+> Single-channel audio configuration for conversation transcription is currently only available in private preview.
+ User voice samples for voice signatures are required for speaker identification. Speakers who don't have voice samples are recognized as *unidentified*. Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see the following example). The transcription output then shows speakers as, for example, *Guest_0* and *Guest_1*, instead of recognizing them as pre-enrolled specific speaker names.

```csharp
cognitive-services Language Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-learning-overview.md
+
+ Title: Language learning with Azure Cognitive Service for Speech
+
+description: Azure Cognitive Services for Speech can be used to learn languages.
+ Last updated : 02/23/2023
+# Language learning with Azure Cognitive Service for Speech
+
+The Azure Cognitive Service for Speech platform is a comprehensive collection of technologies and services aimed at accelerating the incorporation of speech into applications. Azure Cognitive Services for Speech can be used to learn languages.
+## Pronunciation Assessment
+
+The [Pronunciation Assessment](pronunciation-assessment-tool.md) feature is designed to provide instant feedback to users on the accuracy, fluency, and prosody of their speech when learning a new language, so that they can speak and present in a new language with confidence. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+The Pronunciation Assessment feature offers several benefits for educators, service providers, and students.
+- For educators, it provides instant feedback, eliminates the need for time-consuming oral language assessments, and offers consistent and comprehensive assessments.
+- For service providers, it offers high-performance real-time assessment capabilities and a worldwide Speech service to support growing global businesses.
+- For students and learners, it provides a convenient way to practice and receive feedback, authoritative scoring to compare with native pronunciation, and help following the exact text order for long sentences or full documents.
+
+## Speech-to-Text
+
+Azure [Speech-to-Text](speech-to-text.md) supports real-time language identification for multilingual language learning scenarios, helping human-to-human interaction with better understanding and readable context.
+
+## Text-to-Speech
+
+[Text-to-Speech](text-to-speech.md) prebuilt neural voices can read out learning materials natively and empower self-service learning. A broad portfolio of [languages and voices](language-support.md?tabs=tts) is supported for AI teacher and content read-aloud capabilities, and more. Microsoft is continuously working on bringing new languages to the world.
+[Custom Neural Voice](custom-neural-voice.md) is available for you to create a customized synthetic voice for your applications. Education companies are using this technology to personalize language learning, by creating unique characters with distinct voices that match the culture and background of their target audience.
+## Next steps
+
+* [How to use pronunciation assessment](how-to-pronunciation-assessment.md)
+* [What is Speech-to-Text](speech-to-text.md)
+* [What is Text-to-Speech](text-to-speech.md)
+* [What is Custom Neural Voice](custom-neural-voice.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Common scenarios for speech include:
- [Captioning](./captioning-concepts.md): Learn how to synchronize captions with your input audio, apply profanity filters, get partial results, apply customizations, and identify spoken languages for multilingual scenarios.
- [Audio Content Creation](text-to-speech.md#more-about-neural-text-to-speech-features): You can use neural voices to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks and enhance in-car navigation systems.
- [Call Center](call-center-overview.md): Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case.
+- [Language learning](language-learning-overview.md): Provide pronunciation assessment feedback to language learners, support real-time transcription for remote learning conversations, and read aloud teaching materials with neural voices.
- [Voice assistants](voice-assistants.md): Create natural, humanlike conversational interfaces for your applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation.

Microsoft uses Speech for many scenarios, such as captioning in Teams, dictation in Office 365, and Read Aloud in the Edge browser.
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/language-studio.md
+
+ Title: Try Document Translation in Language Studio
+description: "Document Translation in Azure Cognitive Services Language Studio."
+ Last updated : 02/27/2023
+recommendations: false
+# Document Translation in Language Studio (Preview)
+
+> [!IMPORTANT]
+> Document Translation in Language Studio is currently in Public Preview. Features, approaches, and processes may change prior to General Availability (GA) based on user feedback.
+
+ Document Translation in [**Azure Cognitive Services Language Studio**](https://language.cognitive.azure.com/home) is a no-code user interface that lets you translate documents from local storage or Azure Blob Storage interactively.
+
+## Prerequisites
+
+If you or an administrator have previously set up a Translator resource with a **system-assigned managed identity**, enabled a **Storage Blob Data Contributor** role assignment, and created an Azure Blob storage account, you can skip this section and [**Get started**](#get-started) right away.
+
+> [!NOTE]
+>
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+Document Translation in Language Studio requires the following resources:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource) with [**system-assigned managed identity**](how-to-guides/create-use-managed-identities.md#enable-a-system-assigned-managed-identity) enabled and a [**Storage Blob Data Contributor**](how-to-guides/create-use-managed-identities.md#grant-access-to-your-storage-account) role assigned. For more information, *see* [**Managed identities for Document Translation**](how-to-guides/create-use-managed-identities.md). Also, make sure the region and pricing sections are completed as follows:
+
+ * **Resource Region**. For this project, choose a **non-global** region. For Document Translation, [system-assigned managed identity](how-to-guides/create-use-managed-identities.md) isn't supported in the global region.
+ * **Pricing tier**. Select Standard S1 or D3 to try the service. Document Translation isn't supported in the free tier.
+
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). An active Azure blob storage account is required to use Document Translation in the Language Studio.
+
+Now that you've completed the prerequisites, let's start translating documents!
+
+## Get started
+
+At least one **source document** is required. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx).
+
+1. Navigate to [Language Studio](https://language.cognitive.azure.com/home).
+
+1. If you're using the Language Studio for the first time, a **Select an Azure resource** pop-up screen appears. Make the following selections:
+
+ * **Azure directory**.
+ * **Azure subscription**.
+ * **Resource type**. Choose **Translator**.
+ * **Resource name**. The resource you select must have [**managed identity enabled**](how-to-guides/create-use-managed-identities.md).
+
+ :::image type="content" source="media/language-studio/choose-azure-resource.png" alt-text="Screenshot of the language studio choose your Azure resource dialog window.":::
+
+ > [!TIP]
+ > You can update your selected directory and resource by selecting the Translator settings icon located in the left navigation section.
+
+1. Navigate to Language Studio and select the **Document translation** tile:
+
+ :::image type="content" source="media/language-studio/welcome-home-page.png" alt-text="Screenshot of the language studio home page.":::
+
+1. If you're using the Document Translation feature for the first time, start with the **Initial Configuration** to select your **Azure Translator resource** and **Document storage** account:
+
+ :::image type="content" source="media/language-studio/initial-configuration.png" alt-text="Screenshot of the initial configuration page.":::
+
+1. In the **Job** section, choose the language to **Translate from** (source) or keep the default **Auto-detect language** and select the language to **Translate to** (target). You can select a maximum of 10 target languages. Once you've selected your source and target language(s), select **Next**:
+
+ :::image type="content" source="media/language-studio/basic-information.png" alt-text="Screenshot of the language studio basic information page.":::
+
+## File location and destination
+
+Your source and target files can be located in your local environment or your Azure Blob storage [container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). Follow the steps to select where to retrieve your source and store your target files:
+
+### Choose a source file location
+
+#### [**Local**](#tab/local-env)
+
+ 1. In the **files and destination** section, choose the files for translation by selecting the **Upload local files** button.
+
+ 1. Next, select **&#x2795; Add file(s)**, choose the file(s) for translation, then select **Next**:
+
+ :::image type="content" source="media/language-studio/upload-file.png" alt-text="Screenshot of the select files for translation page.":::
+
+#### [**Azure blob storage**](#tab/blob-storage)
+
+1. In the **files and destination** section, choose the files for translation by selecting the **Select for Blob storage** button.
+
+1. Next, choose your *source* **Blob container**, find and select the file(s) for translation, then select **Next**:
+
+ :::image type="content" source="media/language-studio/select-blob-container.png" alt-text="Screenshot of select files from your blob container.":::
+### Choose a target file destination
+
+#### [**Local**](#tab/local-env)
+
+While still in the **files and destination** section, select **Download translated file(s)**. Once you have made your choice, select **Next**:
+
+ :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of the select destination for target files page.":::
+
+#### [**Azure blob storage**](#tab/blob-storage)
+
+1. While still in the **files and destination** section, select **Upload to Azure blob storage**.
+1. Next, choose your *target* **Blob container** and select **Next**:
+
+ :::image type="content" source="media/language-studio/target-file-upload.png" alt-text="Screenshot of target file upload drop-down menu.":::
+### Optional selections and review
+
+1. (Optional) You can add **additional options** for custom translation and/or a glossary file. If you don't require these options, just select **Next**.
+
+1. On the **Review and finish** page, check to make sure that your selections are correct. If not, you can go back. If everything looks good, select the **Start translation job** button.
+
+ :::image type="content" source="media/language-studio/start-translation.png" alt-text="Screenshot of the start translation job page.":::
+
+1. The **Job history** page contains the **Translation job id** and job status.
+
+ > [!NOTE]
+ > The list of translation jobs on the job history page includes all the jobs that were submitted through the chosen translator resource. If your colleague used the same translator resource to submit a job, you will see the status of that job on the job history page.
+
+ :::image type="content" source="media/language-studio/job-history.png" alt-text="Screenshot of the job history page.":::
+
+That's it! You now know how to translate documents using Azure Cognitive Services Language Studio.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>
+> [Use Document Translation REST APIs programmatically](how-to-guides/use-rest-api-programmatically.md)
cognitive-services Get Started With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/get-started-with-rest-api.md
zone_pivot_groups: programming-languages-set-translator
> [!NOTE]
>
-> * In most instances, when you create a Cognitive Service resource in the Azure portal, you have the option to create a multi-service key or a single-service key. However, Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
>
> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing - Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
>
For this project, you need a **source document** uploaded to your **source conta
## HTTP request
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service. The translated documents are listed in your target container.
+A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
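As a rough sketch of what such a POST body can look like (the container URLs are placeholders, the target language is chosen arbitrarily, and the field names follow the Document Translation batch request schema; check the REST reference for the full set of options):

```python
import json

# Placeholder SAS/container URLs; replace with your own storage containers.
batch_request = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://<storage>.blob.core.windows.net/source"},
            "targets": [
                {
                    "targetUrl": "https://<storage>.blob.core.windows.net/target-fr",
                    "language": "fr",
                }
            ],
        }
    ]
}

# Serialize the body for the POST request to the batches endpoint.
payload = json.dumps(batch_request)
print(payload)
```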
### Headers
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Previously updated : 12/17/2022 Last updated : 02/28/2023 <!-- markdownlint-disable MD024 -->
Translator is a language service that enables users to translate text and docume
Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+## February 2023
+
+[**Document Translation in Language Studio**](document-translation/language-studio.md) is now available for Public Preview. The feature provides a no-code user interface that lets you translate documents from local storage or Azure Blob Storage interactively.
+ ## November 2022 ### Custom Translator stable GA v2.0 release
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
+
+ Title: Connect Azure Communication Services to Azure Cognitive Services
+
+description: Provides a how-to guide for connecting ACS to Azure Cognitive Services.
+ Last updated : 02/15/2023
+# Connect Azure Communication Services with Azure Cognitive Services
+
+>[!IMPORTANT]
+>Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+>Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
+Azure Communication Services Call Automation APIs give developers the ability to steer and control ACS Telephony, VoIP, or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs, developers can use simple AI-powered APIs to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions and drive a more self-service model with customers, and use sentiment analysis to improve customer service overall. These content-specific APIs are orchestrated through **Azure Cognitive Services**, with support for customizing AI models, without developers needing to terminate media streams on their services and stream them back to Azure for AI functionality.
+
+All this is possible with one click: enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Cognitive Services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Azure Active Directory authentication.
+
+BYO Cognitive Services can be easily integrated into any application regardless of the programming language. When creating an Azure resource in the Azure portal, enable the BYO option and provide the URL to the Cognitive Services resource. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
+
+> [!NOTE]
+> This integration is only supported in limited regions for Azure Cognitive Services. For more information about which regions are supported, see the limitations section at the bottom of this document. When you create a new Azure Cognitive Services resource, we recommend that you create a multi-service Cognitive Services resource.
+
+## Common use cases
+
+### Build applications that can play and recognize speech
+
+With the ability to connect your Cognitive Services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural-sounding audio to users. Through the Cognitive Services connection, you can also use the Speech-to-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced by creating custom models within Cognitive Services that are bespoke to your domain and region: you can choose which languages are spoken and recognized, and use custom voices and custom models built on your experience.
+
+## Run time flow
+[![Run time flow](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox)
+
+## Azure portal experience
+You can also configure and bind your Communication Services and Cognitive Services through the Azure portal.
+
+### Add a Managed Identity to the ACS Resource
+
+1. Navigate to your ACS resource in the Azure portal.
+2. Select the Identity tab.
+3. Enable system-assigned identity. This action begins the creation of the identity; a pop-up notification appears notifying you that the request is being processed.
+
+[![Enable managed identity](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
+
+### Option 1: Add role from Azure Cognitive Services in the Azure portal
+1. Navigate to your Azure Cognitive Service resource.
+2. Select the "Access control (IAM)" tab.
+3. Click the "+ Add" button.
+4. Select "Add role assignments" from the menu.
+
+[![Add role from IAM](./media/add-role.png)](./media/add-role.png#lightbox)
+
+5. Choose the "Cognitive Services User" role to assign, then click "Next".
+
+[![Cognitive Services user](./media/cognitive-service-user.png)](./media/cognitive-service-user.png#lightbox)
+
+6. For the "Assign access to" field, choose "User, group, or service principal".
+7. Select "+ Select members"; a side pane opens.
+8. Choose your Azure Communication Services subscription from the "Subscriptions" drop-down menu and click "Select".
+
+[![Select ACS resource](./media/select-acs-resource.png)](./media/select-acs-resource.png#lightbox)
+
+9. Click "Review + assign" to assign the role to the managed identity.
+
+### Option 2: Add role through ACS Identity tab
+
+1. Navigate to your ACS resource in the Azure portal.
+2. Select the Identity tab.
+3. Click "Azure role assignments".
+
+[![ACS role assignment](./media/add-role-acs.png)](./media/add-role-acs.png#lightbox)
+
+4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab.
+5. For "Scope", select "Resource group".
+6. Select the "Subscription".
+7. Select the "Resource group" containing the Cognitive Services resource.
+8. For "Role", select "Cognitive Services User".
+
+[![ACS role information](./media/acs-roles-cognitive-services.png)](./media/acs-roles-cognitive-services.png#lightbox)
+
+9. Click "Save".
+
+Your Communication Service has now been linked to your Azure Cognitive Service resource.
+
+## Azure Cognitive Services regions supported
+
+This integration between Azure Communication Services and Azure Cognitive Services is currently supported only in the following regions:
+- westus
+- westus2
+- westus3
+- eastus
+- eastus2
+- centralus
+- northcentralus
+- southcentralus
+- westcentralus
+- westeurope
+
+## Next Steps
+- Learn about [playing audio](../../concepts/call-automation/play-ai-action.md) to callers using Text-to-Speech.
+- Learn about [gathering user input](../../concepts/call-automation/recognize-ai-action.md) with Speech-to-Text.
communication-services Play Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-ai-action.md
+
+ Title: Play audio in call
+
+description: Conceptual information about playing audio in a call using Call Automation and Azure Cognitive Services
++++ Last updated : 02/15/2023++++
+# Playing audio in calls
+
+The play action provided through the ACS Call Automation SDK allows you to play audio prompts to participants in a call. You can access this action through the server-side implementation of your application. You can play audio to call participants through one of two methods:
+- Provide ACS access to pre-recorded audio files in WAV format, with support for authentication.
+- Provide regular text that can be converted into speech output through the integration with Azure Cognitive Services.
+
+You can use the newly announced integration between [Azure Communication Services and Azure Cognitive Services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box, or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
+
+> [!NOTE]
+> ACS currently supports only WAV files formatted as mono-channel audio recorded at 16 kHz. You can create your own audio files using [Speech synthesis with the Audio Content Creation tool](../../../../articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md).
+
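As an illustration of the format described in the note above, here's a minimal sketch using only the Python standard library that writes a compliant file: mono audio at a 16 kHz sample rate. The 16-bit PCM sample width and the file name are assumptions for the example, not requirements stated in the text.

```python
import wave

# One second of silence in the format the note above describes:
# mono, 16 kHz sample rate. 16-bit PCM and the file name "prompt.wav"
# are illustrative assumptions.
SAMPLE_RATE = 16000
with wave.open("prompt.wav", "wb") as f:
    f.setnchannels(1)                         # mono
    f.setsampwidth(2)                         # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"\x00\x00" * SAMPLE_RATE)  # 1 second of silence
```

In practice you would generate the audio content itself with the Audio Content Creation tool linked above; this sketch only shows the container format.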
+## Prebuilt Neural Text to Speech voices
+Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural-sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. There are over 100 pre-built voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md).
+
+## Common use cases
+
+The play action can be used in many ways. Below are some examples of how developers may use the play action in their applications.
+
+### Announcements
+Your application might want to play some sort of announcement when a participant joins or leaves the call, to notify other users.
+
+### Self-serve customers
+
+In scenarios with IVRs and virtual assistants, you can use your application or bots to play audio prompts to callers. This prompt can take the form of a menu that guides the caller through their interaction.
+
+### Hold music
+The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
+
+### Playing compliance messages
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes".
+
+## Sample architecture for playing audio in call using Text-To-Speech
+
+![Play with AI](./media/play-ai.png)
+
+## Sample architecture for playing audio in a call
+
+![Screenshot of flow for play action.](./media/play-action.png)
+
+## Known limitations
+- Play action isn't enabled to work with Teams Interoperability.
+
+## Next steps
+- Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-ai-action.md) to users.
communication-services Recognize Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-ai-action.md
+
+ Title: Recognize Action
+
+description: Conceptual information gathering user voice input using Call Automation and Azure Cognitive Services
+++ Last updated : 02/15/2023++++
+# Gathering user input with Recognize action
+++
+With the release of the ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message that prompts the user for a response, which the application then recognizes and acts on with a corresponding action. Input from callers can be received in several ways, including DTMF (user input via the digits on their calling device), speech, or a combination of both.
+
+**Voice recognition with speech-to-text**
+
+The [Azure Communication Services integration with Azure Cognitive Services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time and transcribe spoken words into text. Out of the box, Microsoft uses a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pre-trained with dialects and phonetics representing a variety of common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
++
+**DTMF**
+
+Dual-tone multifrequency (DTMF) recognition is the process of understanding the tones generated when a telephone key is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario, or in some cases capture important information that the user provides via their phone's keypad.
+
+**DTMF events and their associated tones**
+
+|Event|Tone|
+|--|--|
+|0|Zero|
+|1|One|
+|2|Two|
+|3|Three|
+|4|Four|
+|5|Five|
+|6|Six|
+|7|Seven|
+|8|Eight|
+|9|Nine|
+|A|A|
+|B|B|
+|C|C|
+|D|D|
+|*|Asterisk|
+|#|Pound|
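The table above maps naturally onto a lookup in application code. The sketch below is purely illustrative — the function name and the menu actions are hypothetical, not part of any SDK — but it shows how received DTMF events might be translated into commands:

```python
# Map DTMF events to the tone names listed in the table above.
DTMF_TONES = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
    "A": "a", "B": "b", "C": "c", "D": "d",
    "*": "asterisk", "#": "pound",
}

def route_menu_choice(event: str) -> str:
    """Translate a single DTMF event into a hypothetical IVR menu action."""
    tone = DTMF_TONES.get(event)
    if tone is None:
        return "ignore"           # unrecognized input
    if tone == "pound":
        return "submit"           # e.g. end of account-number entry
    if tone == "asterisk":
        return "repeat-menu"
    return f"option-{tone}"
```

For example, a caller pressing `1` would be routed to `option-one`, while `#` could end digit collection.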
+
+## Common use cases
+
+The recognize action can be used for many reasons. Below are a few examples of how developers can use the recognize action in their applications.
+
+### Improve user journey with self-service prompts
+
+- **Users can control the call** - By enabling input recognition you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query.
+- **Gather user information** - By enabling input recognition your application can gather input from the callers. This can be information such as account numbers, credit card information, etc.
+- **Transcribe caller response** - With voice recognition you can collect user input and transcribe the audio to text and analyze it to carry out specific business action.
+
+### Interrupt audio prompts
+
+**Users can exit an IVR menu and speak to a human agent** - With DTMF interruption, your application can allow users to interrupt the flow of the IVR menu and speak with a human agent.
+
+## Sample architecture for gathering user input in a call with voice recognition
+
+![Recognize AI Action](./media/recognize-ai-flow.png)
+
+## Sample architecture for gathering user input in a call
+
+![Recognize Action](./media/recognize-flow.png)
+
+## Next steps
+
+- Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-ai-action.md).
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Previously updated : 11/01/2021 Last updated : 02/28/2023
This document explains the limitations of Azure Communication Services APIs and possible resolutions. ## Throttling patterns and architecture
-When you hit service limitations, you'll generally receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling:
+When you hit service limitations, you will receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling:
- Reduce the number of operations per request. - Reduce the frequency of calls.
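The practices above are commonly combined with retry logic that backs off after an HTTP 429. A minimal sketch — `send_request` here is a hypothetical callable standing in for your API call, and returning the status plus any server-suggested `Retry-After` delay in seconds:

```python
import random
import time

def call_with_backoff(send_request, max_attempts=5):
    """Retry a request on HTTP 429, doubling the wait each attempt.

    `send_request` is a hypothetical callable returning
    (status, retry_after), where retry_after is a server-suggested
    delay in seconds or None.
    """
    delay = 1.0
    for _ in range(max_attempts):
        status, retry_after = send_request()
        if status != 429:
            return status
        # Prefer the server's hint; otherwise back off exponentially
        # with a little jitter to avoid synchronized retries.
        wait = retry_after if retry_after is not None else delay + random.uniform(0, 0.5)
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("throttled: retries exhausted")
```

Honoring the server's `Retry-After` hint, when present, is generally preferable to a fixed client-side schedule.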
This sandbox setup is to help developers start building the application. You can
|Send typing indicator|User and chat thread|5|15| |Send typing indicator|Chat thread|10|30|
+### Chat storage
+Chat messages are stored for 90 days. Submit [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md) if you require storage for a longer time period. If you require a retention period of less than 90 days, use the delete chat thread APIs to remove messages.
+ ## Voice and video calling ### Call maximum limitations
The Communication Services Calling SDK supports the following streaming configur
| **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing | | **Maximum # of incoming remote streams that you can render simultaneously** | four videos + one screen sharing | six videos + one screen sharing |
-While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded.
+While the Calling SDK will not enforce these limits, your users may experience performance degradation if they're exceeded.
### Calling SDK timeouts
communication-services Play Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-ai-action.md
+
+ Title: Customize voice prompts to users with Play action using Text-to-Speech
+
+description: Provides a how-to guide for playing audio to participants as part of a call.
++++ Last updated : 02/15/2023+++
+zone_pivot_groups: acs-csharp-java
++
+# Customize voice prompts to users with Play action using Text-to-Speech
+
+>[!IMPORTANT]
+>Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet publicly available.
+>Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
+
+This guide will help you get started with playing audio to participants by using the play action provided through the Azure Communication Services Call Automation SDK.
+++
+## Event codes
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|PlayCompleted|200|0|Action completed successfully.|
+|PlayFailed|400|8535|Action failed, file format is invalid.|
+|PlayFailed|400|8536|Action failed, file could not be downloaded.|
+|PlayCanceled|400|8508|Action failed, the operation was canceled.|
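In a webhook handler, the codes in the table above can drive branching logic. The sketch below is a hedged illustration: the event dict shape and field names (`type`, `subCode`) are assumptions for the example, not the exact payload schema.

```python
def handle_play_event(event: dict) -> str:
    """Branch on the status/subcode values from the table above.

    The event shape {"type": ..., "code": ..., "subCode": ...} is an
    illustrative assumption, not a documented payload schema.
    """
    etype, sub = event.get("type"), event.get("subCode")
    if etype == "PlayCompleted":
        return "continue-call-flow"
    if etype == "PlayFailed" and sub == 8535:
        return "fix-audio-format"         # file format is invalid
    if etype == "PlayFailed" and sub == 8536:
        return "check-file-availability"  # file could not be downloaded
    if etype == "PlayCanceled":
        return "no-op"                    # operation was canceled
    return "log-and-investigate"
```

The returned strings are placeholders for whatever recovery logic your application implements.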
+
+## Clean up resources
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
+- Learn more about [Gathering user input in a call](../../concepts/call-automation/recognize-ai-action.md)
+- Learn more about [Playing audio in calls](../../concepts/call-automation/play-ai-action.md)
communication-services Recognize Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-ai-action.md
+
+ Title: Gather user voice input
+
+description: Provides a how-to guide for gathering user voice input from participants on a call.
+++ Last updated : 02/15/2023+++
+zone_pivot_groups: acs-csharp-java
++
+# Gather user input with Recognize action and voice input
+
+This guide helps you get started recognizing user input, in the form of DTMF or voice input provided by participants, through the Azure Communication Services Call Automation SDK.
+++
+## Event codes
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|RecognizeCompleted|200|8531|Action completed, max digits received.|
+|RecognizeCompleted|200|8533|Action completed, DTMF option matched.|
+|RecognizeCompleted|200|8545|Action completed, speech option matched.|
+|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
+|RecognizeCompleted|400|8508|Action failed, the operation was canceled.|
+|RecognizeFailed|400|8510|Action failed, initial silence timeout reached.|
+|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
+|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.|
+|RecognizeFailed|400|8547|Action failed, speech option not matched.|
+|RecognizeFailed|500|8534|Action failed, incorrect tone entered.|
+|RecognizeFailed|500|9999|Unspecified error.|
+|RecognizeCanceled|400|8508|Action failed, the operation was canceled. |
+
+## Limitations
+
+- If you want to enable callers to use both tones and voice input, you must pass a tone for every single choice. If you expect only voice input from callers, don't send any tones.
+
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next Steps
+- Learn more about [Gathering user input](../../concepts/call-automation/recognize-ai-action.md)
+- Learn more about [Playing audio in call](../../concepts/call-automation/play-ai-action.md)
+- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
+
+ Title: How to configure user engagement tracking for an email domain with an Azure Communication Services resource.
+
+description: Learn about how to enable user engagement for the email domains with Azure Communication Services resource.
++++ Last updated : 02/15/2023+++
+# Quickstart: How to enable user engagement tracking for the email domain with Azure Communication Service resource
++
+Configuring email engagement provides insights into your customers' engagement with your emails, which helps you build customer relationships. Only emails sent from Azure Communication Services verified email domains that are enabled for user engagement analysis get engagement tracking metrics.
+
+> [!IMPORTANT]
+> By enabling this feature, you acknowledge that you are enabling open/click tracking and giving consent to collect your customers' email activity.
+
+In this quickstart, you'll learn how to enable user engagement tracking for a verified domain in Azure Communication Services.
+
+## Enable email engagement
+1. Go to the overview page of the Email Communication Services resource that you created earlier.
+2. Click Provision Domains on the left navigation panel. You'll see a list of provisioned domains.
+3. Click the custom domain name that you would like to update.
++
+4. You land on the Domain Overview page, where you can see that User interaction tracking is Off by default.
++
+5. Click Turn on to enable engagement tracking.
+
+**Your email domain is now ready to send emails with user engagement tracking.**
+
+You can now subscribe to the Email User Engagement operational logs, which provide information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
+
+## Next steps
+
+* [Get started with log analytics in Azure Communication Service](../../concepts/logging-and-diagnostics.md)
++
+The following documents may be interesting to you:
+
+- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md)
+- [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
az containerapp create \
--user-assigned $IDENTITY_ID \ --environment $CONTAINERAPPS_ENVIRONMENT \ --image dapriosamples/hello-k8s-node:latest \
- --target-port 3000 \
- --ingress 'internal' \
--min-replicas 1 \ --max-replicas 1 \ --enable-dapr \
$ServiceArgs = @{
Location = $Location ManagedEnvironmentId = $EnvId TemplateContainer = $ServiceTemplateObj
- IngressTargetPort = 3000
ScaleMinReplica = 1 ScaleMaxReplica = 1 DaprEnabled = $true
container-registry Authenticate Aks Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/authenticate-aks-cross-tenant.md
In **Tenant B**, assign the AcrPull role to the service principal, scoped to the
### Step 4: Update AKS with the Azure AD application secret
-Use the multitenant application (client) ID and client secret collected in Step 1 to [update the AKS service principal credential](../aks/update-credentials.md#update-aks-cluster-with-new-service-principal-credentials).
+Use the multitenant application (client) ID and client secret collected in Step 1 to [update the AKS service principal credential](../aks/update-credentials.md#update-aks-cluster-with-service-principal-credentials).
Updating the service principal can take several minutes.
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
After the 10 seconds is over, the burst capacity has been used up. If the worklo
## Getting started
-To get started using burst capacity, enroll in the preview by submitting a request for the **Azure Cosmos DB Burst Capacity** feature via the [**Preview Features** page](../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page. You can also select the **Register for preview** button in the eligibility check page to open the **Preview Features** page.
+To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity (preview)** feature.
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.
-Before submitting your request:
--- Ensure that you have at least one Azure Cosmos DB account in the subscription. This account may be an existing account or a new one you've created to try out the preview feature. If you have no accounts in the subscription when the Azure Cosmos DB team receives your request, it will be declined, as there are no accounts to apply the feature to.-- Verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria).-
-The Azure Cosmos DB team will review your request and contact you via email to confirm which account(s) in the subscription you want to enroll in the preview.
-
-To check whether an Azure Cosmos DB account is eligible for the preview, you can use the built-in eligibility checker in the Azure portal. From your Azure Cosmos DB account overview page in the Azure portal, navigate to **Diagnose and solve problems** -> **Throughput and Scaling** -> **Burst Capacity**. Run the **Check eligibility for burst capacity preview** diagnostic.
-- ## Limitations (preview eligibility criteria)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can allocate throughput at a container-level or a database-level in terms of
| Maximum number of distinct (logical) partition keys | Unlimited | | Maximum storage per container | Unlimited | | Maximum attachment size per Account (Attachment feature is being deprecated) | 2 GB |
-| Minimum RU/s required per 1 GB | 10 RU/s ³ |
+| Minimum RU/s required per 1 GB | 1 RU/s |
¹ You can increase Maximum RUs per container or database by [filing an Azure support ticket](create-support-request-quota-increase.md). ² To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it's recommended to rearchitect your application with a different partition key as a long-term solution. To help give time to rearchitect your application, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Requesting a temporary increase is intended as a temporary mitigation and not recommended as a long-term solution, as **SLA guarantees are not honored when the limit is increased**. To remove the configuration, file a support ticket and select quota type **Restore container's logical partition key size to default (20 GB)**. Filing this support ticket can be done after you have either deleted data to fit the 20-GB logical partition limit or have rearchitected your application with a different partition key.
-³ Minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program)
- ### Minimum throughput limits An Azure Cosmos DB container (or shared throughput database) using manual throughput must have a minimum throughput of 400 RU/s. As the container grows, Azure Cosmos DB requires a minimum throughput to ensure the resource (database or container) has sufficient resource for its operations.
The actual minimum RU/s may vary depending on your account configuration. You ca
To estimate the minimum throughput required of a container with manual throughput, find the maximum of:
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the container / 100
-For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 10 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 200 GB. The minimum RU/s is now `MAX(400, 200 * 10 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a container provisioned with 400 RU/s and 0-GB storage. You increase the throughput to 50,000 RU/s and import 20 GB of data. The minimum RU/s is now `MAX(400, 20 * 1 RU/s per GB, 50,000 RU/s / 100)` = 500 RU/s. Over time, the storage grows to 2000 GB. The minimum RU/s is now `MAX(400, 2000 * 1 RU/s per GB, 50,000 / 100)` = 2000 RU/s.
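The worked example above follows directly from the formula. A small sketch (the function name is illustrative) that reproduces the arithmetic:

```python
def min_manual_rus(current_storage_gb: float, highest_rus_ever: float) -> float:
    """Minimum RU/s for a container with manual throughput (formula above)."""
    return max(
        400,                      # absolute floor
        current_storage_gb * 1,   # 1 RU/s per GB of storage
        highest_rus_ever / 100,   # 1% of the highest RU/s ever provisioned
    )

# The example from the text: 20 GB stored after scaling to 50,000 RU/s.
assert min_manual_rus(20, 50_000) == 500
# Storage later grows to 2000 GB.
assert min_manual_rus(2000, 50_000) == 2000
```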
#### Minimum throughput on shared throughput database To estimate the minimum throughput required of a shared throughput database with manual throughput, find the maximum of:
-* 400 RU/s
-* Current storage in GB * 10 RU/s
+* 400 RU/s
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the database / 100 * 400 + MAX(Container count - 25, 0) * 100 RU/s
-For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 10 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
-
-> [!NOTE]
-> The minimum throughput of 10 RU/s per GB of storage can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program).
+For example, you have a database provisioned with 400 RU/s, 15 GB of storage, and 10 containers. The minimum RU/s is `MAX(400, 15 * 1 RU/s per GB, 400 / 100, 400 + 0 )` = 400 RU/s. If there were 30 containers in the database, the minimum RU/s would be `400 + MAX(30 - 25, 0) * 100 RU/s` = 900 RU/s.
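The shared-database variant adds the container-count term. A sketch (function name illustrative) matching the example above:

```python
def min_shared_db_rus(storage_gb: float, highest_rus_ever: float,
                      container_count: int) -> float:
    """Minimum RU/s for a shared-throughput database (formula above)."""
    return max(
        400,
        storage_gb * 1,                              # 1 RU/s per GB
        highest_rus_ever / 100,
        400 + max(container_count - 25, 0) * 100,    # +100 RU/s per container past 25
    )

# The example from the text: 400 RU/s provisioned, 15 GB, 10 containers.
assert min_shared_db_rus(15, 400, 10) == 400
# With 30 containers, five over the 25-container allowance.
assert min_shared_db_rus(15, 400, 30) == 900
```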
In summary, here are the minimum provisioned RU limits when using manual throughput.
An Azure Cosmos DB item can represent either a document in a collection, a row i
| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) ┬╣ | | Maximum length of partition key value | 2048 bytes (101 bytes if large partition-key isn't enabled) | | Maximum length of ID value | 1023 bytes |
-| Allowed characters for ID value | Service-side all Unicode characters except for '/' and '\\' are allowed. <br/>**WARNING: But for best interoperability we STRONGLY RECOMMEND to only use alpha-numerical ASCII characters in the ID value only**. <br/>There are known limitations in some versions of the Cosmos DB SDK, connectors (ADF, Spark, Kafka etc.), and http-drivers/libraries etc. These limitations can prevent successful processing when the ID value contains non-alphanumerical ASCII characters. So, to increase interoperability, encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). - if you have to support non-alphanumerical ASCII characters in your service/application. |
+| Allowed characters for ID value | Service-side all Unicode characters except for '/' and '\\' are allowed. <br/>**WARNING: But for best interoperability we STRONGLY RECOMMEND to only use alpha-numerical ASCII characters in the ID value only**. <br/>There are several known limitations in some versions of the Cosmos DB SDK, as well as connectors (ADF, Spark, Kafka etc.) and http-drivers/libraries etc. that can prevent successful processing when the ID value contains non-alphanumerical ASCII characters. So, to increase interoperability, please encode the ID value - [for example via Base64 + custom encoding of special characters allowed in Base64](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/78fc16c35c521b4f9a7aeef11db4df79c2545dee/Microsoft.Azure.Cosmos.Encryption/src/EncryptionProcessor.cs#L475-L489). - if you have to support non-alphanumerical ASCII characters in your service/application. |
| Maximum number of properties per item | No practical limit | | Maximum length of property name | No practical limit | | Maximum length of property value | No practical limit |
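One way to follow the encoding advice in the ID-value row above is URL-safe Base64, which substitutes `-` and `_` for the `+` and `/` of standard Base64, avoiding the disallowed `/` character. This sketch (function names are illustrative) also strips the `=` padding so the result contains only alphanumerics plus `-` and `_`:

```python
import base64

def encode_item_id(raw_id: str) -> str:
    """Encode an arbitrary string into an ID that avoids '/' and '\\'."""
    encoded = base64.urlsafe_b64encode(raw_id.encode("utf-8")).decode("ascii")
    return encoded.rstrip("=")  # drop padding; restored on decode

def decode_item_id(encoded: str) -> str:
    """Reverse encode_item_id, restoring the stripped '=' padding first."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(padded.encode("ascii")).decode("utf-8")
```

The round trip is lossless, so the original value — including `/`, `\`, or non-ASCII characters — can always be recovered from the stored ID.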
See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Minimum RU/s the system can scale to | `0.1 * Tmax`| | Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage| | Minimum billable RU/s per hour| `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
-| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s). |
+| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10)` rounded to nearest 1000 RU/s |
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 10, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
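The two autoscale formulas above can be expressed in one helper (the function name is illustrative; "rounded to nearest 1000" is read here as rounding up, which matches the table's 30-container example):

```python
import math

def min_autoscale_max_rus(highest_max_rus_ever, storage_gb, container_count=None):
    """Lowest autoscale max RU/s you can set, per the table above.

    Pass container_count for a shared-throughput database; leave it
    None for a single container.
    """
    candidates = [1000, highest_max_rus_ever / 10, storage_gb * 10]
    if container_count is not None:
        # +1000 RU/s for each container beyond the first 25
        candidates.append(1000 + max(container_count - 25, 0) * 1000)
    return math.ceil(max(candidates) / 1000) * 1000

# The table's example: a database with 30 containers.
assert min_autoscale_max_rus(0, 0, container_count=30) == 6000
```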
## SQL query limits
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
The response of those methods also contains the [minimum provisioned throughput]
The actual minimum RU/s may vary depending on your account configuration. But generally it's the maximum of: * 400 RU/s
-* Current storage in GB * 10 RU/s (this constraint can be relaxed in some cases, see our [high storage / low throughput program](#high-storage-low-throughput-program))
+* Current storage in GB * 1 RU/s
* Highest RU/s ever provisioned on the database or container / 100

### Changing the provisioned throughput
You can programmatically check the scaling progress by reading the [current prov
You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource.
-## <a id="high-storage-low-throughput-program"></a> High storage / low throughput program
-
-As described in the [Current provisioned throughput](#current-provisioned-throughput) section above, the minimum throughput you can provision on a container or database depends on a number of factors. One of them is the amount of data currently stored, as Azure Cosmos DB enforces a minimum throughput of 10 RU/s per GB of storage.
-
-This can be a concern in situations where you need to store large amounts of data, but have low throughput requirements in comparison. To better accommodate these scenarios, Azure Cosmos DB has introduced a **"high storage / low throughput" program** that decreases the RU/s per GB constraint on eligible accounts.
-
-To join this program and assess your full eligibility, all you have to do is to fill [this survey](https://aka.ms/cosmosdb-high-storage-low-throughput-program). The Azure Cosmos DB team will then follow up and proceed with your onboarding.
-
## Comparison of models
This table shows a comparison between provisioning standard (manual) throughput on a database vs. on a container.
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for EA enrollm
description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 02/10/2023 Last updated : 02/28/2023
The following table lists the terms and descriptions shown on the Usage + Charge
| Billed Separately | The services your organization used aren't covered by the credit. |
| Azure Marketplace | Azure Marketplace purchases and usage aren't covered by your organization's credit and are billed separately. |
| Total Charges | Charges against credits + Service Overage + Billed Separately + Azure Marketplace |
+| Refunded Overage credits | Sum of refunded overage amount. The following section describes it further. |
+
+### Refunded overage credits
+
+In the past, when a reservation refund was required, Microsoft manually reviewed closed bills - sometimes going back multiple years. The manual review sometimes led to issues. To resolve the issues, the refund review process is changing to a forward-looking review that doesn't require reviewing closed bills.
+
+The new review process is being deployed in phases. The current phase began on March 1, 2023. In this phase, Microsoft is addressing only refunds that result in an overage. For example, an overage that generates a credit note.
+
+To better understand the change, let's look at a detailed example of the old process. Assume that a reservation was bought in February 2022 with an overage credit (no Azure prepayment or Monetary Commitment was involved). You decided to return the reservation in August 2022. Refunds use the same payment method as the purchase. So, you received a credit note in August 2022 for the February 2022 billing period. However, the credit amount reflects the month of purchase. In this example, that's February 2022. The refund results in the change to the service overage and total charges.
+
+Here's how the example used to appear in the Azure portal.
+- After the reservation return in August 2022, you're entitled to $400 credit. You receive the credit note for the refund amount.
+- The service overage is changed from $1947.03 to $1547.03. The total charges change from $1947.83 to $1547.83. However, the changes don't reconcile with the usage details file. In this example, that's $1947.83. Also, the invoice for February 2022 didn't reconcile.
+- Return line items appear for the month of return (August 2022 in this example) in the usage details file.
+
+Now, let's look at the new process. There are no changes to the purchase month (February 2022) overage or total charges. Credits given for the month appear in the new **Refunded overage credits** column.
+
+Here's how the example now appears in the Azure portal.
+- After the reservation return in August 2022, you're entitled to $400 credits. You receive the credit note for the refund amount. There's no change to the process.
+- There are no changes to the February 2022 service overage or total charges after the refund. You're able to reconcile the refund as you review the usage details file and your invoice.
+- Return line items continue to appear in the month of return (for example, August 2022) because there's no behavior or process change.
+
+>[!IMPORTANT]
+> - Refunds continue to appear for the purchase month for Azure prepayment and when there's a mix of overage and Azure prepayment.
+> - New behavior (refunds reflected in the month of return) will be enabled for scenarios involving Monetary Commitment (MC) tentatively by June 2023.
+> - There's no change to the process when there are:
+> - Adjustment charges
+> - Back-dated credits
+>   - Discounts
+> The preceding items result in bill regeneration. The regenerated bill shows the new refund billing process.
+
+#### Common refunded overage credits questions
+
+Question: What refunds are included in **Refunded Overage Credits**?<br>
+Answer: The `Refunded Overage Credits` attribute applies to reservation and savings plan refunds.
+
+Question: Are `Refunded Overage credits` values included in total charges?<br>
+Answer: No, it's a standalone field that shows the sum of credits received for the month.
+
+Question: How do I reconcile the amount shown in **Refunded Overage Credits**?<br>
+Answer:
+1. In the Azure portal, navigate to **Reservation Transactions**.
+2. Sum all the refunds. They're shown as an overage for the month.
+ :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions.png" alt-text="Screenshot showing the Reservation transactions page with refund amounts." lightbox="./media/direct-ea-azure-usage-charges-invoices/reservation-transactions.png" :::
+3. Navigate to **Usage + charges** and look at the value shown in **Refunded Overage Credits**. The value is the sum of all reservation and savings plan refunds that happened in the month.
+ :::image type="content" source="./media/direct-ea-azure-usage-charges-invoices/refunded-overage-credits.png" alt-text="Screenshot showing the refunded overage credits values." lightbox="./media/direct-ea-azure-usage-charges-invoices/refunded-overage-credits.png" :::
+ > [!NOTE]
+ > Savings plan refunds are not shown in **Reservation Transactions**. However, **Refunded Overage Credits** shows the sum of reservations and savings plans.
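The reconciliation described in the answer above amounts to a simple sum; here's a minimal sketch with illustrative amounts (the $400 figure echoes the earlier example, the savings plan refund is hypothetical):

```python
# Reservation refunds, as listed on the Reservation Transactions page.
reservation_refunds = [400.00]

# Savings plan refunds are not shown on that page, but they are still
# included in Refunded Overage Credits, so track them separately.
savings_plan_refunds = [150.00]

# Refunded Overage Credits for the month = sum of both refund types.
refunded_overage_credits = sum(reservation_refunds) + sum(savings_plan_refunds)
print(refunded_overage_credits)  # 550.0
```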
## Download usage charges CSV file
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
You can charge back savings plan use to other organizations by subscription, res
### Determine savings resulting from savings plan
-Get the Amortized costs data and filter the data for a savings plan instance. Then:
+Get the Amortized costs data and filter it to rows where `PricingModel` = `SavingsPlan`. Then:
-1. Get estimated pay-as-you-go costs. Multiply the `UnitPrice` value with `Quantity` values to get estimated pay-as-you-go costs if the savings plan discount didn't apply to the usage.
+1. Get estimated pay-as-you-go costs or customer discounted cost. Multiply the `UnitPrice` value with `Quantity` values to get estimated pay-as-you-go costs if the savings plan discount didn't apply to the usage.
2. Get the savings plan costs. Sum the `Cost` values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan.
-3. Subtract savings plan costs from estimated pay-as-you-go costs to get the estimated savings.
+3. Subtract savings plan costs from the estimated pay-as-you-go costs to get the estimated savings.
+
+To determine the savings against the public list price:
+1. Get the public list price cost. Multiply the `PayGPrice` value with `Quantity` values to get the public-list-price costs.
+2. Get the savings that the savings plan provides against the public list price. Subtract `Cost` from the public-list-price costs.
+
+To determine the percent savings against the price discounted for the customer:
+1. Get the savings that the savings plan provides against the discounts given to the customer. Subtract `Cost` from the estimated pay-as-you-go costs.
+2. Get the percent discount applied on each line item. Divide `Cost` by the public-list-price costs, subtract the result from 1, and multiply by 100.
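Taken together, the steps above can be sketched as follows. The rows are illustrative; the column names (`UnitPrice`, `PayGPrice`, `Quantity`, `Cost`) are those used in the amortized cost data, and savings are computed here as the pay-as-you-go estimate minus what you paid for the plan:

```python
# Illustrative amortized-cost rows, filtered to PricingModel = SavingsPlan.
rows = [
    {"UnitPrice": 0.8, "PayGPrice": 1.0, "Quantity": 100, "Cost": 60.0},
    {"UnitPrice": 0.8, "PayGPrice": 1.0, "Quantity": 50,  "Cost": 30.0},
]

# 1. Estimated pay-as-you-go (customer discounted) cost.
payg_cost = sum(r["UnitPrice"] * r["Quantity"] for r in rows)   # 120.0

# 2. Savings plan cost (includes used and unused benefit).
sp_cost = sum(r["Cost"] for r in rows)                          # 90.0

# 3. Estimated savings against the discounted price.
savings = payg_cost - sp_cost                                   # 30.0

# Savings against the public list price.
list_cost = sum(r["PayGPrice"] * r["Quantity"] for r in rows)   # 150.0
savings_vs_list = list_cost - sp_cost                           # 60.0

# Percent discount realized on the public list price.
pct_discount = (1 - sp_cost / list_cost) * 100                  # 40.0
```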
Keep in mind that if you have an underutilized savings plan, the `UnusedBenefit` entry for `ChargeType` becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any `UnusedBenefit` quantity reduces savings.
Group by **Charge Type** to see a breakdown of usage, purchases, and refunds; or
## Next steps
-- Learn more about how to [Charge back Azure saving plan costs](charge-back-costs.md).
+- Learn more about how to [Charge back Azure saving plan costs](charge-back-costs.md).
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
The following table provides a list of supported compute environments and the ac
| [Azure Data Lake Analytics](#azure-data-lake-analytics-linked-service) | [Data Lake Analytics U-SQL](transform-data-using-data-lake-analytics.md) |
| [Azure SQL](#azure-sql-database-linked-service), [Azure Synapse Analytics](#azure-synapse-analytics-linked-service), [SQL Server](#sql-server-linked-service) | [Stored Procedure](transform-data-using-stored-procedure.md) |
| [Azure Databricks](#azure-databricks-linked-service) | [Notebook](transform-data-databricks-notebook.md), [Jar](transform-data-databricks-jar.md), [Python](transform-data-databricks-python.md) |
+| [Azure Synapse Analytics (Artifacts)](#azure-synapse-analytics-artifacts-linked-service) | [Synapse Notebook activity](transform-data-synapse-notebook.md), [Synapse Spark job definition](transform-data-synapse-spark-job-definition.md) |
| [Azure Function](#azure-function-linked-service) | [Azure Function activity](control-flow-azure-function-activity.md) |

## HDInsight compute environment
You create a SQL Server linked service and use it with the [Stored Procedure Act
## Azure Synapse Analytics (Artifacts) linked service
-You create an Azure Synapse Analytics (Artifacts) linked service and use it with the [Synapse Notebook Activity](transform-data-synapse-notebook.md) and [Synapse Spark job definition Activity](transform-data-synapse-spark-job-definition.md) to invoke a stored procedure from a pipeline. See [Azure Synapse Analytics (Artifacts) Connector](connector-azure-synapse-analytics-artifacts.md) article for details about this linked service.
+You create an Azure Synapse Analytics (Artifacts) linked service and use it with the [Synapse Notebook Activity](transform-data-synapse-notebook.md) and [Synapse Spark job definition Activity](transform-data-synapse-spark-job-definition.md).
+
+### Example
+
+```json
+{
+ "name": "AzureSynapseArtifacts",
+ "properties": {
+ "description": "AzureSynapseArtifactsDescription",
+ "annotations": [],
+ "type": "AzureSynapseArtifacts",
+ "typeProperties": {
+ "endpoint": "https://<workspacename>.dev.azuresynapse.net",
+ "authentication": "MSI",
+ "workspaceResourceId": "<workspace Resource Id>"
+ }
+ }
+}
+```
+
+### Properties
+
+| **Property** | **Description** | **Required** |
+| | | |
+| name | Name of the linked service | Yes |
+| description | Description of the linked service | No |
+| annotations | Annotations of the linked service | No |
+| type | The type property should be set to **AzureSynapseArtifacts** | Yes |
+| endpoint | The Azure Synapse Analytics URL | Yes |
+| authentication | The default setting is System Assigned Managed Identity | Yes |
+| workspaceResourceId | The workspace resource ID | Yes |
## Azure Function linked service
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 09/01/2022 Last updated : 02/28/2023

# Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
The following sections provide information about properties that are used to def
The Azure Data Lake Storage Gen2 connector supports the following authentication types. See the corresponding sections for details: - [Account key authentication](#account-key-authentication)
+- [Shared access signature authentication](#shared-access-signature-authentication)
- [Service principal authentication](#service-principal-authentication) - [System-assigned managed identity authentication](#managed-identity) - [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
To use storage account key authentication, the following properties are supporte
} } ```
+### Shared access signature authentication
+
+A shared access signature provides delegated access to resources in your storage account. You can use a shared access signature to grant a client limited permissions to objects in your storage account for a specified time.
+
+You don't have to share your account access keys. The shared access signature is a URI that encompasses in its query parameters all the information necessary for authenticated access to a storage resource. To access storage resources with the shared access signature, the client only needs to pass in the shared access signature to the appropriate constructor or method.
+
+For more information about shared access signatures, see [Shared access signatures: Understand the shared access signature model](../storage/common/storage-sas-overview.md).
+
+> [!NOTE]
+>- The service now supports both *service shared access signatures* and *account shared access signatures*. For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures](../storage/common/storage-sas-overview.md).
+>- In later dataset configurations, the folder path is the absolute path starting from the container level. You need to configure one aligned with the path in your SAS URI.
+
+The following properties are supported for using shared access signature authentication:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The `type` property must be set to `AzureBlobFS` (suggested). | Yes |
+| sasUri | Specify the shared access signature URI to the Storage resources such as blob or container. <br/>Mark this field as `SecureString` to store it securely. You can also put the SAS token in Azure Key Vault to use auto-rotation and remove the token portion. For more information, see the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
+
+>[!NOTE]
+>If you're using the `AzureStorage` type linked service, it's still supported as is. But we suggest that you use the new `AzureDataLakeStorageGen2` linked service type going forward.
+
+**Example:**
+
+```json
+{
+ "name": "AzureDataLakeStorageGen2LinkedService",
+ "properties": {
+ "type": "AzureBlobFS",
+ "typeProperties": {
+ "sasUri": {
+ "type": "SecureString",
+ "value": "<SAS URI of the Azure Storage resource e.g. https://<accountname>.blob.core.windows.net/?sv=<storage version>&st=<start time>&se=<expire time>&sr=<resource>&sp=<permissions>&sip=<ip range>&spr=<protocol>&sig=<signature>>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+**Example: store the SAS token in Azure Key Vault**
+
+```json
+{
+ "name": "AzureDataLakeStorageGen2LinkedService",
+ "properties": {
+ "type": "AzureBlobFS",
+ "typeProperties": {
+ "sasUri": {
+ "type": "SecureString",
+ "value": "<SAS URI of the Azure Storage resource without token e.g. https://<accountname>.blob.core.windows.net/>"
+ },
+ "sasToken": {
+ "type": "AzureKeyVaultSecret",
+ "store": {
+ "referenceName": "<Azure Key Vault linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "secretName": "<secretName with value of SAS token e.g. ?sv=<storage version>&st=<start time>&se=<expire time>&sr=<resource>&sp=<permissions>&sip=<ip range>&spr=<protocol>&sig=<signature>>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+When you create a shared access signature URI, consider the following points:
+
+- Set appropriate read/write permissions on objects based on how the linked service (read, write, read/write) is used.
+- Set **Expiry time** appropriately. Make sure that the access to Storage objects doesn't expire within the active period of the pipeline.
+- The URI should be created at the right container or blob based on the need. A shared access signature URI to a blob allows the data factory or Synapse pipeline to access that particular blob. A shared access signature URI to a Blob storage container allows the data factory or Synapse pipeline to iterate through blobs in that container. To provide access to more or fewer objects later, or to update the shared access signature URI, remember to update the linked service with the new URI.
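The permission and expiry checks above can be sketched with a stdlib parse of the SAS URI before you save the linked service; the URI, permissions, and pipeline window below are illustrative:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

# Illustrative SAS URI (signature redacted).
sas_uri = ("https://myaccount.blob.core.windows.net/mycontainer"
           "?sv=2021-08-06&st=2023-01-01T00:00:00Z&se=2024-01-01T00:00:00Z"
           "&sr=c&sp=rl&sig=REDACTED")

params = {k: v[0] for k, v in parse_qs(urlparse(sas_uri).query).items()}

# Check the token grants the permissions the linked service needs
# ('r' = read, 'l' = list; write scenarios also need 'w').
assert "r" in params["sp"]

# Check the token doesn't expire within the pipeline's active period.
expiry = datetime.strptime(params["se"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
pipeline_end = datetime(2023, 12, 1, tzinfo=timezone.utc)  # illustrative window
assert expiry > pipeline_end
```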
### Service principal authentication
data-factory Connector Azure Synapse Analytics Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-synapse-analytics-artifacts.md
- Title: Copy and transform data in Azure Synapse Analytics (Artifacts) by using Azure Data Factory-
-description: Learn how to use the "Azure Synapse Analytics (Artifacts)" linked service.
- Previously updated : 10/13/2022
-# Copy and transform data in Azure Synapse Analytics (Artifacts) by using Azure Data Factory
-
-This article outlines how to use Copy Activity in Azure Data Factory pipelines to copy data to and from Azure Synapse Analytics (Artifacts).
-
-## Supported capabilities
-
-This Azure Synapse Analytics (Artifacts) connector is supported for the following capabilities:
-
-| Supported capabilities|IR | Managed private endpoint|
-|| --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
-|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ |
-|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ |
-|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ |
-
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-
-## Create an Azure Synapse Analytics (Artifacts) linked service using UI
-
-Use the following steps to create an Azure Synapse Analytics (Artifacts) linked service in the Azure Data Factory portal UI.
-
-1. Browse to the **Manage** tab in your Azure Data Factory and select **Linked Services**, then click **New**:
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of creating a new linked service with Azure Data Factory UI.](./media/connector-azure-synapse-analytics-artifacts/new-linked-service.png)
-
-2. Search for *Synapse* under the Compute tab, and select the **Azure Synapse Analytics (Artifacts)** connector.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of the Azure Synapse Analytics Artifacts connector.](./media/connector-azure-synapse-analytics-artifacts/azure-synapse-analytics-artifacts-connector.png)
-
-3. Configure the service details, test the connection, and create the new linked service.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of the Azure Synapse Analytics Artifacts connector configuration.](./media/connector-azure-synapse-analytics-artifacts/configure-azure-synapse-analytics-artifacts-linked-service.png)
-
-## Connector configuration details
-
-The following sections provide details about properties that define Azure Data Factory entities specific to an Azure Synapse Analytics (Artifacts) connector.
-
-## Linked service properties
-
-These generic properties are supported for an Azure Synapse Analytics (Artifacts) linked service:
-
-| Property | Description | Required |
-| : | :- | :-- |
-| Name | Pick a valid name | Yes |
-| Description | Give some description of this linked service | No |
-| type | The type property must be set to **AzureSynapseArtifacts**. | Yes |
-| Connect via integration runtime| The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure Integration Runtime. | Yes |
--
-## Linked service example that uses SQL authentication
-
-```json
-{
- "name": "AzureSynapseArtifacts",
- "properties": {
- "annotations": [],
- "type": "AzureSynapseArtifacts",
- "typeProperties": {
- "endpoint": "https://accessibilityverify.dev.azuresynapse.net",
- "authentication": "MSI"
- }
- }
-}
-```
-
-## Next steps
-
-For a list of data stores supported as sources and sinks by Copy Activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
description: Learn how to deploy the Azure Monitor Agent on your Azure, multiclo
Previously updated : 08/03/2022 Last updated : 03/01/2023
To deploy the Azure Monitor Agent with Defender for Cloud:
1. From Defender for Cloud's menu, open **Environment settings**.
1. Select the relevant subscription.
1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-server-setting.png" alt-text="Screenshot showing selecting settings for server service plan." lightbox="media/auto-deploy-azure-monitoring-agent/select-server-setting.png":::
+
1. Enable deployment of the Azure Monitor Agent:
   1. For the **Log Analytics agent/Azure Monitor Agent**, select the **On** status.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing turning on status for Log Analytics/Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/turn-on-azure-monitor-agent-auto-provision.png":::
In the Configuration column, you can see the enabled agent type. When you enable Defender plans, Defender for Cloud decides which agent to provision based on your environment. In most cases, the default is the Log Analytics agent.
1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
1. For the Auto-provisioning configuration agent type, select **Azure Monitor Agent**.
+
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing selecting Azure Monitor Agent for auto-provisioning." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision.png":::
By default:
To configure a custom destination workspace for the Azure Monitor Agent:
1. From Defender for Cloud's menu, open **Environment settings**.
1. Select the relevant subscription.
1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-server-setting.png" alt-text="Screenshot showing selecting settings in Monitoring coverage column." lightbox="media/auto-deploy-azure-monitoring-agent/select-server-setting.png":::
1. For the **Log Analytics agent/Azure Monitor Agent**, select **Edit configuration**.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png" alt-text="Screenshot showing where to select edit configuration for Log Analytics agent/Azure Monitor Agent." lightbox="media/auto-deploy-azure-monitoring-agent/configure-azure-monitor-agent-auto-provision.png":::
+ 1. Select **Custom workspace**, and select the workspace that you want to send data to.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision-custom.png" alt-text="screenshot showing selection of custom workspace." lightbox="media/auto-deploy-azure-monitoring-agent/select-azure-monitor-agent-auto-provision-custom.png":::
+
### Log Analytics workspace solutions

The Azure Monitor Agent requires Log Analytics workspace solutions. These solutions are automatically installed when you auto-provision the Azure Monitor Agent with the default workspace.
The required [Log Analytics workspace solutions](../azure-monitor/insights/solut
### Additional extensions for Defender for Cloud
-The Azure Monitor Agent requires additional extensions. The ASA extension, which supports endpoint protection recommendations, fileless attack detection, and Adaptive Application controls, is automatically installed when you auto-provision the Azure Monitor Agent.
+The Azure Monitor Agent requires more extensions. The ASA extension, which supports endpoint protection recommendations, fileless attack detection, and Adaptive Application controls, is automatically installed when you auto-provision the Azure Monitor Agent.
### Additional security events collection
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Previously updated : 11/09/2021 Last updated : 02/28/2023

# Automatically configure vulnerability assessment for your machines
-Defender for Cloud collects data from your machines using agents and extensions. To save you the process of manually installing the extensions, such as [the manual installation of the Log Analytics agent](working-with-log-analytics-agent.md#manual-agent-provisioning), Defender for Cloud reduces management overhead by installing all required extensions on existing and new machines. Learn more [monitoring components](monitoring-components.md).
+Defender for Cloud collects data from your machines using agents and extensions. To save you the process of manually installing the extensions, such as [the manual installation of the Log Analytics agent](working-with-log-analytics-agent.md#manual-agent-provisioning), Defender for Cloud reduces management overhead by installing all required extensions on existing and new machines. Learn more about [monitoring components](monitoring-components.md).
To assess your machines for vulnerabilities, you can use one of the following solutions: -- Microsoft's threat and vulnerability management module of Microsoft Defender for Endpoint (included with Microsoft Defender for Servers)
+- Microsoft Defender Vulnerability Management available in Microsoft Defender for Endpoint (included with Microsoft Defender for Servers)
- An integrated Qualys agent (included with Microsoft Defender for Servers)-- A Qualys or Rapid7 scanner which you have licensed separately and configured within Defender for Cloud (this is called the Bring Your Own License, or BYOL, scenario)
+- A Qualys or Rapid7 scanner that you've licensed separately and configured within Defender for Cloud (the Bring Your Own License, or BYOL, scenario)
> [!NOTE] > To automatically configure a BYOL solution, see [Integrate security solutions in Microsoft Defender for Cloud](partner-integration.md).
To assess your machines for vulnerabilities, you can use one of the following so
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. In the Monitoring coverage column of the Defender for Server plan, select **Settings**.
-1. Turn on the vulnerability assessment for machines and select the relevant solution.
+1. In the Monitoring coverage column of the Defender for Servers plan, select **Settings**.
+ :::image type="content" source="media/auto-deploy-azure-monitoring-agent/select-server-setting.png" alt-text="Screenshot showing selecting service plan settings for server." lightbox="media/auto-deploy-azure-monitoring-agent/select-server-setting.png":::
+1. Turn on the **Vulnerability assessment for machines** and select the relevant solution.
+ :::image type="content" source="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png" alt-text="Screenshot showing where to turn on deployment of vulnerability assessment for machines." lightbox="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png":::
> [!TIP] > Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Microsoft Defender for Cloud maximizes coverage on OS posture issues and extends beyond the reach of agent-based assessments. With agentless scanning for VMs, you can get frictionless, wide, and instant visibility on actionable posture issues without installed agents, network connectivity requirements, or machine performance impact.
-Agentless scanning for VMs provides vulnerability assessment and software inventory, both powered by Defender vulnerability management, in Azure and Amazon AWS environments. Agentless scanning is available in both [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [Defender for Servers P2](defender-for-servers-introduction.md).
+Agentless scanning for VMs provides vulnerability assessment and software inventory, both powered by Microsoft Defender Vulnerability Management, in Azure and Amazon AWS environments. Agentless scanning is available in both [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [Defender for Servers P2](defender-for-servers-introduction.md).
## Availability
Agentless scanning for VMs provides vulnerability assessment and software invent
||| |Release state:|Preview| |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
-| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender vulnerability management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender vulnerability management) |
+| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
While agent-based methods use OS APIs in runtime to continuously collect security related data, agentless scanning for VMs uses cloud APIs to collect data. Defender for Cloud takes snapshots of VM disks and does an out-of-band, deep analysis of the OS configuration and file system stored in the snapshot. The copied snapshot doesn't leave the original compute region of the VM, and the VM is never impacted by the scan.
-After the necessary metadata is acquired from the disk, Defender for Cloud immediately deletes the copied snapshot of the disk and sends the metadata to Microsoft engines to analyze configuration gaps and potential threats. For example, in vulnerability assessment, the analysis is done by Defender vulnerability management. The results are displayed in Defender for Cloud, seamlessly consolidating agent-based and agentless results.
+After the necessary metadata is acquired from the disk, Defender for Cloud immediately deletes the copied snapshot of the disk and sends the metadata to Microsoft engines to analyze configuration gaps and potential threats. For example, in vulnerability assessment, the analysis is done by Defender Vulnerability Management. The results are displayed in Defender for Cloud, seamlessly consolidating agent-based and agentless results.
The scanning environment where disks are analyzed is regional, volatile, isolated, and highly secure. Disk snapshots and data unrelated to the scan aren't stored longer than is necessary to collect the metadata, typically a few minutes.
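The snapshot-scan flow above (copy the disk in-region, extract only metadata, delete the copy immediately) can be sketched roughly as follows. This is an illustrative sketch only; the function and field names are hypothetical stand-ins, not real Azure SDK or Defender for Cloud APIs.

```python
# Hypothetical sketch of the agentless scan flow described above.
# Field names ("software", "config") and the function itself are illustrative.

def agentless_scan(vm_disk: dict) -> dict:
    """Simulate an out-of-band scan: snapshot, extract metadata, delete snapshot."""
    snapshot = dict(vm_disk)  # out-of-band copy; stays in the VM's compute region
    metadata = {
        "installed_software": snapshot.get("software", []),
        "os_config": snapshot.get("config", {}),
    }  # only metadata is extracted for analysis
    del snapshot  # the copied snapshot is deleted as soon as metadata is acquired
    return metadata  # metadata is what gets sent to the analysis engines

print(agentless_scan({"software": ["openssl 1.1.1"], "config": {"ssh": "enabled"}}))
```

The key property the sketch mirrors is that the running VM is never touched: analysis happens entirely against the copied snapshot, and only derived metadata outlives it.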
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Defender for Cloud includes vulnerability scanners for your machines, containers
Learn more about using these scanners: -- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md)
+- [Find vulnerabilities with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md)
- [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md) - [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md) - [Scan your ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
Findings for each resource type are reported in separate recommendations: -- [Vulnerabilities in your virtual machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) (includes findings from Microsoft threat and vulnerability management, the integrated Qualys scanner, and any configured [BYOL VA solutions](deploy-vulnerability-assessment-byol-vm.md))
+- [Vulnerabilities in your virtual machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) (includes findings from Microsoft Defender Vulnerability Management, the integrated Qualys scanner, and any configured [BYOL VA solutions](deploy-vulnerability-assessment-byol-vm.md))
- [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648) - [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37) - [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
defender-for-cloud Deploy Vulnerability Assessment Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md
-# Investigate weaknesses with Microsoft Defender Vulnerability Management
+# Investigate weaknesses with Microsoft Defender Vulnerability Management
-[Microsoft's Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) is a built-in module in Microsoft Defender for Endpoint that can:
+[Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management), included with Microsoft Defender for Servers, uses built-in and agentless scanners to:
- Discover vulnerabilities and misconfigurations in near real time - Prioritize vulnerabilities based on the threat landscape and detections in your organization
+To learn more about agentless scanning, see [Find vulnerabilities and collect software inventory with agentless scanning](enable-vulnerability-assessment-agentless.md)
+
+>[!Note]
+> Microsoft Defender Vulnerability Management Add-on capabilities are included in Defender for Servers Plan 2. This provides consolidated inventories, new assessments, and mitigation tools to further enhance your vulnerability management program. To learn more, see [Vulnerability Management capabilities for servers](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities#vulnerability-managment-capabilities-for-servers).
+>
+> Defender Vulnerability Management add-on capabilities are only available through the [Microsoft Defender 365 portal](https://security.microsoft.com/homepage).
+ If you've enabled the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), you'll automatically get the Defender Vulnerability Management findings without the need for more agents.
-As it's a built-in module for Microsoft Defender for Endpoint, **Defender Vulnerability Management** doesn't require periodic scans.
+As Microsoft Defender Vulnerability Management continuously monitors your organization for vulnerabilities, periodic scans are not required.
For a quick overview of Defender Vulnerability Management, watch this video:
> As well as alerting you to vulnerabilities, Defender Vulnerability Management also provides functionality for Defender for Cloud's asset inventory tool. Learn more in [Software inventory](asset-inventory.md#access-a-software-inventory). You can learn more by watching this video from the Defender for Cloud in the Field video series: - [Microsoft Defender for Servers](episode-five.md) ## Availability
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Defender for Cloud includes vulnerability scanning for your machines at no extra
> > Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
-If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.
+If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.
## Availability
defender-for-cloud Enable Vulnerability Assessment Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md
Agentless scanning provides visibility into installed software and software vuln
Learn more about [agentless scanning](concept-agentless-data-collection.md).
-Agentless vulnerability assessment uses the Defender Vulnerability Management engine to assess vulnerabilities in the software installed on your VMs, without requiring Defender for Endpoint to be installed. Vulnerability assessment shows software inventory and vulnerability results in the same format as the agent-based assessments.
+Agentless vulnerability assessment uses the Microsoft Defender Vulnerability Management engine to assess vulnerabilities in the software installed on your VMs, without requiring Defender for Endpoint to be installed. Vulnerability assessment shows software inventory and vulnerability results in the same format as the agent-based assessments.
## Compatibility with agent-based vulnerability assessment solutions
-Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender for Endpoint (MDE)](deploy-vulnerability-assessment-defender-vulnerability-management.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
+Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices.
When you enable agentless vulnerability assessment: - If you have **no existing integrated vulnerability** assessment solutions, Defender for Cloud automatically displays vulnerability assessment results from agentless scanning.-- If you have **Vulnerability assessment with MDE integration**, Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness.
+- If you have **Defender Vulnerability Management** as part of an [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness.
- - Machines covered by just one of the sources (MDE or agentless) show the results from that source.
+ - Machines covered by just one of the sources (Defender Vulnerability Management or agentless) show the results from that source.
- Machines covered by both sources show the agent-based results only for increased freshness. - If you have **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan will be shown for machines that don't have an agent installed or from machines that aren't reporting findings correctly.
- If you want to change the default behavior so that Defender for Cloud always displays results from Defender vulnerability management (regardless of a third-party agent solution), select the [Defender vulnerability management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
+ If you want to change the default behavior so that Defender for Cloud always displays results from Defender Vulnerability Management (regardless of a third-party agent solution), select the [Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
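The consolidation rules above (agent-based results win over agentless when both cover a machine, because they're fresher; otherwise show whichever source exists) can be sketched as a simple precedence merge. This is a hypothetical illustration of the documented behavior, not a Defender for Cloud API; the source names are made up for the example.

```python
# Hypothetical sketch of the result-consolidation precedence described above.
# Source keys ("dvm", "qualys", "byol", "agentless") are illustrative only.

def consolidate(findings_by_source: dict) -> list:
    """Pick the vulnerability findings to display for one machine.

    Agent-based sources (Defender Vulnerability Management, Qualys, BYOL)
    take precedence over agentless scanning when both cover the machine;
    machines covered by only one source show that source's results.
    """
    agent_based = ["dvm", "qualys", "byol"]
    for source in agent_based:
        if findings_by_source.get(source):
            return findings_by_source[source]
    return findings_by_source.get("agentless", [])

# Covered by both sources: agent-based results are shown for freshness.
print(consolidate({"dvm": ["CVE-2023-1234"], "agentless": ["CVE-2023-1234", "CVE-2023-9999"]}))
# Covered only by agentless scanning: those results are shown.
print(consolidate({"agentless": ["CVE-2023-9999"]}))
```

Note that the real behavior for Qualys/BYOL is configurable: the setting mentioned above can force Defender Vulnerability Management results to always be displayed instead.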
## Enabling agentless scanning for machines
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Last updated 01/24/2023
# Microsoft Defender for Servers
-**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with Microsoft Defender Vulnerability Management (formerly TVM). Aviv explains how this new integration with Defender Vulnerability Management works, the advantages of this integration, which includes software inventory and easy experience to onboard. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multicloud connector for AWS.
+**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with Microsoft Defender Vulnerability Management (formerly TVM). Aviv explains how this new integration with Defender Vulnerability Management works and the advantages of this integration. Aviv covers the easy experience to onboard, software inventory, the integration with MDE for Linux, and the Defender for Servers support for the new multicloud connector for AWS.
<br> <br>
- [1:22](/shows/mdc-in-the-field/defender-for-containers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers -- [5:50](/shows/mdc-in-the-field/defender-for-containers#time=05m50s) - Migration path from Qualys VA to Defender Vulnerability Management
+- [5:50](/shows/mdc-in-the-field/defender-for-containers#time=05m50s) - Migration path from Qualys VA to Microsoft Defender Vulnerability Management
- [7:12](/shows/mdc-in-the-field/defender-for-containers#time=07m12s) - Defender Vulnerability Management capabilities in Defender for Servers
## Recommended resources
-Learn how to [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
+Learn how to [Investigate weaknesses with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
The protections include:
- **Advanced post-breach detection sensors**. Defenders for Endpoint's sensors collect a vast array of behavioral signals from your machines. -- **Vulnerability assessment from the Microsoft threat and vulnerability management solution**. With Microsoft Defender for Endpoint installed, Defender for Cloud can show vulnerabilities discovered by the threat and vulnerability management module and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
+- **Vulnerability assessment from Microsoft Defender Vulnerability Management**. With Microsoft Defender for Endpoint installed, Defender for Cloud can show vulnerabilities discovered by Defender Vulnerability Management and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md).
This module also brings the software inventory features described in [Access a software inventory](asset-inventory.md#access-a-software-inventory) and can be automatically enabled for supported machines with [the auto deploy settings](auto-deploy-vulnerability-assessment.md).
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
You can choose from two Defender for Servers paid plans:
| Feature | Details | Plan 1 | Plan 2 | |:|:|::|::|
-| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning and protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR, including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment and mitigation provided by Defender for Endpoint integration with [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities). | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning and protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR, including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment and mitigation provided by [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities) as part of the Defender for Endpoint integration. | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **Licensing** | Defender for Servers covers licensing for Defender for Endpoint. Licensing is charged per hour instead of per seat, lowering costs by protecting virtual machines only when they're in use.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Defender for Endpoint provisioning** | Defender for Servers automatically provisions the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Unified view** | Defender for Endpoint alerts appear in the Defender for Cloud portal. You can get detailed information in the Defender for Endpoint portal.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Threat detection for OS-level (agent-based)** | Defender for Servers and Defender for Endpoint detect threats at the OS level, including virtual machine behavioral detections and *fileless attack detection*, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: <br> Provided by [MDE](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Threat detection for network-level (agentless)** | Defender for Servers detects threats that are directed at the control plane on 
the network, including network-based detections for Azure virtual machines. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Microsoft Defender Vulnerability Management Add-on** | See a deeper analysis of the security posture of your protected servers, including risks related to browser extensions, network shares, and digital certificates. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Microsoft Defender Vulnerability Management Add-on** | Get comprehensive visibility, assessments, and protection with consolidated asset inventories, security baselines assessments, application block feature, and more. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription and also compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Learn more about [regulatory compliance](regulatory-compliance-dashboard.md) and [security policies](security-policy-concept.md) | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::| | **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Defender Vulnerability Management, Defender for Cloud integrates with the Qualys scanner to identify vulnerabilities. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::| **[Adaptive application controls](adaptive-application-controls.md)** | Adaptive application controls define allowlists of known safe applications for machines. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 |:::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
## Select a vulnerability assessment solution
-A couple vulnerability assessment options are available in Defender for Servers:
+A couple of vulnerability assessment options are available in Defender for Servers:
- [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities): Integrated with Defender for Endpoint. - Available in Defender for Servers Plan 1 and Defender for Servers Plan 2.
- - Defender Vulnerability Management is enabled by default on machines that are onboarded to Defender for Endpoint if [Defender for Endpoint has Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/get-defender-vulnerability-management) enabled.
+ - Defender Vulnerability Management is enabled by default on machines that are onboarded to Defender for Endpoint.
- Has the same [Windows](/microsoft-365/security/defender-endpoint/configure-server-endpoints#prerequisites), [Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#prerequisites), and [network](/microsoft-365/security/defender-endpoint/configure-proxy-internet#enable-access-to-microsoft-defender-for-endpoint-service-urls-in-the-proxy-server) prerequisites as Defender for Endpoint. - No extra software is required.
+ >[!Note]
+ > Microsoft Defender Vulnerability Management Add-on capabilities are included in Defender for Servers Plan 2. This provides consolidated inventories, new assessments, and mitigation tools to further enhance your vulnerability management program. To learn more, see [Vulnerability Management capabilities for servers](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities#vulnerability-managment-capabilities-for-servers).
+ >
+ > Defender Vulnerability Management add-on capabilities are only available through the [Microsoft Defender 365 portal](https://security.microsoft.com/homepage).
+ - [Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md): Provided by Defender for Cloud Qualys integration. - Available only in Defender for Servers Plan 2.
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
Enabling Defender for Servers on your AWS or GCP connector allows Defender for C
Defender for Servers offers two different plans: - **Plan 1:**
- - **MDE integration:** Plan 1 integrates with [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1-2?view=o365-worldwide) to provide a full endpoint detection and response (EDR) solution for machines running a [range of operating systems](/microsoft-365/security/defender-endpoint/minimum-requirements?view=o365-worldwide). Defender for Endpoint features include:
- - [Reducing the attack surface](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction?view=o365-worldwide) for machines.
- - Providing [antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection?view=o365-worldwide) capabilities.
- - Threat management, including [threat hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview?view=o365-worldwide), [detection](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response?view=o365-worldwide), [analytics](/microsoft-365/security/defender-endpoint/threat-analytics?view=o365-worldwide), and [automated investigation and response](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response?view=o365-worldwide).
+ - **MDE integration:** Plan 1 integrates with [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1-2) to provide a full endpoint detection and response (EDR) solution for machines running a [range of operating systems](/microsoft-365/security/defender-endpoint/minimum-requirements). Defender for Endpoint features include:
+ - [Reducing the attack surface](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) for machines.
+ - Providing [antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection) capabilities.
+ - Threat management, including [threat hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), [detection](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response), [analytics](/microsoft-365/security/defender-endpoint/threat-analytics), and [automated investigation and response](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response).
- **Provisioning:** Automatic provisioning of the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud. - **Licensing:** Charges Defender for Endpoint licenses per hour instead of per seat, lowering costs by protecting virtual machines only when they are in use. - **Plan 2:** Includes all the components of Plan 1 along with additional capabilities such as File Integrity Monitoring (FIM), Just-in-time (JIT) VM access, and more.
The following components and requirements are needed to receive full protection
- The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection. To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. - **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](./integration-defender-for-endpoint.md?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities.-- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft threat and vulnerability management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management?view=o365-worldwide) solution.
+- **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) solution.
- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines.

#### Check networking requirements
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
The following tables show the features that are supported for virtual machines a
### Windows machines
-| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
-|--|:-:|:-:|:-:|:-:|
-| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔<br>(on supported versions) | ✔<br>(on supported versions) | ✔ | Yes |
-| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | ✔ | Yes |
-| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | ✔ | Yes |
-| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
-| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
-| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
-| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
-| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
-| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
-| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
-| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
-| [Docker host hardening](./harden-docker-hosts.md) | - | - | - | Yes |
-| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
-| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
-| Third-party vulnerability assessment | ✔ | - | ✔ | No |
-| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
-
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|:--:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔<br>(on supported versions) | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | - | - | Yes |
+| Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No |
+| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
### Linux machines
-| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for Servers required** |
-|--|:-:|:-:|:-:|:-:|
-| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | - | ✔ | Yes |
-| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔<br>(on supported versions) | ✔<br>(on supported versions) | ✔ | Yes |
-| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | - | Yes |
-| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
-| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
-| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
-| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
-| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
-| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
-| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
-| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
-| [Docker host hardening](./harden-docker-hosts.md) | ✔ | ✔ | ✔ | Yes |
-| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
-| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | - | No |
-| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
-| Third-party vulnerability assessment | ✔ | - | ✔ | No |
-| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|:--:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔<br>(on supported versions) | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | ✔ | ✔ | Yes |
+| Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | No |
+| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No |
+| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
### Multicloud machines
The following tables show the features that are supported for virtual machines a
| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
| Third-party vulnerability assessment | - | - |
| [Network security assessment](protect-network-resources.md) | - | - |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | Γ£ö | - |
> [!TIP]
> To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Malware engine alerts describe detected malicious network activity.
| Title | Description | Severity | Category | MITRE ATT&CK <br> tactics and techniques |
|--|--|--|--|--|
-| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
-| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media |
| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |
-| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol |
| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service |
| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |
-| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy |
| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information |
| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control |
| **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer |
| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |
-| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
-| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: NetworkExternal Remote Services |
-| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |
+| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: External Remote Services |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit |
| **Suspicious Traffic Detected [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing |
| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information |
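Several of the alerts in the table above fire on rate thresholds (for example, 3000 connection attempts or 25 name queries in one minute). A minimal sliding-window counter illustrates the general technique; this is a sketch of the detection idea only, not Defender for IoT's actual implementation.

```python
from collections import deque

class RateThreshold:
    """Sliding-window event counter: flags when more than `limit`
    events are seen within `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # timestamps of recent events

    def record(self, timestamp: float) -> bool:
        """Record one event; return True if the threshold is now exceeded."""
        self.events.append(timestamp)
        # Discard events that have fallen out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.limit

# Example: the "Name Queries" threshold of 25 queries in 1 minute.
detector = RateThreshold(limit=25, window_seconds=60)
alerts = [detector.record(t * 0.5) for t in range(30)]  # 30 queries in 15 s
print(alerts.count(True))  # 5 -- queries 26 through 30 exceed the limit
```

The deque keeps memory bounded by the window size, which matters when monitoring high-rate OT traffic.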
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
This article describes the Neousys Nuvo-5006LP appliance for OT sensors.

> [!NOTE]
-> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+> The Neousys Nuvo-5006LP is a legacy appliance that supports Defender for IoT sensor software up to version 22.2.9.
+> We recommend replacing these appliances with newer certified models, such as the [YS-FIT2](ys-techsystems-ys-fit2.md) or the [HPE DL20 (NHP 2LFF)](hpe-proliant-dl20-plus-smb.md).
| Appliance characteristic | Details |
|--|--|
|**Hardware profile** | L100 |
-|**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 |
+|**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
-|**Status** | Not available pre-configured|
+|**Status** | Supported up to version 22.2.9|
:::image type="content" source="../media/ot-system-requirements/cyberx.png" alt-text="Photo of a Neousys Nuvo-5006LP." border="false":::
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
After activating an on-premises management console, you'll need to apply new act
|Location |Activation process |
|--|--|
|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |
-|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. |
-| **Locally-managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+|**Cloud-connected and locally-managed sensors** | Cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. |
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
For information about uploading a new certificate, supported certificate paramet
### Activation expirations
-After activating a sensor, you'll need to apply new activation files as follows:
+After activating a sensor, cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active.
-|Location |Activation process |
-|||
-|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. |
-| **Locally managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. For more information, see [Update legacy OT sensor software](update-ot-software.md#update-legacy-ot-sensor-software).
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
-
### Activate an expired license (versions under 10.0)

For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
For more information, see:
- [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor)
-- [Manage sensor activation files](how-to-manage-individual-sensors.md#manage-sensor-activation-files)
+- [Manage sensor activation files](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
Title: Analyze programming details and changes
-description: Enhance forensics by displaying programming events carried out on your network devices and analyzing code changes. This information helps you discover suspicious programming activity.
Previously updated : 01/30/2022
+ Title: Analyze programming details and changes on an OT sensor - Microsoft Defender for IoT
+description: Discover suspicious programming activity by investigating programming events occurring on your network devices.
Last updated : 02/28/2023
-
# Analyze programming details and changes
-Enhance forensics by displaying programming events carried out on your network devices and analyzing code changes. This information helps you discover suspicious programming activity, for example:
-
- - Human error: An engineer is programming the wrong device.
-
- - Corrupted programming automation: Programming is erroneously carried out because of automation failure.
-
- - Hacked systems: Unauthorized users logged into a programming device.
-
-You can display a programmed device and scroll through various programming changes carried out on it by other devices.
-
-View code that was added, changed, removed, or reloaded by the programming device. Search for programming changes based on file types, dates, or times of interest.
-
-## When to review programming activity
-
-You may need to review programming activity:
+Enhance forensics by displaying programming events occurring on your network devices and analyzing any code changes using the OT sensor. Watching for programming events helps you investigate suspicious programming activity, such as:
- - After viewing an alert regarding unauthorized programming
+ - **Human error**: An engineer programming the wrong device.
+ - **Corrupted programming automation**: Programming errors due to automation failures.
+ - **Hacked systems**: Unauthorized users logged into a programming device.
- - After a planned update to controllers
+Use the **Programming Timeline** tab on your OT network sensor to review programming data, such as when investigating an alert about unauthorized programming, after a planned controller update, or when a process or machine isn't working correctly and you want to understand who made the last update and when.
- - When a process or machine isn't working correctly (to see who carried out the last update and when)
Programming activity shown on OT sensors includes both *authorized* and *unauthorized* events. Authorized events are performed by devices that are either learned or manually defined as programming devices. Unauthorized events are performed by devices that haven't been learned or manually defined as programming devices.
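The authorized/unauthorized distinction described above boils down to a set-membership check: an event is authorized only if its source device is known (learned or manually defined) as a programming device. A minimal sketch, with hypothetical device names:

```python
# Hypothetical set of devices learned or manually defined as
# programming devices; names are illustrative only.
programming_devices = {"engineering-ws-01", "engineering-ws-02"}

def classify(event_source: str) -> str:
    """Classify a programming event by its source device."""
    return "authorized" if event_source in programming_devices else "unauthorized"

print(classify("engineering-ws-01"))  # authorized
print(classify("plc-14"))             # unauthorized
```

In practice, the sensor builds the equivalent of `programming_devices` from learned traffic plus any manual definitions made by an operator.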
- :::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Screenshot of a Programming Change Log":::
-
-Other options let you:
-
- - Mark events of interest with a star.
-
- - Download a *.txt file with the current code.
-
-## About authorized versus unauthorized programming events
+> [!NOTE]
> Programming data is available for devices using text-based programming protocols, such as DeltaV.
-Unauthorized programming events are carried out by devices that haven't been learned or manually defined as programming devices. Authorized programming events are carried out by devices that were resolved or manually defined as programming devices.
+## Prerequisites
-The Programming Analysis window displays both authorized and unauthorized programming events.
+To perform the procedures in this article, make sure that you have:
-## Accessing programming details and changes
+- An OT sensor installed and configured, with text-based programming protocol traffic.
-Access the Programming Analysis window from the:
+- Access to the sensor as a **Viewer**, **Security analyst**, or **Admin** user.
-- [Event Timeline](how-to-track-sensor-activity.md)
+## Access programming data
-- [Unauthorized Programming Alerts](#unauthorized-programming-alerts)
+The **Programming Timeline** tab can be accessed from the **Device map**, **Device inventory**, and **Event timeline** pages in the sensor console.
-### Event timeline
+### Access programming data from the device map
-Use the event timeline to display a timeline of events in which programming changes were detected.
+1. Sign in to the OT sensor console and select **Device map**.
+1. In the **Groups** area to the left of the map, select **Filter** > **OT Protocols**, and then select a text-based programming protocol, such as DeltaV.
-### Unauthorized programming alerts
+1. In the map, right-click on the device you want to analyze, and select **Programming timeline**.
-Alerts are triggered when unauthorized programming devices carry out programming activities.
+ :::image type="content" source="media/analyze-programming/select-programming-timeline-from-device-map.png" alt-text="Screenshot of the programming timeline option from the device map." lightbox="media/analyze-programming/select-programming-timeline-from-device-map.png":::
+ The device details page opens, with the **Programming Timeline** tab displayed.
+
+### Access programming data from the device inventory
-> [!NOTE]
-> You can also view basic programming information in the Device Properties window and Device Inventory.
+1. Sign in to the OT sensor console and select **Device inventory**.
-## Working in the programming timeline window
+1. Filter the device inventory to show devices using text-based programming protocols, such as DeltaV.
-This section describes how to view programming files and compare versions. Search for specific files sent to a programmed device. Search for files based on:
+1. Select the device you want to analyze, and then select **View full details** to open the device details page.
- - Date
+1. On the device details page, select the **Programming Timeline** tab.
- - File type
+ For example:
- :::image type="content" source="media/how-to-work-with-maps/timeline-view.png" alt-text="Screenshot of a programming timeline window.":::
+ :::image type="content" source="media/analyze-programming/programming-timeline-window-device-inventory.png" alt-text="Screenshot of programming timeline tab on device details page." lightbox="media/analyze-programming/programming-timeline-window-device-inventory.png":::
-|Programming timeline type | Description |
-|--|--|
-| Programmed Device | Provides details about the device that was programmed, including the hostname and file. |
-| Recent Events | Displays the 50 most recent events detected by the sensor. <br />To highlight an event, hover over it and select the star. :::image type="icon" source="media/how-to-work-with-maps/star.png" border="false"::: <br /> The last 50 events can be viewed. |
-| Files | Displays the files detected for the chosen date and the file size on the programmed device. <br /> By default, the maximum number of files available for display per device is 300. <br /> By default, the maximum file size for each file is 15 MB. |
-| File status :::image type="icon" source="media/how-to-work-with-maps/status-v2.png" border="false"::: | File labels indicate the status of the file on the device, including: <br /> **Added**: the file was added to the endpoint on the date or time selected. <br /> **Updated**: The file was updated on the date or time selected. <br /> **Deleted**: This file was removed. <br /> **No label**: The file wasn't changed. |
-| Programming Device | The device that made the programming change. Multiple devices may have carried out programming changes on one programmed device. The hostname, date, or time of change and logged in user are displayed. |
-| :::image type="icon" source="media/how-to-work-with-maps/current.png" border="false"::: | Displays the current file installed on the programmed device. |
-| :::image type="icon" source="media/how-to-work-with-maps/download-text.png" border="false"::: | Download a text file of the code displayed. |
-| :::image type="icon" source="media/how-to-work-with-maps/compare.png" border="false"::: | Compare the current file with the file detected on a selected date. |
+### Access programming data from the event timeline
-### Choose a file to review
+Use the event timeline to display a timeline of events in which programming changes were detected.
-This section describes how to choose a file to review.
+1. Sign in to the OT sensor console and select **Event timeline**.
-**To choose a file to review:**
+1. Filter the event timeline for devices using text-based programming protocols, such as **DeltaV**.
-1. Select an event from the **Recent Events** pane
+1. Select the event you want to analyze to open the event details pane on the right, and then select **Programming timeline**.
-2. Select a file from the File pane. The file appears in the Current pane.
+## View programming details
- :::image type="content" source="media/how-to-work-with-maps/choose-file.png" alt-text="Screenshot of selecting the file you want to work with.":::
+The **Programming Timeline** tab shows details about each device that was programmed. Select an event and a file to view full programming details on the right. In the **Programming Timeline** tab:
-### Compare files
+- The **Recent Events** area lists the 50 most recent events detected by the OT sensor. Hover over an event and select the star to mark it as an **Important** event.
-This section describes how to compare programming files.
+- The **Files** area lists programming files detected for the selected device. The OT sensor can display a maximum of 300 files per device, where each file has a maximum size of 15 MB. The **Files** area lists each file's name and size, and one of the following statuses to indicate the programming event that occurred:
-**To compare:**
+ - **Added**: The programming file was added to the endpoint
+ - **Updated**: The programming file was updated on the endpoint
+ - **Deleted**: The programming file was removed from the endpoint
+ - **Unknown**: No changes were detected for the programming file
-1. Select an event from the Recent Events pane.
+- When a programming file is opened on the right, the device that was programmed is listed as the *programmed asset*. Multiple devices may have made programming changes on the device. Devices that made changes are listed as the *programming assets*, and details include the hostname, when the change was made, and the user who was signed in to the device at the time.
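The status logic described above can be sketched offline. The following is a minimal, hypothetical Python sketch (the file names and content hashes are illustrative, not real sensor data) that derives the same **Added**/**Updated**/**Deleted**/**Unknown** statuses by comparing two snapshots of a device's programming files:

```python
# Hypothetical sketch: deriving the file statuses shown in the Files area
# (Added / Updated / Deleted / Unknown) by comparing two snapshots of the
# programming files detected on a device. Names and hashes are illustrative.

def classify_files(previous: dict, current: dict) -> dict:
    """Map each file name to a status, given {name: content-hash} snapshots."""
    statuses = {}
    for name in current:
        if name not in previous:
            statuses[name] = "Added"        # file appeared on the endpoint
        elif previous[name] != current[name]:
            statuses[name] = "Updated"      # file content changed
        else:
            statuses[name] = "Unknown"      # no change detected
    for name in previous:
        if name not in current:
            statuses[name] = "Deleted"      # file removed from the endpoint
    return statuses

before = {"pump.ctl": "a1", "mixer.ctl": "b2", "valve.ctl": "c3"}
after = {"pump.ctl": "a1", "mixer.ctl": "b9", "heater.ctl": "d4"}
print(classify_files(before, after))
```

Running the sketch reports `mixer.ctl` as Updated, `heater.ctl` as Added, `valve.ctl` as Deleted, and the unchanged `pump.ctl` as Unknown.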
-2. Select a file from the File pane. The file appears in the Current pane. You can compare this file to other files.
+> [!TIP]
+> Select the :::image type="icon" source="media/analyze-programming/download-icon.png" border="false"::: download button to download a copy of the currently displayed programming file.
-3. Select the compare indicator.
+For example:
- :::image type="content" source="media/how-to-work-with-maps/compare.png" alt-text="Screenshot of the compare indicator.":::
- The window displays all dates the selected file was detected on the programmed device. The file may have been updated on the programmed device by multiple programming devices.
+## Compare programming detail files
- The number of differences detected appears in the upper right-hand corner of the window. You may need to scroll down to view differences.
+This procedure describes how to compare multiple programming detail files to identify discrepancies or investigate them for suspicious activity.
- :::image type="content" source="media/how-to-work-with-maps/scroll.png" alt-text="Screenshot of scrolling down to your selection.":::
+**To compare files:**
- The number is calculated by adjacent lines of changed text. For example, if eight consecutive lines of code were changed (deleted, updated, or added) this will be calculated as one difference.
+1. Open a programming file from an alert or from the **Device map** or **Device inventory** pages.
- :::image type="content" source="media/how-to-work-with-maps/program-timeline.png" alt-text="Screenshot of the programming timeline view." lightbox="media/how-to-work-with-maps/program-timeline.png":::
+1. With your first file open, select the compare :::image type="icon" source="media/analyze-programming/compare-icon.png" border="false"::: button.
-4. Select a date. The file detected on the selected date appears in the window.
+1. In the **Compare** pane, choose a file for comparison by selecting the scale icon under **Action** next to the file. For example:
-5. The file selected from the Recent Events/Files pane always appears on the right.
+ :::image type="content" source="media/analyze-programming/compare-file-pane.png" alt-text="Screenshot of compare files pane." lightbox="media/analyze-programming/compare-file-pane.png":::
-## Device programming information: Other locations
+ The selected file opens in a new pane for side-by-side comparison with the first file. The current file installed on the programmed device is labeled *Current* at the top of the file.
-In addition to reviewing details in the Programming Timeline, you can access programming information in the Device Properties window and the Device Inventory.
+ :::image type="content" source="media/analyze-programming/compare-files-side-by-side.png" alt-text="Screenshot of programming file comparison side by side." lightbox="media/analyze-programming/compare-files-side-by-side.png":::
-| Device type | Description |
-|--|--|
-| Device properties | The device properties window provides information on the last programming event detected on the device. |
-| The device inventory | The device inventory indicates if the device is a programming device. <br> :::image type="content" source="media/how-to-work-with-maps/inventory-v2.png" alt-text="Screenshot of the device inventory page."::: |
+ Scroll through the files to see the programming details and any differences between the files. Differences between the two files are highlighted in green and red.
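To reason about what the side-by-side view highlights, you can also diff two downloaded programming files offline. This is a hedged sketch using Python's standard `difflib`; the file contents are invented, and counting each contiguous run of changed lines as a single difference is an assumption about how differences are tallied:

```python
import difflib

# Hypothetical sketch: comparing two versions of a downloaded programming
# file, counting each contiguous run of changed lines as one difference.
# File contents are illustrative, not real controller code.

def count_differences(current: str, previous: str) -> int:
    """Count runs of changed (added/removed/replaced) lines between versions."""
    matcher = difflib.SequenceMatcher(
        None, previous.splitlines(), current.splitlines()
    )
    # Every non-"equal" opcode is one contiguous run of changed lines.
    return sum(1 for tag, *_ in matcher.get_opcodes() if tag != "equal")

previous = "OPEN VALVE_1\nSET TEMP 40\nSET TEMP 41\nCLOSE VALVE_1\n"
current = "OPEN VALVE_1\nSET TEMP 90\nSET TEMP 91\nCLOSE VALVE_1\n"
print(count_differences(current, previous))
```

Here the two adjacent `SET TEMP` changes form one contiguous run, so they count as a single difference.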
## Next steps
-For more information, see [Import device information to a sensor](how-to-import-device-information.md).
+For more information, see [Import device information to a sensor](how-to-import-device-information.md).
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defe
For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-## Manage sensor activation files
+## Upload a new activation file
-Your sensor was onboarded with Microsoft Defender for IoT from the Azure portal. Each sensor was onboarded as either a locally connected sensor or a cloud-connected sensor.
+Each OT sensor is onboarded as a cloud-connected or locally-managed OT sensor and activated using a unique activation file. For cloud-connected sensors, the activation file is used to ensure the connection between the sensor and Azure.
-A unique activation file is uploaded to each sensor that you deploy. For more information about when and how to use a new file, see [Upload new activation files](#upload-new-activation-files). If you can't upload the file, see [Troubleshoot activation file upload](#troubleshoot-activation-file-upload).
-
-### About activation files for locally connected sensors
-
-Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears in the System Messages window in the top-right corner of the console. The warning remains until after you've updated the activation file.
-
-You can continue to work with Defender for IoT features even if the activation file has expired.
-
-### About activation files for cloud-connected sensors
-
-Sensors that are cloud connected aren't limited by time periods for their activation file. The activation file for cloud-connected sensors is used to ensure the connection to Defender for IoT.
-
-### Upload new activation files
-
-You might need to upload a new activation file for an onboarded sensor when:
-
-- An activation file expires on a locally connected sensor.
-
-- You want to work in a different sensor management mode.
-
-- For sensors connected via an IoT Hub ([legacy](architecture-connections.md)), you want to assign a new Defender for IoT hub to a cloud-connected sensor.
+You'll need to upload a new activation file to your sensor if you want to switch sensor management modes, such as moving from a locally-managed sensor to a cloud-connected sensor. Uploading a new activation file to your sensor involves deleting your sensor from the Azure portal and onboarding it again.
**To add a new activation file:**
-1. Go to the Azure portal for Defender for IoT.
-1. Use the search bar to find the sensor you need.
+1. In [Defender for IoT on the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) > **Sites and sensors**, locate and [delete](how-to-manage-sensors-on-the-cloud.md#sensor-maintenance-and-troubleshooting) your OT sensor.
-1. Select the three dots (...) on the row and select **Delete sensor**.
+1. Select **Onboard OT sensor > OT** to onboard the sensor again from scratch. For more information, see [Onboard OT sensors](onboard-sensors.md).
-1. Onboard the sensor again by selecting **Getting Started**> **Set up OT/ICS Security** > **Register this sensor with Microsoft Defender for IoT**.
+1. On the **Sites and sensors** page, locate the sensor you just added.
-1. Go to the **Sites and sensors** page.
-
-1. Use the search bar to find the sensor you just added, and select it.
-1. Select the three dots (...) on the row and select **Download activation file**.
+1. Select the three dots (...) on the sensor's row and select **Download activation file**. Save the file in a location accessible to your sensor.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-1. Save the file.
-
-1. Sign in to the Defender for IoT sensor console.
-
-1. Select **System Settings** > **Sensor management** > **Subscription & Activation Mode**.
+1. Sign in to the Defender for IoT sensor console and select **System Settings** > **Sensor management** > **Subscription & Activation Mode**.
-1. Select **Upload** and select the file that you saved.
+1. Select **Upload** and browse to the file that you downloaded from the Azure portal.
-1. Select **Activate**.
+1. Select **Activate** to upload your new activation file.
### Troubleshoot activation file upload

You'll receive an error message if the activation file couldn't be uploaded. The following events might have occurred:

-- **For locally connected sensors**: The activation file isn't valid. If the file isn't valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file.
-
-- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.
+- **The sensor can't connect to the internet:** Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.
For OT sensors version 22.x, download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).

-- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
+- **The activation file is valid but Defender for IoT rejected it:** If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
## Manage certificates
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
In such cases, do the following steps:
1. [Delete your existing sensor](#sensor-management-options-from-the-azure-portal).
1. [Onboard the sensor again](onboard-sensors.md#onboard-an-ot-sensor), registering it with any new settings.
-1. [Upload your new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files).
+1. [Upload your new activation file](how-to-manage-individual-sensors.md#upload-a-new-activation-file).
### Reactivate an OT sensor for upgrades to version 22.x from a legacy version
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Billing changes will take effect one hour after cancellation of the previous sub
- For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone will be merged into one device.
-1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files) for your sensors under the new subscription.
+1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-a-new-activation-file) for your sensors under the new subscription.
1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
The CSV file is generated, and you're prompted to save it locally.
> [View and manage alerts on your OT sensor](how-to-view-alerts.md)

> [!div class="nextstepaction"]
-> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
+> [Accelerate on-premises OT alert workflows](how-to-accelerate-alert-incident-response.md)
> [!div class="nextstepaction"]
> [Forward alert information](how-to-forward-alert-information-to-partners.md)
defender-for-iot Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-portal.md
Title: Manage Azure users for Microsoft Defender for IoT
description: Learn how to manage user permissions in the Azure portal for Microsoft Defender for IoT services.
Last updated 09/04/2022
+
+ - zerotrust-services
# Manage users on the Azure portal
defender-for-iot Monitor Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/monitor-zero-trust.md
# Tutorial: Monitor your OT networks with Zero Trust principles
-[Zero Trust](/security/zero-trust/zero-trust-overview) is a security strategy for designing and implementing the following sets of security principles:
-
-|Verify explicitly |Use least privilege access |Assume breach |
-||||
-|Always authenticate and authorize based on all available data points. | Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection. | Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.
-
-<!--replace with include file-->
Defender for IoT uses site and zone definitions across your OT network to ensure that you're maintaining network hygiene and keeping each subsystem separate and secure.
In this tutorial, you learn how to:
> * [Look for alerts on unknown devices](#look-for-alerts-on-unknown-devices)
> * [Look for vulnerable systems](#look-for-vulnerable-systems)
> * [Look for alerts on cross-subnet traffic](#look-for-alerts-on-cross-subnet-traffic)
-> * [Simulate traffic to test your network](#simulate-traffic-to-test-your-network)
+> * [Simulate malicious traffic to test your network](#simulate-malicious-traffic-to-test-your-network)
+
+> [!IMPORTANT]
+> The **Recommendations** page in the Azure portal is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites
You've separated your network into sites and zones to keep each subsystem separ
## Look for alerts on unknown devices
-Do you know what devices are on your network, and who they're communicating with? Defender for IoT triggers alerts for any new, unknown device detected on your network so that you can identify it and ensure both the device security and your network security.
+Do you know what devices are on your network, and who they're communicating with? Defender for IoT triggers alerts for any new, unknown device detected in OT subnets so that you can identify it and ensure both the device security and your network security.
Unknown devices might include *transient* devices, which move between networks. For example, transient devices might include a technician's laptop, which they connect to the network when maintaining servers, or a visitor's smartphone, which connects to a guest network at your office.
Specific sites or zones that generate many alerts for unknown devices are at ris
- Learn the alert if the device is legitimate so that the alert isn't triggered again for the same device. On the alert details page, select **Learn**.
- Block the device if it's not legitimate.
+## Look for unauthorized devices
+
+We recommend that you proactively watch for new, unauthorized devices detected on your network. Regularly checking for unauthorized devices can help prevent threats of rogue or potentially malicious devices that might infiltrate your network.
+
+For example, use the **Review unauthorized devices** recommendation to identify all unauthorized devices.
+
+**To review unauthorized devices**:
+
+1. In Defender for IoT on the Azure portal, select **Recommendations (Preview)** and search for the **Review unauthorized devices** recommendation.
+1. View the devices listed in the **Unhealthy devices** tab. Each of these devices is unauthorized and might be a risk to your network.
+
+Follow the remediation steps, such as marking the device as authorized if it's known to you, or disconnecting the device from your network if it remains unknown after investigation.
+
+For more information, see [Enhance security posture with security recommendations](recommendations.md).
+
+> [!TIP]
+> You can also review unauthorized devices by [filtering the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory) by the **Authorization** field, showing only devices marked as **Unauthorized**.
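As a hypothetical offline complement to the inventory filter, the sketch below scans an exported device inventory CSV for devices marked as unauthorized. The column names (`Name`, `Authorization`) and values are assumptions for illustration; check the headers of your own export:

```python
import csv
import io

# Hypothetical sketch: listing unauthorized devices from an exported device
# inventory CSV. Column names and sample rows are assumptions, not the
# guaranteed export schema.

def unauthorized_devices(csv_text: str) -> list:
    """Return names of devices whose Authorization column is 'Unauthorized'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row["Name"]
        for row in reader
        if row.get("Authorization") == "Unauthorized"
    ]

export = """Name,IP,Authorization
PLC-01,10.0.0.4,Authorized
ENG-LAPTOP,10.0.0.17,Unauthorized
HMI-02,10.0.0.9,Authorized
"""
print(unauthorized_devices(export))
```

In this invented export, only `ENG-LAPTOP` is flagged for investigation.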
+
+
## Look for vulnerable systems

If you have devices on your network with outdated software or firmware, they might be vulnerable to attack. Devices that are end-of-life and have no more security updates are especially vulnerable.
If you have devices on your network with outdated software or firmware, they mig
1. In the **SiteName** selector at the top of the page, select one or more sites to filter the data by site. Filtering data by site can help you identify concerns at specific sites, which may require site-wide updates or device replacements.
-## Simulate traffic to test your network
+## Simulate malicious traffic to test your network
To verify the security posture of a specific device, run an **Attack vector** report to simulate traffic to that device. Use the simulated traffic to locate and mitigate vulnerabilities before they're exploited.
When monitoring for Zero Trust, the following list is an example of important De
:::row:::
   :::column:::
- - Unauthorized device connected to the network
+ - Unauthorized device connected to the network, especially any malicious IP/Domain name requests
- Known malware detected
- Unauthorized connection to the internet
- Unauthorized remote access
defender-for-iot Onboard Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md
Title: Onboard sensors to Defender for IoT in the Azure portal
description: Learn how to onboard sensors to Defender for IoT in the Azure portal.
Last updated 06/02/2022
+
+ - zerotrust-services
# Onboard OT sensors to Defender for IoT
defender-for-iot Sites And Zones On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/sites-and-zones-on-premises.md
Title: Create OT sites and zones on an on-premises management console - Microsof
description: Learn how to create OT networking sites and zones on an on-premises management console to support Zero Trust principles while monitoring OT networks.
Last updated 02/15/2023
+
+ - zerotrust-services
# Create OT sites and zones on an on-premises management console
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
Title: Azure user roles and permissions for Microsoft Defender for IoT
description: Learn about the Azure user roles and permissions available for OT and Enterprise IoT monitoring with Microsoft Defender for IoT on the Azure portal.
Last updated 09/19/2022
+
+ - zerotrust-services
# Azure user roles and permissions for Defender for IoT
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
Learn about Azure and DevTest Labs by using the following resources:
- Alternatively, you can use a [Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/visual-studio-subscriptions) for the pilot deployment, and take advantage of free Azure credits.
- You can also create and use a [free Azure account](https://azure.microsoft.com/free/search/?&OCID=AID719825_SEM_g4lyBqgB&lnkd=Bing_Azure_Brand&msclkid=ecc4275a31b61375749e7a5322c20de8&dclid=CMGW5-m78-ICFaLt4QodmUwGtQ) for the pilot.
+
+- To use Windows client OS images (Windows 7 or a later version) for your development or testing in Azure, take one of the following steps:
+ - [Buy an MSDN subscription](https://www.visualstudio.com/products/how-to-buy-vs).
+ - If you have an Enterprise Agreement, create an Azure subscription with the [Enterprise Dev/Test offer](https://azure.microsoft.com/offers/ms-azr-0148p).
+
+ For more information about the Azure credits for each MSDN offering, see [Monthly Azure credit for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/).
+
### Enroll all users in Azure AD
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
The DevTest Labs User role can take the following actions in DevTest Labs:
## Add Owners, Contributors, or DevTest Labs Users
-A lab owner can add members to lab roles by using the Azure portal or an Azure PowerShell script. The user to add can be an external user with a valid [Microsoft account (MSA)](./devtest-lab-faq.yml).
+A lab owner can add members to lab roles by using the Azure portal or an Azure PowerShell script. The user to add can be an external user with a valid [Microsoft account (MSA)](/windows-server/identity/ad-ds/manage/understand-microsoft-accounts).
Azure permissions propagate from parent scope to child scope. Owners of an Azure subscription that contains labs are automatically owners of the subscription's DevTest Labs service, labs, and lab VMs and resources. Subscription owners can add Owners, Contributors, and DevTest Labs Users to labs in the subscription.
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md
Or, if you chose **Make this machine claimable** during VM creation, select **Cl
:::image type="content" source="./media/devtest-lab-add-vm/portal-lab-vm-creation-status.png" alt-text="Lab VM creation status page.":::
+When you create a VM in DevTest Labs, you're given permission to access that VM. You can view the VM both on the labs page and on the **Virtual Machines** page. Users assigned to the **DevTest Labs Owner** role can see all VMs that were created in the lab on the lab's **All Virtual Machines** page. However, users who have the **DevTest Labs User** role are not automatically granted read access to VM resources that other users have created. So, those VMs are not displayed on the **Virtual Machines** page.
+
+## Move existing Azure VMs into a DevTest Labs lab
+To copy your existing VMs to DevTest Labs:
+
+ 1. Copy the VHD file of your existing VM by using a [Windows PowerShell script](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/Scripts/CopyVirtualMachines/CopyAzVHDFromVMToLab.ps1).
+ 2. Create the [custom image](devtest-lab-create-template.md) inside your DevTest Labs lab.
+ 3. Create a VM in the lab from your custom image.
+
+
+<a name="add-artifacts-after-installation"></a>

## Next steps
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Last updated 03/29/2022
# Attach or detach a data disk for a lab virtual machine in Azure DevTest Labs
-This article explains how to attach and detach a lab virtual machine (VM) data disk in Azure DevTest Labs. You can create, attach, detach, and reattach [data disks](../virtual-machines/managed-disks-overview.md) for lab VMs that you own. This functionality is useful for managing storage or software separately from individual VMs.
+This article explains how to attach and detach a lab virtual machine (VM) data disk in Azure DevTest Labs. You can create, attach, detach, and reattach multiple [data disks](../virtual-machines/managed-disks-overview.md) for lab VMs that you own. This functionality is useful for managing storage or software separately from individual VMs.
## Prerequisites
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Last updated 08/26/2021
Both [custom images](devtest-lab-create-template.md) and [formulas](devtest-lab-manage-formulas.md) can be used as bases for [creating new lab virtual machines (VMs)](devtest-lab-add-vm.md). The key distinction between custom images and formulas is that a custom image is simply an image based on a virtual hard drive (VHD). A formula is an image based on a VHD plus preconfigured settings. Preconfigured settings can include VM size, virtual network, subnet, and artifacts. These preconfigured settings are set up with default values that you can override at the time of VM creation.
-In this article, you'll learn the pros and cons of using custom images versus using formulas. You can also read [How to create a custom image from a VM](devtest-lab-create-custom-image-from-vm-using-portal.md) and the [DevTest Labs FAQ](devtest-lab-faq.yml).
+In this article, you'll learn the pros and cons of using custom images versus using formulas. You can also read [How to create a custom image from a VM](devtest-lab-create-custom-image-from-vm-using-portal.md) and [Compare custom images and formulas in DevTest Labs](devtest-lab-comparing-vm-base-image-types.md).
## Custom image benefits Custom images provide a static, immutable way to create VMs from the environment you want.
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
To delete a VM from a lab:
1. To check deletion status, select the **Notifications** icon on the Azure menu bar. +
+## Automate the process of deleting all the VMs in a lab
+
+As a lab owner, you can delete VMs from your lab in the Azure portal. You also can delete all the VMs in your lab by using a PowerShell script. In the following example, under the **values to change** comment, modify the parameter values. You can retrieve the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab pane in the Azure portal.
+
+```powershell
+ # Delete all the VMs in a lab.
+
+ # Values to change:
+ $subscriptionId = "<Enter Azure subscription ID here>"
+ $labResourceGroup = "<Enter lab's resource group here>"
+ $labName = "<Enter lab name here>"
+
+ # Sign in to your Azure account.
+ Connect-AzAccount
+
+ # Select the Azure subscription that has the lab. This step is optional
+ # if you have only one subscription.
+ Select-AzSubscription -SubscriptionId $subscriptionId
+
+ # Get the lab that has the VMs that you want to delete.
 + $lab = Get-AzResource -ResourceId ('/subscriptions/' + $subscriptionId + '/resourceGroups/' + $labResourceGroup + '/providers/Microsoft.DevTestLab/labs/' + $labName)
+
+ # Get the VMs from that lab.
+ $labVMs = Get-AzResource | Where-Object {
+ $_.ResourceType -eq 'microsoft.devtestlab/labs/virtualmachines' -and
+ $_.Name -like "$($lab.Name)/*"}
+
+ # Delete the VMs.
+ foreach($labVM in $labVMs)
+ {
+ Remove-AzResource -ResourceId $labVM.ResourceId -Force
+ }
+```
## Delete a lab When you delete a lab from a resource group, DevTest Labs automatically deletes:
To delete a lab:
![Screenshot of the Delete button on the lab Overview page.](media/devtest-lab-delete-lab-vm/delete-button.png) 1. On the **Are you sure you want to delete it?** page, under **Type the lab name**, type the lab name, and then select **Delete**.</br>
- The deletion of the lab and all it's resources is permanent, and cannot be undone.
+ The deletion of the lab and all its resources is permanent, and cannot be undone.
![Screenshot of the lab deletion confirmation page.](media/devtest-lab-delete-lab-vm/confirm-delete.png)
devtest-labs Devtest Lab Grant User Permissions To Specific Lab Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-grant-user-permissions-to-specific-lab-policies.md
Once you have the **ObjectId** for the user and a custom role name, you can assi
PS C:\>New-AzRoleAssignment -ObjectId 05DEFF7B-0AC3-4ABF-B74D-6A72CD5BF3F3 -RoleDefinitionName "Policy Contributor" -Scope /subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.DevTestLab/labs/<LabName>/policySets/default/policies/AllowedVmSizesInLab ```
-In the previous example, the **AllowedVmSizesInLab** policy is used. You can use any of the following polices:
+In the previous example, the **AllowedVmSizesInLab** policy is used. You can use any of the following policies:
* MaxVmsAllowedPerUser
* MaxVmsAllowedPerLab
* AllowedVmSizesInLab
* LabVmsShutdown
+## Create a role to allow users to do a specific task
+
+The following example script creates the role **DevTest Labs Advanced User**, which has permission to start and stop all VMs in the lab:
+
+```powershell
+ # Start from the built-in DevTest Labs User role definition.
+ $policyRoleDef = Get-AzRoleDefinition "DevTest Labs User"
+ $policyRoleDef.Actions.Remove('Microsoft.DevTestLab/Environments/*')
+
+ # Clear the ID and rename the definition so it's created as a new custom role.
+ $policyRoleDef.Id = $null
+ $policyRoleDef.Name = "DevTest Labs Advanced User"
+ $policyRoleDef.IsCustom = $true
+
+ # Scope the custom role to your subscription.
+ $policyRoleDef.AssignableScopes.Clear()
+ $policyRoleDef.AssignableScopes.Add("/subscriptions/<subscription Id>")
+
+ # Allow starting and stopping lab VMs.
+ $policyRoleDef.Actions.Add("Microsoft.DevTestLab/labs/virtualMachines/Start/action")
+ $policyRoleDef.Actions.Add("Microsoft.DevTestLab/labs/virtualMachines/Stop/action")
+ $policyRoleDef = New-AzRoleDefinition -Role $policyRoleDef
+```
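+After the custom role definition exists, you can assign it with `New-AzRoleAssignment`, as shown earlier in this article. The object ID and scope values below are placeholders:
+
+```powershell
+ # Assign the new custom role to a user at the lab scope.
+ New-AzRoleAssignment -ObjectId '<user-object-id>' `
+     -RoleDefinitionName 'DevTest Labs Advanced User' `
+     -Scope '/subscriptions/<subscription-id>/resourceGroups/<lab-resource-group>/providers/Microsoft.DevTestLab/labs/<lab-name>'
+```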
+ [!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)] ## Next steps
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
The following diagram shows how lab owners can configure policies and provide re
To create a lab in the Azure portal, see [Create a lab in Azure DevTest Labs](devtest-lab-create-lab.md).
-You can also automate lab creation, including custom settings, with a reusable *Azure Resource Manager (ARM) template*. For more information, see [Create a lab by using a Resource Manager template](./devtest-lab-faq.yml#how-do-i-create-a-lab-from-a-resource-manager-template-).
+You can also automate lab creation, including custom settings, with a reusable *Azure Resource Manager (ARM) template*. For more information, see [Azure Resource Manager (ARM) templates in Azure DevTest Labs](devtest-lab-use-arm-and-powershell-for-lab-resources.md).
### Add a virtual network to a lab
Lab owners can allow users more control by giving them Contributor rights to the
DevTest Labs is well-suited for transient activities like workshops, hands-on labs, training, or hackathons. In these scenarios: - Training leaders or lab owners can use custom templates to create identical, isolated VMs or environments.-- Trainees can [access the lab by using a URL](./devtest-lab-faq.yml#how-do-i-share-a-direct-link-to-my-lab-).
+- Trainees can [access the lab by using a URL](tutorial-create-custom-lab.md#share-a-link-to-the-lab).
+- Trainees can claim already-created, preconfigured machines with a single action.
+- Lab owners can control lab costs and lifespan by:
+  - Configuring policies.
Lab owners can manage costs by deleting labs and VMs when they're no longer need
- Set [expiration dates](devtest-lab-use-resource-manager-template.md#set-vm-expiration-date) on VMs. - [Delete labs](devtest-lab-delete-lab-vm.md#delete-a-lab) and all related resources.-- [Delete all lab VMs by running a single PowerShell script](./devtest-lab-faq.yml#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab-).
+- [Delete all lab VMs by running a single PowerShell script](devtest-lab-delete-lab-vm.md#automate-the-process-of-deleting-all-the-vms-in-a-lab).
## Proof of concept and scaled deployments
For a successful proof of concept:
## Next steps - [DevTest Labs concepts](devtest-lab-concepts.md)-- [DevTest Labs FAQ](devtest-lab-faq.yml)+ [!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
Another factor is the frequency of changes to your software package. If you run
This is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.

## Patterns to set up network configuration

### Question
If you use a site-to-site VPN or Express Route, consider using private IPs so th
When using shared public IPs, the virtual machines in a lab share a public IP address. This approach can be helpful when you need to avoid breaching the limits on public IP addresses for a given subscription.
+## Limits of labs per subscription
+### Question
+How many labs can I create under the same subscription?
+
+### Answer
+
+There isn't a specific limit on the number of labs that can be created per subscription. However, the amount of resources used per subscription is limited. You can read about the [limits and quotas for Azure subscriptions](../azure-resource-manager/management/azure-subscription-service-limits.md) and [how to increase these limits](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests).
+
+## Limits of VMs per lab
+### Question
+How many VMs can I create per lab?
+
+### Answer
+There is no specific limit on the number of VMs that can be created per lab. However, the resources (VM cores, public IP addresses, and so on) that are used are limited per subscription. You can read about the [limits and quotas for Azure subscriptions](../azure-resource-manager/management/azure-subscription-service-limits.md) and [how to increase these limits](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests).
+
## Limits of number of virtual machines per user or lab ### Question
devtest-labs Devtest Lab Manage Formulas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-manage-formulas.md
To delete a formula, follow these steps:
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)] ## Related blog posts
-* [Custom images or formulas?](./devtest-lab-faq.yml)
+* [Custom images or formulas?](./devtest-lab-comparing-vm-base-image-types.md)
## Next steps Once you have created a formula for use when creating a VM, the next step is to [add a VM to your lab](devtest-lab-add-vm.md).
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
Lab owners can take several measures to reduce waste and control lab costs.
- [DevTest Labs concepts](devtest-lab-concepts.md) - [Quickstart: Create a lab in Azure DevTest Labs](devtest-lab-create-lab.md)-- [DevTest Labs FAQ](devtest-lab-faq.yml) [!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
To upload a VHD file by using AzCopy:
The process of uploading a VHD file might be lengthy depending on the size of the VHD file and your connection speed.
+## Automate uploading VHD files
+To automate uploading VHD files to create custom images, use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to copy or upload VHD files to the storage account that's associated with the lab.
+
+To find the destination storage account that's associated with your lab:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. On the left menu, select **Resource Groups**.
+3. Find and select the resource group that's associated with your lab.
+4. Under **Overview**, select one of the storage accounts.
+5. Select **Blobs**.
+6. Look for uploads in the list. If none exists, return to step 4 and try another storage account.
+7. Use the **URL** as the destination in your AzCopy command.
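+
+For example, an upload command might look like the following sketch. The local path, storage account name, and SAS token are placeholders; VHD files must be uploaded as page blobs:
+
+```powershell
+ # Placeholder values; VHDs must use the PageBlob blob type.
+ azcopy copy 'c:\vhds\myimage.vhd' `
+     'https://<storage-account>.blob.core.windows.net/uploads/myimage.vhd?<sas-token>' `
+     --blob-type PageBlob
+```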
+ ## Next steps - Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using the Azure portal](devtest-lab-create-template.md).
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
To upload a VHD file by using Storage Explorer:
:::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-status.png" alt-text="Screenshot that shows the Activities pane with upload status."::: +
+## Automate uploading VHD files
+To automate uploading VHD files to create custom images, use [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). Storage Explorer is a standalone app that runs on Windows, macOS, and Linux.
+
+To find the destination storage account that's associated with your lab:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. On the left menu, select **Resource Groups**.
+3. Find and select the resource group that's associated with your lab.
+4. Under **Overview**, select one of the storage accounts.
+5. Select **Blobs**.
+6. Look for uploads in the list. If none exists, return to step 4 and try another storage account.
+7. Use the **URL** as the destination for your VHDs.
+ ## Next steps - Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using the Azure portal](devtest-lab-create-template.md).
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
Add your template repositories to your lab so all lab users can access the templ
The repository now appears in the **Repositories** list for the lab. Users can now use the repository templates to [create multi-VM DevTest Labs environments](devtest-lab-create-environment-from-arm.md). Lab administrators can use the templates to [automate lab deployment and management tasks](devtest-lab-use-arm-and-powershell-for-lab-resources.md#arm-template-automation).
+### How do I create multiple VMs from the same template at once?
+You have a few options for simultaneously creating multiple VMs from the same template:
+
+ - You can use the [Azure DevOps Tasks extension](https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks).
+ - You can [generate a Resource Manager template](devtest-lab-add-vm.md#create-and-add-virtual-machines) while you're creating a VM, and [deploy the Resource Manager template from Windows PowerShell](../azure-resource-manager/templates/deploy-powershell.md).
+ - You can also specify more than one instance of a machine to be created during virtual machine creation. To learn more about creating multiple instances of virtual machines, see the doc on [creating a lab virtual machine](devtest-lab-add-vm.md).
+
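+As a sketch of the Resource Manager template option, a generated template can be deployed repeatedly from PowerShell. The template file name and the `newVMName` parameter here are placeholders for whatever your exported template defines:
+
+```powershell
+ # Deploy the same generated template three times to create three VMs.
+ # 'azuredeploy.json' and 'newVMName' are placeholders from an exported template.
+ 1..3 | ForEach-Object {
+     New-AzResourceGroupDeployment -ResourceGroupName '<lab-resource-group>' `
+         -TemplateFile '.\azuredeploy.json' `
+         -newVMName "myvm$_"
+ }
+```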
### Next steps - [Best practices for creating Azure Resource Manager templates](../azure-resource-manager/templates/best-practices.md)
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/extend-devtest-labs-azure-functions.md
The last step in this walkthrough is to test the Azure function.
Azure Functions can help extend the functionality of DevTest Labs beyond what's already built-in and help customers meet their unique requirements for their teams. This pattern can be extended & expanded further to cover even more. To learn more about DevTest Labs, see the following articles: - [DevTest Labs Enterprise Reference Architecture](devtest-lab-reference-architecture.md)-- [Frequently Asked Questions](devtest-lab-faq.yml) - [Scaling up DevTest Labs](devtest-lab-guidance-scale.md) - [Automating DevTest Labs with PowerShell](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/Modules/Library/Tests)
devtest-labs Import Virtual Machines From Another Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/import-virtual-machines-from-another-lab.md
POST https://management.azure.com/subscriptions/<DestinationSubscriptionID>/reso
## Next steps - [Set policies for a lab](devtest-lab-set-lab-policy.md)-- [DevTest Labs frequently asked questions](devtest-lab-faq.yml)
devtest-labs Personal Data Delete Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/personal-data-delete-export.md
The data columns contained in **disks.csv** are listed below:
The exported data can be manipulated and visualized using tools, like SQL Server, Power BI, etc. This feature is especially useful when you want to report usage of your lab to your management team that may not be using the same Azure subscription as you do. ## Next steps
-See the following articles:
+See the following article:
- [Set policies for a lab](devtest-lab-set-lab-policy.md)-- [Frequently asked questions](devtest-lab-faq.yml)
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/resource-group-control.md
How to use this API:
## Next steps
-See the following articles:
+See the following article:
- [Set policies for a lab](devtest-lab-set-lab-policy.md)-- [Frequently asked questions](devtest-lab-faq.yml)
devtest-labs Troubleshoot Vm Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-deployment-failures.md
+
+ Title: Troubleshoot VM deployment failures
+description: Learn how to troubleshoot virtual machine (VM) deployment failures in Azure DevTest Labs.
+ Last updated : 02/27/2023
+# Troubleshoot virtual machine (VM) deployment failures in Azure DevTest Labs
+This article guides you through possible causes and troubleshooting steps for deployment failures on Azure DevTest Labs virtual machines (VMs).
+
+## Why do I get a "Parent resource not found" error when I provision a VM from PowerShell?
+When one resource is a parent to another resource, the parent resource must exist before you create the child resource. If the parent resource doesn't exist, you see a **ParentResourceNotFound** message. If you don't specify a dependency on the parent resource, the child resource might be deployed before the parent.
+
+VMs are child resources under a lab in a resource group. When you use Resource Manager templates to deploy VMs by using PowerShell, the resource group name provided in the PowerShell script should be the resource group name of the lab. For more information, see [Troubleshoot common Azure deployment errors](../azure-resource-manager/templates/common-deployment-errors.md).
+
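+For example, when deploying a lab VM template from PowerShell, target the resource group that contains the lab (the names here are placeholders):
+
+```powershell
+ # The resource group must be the one that contains the lab, because the lab
+ # is the parent resource of the VM being deployed.
+ New-AzResourceGroupDeployment -ResourceGroupName '<lab-resource-group>' `
+     -TemplateFile '.\vm-template.json'
+```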
+## Where can I find more error information if a VM deployment fails?
+VM deployment errors are captured in activity logs. You can find lab VM activity logs under **Audit logs** or **Virtual machine diagnostics** on the resource menu on the lab's VM page (the page appears after you select the VM from the **My virtual machines** list).
+
+Sometimes, the deployment error occurs before VM deployment begins. An example is when the subscription limit for a resource that was created with the VM is exceeded. In this case, the error details are captured in the lab-level activity logs. Activity logs are located at the bottom of the **Configuration and policies** settings. For more information about using activity logs in Azure, see [View activity logs to audit actions on resources](../azure-monitor/essentials/activity-log.md).
+
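+As a sketch, failed entries can also be pulled from the activity log with Azure PowerShell; the resource group name and time range here are placeholders:
+
+```powershell
+ # List failed operations in the lab's resource group over the last day.
+ Get-AzActivityLog -ResourceGroupName '<lab-resource-group>' `
+     -StartTime (Get-Date).AddDays(-1) -Status Failed
+```
+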
+## Why do I get a "location is not available for resource type" error when trying to create a lab?
+You may see an error message similar to the following one when you try to create a lab:
+
+```
+The provided location 'australiacentral' is not available for resource type 'Microsoft.KeyVault/vaults'. List of available regions for the resource type is 'northcentralus,eastus,northeurope,westeurope,eastasia,southeastasia,eastus2,centralus,southcentralus,westus,japaneast,japanwest,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia,canadacentral,canadaeast,uksouth,ukwest,westcentralus,westus2,koreacentral,koreasouth,francecentral,southafricanorth
+```
+
+You can resolve this error by taking one of the following steps:
+
+**Option 1**
+
+Check availability of the resource type in Azure regions on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) page. If the resource type isn't available in a certain region, DevTest Labs doesn't support creation of a lab in that region. Select another region when creating your lab.
+
+**Option 2**
+
+If the resource type is available in your region, check if it's registered with your subscription. A subscription owner can register the resource provider, as shown in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+
+## Why isn't my existing virtual network saving properly?
+One possibility is that your virtual network name contains periods. If so, try removing the periods or replacing them with hyphens. Then, try again to save the virtual network.
+
+## Next steps
+
+If you need more help, try one of the following support channels:
+- [Troubleshooting artifact failures](devtest-lab-troubleshoot-artifact-failure.md)
+- Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/).
+- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
devtest-labs Troubleshoot Vm Environment Creation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-environment-creation-failures.md
To see the lab template deployment logs, follow these steps:
3. Look for deployments with a failed status and select it. 4. On the **Deployment** page, select **Operation details** link for the operation that failed. 5. You see details about the operation that failed in the **Operation details** window.-
+
## Next steps See [Troubleshooting artifact failures](devtest-lab-troubleshoot-artifact-failure.md)
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
To add users to a lab, you must be a [User Access Administrator](../role-based-a
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+## Share a link to the lab
+
+1. In the [Azure portal](https://portal.azure.com), go to the lab.
+2. Copy the **lab URL** from your browser, and then share it with your lab users.
+
+> [!NOTE]
+> If a lab user is an external user who has a Microsoft account, but who is not a member of your organization's Active Directory instance, the user might see an error message when they try to access the shared link. If an external user sees an error message, ask the user to first select their name in the upper-right corner of the Azure portal. Then, in the Directory section of the menu, the user can select the directory where the lab exists.
+
+ ## Clean up resources Use this lab for the next tutorial, [Access a lab in Azure DevTest Labs](tutorial-use-custom-lab.md). When you're done using the lab, delete it and its resources to avoid further charges.
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate Azure DB for PostgreSQL to Azure DB for PostgreSQL online via the Azure portal"
+ Title: "Tutorial: Migrate Azure Database for PostgreSQL to Azure Database for PostgreSQL online via the Azure portal"
-description: Learn to perform an online migration from one Azure DB for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal.
+description: Learn to perform an online migration from one Azure Database for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal.
Last updated 07/21/2020
-# Tutorial: Migrate/Upgrade Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Single Server online using DMS via the Azure portal
+# Tutorial: Migrate/Upgrade Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Single Server online using DMS via the Azure portal
You can use Azure Database Migration Service to migrate the databases from an [Azure Database for PostgreSQL - Single Server](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) instance to the same or a different version of Azure Database for PostgreSQL - Single Server, or to Azure Database for PostgreSQL - Flexible Server, with minimal downtime. In this tutorial, you migrate the **DVD Rental** sample database from an Azure Database for PostgreSQL v10 to Azure Database for PostgreSQL - Single Server by using the online migration activity in Azure Database Migration Service.
To complete this tutorial, you need to:
* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md). * Create a server-level [firewall rule](/azure/azure-sql/database/firewall-configure) for the Azure Database for PostgreSQL source to allow Azure Database Migration Service to access the source databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. * Create a server-level [firewall rule](/azure/azure-sql/database/firewall-configure) for the Azure Database for PostgreSQL target to allow Azure Database Migration Service to access the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
-* [Enable logical replication](../postgresql/concepts-logical.md) in the Azure DB for PostgreSQL source.
+* [Enable logical replication](../postgresql/concepts-logical.md) in the Azure Database for PostgreSQL source.
* Set the following server parameters in the Azure Database for PostgreSQL instance being used as a source: * max_replication_slots = [number of slots]; we recommend setting this value to **ten slots**
After the service is created, locate it within the Azure portal, open it, and th
* Select **Run migration**.
-The migration activity window appears, and the **Status** of the activity should update to show as **Backup in Progress**. You may encounter the following error when upgrading from Azure DB for PostgreSQL 9.5 or 9.6:
+The migration activity window appears, and the **Status** of the activity should update to show as **Backup in Progress**. You may encounter the following error when upgrading from Azure Database for PostgreSQL 9.5 or 9.6:
**A scenario reported an unknown error. 28000: no pg_hba.conf entry for replication connection from host "40.121.141.121", user "sr"** This is because PostgreSQL does not have the appropriate privileges to create the required logical replication artifacts. To enable the required privileges, do the following:
-1. Open "Connection security" settings for the source Azure DB for PostgreSQL server you are trying to migrate/upgrade from.
+1. Open "Connection security" settings for the source Azure Database for PostgreSQL server you are trying to migrate/upgrade from.
2. Add a new firewall rule with a name ending with "_replrule" and add the IP address from the error message to the start IP and End IP fields. For the above error example - > Firewall rule name = sr_replrule; > Start IP = 40.121.141.121;
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 05/17/2022 Last updated : 03/01/2023 # Deliver events using private link service
-Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). That is, there is no support if you have strict network isolation requirements where your delivered events traffic must not leave the private IP space.
+Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). That is, there's no support if you have strict network isolation requirements where your delivered events traffic must not leave the private IP space.
## Use managed identity However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure Event Grid custom topic or a domain with system-assigned or user-assigned managed identity. For details about delivering events using managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
To deliver events to Storage queues using managed identity, follow these steps:
1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue. 1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity.
+> [!NOTE]
+> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account.
> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use a user-assigned managed identity, whether or not this option is enabled.
## Next steps For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-and-retry.md
Title: Azure Event Grid delivery and retry description: Describes how Azure Event Grid delivers events and how it handles undelivered messages. Previously updated : 01/12/2022 Last updated : 03/01/2023 # Event Grid message delivery and retry
-Event Grid provides durable delivery. It tries to deliver each message **at least once** for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there's a failure, Event Grid retries delivery based on a fixed [retry schedule](#retry-schedule) and [retry policy](#retry-policy). By default, the Event Grid module delivers one event at a time to the subscriber. The payload is however an array with a single event.
+Event Grid provides durable delivery. It tries to deliver each message **at least once** for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there's a failure, Event Grid retries delivery based on a fixed [retry schedule](#retry-schedule) and [retry policy](#retry-policy). By default, Event Grid delivers one event at a time to the subscriber. The payload is however an array with a single event.
> [!NOTE] > Event Grid doesn't guarantee order for event delivery, so subscribers may receive them out of order.
The following table describes the types of endpoints and errors for which retry
| Endpoint Type | Error codes |
| --| --|
-| Azure Resources | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found |
-| Webhook | 400 Bad Request, 413 Request Entity Too Large, 403 Forbidden, 404 Not Found, 401 Unauthorized |
+| Azure Resources | 400 (Bad request), 413 (Request entity is too large) |
+| Webhook | 400 (Bad request), 413 (Request entity is too large), 401 (Unauthorized) |
> [!NOTE]
-> If dead-letter isn't configured for an endpoint, events will be dropped when the above errors happen. Consider configuring dead-letter if you don't want these kinds of events to be dropped. Dead lettered events will be dropped when the dead dead-letter destination is not found.
+> If dead-letter isn't configured for an endpoint, events will be dropped when the above errors happen. Consider configuring dead-letter if you don't want these kinds of events to be dropped. Dead lettered events will be dropped when the dead-letter destination isn't found.
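The drop-versus-dead-letter behavior described above can be sketched as follows. This is a minimal illustration, not Event Grid's internal code; the non-retryable status codes are taken from the webhook row of the table, and the function name is hypothetical.

```python
# Illustrative sketch of the drop/dead-letter decision for a webhook endpoint.
# Not actual Event Grid code; status codes follow the webhook row above.

NON_RETRYABLE_WEBHOOK_CODES = {400, 401, 413}

def handle_delivery_failure(status_code: int, dead_letter_configured: bool) -> str:
    """Return the action taken after a failed delivery attempt."""
    if status_code in NON_RETRYABLE_WEBHOOK_CODES:
        # No retry for these errors: dead-letter if configured, otherwise drop.
        return "dead-letter" if dead_letter_configured else "drop"
    # All other errors are retried per the retry schedule and policy.
    return "retry"
```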
-If the error returned by the subscribed endpoint isn't among the above list, Event Grid performs the retry using policies described below:
+If the error returned by the subscribed endpoint isn't among the above list, Event Grid performs the retry using the policy described below:
Event Grid waits 30 seconds for a response after delivering a message. After 30 seconds, if the endpoint hasn't responded, the message is queued for retry. Event Grid uses an exponential backoff retry policy for event delivery. Event Grid retries delivery on the following schedule on a best effort basis:
Event Grid waits 30 seconds for a response after delivering a message. After 30
- Every 12 hours up to 24 hours
-If the endpoint responds within 3 minutes, Event Grid will attempt to remove the event from the retry queue on a best effort basis but duplicates may still be received.
+If the endpoint responds within 3 minutes, Event Grid attempts to remove the event from the retry queue on a best effort basis, but duplicates may still be received.
Event Grid adds a small randomization to all retry steps and may opportunistically skip certain retries if an endpoint is consistently unhealthy, down for a long period, or appears to be overwhelmed.

## Retry policy
-You can customize the retry policy when creating an event subscription by using the following two configurations. An event will be dropped if either of the limits of the retry policy is reached.
+You can customize the retry policy when creating an event subscription by using the following two configurations. An event is dropped if either of the limits of the retry policy is reached.
- **Maximum number of attempts** - The value must be an integer between 1 and 30. The default value is 30.
- **Event time-to-live (TTL)** - The value must be an integer between 1 and 1440. The default value is 1440 minutes.
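The two limits can be expressed as a simple drop check (illustrative only; the function and parameter names are hypothetical, not part of any SDK):

```python
# Illustrative check of the two retry-policy limits (not Event Grid internals).

MAX_ATTEMPTS_DEFAULT = 30          # allowed range: 1-30
EVENT_TTL_MINUTES_DEFAULT = 1440   # allowed range: 1-1440

def is_event_dropped(attempts: int, age_minutes: float,
                     max_attempts: int = MAX_ATTEMPTS_DEFAULT,
                     ttl_minutes: int = EVENT_TTL_MINUTES_DEFAULT) -> bool:
    """An event is dropped when either retry-policy limit is reached."""
    return attempts >= max_attempts or age_minutes >= ttl_minutes
```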
Event Grid defaults to sending each event individually to subscribers. The subsc
### Batching policy

Batched delivery has two settings:
-* **Max events per batch** - Maximum number of events Event Grid will deliver per batch. This number will never be exceeded, however fewer events may be delivered if no other events are available at the time of publish. Event Grid doesn't delay events to create a batch if fewer events are available. Must be between 1 and 5,000.
+* **Max events per batch** - Maximum number of events Event Grid delivers per batch. This number is never exceeded; however, fewer events may be delivered if no other events are available at the time of publish. Event Grid doesn't delay events to create a batch if fewer events are available. Must be between 1 and 5,000.
* **Preferred batch size in kilobytes** - Target ceiling for batch size in kilobytes. Similar to max events, the batch size may be smaller if more events aren't available at the time of publish. It's possible that a batch is larger than the preferred batch size *if* a single event is larger than the preferred size. For example, if the preferred size is 4 KB and a 10-KB event is pushed to Event Grid, the 10-KB event will still be delivered in its own batch rather than being dropped.

Batched delivery is configured on a per-event subscription basis via the portal, CLI, PowerShell, or SDKs.
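As a rough sketch of how the two settings interact (illustrative only, not the service's actual algorithm; the function and parameter names are hypothetical):

```python
# Illustrative batch assembly honoring the two settings above
# (not Event Grid internals).

def build_batch(pending_sizes_kb, max_events, preferred_kb):
    """Take events from the head of the queue into one batch.

    pending_sizes_kb: sizes in KB of the events currently available, in order.
    Returns the list of event sizes placed in the batch.
    """
    batch, batch_kb = [], 0.0
    for size in pending_sizes_kb:
        if len(batch) >= max_events:
            break
        # A single event larger than the preferred size still ships alone,
        # so only enforce the size ceiling once the batch is non-empty.
        if batch and batch_kb + size > preferred_kb:
            break
        batch.append(size)
        batch_kb += size
    return batch
```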
Batched delivery is configured on a per-event subscription basis via the portal,
It isn't necessary to specify both settings (maximum events per batch and preferred batch size in kilobytes) when creating an event subscription. If only one setting is set, Event Grid uses (configurable) default values. See the following sections for the default values and how to override them.

### Azure portal:
-![Batch delivery settings](./media/delivery-and-retry/batch-settings.png)
+You see these settings on the **Additional Features** tab of the **Event Subscription** page.
+ ### Azure CLI When creating an event subscription, use the following parameters:
For more information on using Azure CLI with Event Grid, see [Route storage even
## Delayed Delivery
-As an endpoint experiences delivery failures, Event Grid will begin to delay the delivery and retry of events to that endpoint. For example, if the first 10 events published to an endpoint fail, Event Grid will assume that the endpoint is experiencing issues and will delay all subsequent retries *and new* deliveries for some time - in some cases up to several hours.
+As an endpoint experiences delivery failures, Event Grid begins to delay the delivery and retry of events to that endpoint. For example, if the first 10 events published to an endpoint fail, Event Grid assumes that the endpoint is experiencing issues and delays all subsequent retries *and new* deliveries for some time, in some cases up to several hours.
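One way to picture this back-off (purely illustrative; Event Grid's actual internal algorithm isn't published) is a delay that grows with consecutive failures up to a cap:

```python
# Purely illustrative back-off for an unhealthy endpoint. This is NOT
# Event Grid's actual, unpublished algorithm; base and cap are made up.

def delivery_delay_seconds(consecutive_failures: int,
                           base_seconds: int = 30,
                           cap_seconds: int = 4 * 60 * 60) -> int:
    """Delay applied to retries *and new* deliveries to the endpoint."""
    if consecutive_failures == 0:
        return 0  # healthy endpoint: deliver immediately
    # Double the delay for each consecutive failure, up to the cap.
    return min(base_seconds * 2 ** (consecutive_failures - 1), cap_seconds)
```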
The functional purpose of delayed delivery is to protect unhealthy endpoints and the Event Grid system. Without back-off and delay of delivery to unhealthy endpoints, Event Grid's retry policy and volume capabilities can easily overwhelm a system.
event-grid Handler Storage Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-storage-queues.md
Title: Storage queue as an event handler for Azure Event Grid events description: Describes how you can use Azure storage queues as event handlers for Azure Event Grid events. Previously updated : 09/28/2021 Last updated : 03/01/2023 # Storage queue as an event handler for Azure Event Grid events
An event handler is the place where the event is sent. The handler takes some fu
Use **Queue Storage** to receive events that need to be pulled. You might use Queue storage when you have a long running process that takes too long to respond. By sending events to Queue storage, the app can pull and process events on its own schedule.
+> [!NOTE]
+> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account.
+> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use user-assigned managed identity whether this option is enabled or not.
+ ## Tutorials See the following tutorial for an example of using Queue storage as an event handler.
event-grid Event Grid Powershell Webhook Secure Delivery Azure Ad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-powershell-webhook-secure-delivery-azure-ad-app.md
try {
} # Creates Azure Event Grid Azure AD Application if not exists
+ # You don't need to modify this id
+ # But Azure Event Grid Azure AD Application Id is different for different clouds
- $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # You don't need to modify this id
+ $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud
+ # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud
$eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name
$eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'")
if ($eventGridSP -match "Microsoft.EventGrid")
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
In this section, create a Python script to send events to the event hub that you
* `EVENT_HUB_CONNECTION_STR` * `EVENT_HUB_NAME`
+ > [!NOTE]
+ > Make sure that **EVENT_HUB_NAME** is the name of the event hub and not the Event Hubs namespace. If this value is incorrect, you'll receive the error: `CBS Token authentication failed.`
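A quick way to fail fast when either variable is missing (a hypothetical helper, not part of the quickstart script):

```python
# Hypothetical pre-flight check for the quickstart's environment variables.
import os

def read_event_hub_settings():
    """Return (connection_string, event_hub_name), raising if either is unset."""
    settings = {}
    for name in ("EVENT_HUB_CONNECTION_STR", "EVENT_HUB_NAME"):
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Environment variable {name} is not set")
        settings[name] = value
    return settings["EVENT_HUB_CONNECTION_STR"], settings["EVENT_HUB_NAME"]
```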
+ ```python
import asyncio
If you don't see events in the receiver window or the code reports an error, try
In this quickstart, you've sent and received events asynchronously. To learn how to send and receive events synchronously, go to the [GitHub sync_samples page](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples/sync_samples).
-For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples).
+For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples).
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Title: Monitoring Azure Event Hubs
description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Event Hubs. Previously updated : 06/16/2022 Last updated : 03/01/2023 # Monitor Azure Event Hubs
See [Create diagnostic setting to collect platform logs and metrics in Azure](..
If you use **Azure Storage** to store the diagnostic logging information, the information is stored in containers named **insights-logs-operationlogs** and **insights-metrics-pt1m**. Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar. ### Azure Event Hubs
-If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
+If you use **Azure Event Hubs** to store the diagnostic logging information, the information is stored in Event Hubs instances named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select an existing event hub except for the event hub for which you are configuring diagnostic settings.
### Log Analytics

If you use **Log Analytics** to store the diagnostic logging information, the information is stored in tables named **AzureDiagnostics** and **AzureMetrics**.
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Untrusted customer signed certificates|Customer signed certificates aren't trust
|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.|
|TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
---
+|TLSi intermediate CA certificate expiration|In some unique cases, the intermediate CA certificate can expire two months before the original expiration date.|Renew the intermediate CA certificate two months before the original expiration date. A fix is being investigated.|
## Next steps
governance Machine Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create.md
Title: How to create custom machine configuration package artifacts description: Learn how to create a machine configuration package file. Previously updated : 07/25/2022 Last updated : 02/14/2023
When auditing / configuring both Windows and Linux, machine configuration uses a
be in. > [!IMPORTANT]
-> Custom packages that audit the state of an environment are Generally Available,
-> but packages that apply configurations are **in preview**. **The following limitations apply:**
+> Custom packages that audit the state of an environment and apply
+> configurations are generally available (GA). However, the following
+> limitations apply:
> > To use machine configuration packages that apply configurations, Azure VM guest > configuration extension version **1.29.24** or later,
be in.
> > To test creating and applying configurations on Linux, the > `GuestConfiguration` module is only available on Ubuntu 18 but the package
-> and policies produced by the module can be used on any Linux distro/version
+> and policies produced by the module can be used on any Linux distribution
+> and version
> supported in Azure or Arc. >
-> Testing packages on MacOS is not available.
+> Testing packages on macOS is not available.
> > Don't use secrets or confidential information in custom content packages.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand Azure Automanage Machine Configuration description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Previously updated : 01/03/2023 Last updated : 03/01/2023
For more information about troubleshooting machine configuration, see
### Multiple assignments
-Guest Configuration policy definitions now support assigning the same
-guest assignment to more than once per machine when the policy assignment uses different
-parameters.
+At this time, only some built-in Guest Configuration policy definitions support multiple assignments. However, all custom policies support multiple assignments by default if you used the latest version of [the `GuestConfiguration` PowerShell module](/azure/governance/machine-configuration/machine-configuration-create-setup) to create Guest Configuration packages and policies.
-### Assignments to Azure Management Groups
+### Assignments to Azure management groups
Azure Policy definitions in the category `Guest Configuration` can be assigned to management groups when the effect is `AuditIfNotExists` or `DeployIfNotExists`.
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
Title: "Quickstart: New policy assignment with Terraform" description: In this quickstart, you use Terraform and HCL syntax to create a policy assignment to identify non-compliant resources. Previously updated : 02/28/2023 Last updated : 03/01/2023 ms.tool: terraform
To remove the assignment created, use Azure CLI or reverse the Terraform executi
- Terraform ```bash
- terraform destroy assignment.tfplan
+ terraform destroy
```

## Next steps
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
- [Audit](#audit) - [AuditIfNotExists](#auditifnotexists) - [Deny](#deny)-- [DenyAction (preview)](#denyaction-preview)
+- [DenyAction](#denyaction)
- [DeployIfNotExists](#deployifnotexists) - [Disabled](#disabled) - [Manual (preview)](#manual-preview)
location of the Constraint template to use in Kubernetes to limit the allowed co
} } ```
-## DenyAction (preview)
+## DenyAction
`DenyAction` is used to block requests based on the intended action to resources. The only supported action today is `DELETE`. This effect helps prevent accidental deletion of critical resources.
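A minimal policy rule using this effect might look like the following sketch (the resource type is a placeholder example; the rule assumes the documented `denyAction` details schema with `actionNames`):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Storage/storageAccounts"
  },
  "then": {
    "effect": "denyAction",
    "details": {
      "actionNames": [
        "delete"
      ]
    }
  }
}
```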
assignment.
`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios.
-> [!NOTE]
-> Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state.
- #### Subscription deletion Policy won't block removal of resources that happens during a subscription deletion.
governance Evaluate Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/evaluate-impact.md
reviews the request. When the policy definition effect is [Modify](./effects.md#
[Append](./effects.md#deny), or [DeployIfNotExists](./effects.md#deployifnotexists), Policy alters the request or adds to it. When the policy definition effect is [Audit](./effects.md#audit) or [AuditIfNotExists](./effects.md#auditifnotexists), Policy causes an Activity log entry to be created
-for new and updated resources. And when the policy definition effect is [Deny](./effects.md#deny) or [DenyAction](./effects.md#denyaction-preview), Policy stops the creation or alteration of the request.
+for new and updated resources. And when the policy definition effect is [Deny](./effects.md#deny) or [DenyAction](./effects.md#denyaction), Policy stops the creation or alteration of the request.
These outcomes are exactly as desired when you know the policy is defined correctly. However, it's important to validate a new policy works as intended before allowing it to change or block work. The
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 02/01/2023 Last updated : 02/14/2023
implementations:
- **Vulnerabilities in security configuration on your machines should be remediated** in Azure Security Center
-For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
-[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+For more information, see [Azure Automanage machine configuration](../../machine-configuration/overview.md).
-## Account Policies - Password Policy
+## Account Policies-Password Policy
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |**Description**: This policy setting determines the length of time that must pass before a locked account is unlocked and a user can try to log on again. The setting does this by specifying the number of minutes a locked out account will remain unavailable. If the value for this policy setting is configured to 0, locked out accounts will remain locked out until an administrator manually unlocks them. Although it might seem like a good idea to configure the value for this policy setting to a high value, such a configuration will likely increase the number of calls that the help desk receives to unlock accounts locked by mistake. Users should be aware of the length of time a lock remains in place, so that they realize they only need to call the help desk if they have an extremely urgent need to regain access to their computer. The recommended state for this setting is: `15 or more minute(s)`. **Note:** Password Policy settings (section 1.1) and Account Lockout Policy settings (section 1.2) must be applied via the **Default Domain Policy** GPO in order to be globally in effect on **domain** user accounts as their default behavior. If these settings are configured in another GPO, they will only affect **local** user accounts on the computers that receive the GPO. However, custom exceptions to the default password policy and account lockout policy rules for specific domain users and/or groups can be defined using Password Settings Objects (PSOs), which are completely separate from Group Policy and most easily configured using Active Directory Administrative Center.<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\>\= 15<br /><sub>(Policy)</sub> |Warning |
+|Account Lockout Duration<br /><sub>(AZ-WIN-73312)</sub> |**Description**: This policy setting determines the length of time that must pass before a locked account is unlocked and a user can try to log on again. The setting does this by specifying the number of minutes a locked out account will remain unavailable. If the value for this policy setting is configured to 0, locked out accounts will remain locked out until an administrator manually unlocks them. Although it might seem like a good idea to configure the value for this policy setting to a high value, such a configuration will likely increase the number of calls that the help desk receives to unlock accounts locked by mistake. Users should be aware of the length of time a lock remains in place, so that they realize they only need to call the help desk if they have an extremely urgent need to regain access to their computer. The recommended state for this setting is: `15 or more minute(s)`. **Note:** Password Policy settings (section 1.1) and Account Lockout Policy settings (section 1.2) must be applied via the **Default Domain Policy** GPO in order to be globally in effect on **domain** user accounts as their default behavior. If these settings are configured in another GPO, they will only affect **local** user accounts on the computers that receive the GPO. However, custom exceptions to the default password policy and account lockout policy rules for specific domain users and/or groups can be defined using Password Settings Objects (PSOs), which are completely separate from Group Policy and most easily configured using Active Directory Administrative Center.<br />**Key Path**: [System Access]LockoutDuration<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Windows Settings\Security Settings\Account Policies\Account Lockout Policy\Account lockout duration<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;1.2.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;1.2.1<br /> |\>\= 15<br /><sub>(Policy)</sub> |Warning |
+
+## Administrative Templates - Windows Defender
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Configure detection for potentially unwanted applications<br /><sub>(AZ-WIN-202219)</sub> |**Description**: This policy setting controls detection and action for Potentially Unwanted Applications (PUA), which are sneaky unwanted application bundlers or their bundled applications, that can deliver adware or malware. The recommended state for this setting is: `Enabled: Block`. For more information, see this link: [Block potentially unwanted applications with Microsoft Defender Antivirus \| Microsoft Docs](/windows/security/threat-protection/windows-defender-antivirus/detect-block-potentially-unwanted-apps-windows-defender-antivirus)<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\PUAProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Configure detection for potentially unwanted applications<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.15<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.15<br /> |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Scan all downloaded files and attachments<br /><sub>(AZ-WIN-202221)</sub> |**Description**: This policy setting configures scanning for all downloaded files and attachments. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableIOAVProtection<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Real-Time Protection\Scan all downloaded files and attachments<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.1<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off Microsoft Defender AntiVirus<br /><sub>(AZ-WIN-202220)</sub> |**Description**: This policy setting turns off Microsoft Defender Antivirus. If the setting is configured to Disabled, Microsoft Defender Antivirus runs and computers are scanned for malware and other potentially unwanted software. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\DisableAntiSpyware<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Turn off Microsoft Defender AntiVirus<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.16<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.16<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off real-time protection<br /><sub>(AZ-WIN-202222)</sub> |**Description**: This policy setting configures real-time protection prompts for known malware detection. Microsoft Defender Antivirus alerts you when malware or potentially unwanted software attempts to install itself or to run on your computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableRealtimeMonitoring<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Real-Time Protection\Turn off real-time protection<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.2<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on e-mail scanning<br /><sub>(AZ-WIN-202218)</sub> |**Description**: This policy setting allows you to configure e-mail scanning. When e-mail scanning is enabled, the engine will parse the mailbox and mail files, according to their specific format, in order to analyze the mail bodies and attachments. Several e-mail formats are currently supported, for example: pst (Outlook), dbx, mbx, mime (Outlook Express), binhex (Mac). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableEmailScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Scan\Turn on e-mail scanning<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.12.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.12.2<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on script scanning<br /><sub>(AZ-WIN-202223)</sub> |**Description**: This policy setting allows script scanning to be turned on/off. Script scanning intercepts scripts then scans them before they are executed on the system. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableScriptScanning<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Microsoft Defender Antivirus\Real-Time Protection\Turn on script scanning<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.4<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.47.9.4<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - Control Panel

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`: Computer Configuration\Policies\Administrative Templates\Control Panel\Regional and Language Options\Allow users to enable online speech recognition services<br />**Note**: This Group Policy path may not exist by default. It is provided by the Group Policy template Globalization.admx/adml that is included with the Microsoft Windows 10 RTM (Release 1507) Administrative Templates (or newer).<br />**Note #2**: In older Microsoft Windows Administrative Templates, this setting was initially named Allow input personalization, but it was renamed to Allow users to enable online speech recognition services starting with the Windows 10 R1809 & Server 2019 Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.1.2.2<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - MS Security Guide

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |**Description**: SMBv1 is a legacy protocol that uses the MD5 algorithm as part of SMB. MD5 is known to be vulnerable to a number of attacks such as collision and preimage attacks as well as not being FIPS compliant.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
-|WDigest Authentication<br /><sub>(AZ-WIN-73497)</sub> |**Description**: When WDigest authentication is enabled, Lsass.exe retains a copy of the user's plaintext password in memory, where it can be at risk of theft. If this setting is not configured, WDigest authentication is disabled in Windows 8.1 and in Windows Server 2012 R2; it is enabled by default in earlier versions of Windows and Windows Server. For more information about local accounts and credential theft, review the "[Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft Techniques](https://www.microsoft.com/download/details.aspx?id=36036)" documents. For more information about `UseLogonCredential`, see Microsoft Knowledge Base article 2871997: [Microsoft Security Advisory Update to improve credentials protection and management May 13, 2014](https://support.microsoft.com/kb/2871997). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Important |
+|Disable SMB v1 client (remove dependency on LanmanWorkstation)<br /><sub>(AZ-WIN-00122)</sub> |**Description**: SMBv1 is a legacy protocol that uses the MD5 algorithm as part of SMB. MD5 is known to be vulnerable to a number of attacks such as collision and preimage attacks as well as not being FIPS compliant.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\DependOnService<br />**OS**: WS2008, WS2008R2, WS2012<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Administrative Templates\MS Security Guide\Configure SMBv1 client driver<br />**Compliance Standard Mappings**:<br /> |Doesn't exist or \= Bowser\0MRxSmb20\0NSI\0\0<br /><sub>(Registry)</sub> |Critical |
+|WDigest Authentication<br /><sub>(AZ-WIN-73497)</sub> |**Description**: When WDigest authentication is enabled, Lsass.exe retains a copy of the user's plaintext password in memory, where it can be at risk of theft. If this setting is not configured, WDigest authentication is disabled in Windows 8.1 and in Windows Server 2012 R2; it is enabled by default in earlier versions of Windows and Windows Server. For more information about local accounts and credential theft, review the "[Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft Techniques](https://www.microsoft.com/en-us/download/details.aspx?id=36036)" documents. For more information about `UseLogonCredential`, see Microsoft Knowledge Base article 2871997: [Microsoft Security Advisory Update to improve credentials protection and management May 13, 2014](https://support.microsoft.com/en-us/kb/2871997). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MS Security Guide\WDigest Authentication (disabling may require KB2871997)<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.7<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.7<br /> |\= 0<br /><sub>(Registry)</sub> |Important |
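The SMBv1 client rule above checks a `REG_MULTI_SZ` value with the expected state "Doesn't exist or = Bowser\0MRxSmb20\0NSI\0\0". A `REG_MULTI_SZ` payload is a series of NUL-terminated strings ending with an extra NUL, so that data decodes to the three entries `Bowser`, `MRxSmb20`, `NSI`. A minimal sketch of the check, with hypothetical helper names and the raw value passed in directly rather than read from the registry:

```python
def multi_sz_entries(raw):
    """Split a raw REG_MULTI_SZ payload into its component strings."""
    return [s for s in raw.split("\0") if s]

def smb1_client_removed(depend_on_service):
    """Compliant when the value is absent ("Doesn't exist") or lists only
    the SMBv1-free dependency set for LanmanWorkstation."""
    if depend_on_service is None:
        return True
    return multi_sz_entries(depend_on_service) == ["Bowser", "MRxSmb20", "NSI"]
```

A value that still lists the SMBv1 driver (for example an `MRxSmb10` entry) would fail the comparison, which is the point of the rule.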
## Administrative Templates - MSS

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should follow through the network. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
-|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should take through the network. It is recommended to configure this setting to Not Defined for enterprise environments and to Highest Protection for high security environments to completely disable source routing. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Informational |
-|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |**Description**: NetBIOS over TCP/IP is a network protocol that among other things provides a way to easily resolve NetBIOS names that are registered on Windows-based systems to the IP addresses that are configured on those systems. This setting determines whether the computer releases its NetBIOS name when it receives a name-release request. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |**Description**: The DLL search order can be configured to search for DLLs that are requested by running processes in one of two ways: - Search folders specified in the system path first, and then search the current working folder. - Search current working folder first, and then search the folders specified in the system path. When enabled, the registry value is set to 1. With a setting of 1, the system first searches the folders that are specified in the system path and then searches the current working folder. When disabled the registry value is set to 0 and the system first searches the current working folder and then searches the folders that are specified in the system path. Applications will be forced to search for DLLs in the system path first. For applications that require unique versions of these DLLs that are included with the application, this entry could cause performance or stability problems. The recommended state for this setting is: `Enabled`. **Note:** More information on how Safe DLL search mode works is available at this link: [Dynamic-Link Library Search Order - Windows applications | Microsoft Docs](/windows/win32/dlls/dynamic-link-library-search-order)<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |**Description**: This setting can generate a security audit in the Security event log when the log reaches a user-defined threshold. The recommended state for this setting is: `Enabled: 90% or less`. **Note:** If log settings are configured to Overwrite events as needed or Overwrite events older than x days, this event will not be generated.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member | 90<br /><sub>(Registry)</sub> |Informational |
-|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |**Description**: Internet Control Message Protocol (ICMP) redirects cause the IPv4 stack to plumb host routes. These routes override the Open Shortest Path First (OSPF) generated routes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
+|MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202213)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should follow through the network. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip6\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (DisableIPSourceRouting IPv6) IP source routing protection level (protects against packet spoofing)<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.2<br /> |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br /><sub>(AZ-WIN-202244)</sub> |**Description**: IP source routing is a mechanism that allows the sender to determine the IP route that a datagram should take through the network. It is recommended to configure this setting to Not Defined for enterprise environments and to Highest Protection for high security environments to completely disable source routing. The recommended state for this setting is: `Enabled: Highest protection, source routing is completely disabled`.<br />**Key Path**: System\CurrentControlSet\Services\Tcpip\Parameters\DisableIPSourceRouting<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (DisableIPSourceRouting) IP source routing protection level (protects against packet spoofing)<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.3<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.3<br /> |\= 2<br /><sub>(Registry)</sub> |Informational |
+|MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br /><sub>(AZ-WIN-202214)</sub> |**Description**: NetBIOS over TCP/IP is a network protocol that among other things provides a way to easily resolve NetBIOS names that are registered on Windows-based systems to the IP addresses that are configured on those systems. This setting determines whether the computer releases its NetBIOS name when it receives a name-release request. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Services\Netbt\Parameters\NoNameReleaseOnDemand<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (NoNameReleaseOnDemand) Allow the computer to ignore NetBIOS name release requests except from WINS servers<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.6<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.6<br /> |\= 1<br /><sub>(Registry)</sub> |Informational |
+|MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br /><sub>(AZ-WIN-202215)</sub> |**Description**: The DLL search order can be configured to search for DLLs that are requested by running processes in one of two ways: - Search folders specified in the system path first, and then search the current working folder. - Search current working folder first, and then search the folders specified in the system path. When enabled, the registry value is set to 1. With a setting of 1, the system first searches the folders that are specified in the system path and then searches the current working folder. When disabled the registry value is set to 0 and the system first searches the current working folder and then searches the folders that are specified in the system path. Applications will be forced to search for DLLs in the system path first. For applications that require unique versions of these DLLs that are included with the application, this entry could cause performance or stability problems. The recommended state for this setting is: `Enabled`.<br />**Note:** More information on how Safe DLL search mode works is available at this link: [Dynamic-Link Library Search Order - Windows applications \| Microsoft Docs](/windows/win32/dlls/dynamic-link-library-search-order)<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\SafeDllSearchMode<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (SafeDllSearchMode) Enable Safe DLL search mode (recommended)<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.8<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.8<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br /><sub>(AZ-WIN-202212)</sub> |**Description**: This setting can generate a security audit in the Security event log when the log reaches a user-defined threshold. The recommended state for this setting is: `Enabled: 90% or less`. **Note:** If log settings are configured to Overwrite events as needed or Overwrite events older than x days, this event will not be generated.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Eventlog\Security\WarningLevel<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (WarningLevel) Percentage threshold for the security event log at which the system will generate a warning<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.12<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.12<br /> |\<\= 90<br /><sub>(Registry)</sub> |Informational |
+|Windows Server must be configured to prevent Internet Control Message Protocol (ICMP) redirects from overriding Open Shortest Path First (OSPF)-generated routes.<br /><sub>(AZ-WIN-73503)</sub> |**Description**: Internet Control Message Protocol (ICMP) redirects cause the IPv4 stack to plumb host routes. These routes override the Open Shortest Path First (OSPF) generated routes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\EnableICMPRedirect<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MSS (Legacy)\MSS: (EnableICMPRedirect) Allow ICMP redirects to override OSPF generated routes<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.4<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.4.4<br /> |\= 0<br /><sub>(Registry)</sub> |Informational |
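The "Expected value" column in this section mixes operators: exact matches like `= 2` and thresholds like `<= 90` (the WarningLevel rule, "90% or less"). As a rough illustration, with `compliant` being a hypothetical helper and the rule strings taken from the table, a small evaluator for the numeric forms could look like:

```python
import operator

# Map the rule tokens from the "Expected value" column to comparisons.
OPS = {"=": operator.eq, "<=": operator.le, ">=": operator.ge}

def compliant(actual, rule):
    """Evaluate a rule such as '= 2' or '<= 90' against a numeric value."""
    op_token, expected = rule.split()
    return OPS[op_token](actual, int(expected))

# DisableIPSourceRouting must be exactly 2; WarningLevel must be 90 or less.
source_routing_ok = compliant(2, "= 2")
warning_level_ok = compliant(90, "<= 90")
```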
## Administrative Templates - Network

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |**Description**: This policy setting configures secure access to UNC paths<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
-|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |**Description**: This policy setting configures secure access to UNC paths<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
-|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control user's ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now freshly applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`:<br />Computer Configuration\Policies\Administrative Templates\Network\Lanman Workstation\Enable insecure guest logons<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'LanmanWorkstation.admx/adml' that is included with the Microsoft Windows 10 Release 1511 Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;STIG&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;V-93239<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;STIG&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2016&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;V-73507<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.8.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.8.1<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Hardened UNC Paths - NETLOGON<br /><sub>(AZ_WIN_202250)</sub> |**Description**: This policy setting configures secure access to UNC paths<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\NETLOGON<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Administrative Templates\Network\Network Provider\Hardened UNC Paths<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.14.1<br /> |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
+|Hardened UNC Paths - SYSVOL<br /><sub>(AZ_WIN_202251)</sub> |**Description**: This policy setting configures secure access to UNC paths<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths?ValueName=\\*\SYSVOL<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Administrative Templates\Network\Network Provider\Hardened UNC Paths<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.14.1<br /> |\= RequireMutualAuthentication=1, RequireIntegrity=1<br /><sub>(Registry)</sub> |Warning |
+|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled: 3 = Prevent Wi-Fi when on Ethernet`:<br />Computer Configuration\Policies\Administrative Templates\Network\Windows Connection Manager\Minimize the number of simultaneous connections to the Internet or a Windows Domain<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'WCM.admx/adml' that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates. It was updated with a new _Minimize Policy Options_ sub-setting starting with the Windows 10 Release 1903 Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.21.1<br /> |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control a user's ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\Network\Network Connections\Prohibit installation and configuration of Network Bridge on your DNS domain network<br />**Note:** This Group Policy path is provided by the Group Policy template `NetworkConnections.admx/adml` that is included with all versions of the Microsoft Windows Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.11.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.11.2<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now freshly applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\Network\Network Connections\Prohibit use of Internet Connection Sharing on your DNS domain network<br />**Note:** This Group Policy path is provided by the Group Policy template 'NetworkConnections.admx/adml' that is included with all versions of the Microsoft Windows Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.11.3<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\Network\DNS Client\Turn off multicast name resolution<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'DnsClient.admx/adml' that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.5.4.2<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
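The "Expected value" column above encodes small compliance rules, such as `= 0` or `Doesn't exist or = 1`, that are evaluated against a registry value. As a minimal sketch (a hypothetical helper, not part of the Azure guest-configuration tooling), such a rule could be evaluated against a snapshot of registry values like this:

```python
# Hypothetical evaluator for the "Expected value" rules in the table above.
# A registry snapshot is modeled as a plain dict; a missing key stands for
# the "Doesn't exist" case.

def compliant(snapshot, value_name, expected, allow_missing=False):
    """Return True if the registry value satisfies the expected rule.

    allow_missing=True models rules of the form "Doesn't exist or = N".
    """
    actual = snapshot.get(value_name)
    if actual is None:
        return allow_missing
    return actual == expected

# "Turn off multicast name resolution" expects EnableMulticast = 0.
print(compliant({"EnableMulticast": 0}, "EnableMulticast", 0))  # True

# "Minimize the number of simultaneous connections..." expects
# "Doesn't exist or = 1" for fMinimizeConnections.
print(compliant({}, "fMinimizeConnections", 1, allow_missing=True))  # True
print(compliant({"fMinimizeConnections": 0}, "fMinimizeConnections", 1,
                allow_missing=True))  # False
```

On a real Windows host the snapshot would come from the key paths listed in each row (for example `SOFTWARE\Policies\Microsoft\Windows NT\DNSClient`), but the rule logic itself is independent of how the values are read.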
## Administrative Templates - Security Guide

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |**Description**: Windows includes support for Structured Exception Handling Overwrite Protection (SEHOP). We recommend enabling this feature to improve the security profile of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |**Description**: This setting determines which method NetBIOS over TCP/IP (NetBT) uses to register and resolve names. The available methods are: - The B-node (broadcast) method only uses broadcasts. - The P-node (point-to-point) method only uses name queries to a name server (WINS). - The M-node (mixed) method broadcasts first, then queries a name server (WINS) if broadcast failed. - The H-node (hybrid) method queries a name server (WINS) first, then broadcasts if the query failed. The recommended state for this setting is: `Enabled: P-node (recommended)` (point-to-point). **Note:** Resolution through LMHOSTS or DNS follows these methods. If the `NodeType` registry value is present, it overrides any `DhcpNodeType` registry value. If neither `NodeType` nor `DhcpNodeType` is present, the computer uses B-node (broadcast) if there are no WINS servers configured for the network, or H-node (hybrid) if there is at least one WINS server configured.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 2<br /><sub>(Registry)</sub> |Warning |
+|Enable Structured Exception Handling Overwrite Protection (SEHOP)<br /><sub>(AZ-WIN-202210)</sub> |**Description**: Windows includes support for Structured Exception Handling Overwrite Protection (SEHOP). We recommend enabling this feature to improve the security profile of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\kernel\DisableExceptionChainValidation<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MS Security Guide\Enable Structured Exception Handling Overwrite Protection (SEHOP)<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.4<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.4<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|NetBT NodeType configuration<br /><sub>(AZ-WIN-202211)</sub> |**Description**: This setting determines which method NetBIOS over TCP/IP (NetBT) uses to register and resolve names. The available methods are: - The B-node (broadcast) method only uses broadcasts. - The P-node (point-to-point) method only uses name queries to a name server (WINS). - The M-node (mixed) method broadcasts first, then queries a name server (WINS) if broadcast failed. - The H-node (hybrid) method queries a name server (WINS) first, then broadcasts if the query failed. The recommended state for this setting is: `Enabled: P-node (recommended)` (point-to-point). **Note:** Resolution through LMHOSTS or DNS follows these methods. If the `NodeType` registry value is present, it overrides any `DhcpNodeType` registry value. If neither `NodeType` nor `DhcpNodeType` is present, the computer uses B-node (broadcast) if there are no WINS servers configured for the network, or H-node (hybrid) if there is at least one WINS server configured.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\NetBT\Parameters\NodeType<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\MS Security Guide\NetBT NodeType configuration<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.6<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.3.6<br /> |\= 2<br /><sub>(Registry)</sub> |Warning |
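The NetBT row above expects `NodeType = 2`, which corresponds to P-node. The four NetBIOS node types described in that row map to fixed registry values; a short illustrative sketch of that mapping:

```python
# NetBIOS-over-TCP/IP node types and their NodeType registry values,
# as described in the "NetBT NodeType configuration" row above.
NODE_TYPES = {
    1: "B-node (broadcast)",
    2: "P-node (point-to-point)",  # recommended: WINS name queries only
    4: "M-node (mixed)",           # broadcast first, then WINS
    8: "H-node (hybrid)",          # WINS first, then broadcast
}

def describe_node_type(value):
    """Translate a NodeType registry value into its NetBT method name."""
    return NODE_TYPES.get(value, "unknown")

print(describe_node_type(2))  # P-node (point-to-point)
```

This makes the expected value `= 2` in the table concrete: the baseline asks for name resolution via a WINS server only, with no broadcast fallback.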
## Administrative Templates - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
-|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |**Description**: This policy setting prevents connected users from being enumerated on domain-joined computers. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |**Description**: Some versions of the CredSSP protocol that is used by some applications (such as Remote Desktop Connection) are vulnerable to an encryption oracle attack against the client. This policy controls compatibility with vulnerable clients and servers and allows you to set the level of protection desired for the encryption oracle vulnerability. The recommended state for this setting is: `Enabled: Force Updated Clients`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Configure registry policy processing: Do not apply during periodic background processing' is set to 'Enabled: FALSE'<br /><sub>(CCE-36169-1)</sub> |**Description**: The "Do not apply during periodic background processing" option prevents the system from updating affected policies in the background while the computer is in use. When background updates are disabled, policy changes will not take effect until the next user logon or system restart. The recommended state for this setting is: `Enabled: FALSE` (unchecked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoBackgroundPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Configure registry policy processing: Process even if the Group Policy objects have not changed' is set to 'Enabled: TRUE'<br /><sub>(CCE-36169-1a)</sub> |**Description**: The "Process even if the Group Policy objects have not changed" option updates and reapplies policies even if the policies have not changed. The recommended state for this setting is: `Enabled: TRUE` (checked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoGPOListChanges<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |**Description**: This policy setting allows local users to be enumerated on domain-joined computers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |**Description**: This policy setting allows you to prevent Windows from retrieving device metadata from the Internet. The recommended state for this setting is: `Enabled`. **Note:** This will not prevent the installation of basic hardware drivers, but does prevent associated 3rd-party utility software from automatically being installed under the context of the `SYSTEM` account.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Informational |
-|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |**Description**: Remote host allows delegation of non-exportable credentials. When using credential delegation, devices provide an exportable version of credentials to the remote host. This exposes users to the risk of credential theft from attackers on the remote host. The Restricted Admin Mode and Windows Defender Remote Credential Guard features are two options to help protect against this risk. The recommended state for this setting is: `Enabled`. **Note:** More detailed information on Windows Defender Remote Credential Guard and how it compares to Restricted Admin Mode can be found at this link: [Protect Remote Desktop credentials with Windows Defender Remote Credential Guard (Windows 10) | Microsoft Docs](/windows/access-protection/remote-credential-guard)<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |**Description**: This policy setting prevents Group Policy from being updated while the computer is in use. This policy setting applies to Group Policy for computers, users and Domain Controllers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Logon\Block user from showing account details on sign-in<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'Logon.admx/adml' that is included with the Microsoft Windows 10 Release 1607 & Server 2016 Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.1<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled:` `Good, unknown and bad but critical`:<br />Computer Configuration\Policies\Administrative Templates\System\Early Launch Antimalware\Boot-Start Driver Initialization Policy<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'EarlyLaunchAM.admx/adml' that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.14.1<br /> |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
+|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Remote Assistance\Configure Offer Remote Assistance<br /> **Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `RemoteAssistance.admx/adml` that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.36.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.36.1<br /> |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Remote Assistance\Configure Solicited Remote Assistance<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `RemoteAssistance.admx/adml` that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.36.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.36.2<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Logon\Do not display network selection UI<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'Logon.admx/adml' that is included with the Microsoft Windows 8.1 & Server 2012 R2 Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.2<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not enumerate connected users on domain-joined computers<br /><sub>(AZ-WIN-202216)</sub> |**Description**: This policy setting prevents connected users from being enumerated on domain-joined computers. The recommended state for this setting is: `Enabled`.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\DontEnumerateConnectedUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Logon\Do not enumerate connected users on domain-joined computers<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.3<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.3<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Remote Procedure Call\Enable RPC Endpoint Mapper Client Authentication<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `RPC.admx/adml` that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.37.1<br /> |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Windows Time Service\Time Providers\Enable Windows NTP Client<br />**Note:** This Group Policy path is provided by the Group Policy template 'W32Time.admx/adml' that is included with all versions of the Microsoft Windows Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.53.1.1<br /> |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Encryption Oracle Remediation for CredSSP protocol<br /><sub>(AZ-WIN-201910)</sub> |**Description**: Some versions of the CredSSP protocol that is used by some applications (such as Remote Desktop Connection) are vulnerable to an encryption oracle attack against the client. This policy controls compatibility with vulnerable clients and servers and allows you to set the level of protection desired for the encryption oracle vulnerability. The recommended state for this setting is: `Enabled: Force Updated Clients`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters\AllowEncryptionOracle<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Credentials Delegation\Encryption Oracle Remediation<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.4.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.4.1<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Configure registry policy processing: Do not apply during periodic background processing' is set to 'Enabled: FALSE'<br /><sub>(CCE-36169-1)</sub> |**Description**: The "Do not apply during periodic background processing" option prevents the system from updating affected policies in the background while the computer is in use. When background updates are disabled, policy changes will not take effect until the next user logon or system restart. The recommended state for this setting is: `Enabled: FALSE` (unchecked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoBackgroundPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`, then set the `Process even if the Group Policy objects have not changed` option to `TRUE` (checked):<br />Computer Configuration\Policies\Administrative Templates\System\Group Policy\Configure registry policy processing<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `GroupPolicy.admx/adml` that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.2<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Configure registry policy processing: Process even if the Group Policy objects have not changed' is set to 'Enabled: TRUE'<br /><sub>(CCE-36169-1a)</sub> |**Description**: The "Process even if the Group Policy objects have not changed" option updates and reapplies policies even if the policies have not changed. The recommended state for this setting is: `Enabled: TRUE` (checked).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Group Policy\{35378EAC-683F-11D2-A89A-00C04FBBCFA2}\NoGPOListChanges<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`, then set the 'Process even if the Group Policy objects have not changed' option to 'TRUE' (checked):<br />Computer Configuration\Policies\Administrative Templates\System\Group Policy\Configure registry policy processing<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'GroupPolicy.admx/adml' that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.3<br /> |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Group Policy\Continue experiences on this device<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'GroupPolicy.admx/adml' that is included with the Microsoft Windows 10 Release 1607 & Server 2016 Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.4<br /> |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Enumerate local users on domain-joined computers<br /><sub>(AZ_WIN_202204)</sub> |**Description**: This policy setting allows local users to be enumerated on domain-joined computers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnumerateLocalUsers<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Logon\Enumerate local users on domain-joined computers<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.4<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.4<br /> |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting, the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured. Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Audit Process Creation\Include command line in process creation events<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `AuditSettings.admx/adml` that is included with the Microsoft Windows 8.1 & Server 2012 R2 Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.3.1<br /> |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Prevent device metadata retrieval from the Internet<br /><sub>(AZ-WIN-202251)</sub> |**Description**: This policy setting allows you to prevent Windows from retrieving device metadata from the Internet. The recommended state for this setting is: `Enabled`. **Note:** This will not prevent the installation of basic hardware drivers, but does prevent associated 3rd-party utility software from automatically being installed under the context of the `SYSTEM` account.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Device Metadata\PreventDeviceMetadataFromNetwork<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Device Installation\Prevent device metadata retrieval from the Internet<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.7.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.7.2<br /> |\= 1<br /><sub>(Registry)</sub> |Informational |
+|Remote host allows delegation of non-exportable credentials<br /><sub>(AZ-WIN-20199)</sub> |**Description**: Remote host allows delegation of non-exportable credentials. When using credential delegation, devices provide an exportable version of credentials to the remote host. This exposes users to the risk of credential theft from attackers on the remote host. The Restricted Admin Mode and Windows Defender Remote Credential Guard features are two options to help protect against this risk. The recommended state for this setting is: `Enabled`. **Note:** More detailed information on Windows Defender Remote Credential Guard and how it compares to Restricted Admin Mode can be found at this link: [Protect Remote Desktop credentials with Windows Defender Remote Credential Guard (Windows 10) \| Microsoft Docs](/windows/access-protection/remote-credential-guard)<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowProtectedCreds<br />**OS**: WS2016, WS2019<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Credentials Delegation\Remote host allows delegation of non-exportable credentials<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.4.2<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.4.2<br /> |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Logon\Turn off app notifications on the lock screen<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template 'Logon.admx/adml' that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.5<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off background refresh of Group Policy<br /><sub>(CCE-14437-8)</sub> |**Description**: This policy setting prevents Group Policy from being updated while the computer is in use. This policy setting applies to Group Policy for computers, users and Domain Controllers. The recommended state for this setting is: `Disabled`.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\DisableBkGndGroupPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\System\Group Policy\Turn off background refresh of Group Policy<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.5<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.21.5<br /> |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Internet Communication Management\Internet Communication settings\Turn off downloading of print drivers over HTTP<br />**Note:** This Group Policy path is provided by the Group Policy template 'ICM.admx/adml' that is included with all versions of the Microsoft Windows Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.22.1.1<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Enabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Internet Communication Management\Internet Communication settings\Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br />**Note:** This Group Policy path is provided by the Group Policy template 'ICM.admx/adml' that is included with all versions of the Microsoft Windows Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.22.1.4<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member<br />**Group Policy Path**: To establish the recommended configuration via GP, set the following UI path to `Disabled`:<br />Computer Configuration\Policies\Administrative Templates\System\Logon\Turn on convenience PIN sign-in<br />**Note:** This Group Policy path may not exist by default. It is provided by the Group Policy template `CredentialProviders.admx/adml` that is included with the Microsoft Windows 8.0 & Server 2012 (non-R2) Administrative Templates (or newer).<br />**Note 2:** In older Microsoft Windows Administrative Templates, this setting was initially named _Turn on PIN sign-in_, but it was renamed starting with the Windows 10 Release 1511 Administrative Templates.<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.7<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.8.28.7<br /> |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Templates - Windows Component
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Turn off cloud consumer account state content<br /><sub>(AZ-WIN-202217)</sub> |**Description**: This policy setting determines whether cloud consumer account state content is allowed in all Windows experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableConsumerAccountStateContent<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member<br />**Group Policy Path**: Computer Configuration\Policies\Administrative Templates\Windows Components\Cloud Content\Turn off cloud consumer account state content<br />**Compliance Standard Mappings**:<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Name**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Platform**&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**ID**<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2022&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.14.1<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;CIS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;WS2019&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18.9.14.1<br /> |\= 1<br /><sub>(Registry)</sub> |Warning |
## Administrative Templates - Windows Components

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Allow Diagnostic Data<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first-party apps. This setting does not apply to third-party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Registry)</sub> |Warning |
-|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Always install with elevated privileges<br /><sub>(CCE-37490-0)</sub> |**Description**: This setting controls whether Windows Installer uses system permissions when it installs any program on the system. **Note:** This setting appears both in the Computer Configuration and User Configuration folders. To make this setting effective, you must enable the setting in both folders. **Caution:** If enabled, skilled users can take advantage of the permissions this setting grants to change their privileges and gain permanent access to restricted files and folders. Note that the User Configuration version of this setting is not guaranteed to be secure. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\AlwaysInstallElevated<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. **Note:** If you do not configure this policy setting, the loca