Updates from: 10/04/2023 01:12:35
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory-b2c Roles Resource Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md
When planning your access control strategy, it's best to assign users the least
|[Company branding](customize-ui.md#configure-company-branding)| Customize your user flow pages.| [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator)|
|[User attributes](user-flow-custom-attributes.md)| Add or delete custom attributes available to all user flows.| [External ID User Flow Attribute Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-attribute-administrator)|
|Manage users| Manage [consumer accounts](manage-users-portal.md) and administrative accounts as described in this article.| [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator)|
-|Roles and administrators| Manage role assignments in Azure AD B2C directory. Create and manage groups that can be assigned to Azure AD B2C roles. |[Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator), [Privileged Role Administrator](../active-directory/roles/permissions-reference.md#privileged-role-administrator)|
+|Roles and administrators| Manage role assignments in Azure AD B2C directory. Create and manage groups that can be assigned to Azure AD B2C roles. Note that the Azure AD custom roles feature is currently not available for Azure AD B2C directories. |[Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator), [Privileged Role Administrator](../active-directory/roles/permissions-reference.md#privileged-role-administrator)|
|[User flows](user-flow-overview.md)|For quick configuration and enablement of common identity tasks, like sign-up, sign-in, and profile editing.| [External ID User Flow Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
|[Custom policies](user-flow-overview.md)| Create, read, update, and delete all custom policies in Azure AD B2C.| [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator)|
|[Policy keys](policy-keys-overview.md)|Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords used in custom policies.|[B2C IEF Keyset Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-keyset-administrator)|
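
For reference, a minimal Microsoft Graph PowerShell sketch of granting one of these Azure AD B2C roles; the role name, user UPN, and tenant value below are illustrative assumptions, not values from the article:

```powershell
# Sketch: assign a built-in Azure AD B2C directory role to a user (illustrative values).
Connect-MgGraph -TenantId "contoso.onmicrosoft.com" -Scopes "RoleManagement.ReadWrite.Directory"

# Look up the role definition and the target user (names are assumptions).
$role = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayName eq 'B2C IEF Policy Administrator'"
$user = Get-MgUser -UserId "casey@contoso.onmicrosoft.com"

# Create the role assignment, scoped to the whole directory.
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $user.Id -RoleDefinitionId $role.Id -DirectoryScopeId "/"
```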
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-oauth2.md
Title: OAUTH 2.0 authentication with Microsoft Entra ID
-description: Architectural guidance on achieving OAUTH 2.0 authentication with Microsoft Entra ID.
+ Title: OAUTH 2.0 authorization with Microsoft Entra ID
+description: Architectural guidance on achieving OAUTH 2.0 authorization with Microsoft Entra ID.
-# OAuth 2.0 authentication with Microsoft Entra ID
+# OAuth 2.0 authorization with Microsoft Entra ID
OAuth 2.0 is the industry-standard protocol for authorization. It allows a user to grant limited access to their protected resources. Designed to work specifically with Hypertext Transfer Protocol (HTTP), OAuth separates the role of the client from the resource owner. The client requests access to the resources controlled by the resource owner and hosted by the resource server. The authorization server issues access tokens with the approval of the resource owner. The client uses the access tokens to access the protected resources hosted by the resource server.
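
As a rough illustration of that flow, a confidential client can redeem its credentials for an access token at the Microsoft Entra token endpoint and present it to the resource server; the tenant ID, client ID, secret, and scope below are placeholder assumptions:

```powershell
# Sketch: OAuth 2.0 client credentials grant against Microsoft Entra ID (placeholder values).
$tenantId = "00000000-0000-0000-0000-000000000000"          # assumed tenant ID
$body = @{
    grant_type    = "client_credentials"
    client_id     = "11111111-1111-1111-1111-111111111111"  # assumed app registration
    client_secret = "<client-secret>"                       # assumed secret
    scope         = "https://graph.microsoft.com/.default"
}

# The authorization server (Microsoft Entra ID) issues the access token.
$token = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body

# The client presents the token to the resource server (here, Microsoft Graph).
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/organization" -Headers @{ Authorization = "Bearer $($token.access_token)" }
```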
Rich client and modern app scenarios and RESTful web API access.
* **Web app**: The web app, or resource server, is where the resource or data resides. It trusts the authorization server to securely authenticate and authorize the OAuth client.
-* **Microsoft Entra ID**: Microsoft Entra ID is the authorization server, also known as the Identity Provider (IdP). It securely handles anything to do with the user's information, their access, and the trust relationship. It's responsible for issuing the tokens that grant and revoke access to resources.
+* **Microsoft Entra ID**: Microsoft Entra ID is the authentication server, also known as the Identity Provider (IdP). It securely handles anything to do with the user's information, their access, and the trust relationship. It's responsible for issuing the tokens that grant and revoke access to resources.
<a name='implement-oauth-20-with-azure-ad'></a>
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-user-accounts.md
As you design and operationalize a log monitoring and alerting strategy, conside
| - | - | - | - | - |
| Leaked credentials user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Leaked credentials <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
| Microsoft Entra Threat Intelligence user risk detection| High| Microsoft Entra ID Risk Detection logs| UX: Microsoft Entra threat intelligence <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
-| Anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)[Sigma rules]<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
+| Anonymous IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Anonymous IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
| Atypical travel sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Atypical travel <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
| Anomalous Token| Varies| Microsoft Entra ID Risk Detection logs| UX: Anomalous Token <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
| Malware linked IP address sign-in risk detection| Varies| Microsoft Entra ID Risk Detection logs| UX: Malware linked IP address <br><br>API: See [riskDetection resource type - Microsoft Graph](/graph/api/resources/riskdetection)| See [What is risk? Microsoft Entra ID Protection](../identity-protection/concept-identity-protection-risks.md)<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)|
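
For programmatic monitoring along these lines, a hedged sketch that pulls recent detections through the documented Microsoft Graph `riskDetection` resource; the seven-day window is an assumption:

```powershell
# Sketch: list risk detections from the last 7 days via Microsoft Graph (assumed time window).
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

$since = (Get-Date).ToUniversalTime().AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")
$uri   = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?`$filter=detectedDateTime ge $since"

# Each detection carries riskEventType (for example, anonymizedIPAddress) and riskLevel.
(Invoke-MgGraphRequest -Method GET -Uri $uri).value |
    Select-Object riskEventType, riskLevel, detectedDateTime, userPrincipalName
```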
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
For passwordless sign-in to work, users should disable legacy notification throu
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
-1. Follow the steps at [Enable passwordless phone sign-in authentication](../authentication/howto-authentication-passwordless-phone.md#enable-passwordless-phone-sign-in-authentication-methods)
+1. Follow the steps at [Enable passwordless phone sign-in authentication](../authentication/howto-authentication-passwordless-phone.md#enable-passwordless-phone-sign-in-authentication-methods).
>[!IMPORTANT]
>In the configuration above, under step 4, choose the **Passwordless** option. For each group added for PSI, set the **Authentication mode** to **Passwordless** so that passwordless sign-in works with CBA. If the admin configures "Any", CBA and PSI don't work.
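
One way to script that Passwordless setting is through the Microsoft Graph authentication methods policy; the group ID below is a placeholder, and the body shape should be checked against the current `microsoftAuthenticatorAuthenticationMethodConfiguration` reference before use:

```powershell
# Sketch: set the Microsoft Authenticator method to passwordless (deviceBasedPush) for one group.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$config = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(@{
        id                 = "<group-object-id>"   # placeholder group added for PSI
        targetType         = "group"
        authenticationMode = "deviceBasedPush"     # Passwordless; "any" breaks CBA + PSI per the note above
    })
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/microsoftAuthenticator" `
    -Body ($config | ConvertTo-Json -Depth 5)
```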
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
The **Remediation** dashboard has two privilege-on-demand (POD) workflows you ca
The request you submitted is now listed in **Pending Requests**. +
+## Time limits per frequency type when creating a request
+| Frequency Type | Time Limit (in hours) |
+|-|--|
+|ASAP | 24 |
+|Once | 2160 |
+|Daily | 23 |
+|Weekly | 23 |
+|Monthly | 672 |
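
A small helper sketch that mirrors the limits above, which can be handy for validating a requested duration before submitting; the values come from the table, while the function itself is hypothetical:

```powershell
# Sketch: validate a requested duration (in hours) against the per-frequency limits listed above.
$podTimeLimits = @{ ASAP = 24; Once = 2160; Daily = 23; Weekly = 23; Monthly = 672 }

function Test-PodRequestDuration {
    param([string]$FrequencyType, [int]$Hours)
    if (-not $podTimeLimits.ContainsKey($FrequencyType)) { throw "Unknown frequency type: $FrequencyType" }
    return $Hours -le $podTimeLimits[$FrequencyType]
}

Test-PodRequestDuration -FrequencyType "Daily" -Hours 8     # True
Test-PodRequestDuration -FrequencyType "Once"  -Hours 3000  # False, limit is 2160 hours
```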
+
## Approve or reject a request for permissions
1. On the Permissions Management home page, select the **Remediation** tab, and then select the **My requests** subtab.
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the Conditional Access policy that applied to this request or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
+| AADSTS530035 |BlockedBySecurityDefaults - Access has been blocked by security defaults. This is due to the request using legacy auth or being deemed unsafe by security defaults policies. For additional information, please visit [enforced security policies](../fundamentals/security-defaults.md#enforced-security-policies).|
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
| AADSTS53011 | User blocked due to risk on home tenant. |
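
These codes also surface in the `error_codes` array of the token endpoint's JSON error response; a hedged sketch of checking for two of them in PowerShell, with placeholder request values:

```powershell
# Sketch: inspect AADSTS error codes returned by a failed token request (placeholder values).
$body = @{ grant_type = "client_credentials"; client_id = "<app-id>"; client_secret = "<secret>"; scope = "https://graph.microsoft.com/.default" }

try {
    Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token" -Body $body
}
catch {
    # The error body is JSON with "error", "error_description", and a numeric "error_codes" array.
    $err = $_.ErrorDetails.Message | ConvertFrom-Json
    if ($err.error_codes -contains 53003)  { Write-Warning "Blocked by Conditional Access (AADSTS53003)." }
    if ($err.error_codes -contains 530035) { Write-Warning "Blocked by security defaults (AADSTS530035)." }
}
```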
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | ASP.NET Core|[Use the Conditional Access auth context to perform step\-up authentication](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | [Microsoft.Identity.Web](/dotnet/api/microsoft-authentication-library-dotnet/confidentialclient) | Authorization code | > | ASP.NET Core|[Active Directory FS to Microsoft Entra migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [MSAL.NET](/entra/msal/dotnet) | &#8226; SAML <br/> &#8226; OpenID connect | > | ASP.NET | &#8226; [Microsoft Graph Training Sample](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) <br/> &#8226; [Sign in users and call Microsoft Graph with admin restricted scope](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) <br/> &#8226; [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [MSAL.NET](/entra/msal/dotnet) | &#8226; OpenID connect <br/> &#8226; Authorization code |
-> | Java </p> Spring |Microsoft Entra Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | &#8226; [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> &#8226; Microsoft Entra ID Boot Starter | Authorization code |
-> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code |
+> | Java </p> Spring |Microsoft Entra Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/protect-web-api) | &#8226; [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> &#8226; Microsoft Entra ID Boot Starter | Authorization code |
+> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code |
> | Node.js </p> Express | Express web app series <br/> &#8226; [Quickstart: sign in users](https://github.com/Azure-Samples/ms-identity-node/blob/main/README.md)<br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md) <br/> &#8226; [Call Microsoft Graph via BFF proxy](https://github.com/Azure-Samples/ms-identity-node) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | [MSAL Node](/javascript/api/@azure/msal-node) | &#8226; Authorization code <br/>&#8226; Backend-for-Frontend (BFF) proxy | > | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [A template to sign in Microsoft Entra ID or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/entra/msal/python) | Authorization code | > | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/entra/msal/python) | Authorization code |
The following samples show how to protect a web API with the Microsoft identity
> | -- | -- |-- |-- |
> | ASP.NET | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) | [MSAL.NET](/entra/msal/dotnet) | On-Behalf-Of (OBO) |
> | ASP.NET Core | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | [MSAL.NET](/entra/msal/dotnet) | On-Behalf-Of (OBO) |
-> | Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | On-Behalf-Of (OBO) |
+> | Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/protect-web-api) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | On-Behalf-Of (OBO) |
> | Node.js | &#8226; [Protect a Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api) <br/> &#8226; [Protect a Node.js Web API with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | [MSAL Node](/javascript/api/@azure/msal-node) | Authorization bearer |

### Desktop
The following samples show public client desktop applications that access the Mi
> | - | -- | - | -- | > | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) <br/> &#8226; [Authenticate users with MSAL.NET in a WinUI desktop application](https://github.com/Azure-Samples/ms-identity-netcore-winui) | [MSAL.NET](/entra/msal/dotnet) |&#8226; Authorization code with PKCE <br/> &#8226; Device code | > | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | [MSAL.NET](/entra/msal/dotnet) | Integrated Windows authentication |
-> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication |
+> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2-client-side/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | [MSAL Node](/javascript/api/@azure/msal-node) | Authorization code with PKCE |
> | .NET Core | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | [MSAL.NET](/entra/msal/dotnet) | Resource owner password credentials |
> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/entra/msal/python) | Resource owner password credentials |
The following samples show an application that accesses the Microsoft Graph API
> | -- | -- |-- |-- |
> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi) <br/> &#8226; [Using managed identity and Azure Key Vault](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/3-Using-KeyVault)| [MSAL.NET](/entra/msal/dotnet) | Client credentials grant|
> | ASP.NET |[Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | [MSAL.NET](/entra/msal/dotnet) | Client credentials grant|
-> | Java | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-certificate)| [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Client credentials grant|
+> | Java | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1-server-side/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1-server-side/msal-client-credential-certificate)| [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Client credentials grant|
> | Node.js | [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | [MSAL Node](/javascript/api/@azure/msal-node) | Client credentials grant |
> | Python | &#8226; [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> &#8226; [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | [MSAL Python](/entra/msal/python)| Client credentials grant|
The following sample shows a public client application running on a device witho
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
> | .NET Core | [Invoke protected API from text-only device](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) | [MSAL.NET](/entra/msal/dotnet) | Device code|
-> | Java | [Sign in users and invoke protected API from text-only device](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Device-Code-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Device code |
+> | Java | [Sign in users and invoke protected API from text-only device](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2-client-side/Device-Code-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Device code |
> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | [MSAL Python](/entra/msal/python) | Device code |

### Microsoft Teams applications
The following samples show how to build applications for the Java language and p
> [!div class="mx-tdCol2BreakAll"]
> | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
-> | Web API | [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | On-Behalf-Of (OBO) |
-> | Desktop | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication |
+> | Web API | [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/protect-web-api) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | On-Behalf-Of (OBO) |
+> | Desktop | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2-client-side/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication |
> | Mobile | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-java) | [MSAL Android](https://github.com/AzureAD/microsoft-authentication-library-for-android) | Authorization code with PKCE |
> | Headless | [Sign in users and invoke protected API from text-only device](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Device-Code-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Device code |
-> | Service/</br>daemon | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-certificate)| [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Client credentials grant|
+> | Service/</br>daemon | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1-server-side/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1-server-side/msal-client-credential-certificate)| [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Client credentials grant|
#### Java Spring

> [!div class="mx-tdCol2BreakAll"]
> | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
-> | Web application |Microsoft Entra Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | &#8226; [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> &#8226; Microsoft Entra ID Boot Starter | Authorization code |
+> | Web application |Microsoft Entra Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4-spring-web-app/3-Authorization-II/protect-web-api) | &#8226; [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> &#8226; Microsoft Entra ID Boot Starter | Authorization code |
#### Java Servlet

> [!div class="mx-tdCol2BreakAll"]
> | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
> | -- | -- |-- |-- |
-> | Web application | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code |
+> | Web application | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3-java-servlet-web-app/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code |
### Python
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer logs to locate the phase and error code for the join failures.
> [!NOTE]
> The output is available from the Windows 10 May 2021 update (version 21H1).
-The "Attempt Status" field under the "AzureAdPrt" field will provide the status of the previous PRT attempt, along with other required debug information. For earlier Windows versions, extract the information from the Microsoft Entra analytics and operational logs.
+The "Attempt Status" field under the "AzureAdPrt" field will provide the status of the previous PRT attempt, along with other required debug information. For earlier Windows versions, extract the information from the [Microsoft Entra analytics and operational logs](/troubleshoot/windows-server/networking/diagnostic-logging-troubleshoot-workplace-join-issues#enable-workplace-join-debug-logging-by-using-event-viewer).
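
On builds that include the field, you can pull it straight from the client; a minimal sketch, assuming it's run on the affected Windows device:

```powershell
# Sketch: surface the Azure AD PRT fields from dsregcmd output on the affected device.
$status = dsregcmd /status

# Shows AzureAdPrt and, on Windows 10 21H1 and later, the "Attempt Status" details.
$status | Select-String -Pattern 'AzureAdPrt', 'Attempt Status' -Context 0, 2
```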
``` +-+
active-directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md
For some components of a service, work is in progress to be included in the EU D
**EU Data Boundary:**
-See more information on Microsoft Entra temporary partial customer data transfers from the EU Data Boundary [Services that temporarily transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-temporary-partial-transfers.md#security-services).
+See more information on Microsoft Entra temporary partial customer data transfers from the EU Data Boundary [Services that temporarily transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-temporary-partial-transfers#security-services).
## Services that will permanently transfer a subset of customer data out of the EU Data Residency and EU Data Boundary
Some components of a service will continue to transfer a limited amount of custo
**EU Data Boundary:**
-See more information on Microsoft Entra permanent partial customer data transfers from the EU Data Boundary [Services that will permanently transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-permanent-partial-transfers.md#security-services).
+See more information on Microsoft Entra permanent partial customer data transfers from the EU Data Boundary [Services that will permanently transfer a subset of customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-permanent-partial-transfers#security-services).
## Other considerations
Some services include capabilities that are optional (in some cases requiring a
**EU Data Boundary:**
-See more information on optional service capabilities that transfer customer data out of the EU Data Boundary [Optional service capabilities that transfer customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-transfers-for-optional-capabilities.md#microsoft-entra-id).
+See more information on optional service capabilities that transfer customer data out of the EU Data Boundary [Optional service capabilities that transfer customer data out of the EU Data Boundary](/privacy/eudb/eu-data-boundary-transfers-for-optional-capabilities#microsoft-entra-id).
### Other EU Data Boundary online services
active-directory How To Rename Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-rename-azure-ad.md
function Update-Terminology {
foreach ($item in $Terminology.GetEnumerator()) {
    $old = [regex]::Escape($item.Key)
    $new = $item.Value
- $toReplace = '(?<!(name=\"[^$]*|https?:\/\/aka.ms/[a-z|0-1]*))' + $($old)
+ $toReplace = '(?<!(name=\"[^$]{1,100}|https?://aka.ms/[a-z0-9/-]{1,100}))' + $($old)
    # Replace the old terminology with the new one
    $Content.Value = $Content.Value -replace $toReplace, $new
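
A self-contained sketch of how that pattern behaves, using a made-up terminology map and sample line (both are assumptions for illustration):

```powershell
# Sketch: apply the renaming regex to one sample line (sample map and text are illustrative).
$Terminology = @{ 'Azure AD' = 'Microsoft Entra ID' }
$line = 'Configure Azure AD before you enable the feature (see https://aka.ms/azuread-setup).'

foreach ($item in $Terminology.GetEnumerator()) {
    $old = [regex]::Escape($item.Key)
    $new = $item.Value
    # The negative lookbehind skips matches inside name="..." attributes and aka.ms short-link paths.
    $toReplace = '(?<!(name=\"[^$]{1,100}|https?://aka.ms/[a-z0-9/-]{1,100}))' + $old
    $line = $line -replace $toReplace, $new
}

$line   # "Configure Microsoft Entra ID before you enable the feature (see https://aka.ms/azuread-setup)."
```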
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Use the following table to learn about changes including deprecations, retiremen
|[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023|
|[Azure AD PowerShell and MSOnline PowerShell](https://aka.ms/aadgraphupdate)|Deprecation|Mar 30, 2024|
|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024|
-|[Legacy MFA & SSPR policy](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024|
+|[Legacy MFA & SSPR policy](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2025|
|['Require approved client app' Conditional Access Grant](https://aka.ms/RetireApprovedClientApp)|Retirement|Mar 31, 2026|
active-directory Access Reviews External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-external-users.md
The output also includes the individual domains for each of these external ident
If you have external identities using resources such as Teams or other applications not yet governed by Entitlement Management, you may want to review access to these resources regularly, too. Microsoft Entra [Access Reviews](create-access-review.md) gives you the ability to review external identities' access by either letting the resource owner, external identities themselves, or another delegated person you trust attest to whether continued access is required. Access Reviews target a resource and create a review activity scoped to either Everyone who has access to the resource or Guest users only. The reviewer then sees the resulting list of users they need to review – either all users, including employees of your organization, or external identities only.
-![using a group to review access](media/access-reviews-external-users/group-members.png)
-
Establishing a resource owner-driven review culture helps govern access for external identities. Resource owners, accountable for access, availability, and security of the information they own, are, in most cases, your best audience to drive decisions around access to their resources and are closer to the users who access them than central IT or a sponsor who manages many externals.

## Create Access Reviews for external identities

Users that no longer have access to any resources in your tenant can be removed if they no longer work with your organization. Before you block and delete these external identities, you may want to reach out to these external users and make sure you haven't overlooked a project or standing access they have that they still need. When you create a group that contains all external identities as members that you found have no access to any resources in your tenant, you can use Access Reviews to have all externals self-attest to whether they still need or have access – or will still need access in the future. As part of the review, the review creator in Access Reviews can use the **Require reason on approval** function to require external users to provide a justification for continued access, through which you can learn where and how they still need access in your tenant. Also, you can enable the **Additional content for reviewer email** setting to let users know that they'll be losing access if they don't respond and, should they still need access, a justification is required. If you want to go ahead and let Access Reviews **disable and delete** external identities, should they fail to respond or provide a valid reason for continued access, you can use the Disable and delete option, as described in the next section.
-![limiting the scope of the review to guest users only](media/access-reviews-external-users/guest-users-only.png)
+To create an Access Review for external identities, you'd follow these steps:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator).
+
+1. Browse to **Identity** > **Groups** > **All groups**.
+
+1. Search for the group whose members are the external identities that don't have access to resources in your tenant, and make note of this group. To automate creating a group whose members fit this criterion, see: [Gathering information around external identity proliferation](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse).
+
+1. Browse to **Identity governance** > **Access Reviews**.
+
+1. Select **+ New access review**.
+
+1. Select **Teams + Groups** and then select the group you noted earlier that contains the external identities to set the **Review scope**.
+
+1. Set the **Scope** as **Guest users only**.
+ [ ![Screenshot of limiting the scope of the review to guest users only.](media/access-reviews-external-users/guest-users-only.png) ](media/access-reviews-external-users/guest-users-only.png#lightbox)
+1. In the **Upon completion settings** section, you can select **Block users from signing-in for 30 days, then remove user from the tenant** under the **Action to apply on denied users** option. For more information, see: [Disable and delete external identities with Microsoft Entra access reviews](access-reviews-external-users.md#disable-and-delete-external-identities-with-microsoft-entra-access-reviews).
+
+1. After the access review is created, the guest user must certify their access before the review finishes. This is done by the guest approving or not approving their access within the My Access portal. For a full step by step guide, see: [Review access to groups and applications in access reviews](perform-access-review.md).
When the review finishes, the **Results** page shows an overview of the response given by every external identity. You can choose to apply results automatically and let Access Reviews disable and delete them. Alternatively, you can look through the responses given and decide whether you want to remove a user's access or follow up with them and get additional information before making a decision. If some users still have access to resources that you haven't reviewed yet, you can use the review as part of your discovery and enrich your next review and attestation cycle.
+For a detailed step by step guide, see: [Create an access review of groups and applications in Microsoft Entra ID](create-access-review.md).
+
<a name='disable-and-delete-external-identities-with-azure-ad-access-reviews'></a>

## Disable and delete external identities with Microsoft Entra access reviews
In addition to the option of removing unwanted external identities from resource
![upon completion settings](media/access-reviews-external-users/upon-completion-settings.png)
-When creating a new Access Review, choose the **Select Teams + groups** option and limit the scope to **Guest users only**. In the "Upon completion settings" section, for **Action to apply on denied users** you can define **Block users from signing-in for 30 days, then remove user from the tenant**.
-
This setting allows you to identify, block, and delete external identities from your Microsoft Entra tenant. External identities who are reviewed and denied continued access by the reviewer will be blocked and deleted, irrespective of the resource access or group membership they have. This setting is best used as a last step after you have validated that the external users in review no longer carry resource access and can safely be removed from your tenant, or if you want to make sure they're removed, irrespective of their standing access. The "Disable and delete" feature blocks the external user first, taking away their ability to sign in to your tenant and access resources. Resource access isn't revoked in this stage, and in case you want to reinstate the external user, their ability to sign in can be reconfigured. Upon no further action, a blocked external identity will be deleted from the directory after 30 days, removing the account and their access.

## Next steps
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
New-MgOauth2PermissionGrant -BodyParameter $params |
4. Confirm that you've granted tenant-wide admin consent by running the following request.

```powershell
- Get-MgOauth2PermissionGrant -Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' consentType eq 'AllPrincipals'"
+ Get-MgOauth2PermissionGrant -Filter "clientId eq 'b0d9b9e3-0ecf-4bfd-8dab-9273dd055a94' and consentType eq 'AllPrincipals'"
```

## Grant admin consent for application permissions
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
Previously updated : 09/28/2023 Last updated : 10/02/2023
The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you
### Microsoft Graph activity logs
-The `MicrosoftGraphActivityLogs` provide administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace with `SignInLogs` to cross-reference details of token requests for sign-in logs.
+The `MicrosoftGraphActivityLogs` is associated with a feature that's still in preview, but may be visible in the Microsoft Entra admin center. These logs provide administrators full visibility into all HTTP requests accessing your tenant's resources through the Microsoft Graph API. You can use these logs to identify activities that a compromised user account conducted in your tenant or to investigate problematic or unexpected behaviors for client applications, such as extreme call volumes. Route these logs to the same Log Analytics workspace with `SignInLogs` to cross-reference details of token requests for sign-in logs.
-The feature is currently in public preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
+The feature is currently in private preview. For more information, see [Access Microsoft Graph activity logs (preview)](/graph/microsoft-graph-activity-logs-overview).
### Network access traffic logs
active-directory Concept Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-identity-secure-score.md
+
+ Title: What is the identity secure score?
+description: Learn how to use the identity secure score to improve the security posture of your Microsoft Entra tenant.
+Last updated : 10/03/2023
+# Customer intent: As an IT admin, I want to know how to use the identity secure score and related recommendations to improve the security posture of my Microsoft Entra tenant.
++
+# What is identity secure score?
+
+The identity secure score is shown as a percentage that functions as an indicator for how aligned you are with Microsoft's recommendations for security. Each improvement action in identity secure score is tailored to your configuration.
+
+![Secure score](./media/concept-identity-secure-score/recommendations-identity-secure-score.png)
+
+This score helps to:
+
+- Objectively measure your identity security posture
+- Plan identity security improvements
+- Review the success of your improvements
+
+You can access the score and view individual recommendations related to your score in Microsoft Entra recommendations. You can also view the score and the full identity secure score dashboard, which compares your score to other tenants in the same industry and of a similar size. The dashboard also shows how your score has changed over time.
+
+By following the improvement actions in the Microsoft Entra recommendations, you can:
+
+- Improve your security posture and your score
+- Take advantage of the features available to your organization as part of your identity investments
+
+## How do I get my secure score?
+
+Identity secure score is available to free and paid customers.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
+1. Browse to **Protection** > **Identity Secure Score** to view the dashboard.
+
+The score and related recommendations are also found at **Identity** > **Overview** > **Recommendations**.
+
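
If you prefer to pull the score programmatically, a hedged sketch using the Microsoft Graph `secureScores` endpoint, which returns the combined Microsoft Secure Score including its identity-category controls:

```powershell
# Sketch: read the most recent secure score record and its Identity-category controls via Microsoft Graph.
Connect-MgGraph -Scopes "SecurityEvents.Read.All"

$score = (Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/security/secureScores?`$top=1").value[0]

"Current score: $($score.currentScore) of $($score.maxScore)"

# Identity-category controls roughly correspond to the identity secure score improvement actions.
$score.controlScores | Where-Object { $_.controlCategory -eq 'Identity' } |
    Select-Object controlName, score
```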
+## How does it work?
+
+Every 24 hours, we look at your security configuration and compare your settings with the recommended best practices. Based on the outcome of this evaluation, a new score is calculated for your directory. It's possible that your security configuration isn't fully aligned with the best practice guidance and the improvement actions are only partially met. In these scenarios, you're awarded a portion of the max score available for the control.
+
+### Working with improvement actions on the dashboard
+
+Each recommendation is measured based on your configuration. If you're using third-party products to enable a best practice recommendation, you can indicate this configuration in the settings of an improvement action. You may set recommendations to be ignored if they don't apply to your environment. An ignored recommendation doesn't contribute to the calculation of your score.
+
+![Ignore or mark action as covered by third party](./media/concept-identity-secure-score/identity-secure-score-ignore-or-third-party-reccomendations.png)
+
+- **To address** - You recognize that the improvement action is necessary and plan to address it at some point in the future. This state also applies to actions that are detected as partially, but not fully completed.
+- **Risk accepted** - Security should always be balanced with usability, and not every recommendation works for everyone. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You aren't awarded any points, and the action isn't visible in the list of improvement actions. You can view this action in history or undo it at any time.
+- **Planned** - There are concrete plans in place to complete the improvement action.
+- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You're awarded the points the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. Keep in mind, Microsoft has no visibility into the completeness of implementation if the improvement action is marked as either of these statuses.
+
+### Working with secure score recommendations
+
+Identity secure score improvement actions also appear in Microsoft Entra recommendations. They both appear in the same list, but the secure score recommendations show the score.
+
+![Screenshot of the recommendations list with the secure score recommendations highlighted.](./media/concept-identity-secure-score/secure-score-recommendations-list.png)
+
+To address a secure score recommendation, select it from the list to view the details and action plan. If you take the appropriate action, the status changes automatically the next time the service runs. You can also mark the recommendation as *dismissed* or *postponed*. For more information on working with recommendations, see [How to use recommendations](./howto-use-recommendations.md).
+
+## How does it help me?
+
+The secure score helps you to:
+
+- Objectively measure your identity security posture
+- Plan identity security improvements
+- Review the success of your improvements
+
+## What you should know
+
+There are several things to consider when working with your identity secure score.
+
+### Who can use the identity secure score?
+
+To access identity secure score, you must be assigned one of the following roles in Microsoft Entra ID.
+
+#### Read and write roles
+
+With read and write access, you can make changes and directly interact with identity secure score.
+
+* Global Administrator
+* Security Administrator
+* Exchange Administrator
+* SharePoint Administrator
+
+#### Read-only roles
+
+With read-only access, you aren't able to edit the status of an improvement action.
+
+* Helpdesk Administrator
+* User Administrator
+* Service Support Administrator
+* Security Reader
+* Security Operator
+* Global Reader
+
+### How are controls scored?
+
+Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation states there's a maximum of 10.71% increase if you protect all your users with MFA and you have 5 of 100 total users protected, you're given a partial score around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score).
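As a quick illustration of the proportional calculation in that example (plain arithmetic, not output from the service):

```bash
# 5 of 100 users protected, control worth a maximum of 10.71%
awk 'BEGIN { printf "partial score: %.4f%%\n", 5 / 100 * 10.71 }'
# prints: partial score: 0.5355%
```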
+
+### What does [Not Scored] mean?
+
+Actions labeled as [Not Scored] are ones you can perform in your organization but aren't scored. So, you can still improve your security, but you aren't given credit for those actions right now.
+
+### How often is my score updated?
+
+The score is calculated once per day (around 1:00 AM PST). If you make a change to a measured action, the score will automatically update the next day. It may take up to 48 hours for a change to be reflected in your score.
+
+![Screenshot of the secure score with the last updated date and time highlighted.](./media/concept-identity-secure-score/secure-score-refresh-time.png)
+
+### My score changed. How do I figure out why?
+
+Head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/), where you find your complete Microsoft secure score. You can easily see all the changes to your secure score by reviewing the in-depth changes on the history tab.
+
+### Does the secure score measure my risk of getting breached?
+
+No, secure score doesn't express an absolute measure of how likely you are to get breached. It expresses the extent to which you have adopted features that can offset risk. No service can guarantee protection, and the secure score shouldn't be interpreted as a guarantee in any way.
+
+### How should I interpret my score?
+
+Your score improves for configuring recommended security features or performing security-related tasks (like reading reports). Some actions are scored for partial completion, like enabling multifactor authentication (MFA) for your users. Your secure score is directly representative of the Microsoft security services you use. Remember that security must be balanced with usability. All security controls have a user impact component. Controls with low user impact should have little to no effect on your users' day-to-day operations.
+
+To see your score history, head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/) and review your overall Microsoft secure score. You can see changes to your overall secure score by clicking **View History**. Choose a specific date to see which controls were enabled for that day and what points you earned for each one.
+
+### How does the identity secure score relate to the Microsoft 365 secure score?
+
+The [Microsoft secure score](/office365/securitycompliance/microsoft-secure-score) contains five distinct control and score categories:
+
+- Identity
+- Data
+- Devices
+- Infrastructure
+- Apps
+
+The identity secure score represents the identity part of the Microsoft secure score. This overlap means that the recommendations for the identity secure score and the Identity category of the Microsoft secure score are the same.
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 09/26/2023 Last updated : 10/02/2023
When reviewing the logs for this situation, the sign-in logs for the home tenant
The service principal sign-in logs don't include first-party, app-only sign-in activity. This type of activity happens when first-party apps get tokens for an internal Microsoft job where there's no direction or context from a user. We exclude these logs so you're not paying for logs related to internal Microsoft tokens within your tenant.
-You may identify Microsoft Graph events that don't correlate to a service principal sign-in if you're routing `MicrosoftGraphActivityLogs` with `SignInLogs` to the same Log Analytics workspace. This integration allows you to cross reference the token issued by the Microsoft Graph activity with the sign-in. The `UniqueTokenIdentifier` in the Microsoft Graph activity logs would be missing from the service principal sign-in logs.
+You may identify Microsoft Graph events that don't correlate to a service principal sign-in if you're routing `MicrosoftGraphActivityLogs` with `SignInLogs` to the same Log Analytics workspace. This integration allows you to cross reference the token issued for the Microsoft Graph API call with the sign-in activity. The `UniqueTokenIdentifier` for sign-in logs and the `SignInActivityId` in the Microsoft Graph activity logs would be missing from the service principal sign-in logs.
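A sketch of that cross-reference as a Log Analytics query run from the CLI (hypothetical workspace ID; requires the `log-analytics` CLI extension, and the column names assume the default table schemas):

```bash
# Microsoft Graph activity in the last hour with no matching service principal sign-in
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query 'MicrosoftGraphActivityLogs | where TimeGenerated > ago(1h) | join kind=leftanti (AADServicePrincipalSignInLogs | project UniqueTokenIdentifier) on $left.SignInActivityId == $right.UniqueTokenIdentifier | take 20'
```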
### Non-interactive user sign-ins
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
Title: What are Microsoft Entra recommendations?
-description: Provides a general overview of Microsoft Entra recommendations.
+description: Provides a general overview of Microsoft Entra recommendations so you can keep your tenant secure and healthy.
Previously updated : 09/21/2023 Last updated : 10/03/2023
# What are Microsoft Entra recommendations?
-Keeping track of all the settings and resources in your tenant can be overwhelming. The Microsoft Entra recommendations feature helps monitor the status of your tenant so you don't have to. The Microsoft Entra recommendations feature helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Microsoft Entra ID.
+Keeping track of all the settings and resources in your tenant can be overwhelming. The Microsoft Entra recommendations feature helps monitor the status of your tenant so you don't have to. These recommendations help ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Microsoft Entra ID.
-The Microsoft Entra recommendations feature provides you with personalized insights with actionable guidance to:
+Microsoft Entra recommendations now include *identity secure score* recommendations. These recommendations provide similar insights into the security of your tenant. Identity secure score recommendations include *secure score points*, which are calculated as an overall score based on several security factors. For more information, see [What is Identity Secure Score](concept-identity-secure-score.md).
+
+All these Microsoft Entra recommendations provide you with personalized insights with actionable guidance to:
- Help you identify opportunities to implement best practices for Microsoft Entra ID-related features. - Improve the state of your Microsoft Entra tenant. - Optimize the configurations for your scenarios.
-This article gives you an overview of how you can use Microsoft Entra recommendations. As an administrator, you should review your tenant's Microsoft Entra recommendations, and their associated resources periodically.
-
-## What it is
+This article gives you an overview of how you can use Microsoft Entra recommendations.
-The Microsoft Entra recommendations feature is the Microsoft Entra specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources.
+## How does it work?
-*Microsoft Entra recommendations* use similar data to support you with the roll-out and management of Microsoft's best practices for Microsoft Entra tenants to keep your tenant in a secure and healthy state. The Microsoft Entra recommendations feature provides a holistic view into your tenant's security, health, and usage.
+On a daily basis, Microsoft Entra ID analyzes the configuration of your tenant. During this analysis, Microsoft Entra ID compares the configuration of your tenant with security best practices and recommendation data. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Microsoft Entra identity overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first.
-## How it works
+![Screenshot of the Overview page of the tenant with the Recommendations option highlighted.](./media/overview-recommendations/recommendations-overview.png)
-On a daily basis, Microsoft Entra ID analyzes the configuration of your tenant. During this analysis, Microsoft Entra ID compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Identity Overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first.
+Your identity secure score, which appears at the top of the page, is a numerical representation of the health of your tenant. Recommendations that apply to the Identity Secure Score are given individual scores in the table at the bottom of the page. These scores are added up to generate your Identity Secure Score. For more information, see [What is identity secure score](concept-identity-secure-score.md).
-![Screenshot of the Overview page of the tenant with the Recommendations option highlighted.](./media/overview-recommendations/recommendations-overview.png)
+![Screenshot of the identity secure score.](./media/overview-recommendations/identity-secure-score.png)
Each recommendation contains a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource.
+## Are Microsoft Entra recommendations related to Azure Advisor?
+
+The Microsoft Entra recommendations feature is the Microsoft Entra specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources.
+
+Microsoft Entra recommendations use similar data to support you with the roll-out and management of Microsoft's best practices for Microsoft Entra tenants to keep your tenant in a secure and healthy state. The Microsoft Entra recommendations feature provides a holistic view into your tenant's security, health, and usage.
+ ## Recommendation availability and license requirements The recommendations listed in the following table are currently available in public preview or general availability. The license requirements for recommendations in public preview are subject to change. The table provides the impacted resources and links to available documentation.
The recommendations listed in the following table are currently available in pub
| [Renew expiring service principal credentials](recommendation-renew-expiring-service-principal-credential.md) | Applications | [Microsoft Entra Workload ID Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-id) | Preview | Microsoft Entra ID only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed.-
-## Next steps
-
-* [Learn how to use Microsoft Entra recommendations](howto-use-recommendations.md)
-* [Explore the details of the "Turn off per-user MFA" recommendation](recommendation-turn-off-per-user-mfa.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 08/29/2023 Last updated : 10/03/2023
This is a [privileged role](privileged-roles-permissions.md). Users with this ro
> | microsoft.directory/deletedItems/delete | Permanently delete objects, which can no longer be restored | > | microsoft.directory/deletedItems/restore | Restore soft deleted objects to original state | > | microsoft.directory/devices/allProperties/allTasks | Create and delete devices, and read and update all properties |
+> | microsoft.directory/multiTenantOrganization/basic/update | Update basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/create | Create a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/joinRequest/organizationDetails/update | Join a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/joinRequest/standard/read | Read properties of a multi-tenant organization join request |
+> | microsoft.directory/multiTenantOrganization/standard/read | Read basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/update | Update basic properties of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/create | Create a tenant in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/delete | Delete a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/read | Read organization details of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/standard/read | Read basic properties of a tenant participating in a multi-tenant organization |
> | microsoft.directory/namedLocations/create | Create custom rules that define network locations | > | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations | > | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
This is a [privileged role](privileged-roles-permissions.md). Users with this ro
> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/basic/update | Update cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/resetToDefaultSettings | Reset cross tenant sync policy template for multi-tenant organization to default settings |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/standard/read | Read basic properties of cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/basic/update | Update cross tenant access policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/resetToDefaultSettings | Reset cross tenant access policy template for multi-tenant organization to default settings |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/standard/read | Read basic properties of cross tenant access policy templates for multi-tenant organization |
> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Microsoft Entra B2B collaboration settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Microsoft Entra B2B direct connect settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
This is a [privileged role](privileged-roles-permissions.md). Users with this ro
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.directory/subscribedSkus/allProperties/allTasks | Buy and manage subscriptions and delete subscriptions | > | microsoft.directory/users/allProperties/allTasks | Create and delete users, and read and update all properties<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/convertExternalToInternalMemberUser | Convert external user to internal user |
> | microsoft.directory/permissionGrantPolicies/create | Create permission grant policies | > | microsoft.directory/permissionGrantPolicies/delete | Delete permission grant policies | > | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies |
Users with this role **cannot** do the following:
> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/standard/read | Read basic properties of cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/standard/read | Read basic properties of cross tenant access policy templates for multi-tenant organization |
> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy | > | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies |
+> | microsoft.directory/multiTenantOrganization/joinRequest/standard/read | Read properties of a multi-tenant organization join request |
+> | microsoft.directory/multiTenantOrganization/standard/read | Read basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/read | Read organization details of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/standard/read | Read basic properties of a tenant participating in a multi-tenant organization |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/roleAssignments/allProperties/read | Read all properties of role assignments |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/basic/update | Update cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/resetToDefaultSettings | Reset cross tenant sync policy template for multi-tenant organization to default settings |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/standard/read | Read basic properties of cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/basic/update | Update cross tenant access policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/resetToDefaultSettings | Reset cross tenant access policy template for multi-tenant organization to default settings |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/standard/read | Read basic properties of cross tenant access policy templates for multi-tenant organization |
> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Microsoft Entra B2B collaboration settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Microsoft Entra B2B direct connect settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Microsoft Entra entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Microsoft Entra ID Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Microsoft Entra ID Protection<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/multiTenantOrganization/basic/update | Update basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/create | Create a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/joinRequest/organizationDetails/update | Join a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/joinRequest/standard/read | Read properties of a multi-tenant organization join request |
+> | microsoft.directory/multiTenantOrganization/standard/read | Read basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/update | Update basic properties of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/create | Create a tenant in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/delete | Delete a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/read | Read organization details of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/standard/read | Read basic properties of a tenant participating in a multi-tenant organization |
> | microsoft.directory/namedLocations/create | Create custom rules that define network locations | > | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations | > | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations |
In | Can do
> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | > | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies | > | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationIdentitySynchronization/standard/read | Read basic properties of cross tenant sync policy templates for multi-tenant organization |
+> | microsoft.directory/crossTenantAccessPolicy/partners/templates/multiTenantOrganizationPartnerConfiguration/standard/read | Read basic properties of cross tenant access policy templates for multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/joinRequest/standard/read | Read properties of a multi-tenant organization join request |
+> | microsoft.directory/multiTenantOrganization/standard/read | Read basic properties of a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/organizationDetails/read | Read organization details of a tenant participating in a multi-tenant organization |
+> | microsoft.directory/multiTenantOrganization/tenants/standard/read | Read basic properties of a tenant participating in a multi-tenant organization |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties |
Users with this role **cannot** do the following:
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/users/assignLicense | Manage user licenses | > | microsoft.directory/users/create | Add users<br/>[![Privileged label icon.](./medi) |
+> | microsoft.directory/users/convertExternalToInternalMemberUser | Convert external user to internal user |
> | microsoft.directory/users/delete | Delete users<br/>[![Privileged label icon.](./medi) | > | microsoft.directory/users/disable | Disable users<br/>[![Privileged label icon.](./medi) | > | microsoft.directory/users/enable | Enable users<br/>[![Privileged label icon.](./medi) |
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, following versions of Confluence are supported:
- Confluence: 5.0 to 5.10 - Confluence: 6.0.1 to 6.15.9-- Confluence: 7.0.1 to 8.0.4
+- Confluence: 7.0.1 to 8.5.1
> [!NOTE] > Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
The plug-in supports the following versions of Jira and Confluence:
* JIRA also supports 5.2. For more details, click [Microsoft Entra single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 8.0.4.
+* Confluence: 7.0.1 to 8.5.1.
## Installation
Confluence:
|Plugin Version | Release Notes | Supported Confluence versions | |--|-|-|
-| 6.3.9 | Bug Fixes: | Confluence Server: 7.20.3 to 8.0.4 |
+| 6.3.9 | Bug Fixes: | Confluence Server: 7.20.3 to 8.5.1 |
| | System Error: Metadata link cannot be configured on SSO plugins. | | | | | | | 6.3.8 | New Feature: | Confluence Server: 5.0 to 7.20.1 |
The plug-in supports these versions:
* JIRA also supports 5.2. For more details, click [Microsoft Entra single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 8.0.4.
+* Confluence: 7.0.1 to 8.5.1.
### Is the plug-in free or paid?
active-directory Linkedin Employment Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md
Previously updated : 05/09/2023 Last updated : 10/03/2023
If your organization wants its employees to get their place of work verified on
1. Setup your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md). 1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential. 1. Deploy the custom webapp from [GitHub](https://github.com/Azure-Samples/VerifiedEmployeeIssuance).
-1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. You cannot self-service the LinkedIn company page. Today, you need to fill in [this form](https://aka.ms/enablelinkedin) and we can enable your organization.
+1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. You cannot self-service the LinkedIn company page.
+1. Today, you need to fill in [this form](https://aka.ms/enablelinkedin) and we can enable your organization.
+1. Once you deploy the updated LinkedIn mobile app, your employees can get verified. >[!IMPORTANT]
-> The app version required is Android **4.1.813** or newer, or IOS we require **9.27.2173** or newer. Keep in mind that inside the app, the version number shows **9.27.2336**, but in the App store the version number would be **9.1.312** or higher.
+> The form requires that you provide your account manager as the Microsoft contact. The required app version is Android **4.1.813** or newer, or iOS **9.27.2173** or newer. Keep in mind that inside the app, the version number shows **9.27.2336**, but in the App Store the version number would be **9.1.312** or higher.
>[!NOTE] > Review LinkedIn's documentation for information on [verifications on LinkedIn profiles.](https://www.linkedin.com/help/linkedin/answer/a1359065).
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [App Service Certificate - ASCDomainVerificationRequired (Domai
## Cache
-### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.
+### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact
Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the memory reserved for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting, available in the advanced settings blade.
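The same setting can also be changed from the CLI. An illustrative sketch only; the right value depends on your cache size, and the syntax is worth confirming with `az redis update --help`:

```bash
# Reserve more memory (in MB) for fragmentation on an existing cache; names and value are placeholders
az redis update \
  --name myCache \
  --resource-group myResourceGroup \
  --set "redisConfiguration.maxfragmentationmemory-reserved"="200"
```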
Virtual machines in an Availability Set with disks that share either storage acc
Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
-### Check Point Virtual Machine may lose Network Connectivity.
+### Check Point Virtual Machine may lose Network Connectivity
We have identified that your Virtual Machine might be running a version of Check Point image that has been known to lose network connectivity in the event of a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
In order for a session host to deploy and register to Azure Virtual Desktop prop
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
+### Clusters having node pools using non-recommended B-Series
+
+Cluster has one or more node pools using a non-recommended burstable VM SKU. With burstable VMs, full vCPU capacity isn't guaranteed 100% of the time. Make sure B-series VMs aren't used in production environments.
+
+Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having node pools using non-recommended B-Series)](/azure/virtual-machines/sizes-b-series-burstable).
+
+## MySQL
+
+### Replication - Add a primary key to the table that currently does not have one
+
+Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica server can effectively synchronize with the primary and keep up with changes, we highly recommend adding primary keys to the tables in the primary server and subsequently recreating the replica server.
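For example, a missing primary key could be added with a statement along these lines (hypothetical database, table, and column names; run it against the primary server before recreating the replica):

```bash
# Requires the mysql client and a login with ALTER privileges on the flexible server
mysql --host myserver.mysql.database.azure.com --user myadmin --password \
  --execute "ALTER TABLE mydb.orders ADD PRIMARY KEY (order_id);"
```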
+
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently does not have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
+
+### High Availability - Add primary key to the table that currently does not have one
+
+Our internal monitoring system has identified significant replication lag on the High Availability standby server. This lag is primarily caused by the standby server replaying relay logs on a table that lacks a primary key. To address this issue and adhere to best practices, it is recommended to add primary keys to all tables. Once this is done, proceed to disable and then re-enable High Availability to mitigate the problem.
+
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently does not have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
+ ## PostgreSQL ### Improve PostgreSQL availability by removing inactive logical replication slots
Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Im
### Improve PostgreSQL availability by removing inactive logical replication slots
-Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. This can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
Some or all of your devices are using outdated SDK and we recommend you upgrade
Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+### IoT Hub Potential Device Storm Detected
+
+A device storm happens when two or more devices try to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect, which causes (B) to get disconnected.
+
+Learn more about [IoT hub - IoTHubDeviceStorm (IoT Hub Potential Device Storm Detected)](https://aka.ms/IotHubDeviceStorm).
+
+### Upgrade Device Update for IoT Hub SDK to a supported version
+
+Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [IoT hub - DU_SDK_Advisor_Recommendation (Upgrade Device Update for IoT Hub SDK to a supported version)](/azure/iot-hub-device-update/understand-device-update).
+
+### IoT Hub Quota Exceeded Detected
+
+We have detected that your IoT Hub has exceeded its daily message quota. Consider adding units or increasing the SKU level to prevent this in the future.
+
+Learn more about [IoT hub - IoTHubQuotaExceededAdvisor (IoT Hub Quota Exceeded Detected)](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded).
+
+### Upgrade device client SDK to a supported version for IotHub
+
+Some or all of your devices are using an outdated SDK, and we recommend you upgrade to a supported version of the SDK. See the details in the recommendation.
+
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+
+### Upgrade Edge Device Runtime to a supported version for Iot Hub
+
+Some or all of your Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the runtime. See the details in the recommendation.
+
+Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck).
+ ## Azure Cosmos DB ### Configure Consistent indexing mode on your Azure Cosmos DB container
Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the la
## Kubernetes
+### Upgrade to Standard tier for mission-critical and production clusters
+
+This cluster has more than 10 nodes and has not enabled the Standard tier. The Kubernetes Control Plane on the Free tier comes with limited resources and is not intended for production use or any cluster with 10 or more nodes.
+
+Learn more about [Kubernetes service - UseStandardpricingtier (Upgrade to Standard tier for mission-critical and production clusters)](/azure/aks/uptime-sla).
+ ### Pod Disruption Budgets Recommended Pod Disruption Budgets Recommended. Improve service high availability.
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
## Media Services
-### Increase Media Services quotas or limits to ensure continuity of service.
+### Increase Media Services quotas or limits to ensure continuity of service
Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create additional Azure Media accounts in an attempt to obtain higher limits.
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
## Search
-### You are close to exceeding storage quota of 2GB. Create a Standard search service.
+### You are close to exceeding storage quota of 2GB. Create a Standard search service
You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
+### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service
You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
+### You are close to exceeding your available storage quota. Add additional partitions if you need more storage
You're close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
Learn more about [Service limits in Azure Cognitive Search](/azure/search/search
## Storage
+### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2
+
+As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend migrating your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities specifically designed for big data analytics and is built on top of Azure Blob Storage.
+
+Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
+ ### Enable Soft Delete to protect your blob data After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
You have deployed your application multiple times over the last week. Deployment
Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
-### Consider scaling out your App Service Plan to optimize user experience and availability.
+### Consider scaling out your App Service Plan to optimize user experience and availability
Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance. Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances).
-### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.
+### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU
The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100GB. Consider upgrading these apps to Standard SKU to avoid throttling.
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
This article contains both a quick reference and detailed description of Azure A
| Adjustable | No |Yes <sup>3</sup>| | **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) | | Adjustable | No | No |
+| **Max number of document types (classes) * Classifier** | 500 | 500 (default value) |
+| Adjustable | No | No |
| **Training dataset size * Classifier** | 1GB | 1GB (default value) | | Adjustable | No | No |
+| **Min number of samples per class * Classifier** | 5 | 5 (default value) |
+| Adjustable | No | No |
::: moniker-end
ai-services Beginners Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/beginners-guide.md
Finding in-domain quality data is often a challenging task that varies based on
| Source | What it does | Rules to follow | |||| | Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
-| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translation in the future. |
+| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
| Test documents | Calculate the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. | | Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. | | Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Title: Automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images
-description: Learn how to automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images.
+ Title: Auto-Upgrade Azure Kubernetes Service (AKS) Node OS Images
+description: Learn how to set up automatic upgrades on Azure Kubernetes Service (AKS) for all your cluster node operating system images.
Last updated 02/03/2023
-# Automatically upgrade Azure Kubernetes Service cluster node operating system images
-
-AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, can't be used for cluster-level Kubernetes version upgrades. To automatically upgrade Kubernetes versions, continue to use the cluster [auto-upgrade][Autoupgrade] channel.
-
+# Auto-upgrade Azure Kubernetes Service cluster node OS images
+AKS now supports the node OS auto-upgrade channel, an exclusive channel dedicated to controlling node-level OS security updates. This channel can't be used for cluster-level Kubernetes version upgrades.
## How does node OS auto-upgrade work with cluster auto-upgrade?
-Node-level OS security updates come in at a faster cadence than Kubernetes patch or minor version updates. This is the main reason for introducing a separate, dedicated Node OS auto-upgrade channel. With this feature, you can have a flexible and customized strategy for node-level OS security updates and a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade].
+Node-level OS security updates are released at a faster rate than Kubernetes patch or minor version updates. The node OS auto-upgrade channel grants you flexibility and enables a customized strategy for node-level OS security updates. Then, you can choose a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade].
It's highly recommended to use both cluster-level [auto-upgrades][Autoupgrade] and the node OS auto-upgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [auto-upgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS auto-upgrade channel.
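For example, a dedicated window for node OS upgrades might look like the following sketch (hypothetical schedule values; confirm the parameters with `az aks maintenanceconfiguration add --help`):

```bash
# Weekly six-hour window for node OS security updates, separate from the cluster auto-upgrade schedule
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name aksManagedNodeOSUpgradeSchedule \
  --schedule-type Weekly \
  --day-of-week Saturday \
  --interval-weeks 1 \
  --start-time 01:00 \
  --duration 6 \
  --utc-offset +00:00
```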
-## Using node OS auto-upgrade
+## Use node OS auto-upgrade
The selected channel determines the timing of upgrades. When making changes to node OS auto-upgrade channels, allow up to 24 hours for the changes to take effect. > [!NOTE]
-> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it only works for a cluster in a [supported version][supported].
+> Node OS image auto-upgrade won't affect the cluster's Kubernetes version. It only works for a cluster in a [supported version][supported].
The following upgrade channels are available. You're allowed to choose one of these options:
To set the node os auto-upgrade channel on existing cluster, update the *node-os
az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch ```
-## Cadence and Ownership
+## Update ownership and cadence
The default cadence means there's no planned maintenance window applied.
The default cadence means there's no planned maintenance window applied.
- The `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription
-### Register the 'NodeOsUpgradeChannelPreview' feature flag
+### Register the 'NodeOsUpgradeChannelPreview' feature flag
Register the `NodeOsUpgradeChannelPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
az provider register --namespace Microsoft.ContainerService
> By default, any new cluster created with an API version of `06-01-2022` or later will set the node OS auto-upgrade channel value to `NodeImage`. Any existing clusters created with an API version earlier than `06-01-2022` will have the node OS auto-upgrade channel value set to `None` by default.
-## Using node OS auto-upgrade with Planned Maintenance
+## Node OS auto-upgrade with Planned Maintenance
-If youΓÇÖre using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window.
+Planned Maintenance for the node OS auto-upgrade starts at your specified maintenance window.
> [!NOTE] > To ensure proper functionality, use a maintenance window of four hours or more.
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
az aks update -n <clusterName> -g <resourceGroupName> \
--network-dataplane cilium ``` - ## Frequently asked questions - **Can I customize Cilium configuration?**
az aks update -n <clusterName> -g <resourceGroupName> \
No, AKS doesn't configure CPU or memory limits on the Cilium `daemonset` because Cilium is a critical system component for pod networking and network policy enforcement.
+- **Does Azure CNI powered by Cilium use Kube-Proxy?**
+
+ No, AKS clusters created with network dataplane as Cilium don't use Kube-Proxy.
+ If the AKS clusters are on [Azure CNI Overlay](./azure-cni-overlay.md) or [Azure CNI with dynamic IP allocation](./configure-azure-cni-dynamic-ip-allocation.md) and are upgraded to AKS clusters running Azure CNI powered by Cilium, workloads on new nodes are created without kube-proxy. Older workloads are also migrated to run without kube-proxy as part of this upgrade process.
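A quick way to confirm this on a given cluster (a simple check, not from the article) is to look for the kube-proxy DaemonSet:

```bash
# On a cluster using the Cilium dataplane, this is expected to report that the DaemonSet isn't found
kubectl get daemonset kube-proxy --namespace kube-system
```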
+ ## Next steps Learn more about networking in AKS in the following articles:
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
The CSI storage driver support on AKS allows you to natively use:
- [**Azure Blob storage**](azure-blob-csi.md) can be used to mount Blob storage (or object storage) as a file system into a container or pod. Using Blob storage enables your cluster to support applications that work with large unstructured datasets like log file data, images or documents, HPC, and others. Additionally, if you ingest data into [Azure Data Lake storage](../storage/blobs/data-lake-storage-introduction.md), you can directly mount and use it in AKS without configuring another interim filesystem. > [!IMPORTANT]
-> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
+> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disk.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
> > *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code opposed to the CSI drivers, which are plug-ins.
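A minimal sketch of a storage class that targets the Azure Disks CSI driver (the class name and SKU are illustrative; AKS also ships built-in classes such as `managed-csi`):

```bash
# Illustrative StorageClass backed by the Azure Disks CSI driver (disk.csi.azure.com)
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-managed-csi        # hypothetical name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS         # disk SKU to provision
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```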
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
This article helps you provision nodes with schedulable GPUs on new and existing
* You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. > [!NOTE]
-> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [Using node OS auto-upgrade](./auto-upgrade-node-image.md#using-node-os-auto-upgrade).
+> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
## Get the credentials for your cluster
There are two ways to add the NVIDIA device plugin:
### Update your cluster to use the AKS GPU image (preview) > [!NOTE]
-> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [Using node OS auto-upgrade](./auto-upgrade-node-image.md#using-node-os-auto-upgrade).
+> If using an Azure Linux GPU node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github].
aks Open Ai Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md
Title: Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS)
-description: Learn how to deploy an application that uses OpenAI on Azure Kubernetes Service (AKS). #Required; article description that is displayed in search results.
- Previously updated : 09/18/2023-
+description: Learn how to deploy an application that uses OpenAI on Azure Kubernetes Service (AKS).
+ Last updated : 10/02/2023+ # Deploy an application that uses OpenAI on Azure Kubernetes Service (AKS)
To manage a Kubernetes cluster, you use the Kubernetes command-line client, [kub
aks-nodepool1-31469198-vmss000002 Ready agent 3h29m v1.25.6 ```
+> [!NOTE]
+> For private clusters, the nodes might be unreachable if you try to connect to them through the public IP address. In order to fix this, you need to create an endpoint within the same VNET as the cluster to connect from. Follow the instructions to [Create a private AKS cluster][create-private-cluster] and then connect to it.
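If you only need to run occasional kubectl commands against a private cluster, `az aks command invoke` can be an alternative to creating an endpoint in the cluster's virtual network (placeholder names):

```bash
# Runs kubectl from inside the cluster's network, so it works even when the API server has no public endpoint
az aks command invoke \
  --resource-group myResourceGroup \
  --name myPrivateAKSCluster \
  --command "kubectl get nodes"
```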
+ ## Deploy the application :::image type="content" source="media/ai-walkthrough/aks-ai-demo-architecture.png" alt-text="Architecture diagram of AKS AI demo." lightbox="media/ai-walkthrough/aks-ai-demo-architecture.png":::
To learn more about generative AI use cases, see the following resources:
[key-vault]: csi-secrets-store-driver.md [aoai]: ../ai-services/openai/index.yml [learn-aoai]: /training/modules/explore-azure-openai
+[create-private-cluster]: private-clusters.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
A `Daily` schedule may look like *"every three days"*:
```json "schedule": { "daily": {
- "intervalDays": 2
+ "intervalDays": 3
} } ```
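For illustration, the same cadence can be created from the Azure CLI; a minimal sketch with placeholder names (the flag names are assumptions, so verify them with `az aks maintenanceconfiguration add --help`):

```azurecli-interactive
# Create a maintenance schedule that runs every three days for a four-hour window.
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name aksManagedNodeOSUpgradeSchedule \
    --schedule-type Daily \
    --interval-days 3 \
    --duration 4 \
    --utc-offset +00:00 \
    --start-time 09:00
```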
aks Vertical Pod Autoscaler Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler-api-reference.md
This reference is based on version 0.13.0 of the AKS implementation of VPA.
## VerticalPodAutoscaler
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |metadata |ObjectMeta | Standard [object metadata][object-metadata-ref].| |spec |VerticalPodAutoscalerSpec |The desired behavior of the Vertical Pod Autoscaler.|
This reference is based on version 0.13.0 of the AKS implementation of VPA.
## VerticalPodAutoscalerSpec
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |targetRef |CrossVersionObjectReference | Reference to the controller managing the set of pods for the autoscaler to control. For example, a Deployment or a StatefulSet. You can point a Vertical Pod Autoscaler at any controller that has a [Scale][scale-ref] subresource. Typically, the Vertical Pod Autoscaler retrieves the pod set from the controller's ScaleStatus. | |updatePolicy |PodUpdatePolicy |Specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. |
This reference is based on version 0.13.0 of the AKS implementation of VPA.
## VerticalPodAutoscalerList
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |metadata |ObjectMeta |Standard [object metadata][object-metadata-ref]. | |items |VerticalPodAutoscaler (array) |A list of Vertical Pod Autoscaler objects. | ## PodUpdatePolicy
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |updateMode |string |A string that specifies whether recommended updates are applied when a pod is started and whether recommended updates are applied during the life of a pod. Possible values are `Off`, `Initial`, `Recreate`, and `Auto`. The default is `Auto` if you don't specify a value. | |minReplicas |int32 |A value representing the minimum number of replicas that need to be alive for the Updater to attempt pod eviction (pending other checks like Pod Disruption Budget). Only positive values are allowed. Defaults to the global `--min-replicas` flag, which is set to `2`. | ## PodResourcePolicy
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |containerPolicies |ContainerResourcePolicy |An array of resource policies for individual containers. There can be at most one entry for every named container, and optionally a single wildcard entry with `containerName = '*'`, which handles all containers that do not have individual policies. | ## ContainerResourcePolicy
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |containerName |string |A string that specifies the name of the container that the policy applies to. If not specified, the policy serves as the default policy. | |mode |ContainerScalingMode |Specifies whether recommended updates are applied to the container when it is started and whether recommended updates are applied during the life of the container. Possible values are `Off` and `Auto`. The default is `Auto` if you don't specify a value. |
This reference is based on version 0.13.0 of the AKS implementation of VPA.
## VerticalPodAutoscalerRecommenderSelector
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |name |string |A string that specifies the name of the recommender responsible for generating recommendation for this object. | ## VerticalPodAutoscalerStatus
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |recommendation |RecommendedPodResources |The most recently recommended CPU and memory requests. | |conditions |VerticalPodAutoscalerCondition | An array that describes the current state of the Vertical Pod Autoscaler. | ## RecommendedPodResources
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |containerRecommendation |RecommendedContainerResources |An array of resources recommendations for individual containers. | ## RecommendedContainerResources
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |containerName |string| A string that specifies the name of the container that the recommendation applies to. | |target |ResourceList |The recommended CPU request and memory request for the container. |
This reference is based on version 0.13.0 of the AKS implementation of VPA.
## VerticalPodAutoscalerCondition
-|Name |Ojbect |Description |
+|Name |Object |Description |
|-|-||-| |type |VerticalPodAutoscalerConditionType |The type of condition being described. Possible values are `RecommendationProvided`, `LowConfidence`, `NoPodsMatched`, and `FetchingHistory`. | |status |ConditionStatus |The status of the condition. Possible values are `True`, `False`, and `Unknown`. |
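To try these objects on an AKS cluster, the managed Vertical Pod Autoscaler add-on must be enabled first; a minimal sketch, assuming the `--enable-vpa` flag in a recent Azure CLI (verify with `az aks update --help`):

```azurecli-interactive
# Enable the AKS-managed Vertical Pod Autoscaler on an existing cluster.
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-vpa
```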
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Azure Resource Manager templates can be [deployed](../deploy-complex-application
The [migration feature](migrate.md) automates the migration to App Service Environment v3 and at the same time transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If you're in a position where you can't have any downtime, the recommendation is to use one of the manual options to recreate your apps in an App Service Environment v3.
-You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-ilb-ase) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-external-ase). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
+You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-an-ilb-app-service-environment) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-an-external-app-service-environment). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
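For example, a weighted Azure Traffic Manager profile can shift a small share of traffic to the new environment while you validate it; a minimal sketch with placeholder names and FQDNs:

```azurecli-interactive
# Create a Traffic Manager profile that routes traffic by weight.
az network traffic-manager profile create \
    --resource-group myRG \
    --name ase-migration \
    --routing-method Weighted \
    --unique-dns-name ase-migration-demo

# Keep most traffic on the old environment while testing the new one.
az network traffic-manager endpoint create \
    --resource-group myRG \
    --profile-name ase-migration \
    --name old-environment \
    --type externalEndpoints \
    --target <old-environment-app-fqdn> \
    --weight 90

az network traffic-manager endpoint create \
    --resource-group myRG \
    --profile-name ase-migration \
    --name new-environment \
    --type externalEndpoints \
    --target <new-environment-app-fqdn> \
    --weight 10
```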
Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You continue to be charged for any resources that haven't been deleted.
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
Title: App Service Environment networking
description: App Service Environment networking details Previously updated : 07/21/2023 Last updated : 10/02/2023
For more information about Private Endpoint and Web App, see [Azure Web App Priv
## DNS
-The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix.
+The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. Note that for App Service Environment domains, the site name will be truncated at 40 characters because of DNS limits. If you have a slot, the slot name will be truncated at 19 characters.
### DNS configuration to your App Service Environment
app-service Overview Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-certificates.md
Title: Certificates in App Service Environment
-description: Explain topics related to certificates in an App Service Environment. Learn how certificate bindings work on the single-tenanted apps in an App Service Environment.
+description: Explain the use of certificates in an App Service Environment. Learn how certificate bindings work on the single-tenanted apps in an App Service Environment.
Previously updated : 3/4/2022 Last updated : 10/3/2023
The App Service Environment is a deployment of the Azure App Service that runs w
## Application certificates
-Applications that are hosted in an App Service Environment support the following app-centric certificate features, which are also available in the multi-tenant App Service. For requirements and instructions for uploading and managing those certificates, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
+Applications that are hosted in an App Service Environment support the following app-centric certificate features, which are also available in the multitenant App Service. For requirements and instructions for uploading and managing those certificates, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).
- [SNI certificates](../configure-ssl-certificate.md) - [KeyVault hosted certificates](../configure-ssl-certificate.md#import-a-certificate-from-key-vault)
You can [configure the TLS setting](../configure-ssl-bindings.md#enforce-tls-ver
## Private client certificate
-A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you'll need to upload the client certificate to your app. The following instructions will load certificates to the trust store of the workers that your app is running on. You only need to upload the certificate once to use it with apps that are in the same App Service plan.
+A common use case is to configure your app as a client in a client-server model. If you secure your server with a private CA certificate, you need to upload the client certificate (*.cer* file) to your app. The following instructions load certificates to the trust store of the workers that your app is running on. You only need to upload the certificate once to use it with apps that are in the same App Service plan.
>[!NOTE] > Private client certificates are only supported from custom code in Windows code apps. Private client certificates are not supported outside the app. This limits usage in scenarios such as pulling the app container image from a registry by using a private certificate, or validating TLS through the front-end servers by using a private certificate.
A common use case is to configure your app as a client in a client-server model.
Follow these steps to upload the certificate (*.cer* file) to your app in your App Service Environment. The *.cer* file can be exported from your certificate. For testing purposes, there's a PowerShell example at the end to generate a temporary self-signed certificate: 1. Go to the app that needs the certificate in the Azure portal
-1. Go to **TLS/SSL settings** in the app. Select **Public Key Certificate (.cer)**. Select **Upload Public Key Certificate**. Provide a name. Browse and select your *.cer* file. Select upload.
+1. Go to **Certificates** in the app. Select **Public Key Certificate (.cer)**. Select **Add certificate**. Provide a name. Browse and select your *.cer* file. Select upload.
1. Copy the thumbprint. 1. Go to **Configuration** > **Application Settings**. Create an app setting WEBSITE_LOAD_ROOT_CERTIFICATES with the thumbprint as the value. If you have multiple certificates, you can put them in the same setting separated by commas and no whitespace like 84EC242A4EC7957817B8E48913E50953552DAFA6,6A5C65DC9247F762FE17BF8D4906E04FE6B31819
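The same app setting can also be created with the Azure CLI; a minimal sketch with placeholder names and the example thumbprint from above:

```azurecli-interactive
# Tell the workers in the App Service plan to load the uploaded root certificate into their trust store.
az webapp config appsettings set \
    --resource-group myRG \
    --name myWebApp \
    --settings WEBSITE_LOAD_ROOT_CERTIFICATES=84EC242A4EC7957817B8E48913E50953552DAFA6
```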
-The certificate will be available by all the apps in the same app service plan as the app, which configured that setting, but all apps that depend on the private CA certificate should have the Application Setting configured to avoid timing issues.
+The certificate is available to all the apps in the same App Service plan as the app that configured the setting. However, all apps that depend on the private CA certificate should have the application setting configured to avoid timing issues.
-If you need it to be available for apps in a different App Service plan, you'll need to repeat the app setting operation for the apps in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
+If you need it to be available for apps in a different App Service plan, you need to repeat the app setting operation for the apps in that App Service plan. To check that the certificate is set, go to the Kudu console and issue the following command in the PowerShell debug console:
```azurepowershell-interactive dir Cert:\LocalMachine\Root
$fileName = "exportedcert.cer"
Export-Certificate -Cert $certThumbprint -FilePath $fileName -Type CERT ```
+## Private server certificate
+
+If your app acts as a server in a client-server model, either behind a reverse proxy or directly with a private client, and you're using a private CA certificate, you need to upload the server certificate (*.pfx* file) with the full certificate chain to your app and bind the certificate to the custom domain. Because the infrastructure is dedicated to your App Service Environment, the full certificate chain is added to the trust store of the servers. You only need to upload the certificate once to use it with apps that are in the same App Service Environment.
+
+>[!NOTE]
+> If you uploaded your certificate prior to October 1, 2023, you need to reupload and rebind the certificate for the full certificate chain to be added to the servers.
+
+Follow the [secure custom domain with TLS/SSL](../configure-ssl-bindings.md) tutorial to upload/bind your private CA rooted certificate to the app in your App Service Environment.
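If you prefer scripting, the upload and bind steps of that tutorial can also be done with the Azure CLI; a minimal sketch with placeholder names, assuming the custom domain is already added to the app:

```azurecli-interactive
# Upload the .pfx file that contains the full certificate chain.
az webapp config ssl upload \
    --resource-group myRG \
    --name myWebApp \
    --certificate-file ./myserver.pfx \
    --certificate-password "<pfx-password>"

# Bind the uploaded certificate to the custom domain by thumbprint.
az webapp config ssl bind \
    --resource-group myRG \
    --name myWebApp \
    --certificate-thumbprint <thumbprint> \
    --ssl-type SNI
```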
+ ## Next steps * Information on how to [use certificates in application code](../configure-ssl-certificate-in-code.md)
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/app-gateway-with-service-endpoints.md
Title: Application Gateway integration - Azure App Service | Microsoft Docs
-description: Describes how Application Gateway integrates with Azure App Service.
+ Title: Application Gateway integration - Azure App Service | Microsoft Learn
+description: Learn how Application Gateway integrates with Azure App Service.
documentationcenter: ''
ms.devlang: azurecli
# Application Gateway integration
-There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multitenant, Internal Load Balancer (ILB) App Service Environment and External App Service Environment. This article walks through how to configure it with App Service (multitenant) using service endpoint to secure traffic. The article also discusses considerations around using private endpoint and integrating with ILB, and External App Service Environment. Finally the article has considerations on scm/kudu site.
+
+Three variations of Azure App Service require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service (also known as multitenant), an internal load balancer (ILB) App Service Environment, and an external App Service Environment.
+
+This article walks through how to configure Application Gateway with App Service (multitenant) by using service endpoints to secure traffic. The article also discusses considerations around using private endpoints and integrating with ILB and external App Service Environments. Finally, the article describes how to set access restrictions on a Source Control Manager (SCM) site.
## Integration with App Service (multitenant)
-App Service (multitenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
+App Service (multitenant) has a public internet-facing endpoint. By using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md), you can allow traffic from only a specific subnet within an Azure virtual network and block everything else. In the following scenario, you use this functionality to ensure that an App Service instance can receive traffic from only a specific application gateway.
++
+There are two parts to this configuration, aside from creating the App Service instance and the application gateway. The first part is enabling service endpoints in the subnet of the virtual network where the application gateway is deployed. Service endpoints ensure that all network traffic leaving the subnet toward App Service is tagged with the specific subnet ID.
+
+The second part is to set an access restriction on the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure the access restriction by using different tools, depending on your preference.
-There are two parts to this configuration besides creating the App Service and the Application Gateway. The first part is enabling service endpoints in the subnet of the Virtual Network where the Application Gateway is deployed. Service endpoints ensure all network traffic leaving the subnet towards the App Service is tagged with the specific subnet ID. The second part is to set an access restriction of the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure it using different tools depending on preference.
+## Set up services by using the Azure portal
-## Using Azure portal
-With Azure portal, you follow four steps to create and configure the setup. If you have existing resources, you can skip the first steps.
-1. Create an App Service using one of the Quickstarts in the App Service documentation, for example [.NET Core Quickstart](../quickstart-dotnetcore.md)
-2. Create an Application Gateway using the [portal Quickstart](../../application-gateway/quick-create-portal.md), but skip the Add backend targets section.
-3. Configure [App Service as a backend in Application Gateway](../../application-gateway/configure-web-app.md), but skip the Restrict access section.
-4. Finally create the [access restriction using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
+With the Azure portal, you follow four steps to create and configure the setup of App Service and Application Gateway. If you have existing resources, you can skip the first steps.
-You can now access the App Service through Application Gateway. If you try to access the App Service directly, you should receive a 403 HTTP error indicating that the web site is stopped.
+1. Create an App Service instance by using one of the quickstarts in the App Service documentation. One example is the [.NET Core quickstart](../quickstart-dotnetcore.md).
+2. Create an application gateway by using the [portal quickstart](../../application-gateway/quick-create-portal.md), but skip the section about adding back-end targets.
+3. Configure [App Service as a back end in Application Gateway](../../application-gateway/configure-web-app.md), but skip the section about restricting access.
+4. Create the [access restriction by using service endpoints](../../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
+You can now access App Service through Application Gateway. If you try to access App Service directly, you should receive a 403 HTTP error that says the web app has blocked your access.
-## Using Azure Resource Manager template
-The [Resource Manager deployment template][template-app-gateway-app-service-complete] creates a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you have to clone the repo or download the template and edit it.
-To apply the template, you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
+## Set up services by using an Azure Resource Manager template
-## Using Azure CLI
-The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) creates an App Service locked down with service endpoints and access restriction to only receive traffic from Application Gateway. If you only need to isolate traffic to an existing App Service from an existing Application Gateway, the following command is sufficient.
+The [Azure Resource Manager deployment template][template-app-gateway-app-service-complete] creates a complete scenario. The scenario consists of an App Service instance that's locked down with service endpoints and an access restriction to receive traffic only from Application Gateway. The template includes many smart defaults and unique postfixes added to the resource names to keep it simple. To override them, you have to clone the repo or download the template and edit it.
+
+To apply the template, you can use the **Deploy to Azure** button in the description of the template. Or you can use appropriate PowerShell or Azure CLI code.
+
+## Set up services by using the Azure CLI
+
+The [Azure CLI sample](../../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) creates an App Service instance that's locked down with service endpoints and an access restriction to receive traffic only from Application Gateway. If you only need to isolate traffic to an existing App Service instance from an existing application gateway, use the following command:
```azurecli-interactive az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwSubnet --priority 200 --subnet mySubNetName --vnet-name myVnetName ```
-In the default configuration, the command ensures both setup of the service endpoint configuration in the subnet and the access restriction in the App Service.
+In the default configuration, the command ensures setup of the service endpoint configuration in the subnet and the access restriction in App Service.
+
+## Considerations for using private endpoints
-## Considerations when using private endpoint
+As an alternative to service endpoints, you can use private endpoints to secure traffic between Application Gateway and App Service (multitenant). You need to ensure that Application Gateway can use DNS to resolve the private IP address of the App Service apps. Alternatively, you can use the private IP address in the back-end pool and override the host name in the HTTP settings.
-As an alternative to service endpoint, you can use private endpoint to secure traffic between Application Gateway and App Service (multitenant). You need to ensure that Application Gateway can DNS resolve the private IP of the App Service apps. Alternatively you can use the private IP in the backend pool and override the host name in the http settings.
+Application Gateway caches the DNS lookup results. If you use fully qualified domain names (FQDNs) and rely on DNS lookup to get the private IP address, you might need to restart the application gateway if the DNS update or the link to an Azure private DNS zone happened after you configured the back-end pool.
-Application Gateway caches the DNS lookup results. If you use FQDNs and rely on DNS lookup to get the private IP address, then you may need to restart the Application Gateway if the DNS update or link to Azure private DNS zone was done after configuring the backend pool. To restart the Application Gateway, you must start and stop the instance. You restart the Application Gateway using Azure CLI:
+To restart the application gateway, stop and start it by using the Azure CLI:
```azurecli-interactive az network application-gateway stop --resource-group myRG --name myAppGw az network application-gateway start --resource-group myRG --name myAppGw ```
-## Considerations for ILB ASE
-ILB App Service Environment isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB App Service Environment and integrates it with an Application Gateway using Azure portal.
+## Considerations for an ILB App Service Environment
-If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a Network security group (NSG) which affect all web apps in the App Service Environment. For the NSG, you're able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for App Service Environment to function correctly.
+An ILB App Service Environment isn't exposed to the internet. Traffic between the instance and an application gateway is already isolated to the virtual network. To configure an ILB App Service Environment and integrate it with an application gateway by using the Azure portal, see the [how-to guide](../environment/integrate-with-application-gateway.md).
-To isolate traffic to an individual web app, you need to use IP-based access restrictions as service endpoints doesn't work with App Service Environment. The IP address should be the private IP of the Application Gateway instance.
+If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a network security group (NSG) that affects all web apps in the App Service Environment. For the NSG, you can specify the subnet IP range and optionally the ports (80/443). For the App Service Environment to function correctly, make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups).
-## Considerations for External ASE
-External App Service Environment has a public facing load balancer like multitenant App Service. Service endpoints don't work for App Service Environment, and that's why you have to use IP-based access restrictions using the public IP of the Application Gateway instance. To create an External App Service Environment using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
+To isolate traffic to an individual web app, you need to use IP-based access restrictions, because service endpoints don't work with an App Service Environment. The IP address should be the private IP of the application gateway.
-[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
+## Considerations for an external App Service Environment
-## Considerations for kudu/scm site
-The scm site, also known as kudu, is an admin site, which exists for every web app. It isn't possible to reverse proxy the scm site and you most likely also want to lock it down to individual IP addresses or a specific subnet.
+An external App Service Environment has a public-facing load balancer like multitenant App Service. Service endpoints don't work for an App Service Environment. That's why you have to use IP-based access restrictions by using the public IP address of the application gateway. To create an external App Service Environment by using the Azure portal, you can follow [this quickstart](../environment/create-external-ase.md).
-If you want to use the same access restrictions as the main site, you can inherit the settings using the following command.
+[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for a complete scenario"
+
+## Considerations for a Kudu/SCM site
+
+The SCM site, also known as Kudu, is an admin site that exists for every web app. It isn't possible to reverse proxy the SCM site. You most likely also want to lock it down to individual IP addresses or a specific subnet.
+
+If you want to use the same access restrictions as the main site, you can inherit the settings by using the following command:
```azurecli-interactive az webapp config access-restriction set --resource-group myRG --name myWebApp --use-same-restrictions-for-scm-site ```
-If you want to set individual access restrictions for the scm site, you can add access restrictions using the `--scm-site` flag like shown here.
+If you want to add individual access restrictions for the SCM site, you can use the `--scm-site` flag:
```azurecli-interactive az webapp config access-restriction add --resource-group myRG --name myWebApp --scm-site --rule-name KudoAccess --priority 200 --ip-address 208.130.0.0/16 ```
-## Considerations when using default domain
-Configuring Application Gateway to override the host name and use the default domain of App Service (typically `azurewebsites.net`) is the easiest way to configure the integration and doesn't require configuring custom domain and certificate in App Service. [This article](/azure/architecture/best-practices/host-name-preservation) discusses the general considerations when overriding the original host name. In App Service, there are two scenarios where you need to pay attention with this configuration.
+## Considerations for using the default domain
+
+Configuring Application Gateway to override the host name and use the default domain of App Service (typically `azurewebsites.net`) is the easiest way to configure the integration. It doesn't require configuring a custom domain and certificate in App Service.
+
+[This article](/azure/architecture/best-practices/host-name-preservation) discusses the general considerations for overriding the original host name. In App Service, there are two scenarios where you need to pay attention with this configuration.
### Authentication
-When you're using [the authentication feature](../overview-authentication-authorization.md) in App Service (also known as Easy Auth), your app will typically redirect to the sign-in page. Because App Service doesn't know the original host name of the request, the redirect would be done on the default domain name and usually result in an error. To work around default redirect, you can configure authentication to inspect a forwarded header and adapt the redirect domain to the original domain. Application Gateway uses a header called `X-Original-Host`.
-Using [file-based configuration](../configure-authentication-file-based.md) to configure authentication, you can configure App Service to adapt to the original host name. Add this configuration to your configuration file:
+
+When you use [the authentication feature](../overview-authentication-authorization.md) in App Service (also known as Easy Auth), your app typically redirects to the sign-in page. Because App Service doesn't know the original host name of the request, the redirect is done on the default domain name and usually results in an error.
+
+To work around the default redirect, you can configure authentication to inspect a forwarded header and adapt the redirect domain to the original domain. Application Gateway uses a header called `X-Original-Host`. By using [file-based configuration](../configure-authentication-file-based.md) to configure authentication, you can configure App Service to adapt to the original host name. Add this configuration to your configuration file:
```json {
Using [file-based configuration](../configure-authentication-file-based.md) to c
``` ### ARR affinity
-In multi-instance deployments, [ARR affinity](../configure-common.md?tabs=portal#configure-general-settings) ensures that client requests are routed to the same instance for the life of the session. ARR affinity doesn't work with host name overrides and you have to configure identical custom domain and certificate in App Service and in Application Gateway and not override host name for session affinity to work.
+
+In multiple-instance deployments, [ARR affinity](../configure-common.md?tabs=portal#configure-general-settings) ensures that client requests are routed to the same instance for the life of the session. ARR affinity doesn't work with host name overrides. For session affinity to work, you have to configure an identical custom domain and certificate in App Service and in Application Gateway and not override the host name.
## Next steps
-For more information on the App Service Environment, see [App Service Environment documentation](../environment/index.yml).
-To further secure your web app, information about Web Application Firewall on Application Gateway can be found in the [Azure Web Application Firewall documentation](../../web-application-firewall/ag/ag-overview.md).
+For more information on App Service Environments, see the [App Service Environment documentation](../environment/index.yml).
+
+To further secure your web app, you can find information about Azure Web Application Firewall on Application Gateway in the [Azure Web Application Firewall documentation](../../web-application-firewall/ag/ag-overview.md).
-Tutorial on [deploying a secure, resilient site with a custom domain](https://azure.github.io/AppService/2021/03/26/Secure-resilient-site-with-custom-domain) on App Service using either Azure Front Door or Application Gateway.
+To deploy a secure, resilient site with a custom domain on App Service by using either Azure Front Door or Application Gateway, see [this tutorial](https://azure.github.io/AppService/2021/03/26/Secure-resilient-site-with-custom-domain).
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
Title: Azure NAT Gateway integration - Azure App Service | Microsoft Docs
-description: Describes how NAT gateway integrates with Azure App Service.
+ Title: Azure NAT Gateway integration - Azure App Service | Microsoft Learn
+description: Learn how Azure NAT Gateway integrates with Azure App Service.
ms.devlang: azurecli
# Azure NAT Gateway integration
-Azure NAT gateway is a fully managed, highly resilient service, which can be associated with one or more subnets and ensures that all outbound Internet-facing traffic will be routed through the gateway. With App Service, there are two important scenarios that you can use NAT gateway for.
+Azure NAT Gateway is a fully managed, highly resilient service that can be associated with one or more subnets. It ensures that all outbound internet-facing traffic is routed through a network address translation (NAT) gateway. With Azure App Service, there are two important scenarios where you can use a NAT gateway.
-The NAT gateway gives you a static predictable public IP for outbound Internet-facing traffic. It also significantly increases the available [SNAT ports](../troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
+The NAT gateway gives you a static, predictable public IP address for outbound internet-facing traffic. It also significantly increases the available [source network address translation (SNAT) ports](../troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
-For more information and pricing. Go to the [Azure NAT Gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
+Here are important considerations about Azure NAT Gateway integration:
-> [!Note]
-> * Using a NAT gateway with App Service is dependent on virtual network integration, and therefore a supported App Service plan pricing tier is required.
-> * When using a NAT gateway together with App Service, all traffic to Azure Storage must be using private endpoint or service endpoint.
-> * A NAT gateway cannot be used together with App Service Environment v1 or v2.
+* Using a NAT gateway with App Service is dependent on virtual network integration, so it requires a supported pricing tier in an App Service plan.
+* When you're using a NAT gateway together with App Service, all traffic to Azure Storage must use private endpoints or service endpoints.
+* You can't use a NAT gateway together with App Service Environment v1 or v2.
-## Configuring NAT gateway integration
+For more information and pricing, see the [Azure NAT Gateway overview](../../virtual-network/nat-gateway/nat-overview.md).
-To configure NAT gateway integration with App Service, you need to complete the following steps:
+## Configure NAT gateway integration
-* Configure regional virtual network integration with your app as described in [Integrate your app with an Azure virtual network](../overview-vnet-integration.md)
-* Ensure [Route All](../overview-vnet-integration.md#routes) is enabled for your virtual network integration so the Internet bound traffic will be affected by routes in your virtual network.
-* Provision a NAT gateway with a public IP and associate it with the virtual network integration subnet.
+To configure NAT gateway integration with App Service, first complete the following tasks:
-Set up Azure NAT Gateway through the portal:
+* Configure regional virtual network integration with your app, as described in [Integrate your app with an Azure virtual network](../overview-vnet-integration.md).
+* Ensure that [Route All](../overview-vnet-integration.md#routes) is enabled for your virtual network integration, so routes in your virtual network affect the internet-bound traffic.
+* Provision a NAT gateway with a public IP address and associate it with the subnet for virtual network integration.
-1. Go to the **Networking** UI in the App Service portal and select virtual network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled.
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
-1. Search for "NAT gateway" and select it from the list of results.
-1. Fill in the **Basics** information and pick the region where your app is located.
-1. In the **Outbound IP** tab, create a new or select an existing public IP.
-1. In the **Subnet** tab, select the subnet used for virtual network integration.
-1. Fill in tags if needed and **Create** the NAT gateway. After the NAT gateway is provisioned, click on the **Go to resource group** and select the new NAT gateway. You can see the public IP that your app will use for outbound Internet-facing traffic in the Outbound IP blade.
+Then, set up Azure NAT Gateway through the Azure portal:
-If you prefer using CLI to configure your environment, these are the important commands. As a prerequisite, you should create an app with virtual network integration configured.
+1. In the Azure portal, go to **App Service** > **Networking**. In the **Outbound Traffic** section, select **Virtual network integration**. Ensure that your app is integrated with a subnet and that **Route All** is enabled.
-Ensure **Route All** is configured for your virtual network integration:
+ :::image type="content" source="./media/nat-gateway-integration/nat-gateway-route-all-enabled.png" alt-text="Screenshot of the Route All option enabled for virtual network integration.":::
+1. On the Azure portal menu or from the home page, select **Create a resource**. The **New** window appears.
+1. Search for **NAT gateway** and select it from the list of results.
+1. Fill in the **Basics** information and choose the region where your app is located.
-```azurecli-interactive
-az webapp config set --resource-group [myResourceGroup] --name [myWebApp] --vnet-route-all-enabled
-```
+ :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-basics.png" alt-text="Screenshot of the Basics tab on the page for creating a NAT gateway.":::
+1. On the **Outbound IP** tab, create a public IP address or select an existing one.
-Create Public IP and NAT gateway:
+ :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-outbound-ip.png" alt-text="Screenshot of the Outbound IP tab on the page for creating a NAT gateway.":::
+1. On the **Subnet** tab, select the subnet that you use for virtual network integration.
-```azurecli-interactive
-az network public-ip create --resource-group [myResourceGroup] --name myPublicIP --sku standard --allocation static
+ :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-subnet.png" alt-text="Screenshot of the Subnet tab on the page for creating a NAT gateway.":::
+1. Fill in tags if needed, and then select **Create**. After the NAT gateway is provisioned, select **Go to resource group**, and then select the new NAT gateway. The **Outbound IP** pane shows the public IP address that your app will use for outbound internet-facing traffic.
-az network nat gateway create --resource-group [myResourceGroup] --name myNATgateway --public-ip-addresses myPublicIP --idle-timeout 10
-```
+ :::image type="content" source="./media/nat-gateway-integration/nat-gateway-public-ip.png" alt-text="Screenshot of the Outbound IP pane for a NAT gateway in the Azure portal.":::
-Associate the NAT gateway with the virtual network integration subnet:
+If you prefer to use the Azure CLI to configure your environment, these are the important commands. As a prerequisite, create an app with virtual network integration configured.
-```azurecli-interactive
-az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway
-```
+1. Ensure that **Route All** is configured for your virtual network integration:
-## Scaling a NAT gateway
+ ```azurecli-interactive
+ az webapp config set --resource-group [myResourceGroup] --name [myWebApp] --vnet-route-all-enabled
+ ```
-The same NAT gateway can be used across multiple subnets in the same virtual network allowing a NAT gateway to be used across multiple apps and App Service plans.
+1. Create a public IP address and a NAT gateway:
-Azure NAT Gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability) of Azure NAT Gateway.
+ ```azurecli-interactive
+ az network public-ip create --resource-group [myResourceGroup] --name myPublicIP --sku standard --allocation static
+
+ az network nat gateway create --resource-group [myResourceGroup] --name myNATgateway --public-ip-addresses myPublicIP --idle-timeout 10
+ ```
+
+1. Associate the NAT gateway with the subnet for virtual network integration:
+
+ ```azurecli-interactive
+ az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway
+ ```
+
+## Scale a NAT gateway
+
+You can use the same NAT gateway across multiple subnets in the same virtual network. That approach allows you to use a NAT gateway across multiple apps and App Service plans.
+
+Azure NAT Gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports), which allows up to 1 million available ports. Learn more in [Azure NAT Gateway resource](../../virtual-network/nat-gateway/nat-gateway-resource.md#scalability).
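As an illustration of scaling beyond a single address, you can attach a public IP prefix to an existing NAT gateway; a minimal sketch with placeholder names (keep the total number of addresses across IPs and prefixes at 16 or fewer):

```azurecli-interactive
# Create a /29 public IP prefix (eight addresses).
az network public-ip prefix create \
    --resource-group myResourceGroup \
    --name myPublicIPPrefix \
    --length 29

# Attach the prefix to the existing NAT gateway.
az network nat gateway update \
    --resource-group myResourceGroup \
    --name myNATgateway \
    --public-ip-prefixes myPublicIPPrefix
```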
## Next steps
-For more information on Azure NAT Gateway, see [Azure NAT Gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
+For more information on Azure NAT Gateway, see the [Azure NAT Gateway documentation](../../virtual-network/nat-gateway/nat-overview.md).
-For more information on virtual network integration, see [Virtual network integration documentation](../overview-vnet-integration.md).
+For more information on virtual network integration, see the [documentation about virtual network integration](../overview-vnet-integration.md).
application-gateway Application Gateway Backend Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health.md
+
+ Title: Backend health
+
+description: Learn how to use the Backend health report in Azure Application Gateway
++++ Last updated : 09/19/2023+++
+# Application Gateway - Backend health
+
+Application Gateway health probes (default and custom) continuously monitor all the backend servers in a pool to ensure that incoming traffic is sent only to the servers that are up and running. These health checks enable seamless data plane operation of a gateway. When a backend server can receive traffic, the probe is successful and the server is considered healthy. Otherwise, it's considered unhealthy. The precise representation of the health probe report is also made available for your consumption through the Backend Health capability.
+
+## Backend health report
+The possible statuses for a server's health report are:
+1. Healthy - Shows when the application gateway probes receive an expected response code from the backend server.
+1. Unhealthy - Shows when probes don't receive a response, or the response doesn't match the expected response code or body.
+1. Unknown - Occurs when the application gateway's control plane fails to communicate with your application gateway instances (for the Backend Health call), or when there's a problem with [DNS resolution](application-gateway-backend-health-troubleshooting.md#updates-to-the-dns-entries-of-the-backend-pool) of the backend server's FQDN.
+
+For complete information on the cause and solution of the Unhealthy and Unknown states, visit the [troubleshooting article](application-gateway-backend-health-troubleshooting.md).
+
+> [!NOTE]
+> The Backend health report is updated based on the respective probe's refresh interval and doesn't depend on the moment of page refresh or Backend health API request.
+
+## Methods to view Backend health
+The backend server health report can be generated through the Azure portal, REST API, PowerShell, and Azure CLI.
+
+### Using Azure portal
+The Application Gateway portal provides an information-rich backend health report with visualizations and tools for faster troubleshooting. Each row shows the exact target server, the backend pool it belongs to, its backend setting association (including port and protocol), and the response received by the latest probe. Visit the [Health Probes article](application-gateway-probe-overview.md) to understand how this report is composed based on the number of Backend pools, servers, and Backend settings.
+
+For Unhealthy and Unknown statuses, you will also find a Troubleshoot link presenting you with the following tools:
+
+1. **Azure Network Watcher's Connection troubleshoot** - Visit the [Connection Troubleshoot](../network-watcher/network-watcher-connectivity-portal.md) documentation article to learn how to use this tool.
+1. **Backend server certificate visualization** - The Backend server certificate visualization makes it easy to understand the problem area, allowing you to act on the problem quickly. The three core components in the illustration provide you with a complete picture: the client, the Application Gateway, and the Backend Server. However, the problems explained in this troubleshooting section only focus on the TLS connection between the application gateway and the backend server.
+
+ :::image type="content" source="media/application-gateway-backend-health/backend-certificate-error.png" alt-text="Screenshot and explanation of a certificate error on the Backend Health page.":::
+
+**Reading the illustration**
+- The red lines indicate a problem with the TLS connection between the gateway and the backend server or the certificate components on the backend server.
+- If there is red text in the Application Gateway or the Backend Server blocks, this indicates problems with the Backend Settings or the server certificate, respectively.
+- You must act on the respective property (Application Gateway's Backend Setting or the Backend Server) depending on the error indication and location.
+- A solution for each error type is provided. A documentation link is also provided for more information.
+
+### Using PowerShell
+
+The following PowerShell code shows how to view backend health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
+
+```powershell
+Get-AzApplicationGatewayBackendHealth -Name ApplicationGateway1 -ResourceGroupName Contoso
+```
+
+### Using Azure CLI
+
+```azurecli
+az network application-gateway show-backend-health --resource-group AdatumAppGatewayRG --name AdatumAppGateway
+```
+
+### Results
+
+The following snippet shows an example of the response:
+
+```json
+{
+"BackendAddressPool": {
+ "Id": "/subscriptions/00000000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/applicationGateways/applicationGateway1/backendAddressPools/appGatewayBackendPool"
+},
+"BackendHttpSettingsCollection": [
+ {
+ "BackendHttpSettings": {
+ "Id": "/00000000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/applicationGateways/applicationGateway1/backendHttpSettingsCollection/appGatewayBackendHttpSettings"
+ },
+ "Servers": [
+ {
+ "Address": "hostname.westus.cloudapp.azure.com",
+ "Health": "Healthy"
+ },
+ {
+ "Address": "hostname.westus.cloudapp.azure.com",
+ "Health": "Healthy"
+ }
+ ]
+ }
+]
+}
+```
+
+## Next steps
+* Understanding [Application Gateway probes behavior](application-gateway-probe-overview.md).
+* [Generate a self-signed certificate](self-signed-certificates.md) with a custom root CA.
+
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-portal.md
Now that the probe has been created, it's time to add it to the gateway. Probe s
## Next steps
-View the health of the backend resources as determined by the probe using the [backend health view](./application-gateway-diagnostics.md#backend-health).
+View the health of the backend servers as determined by the probe using the [Backend health view](application-gateway-backend-health.md).
[1]: ./media/application-gateway-create-probe-portal/figure1.png [2]: ./media/application-gateway-create-probe-portal/figure2.png
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Title: Backend health and diagnostic logs
+ Title: Diagnostic logs
-description: Learn how to enable and manage access logs and performance logs for Azure Application Gateway
+description: Learn how to enable and manage logs for Azure Application Gateway
Previously updated : 05/19/2023 Last updated : 09/19/2023
-# Backend health and diagnostic logs for Application Gateway
+# Diagnostic logs for Application Gateway
-You can monitor Azure Application Gateway resources in the following ways:
+Application Gateway logs provide detailed information for events related to a resource and its operations. These logs are available for events such as Access, Activity, Firewall, and Performance (only for V1). The granular information in logs is helpful when troubleshooting a problem or building an analytics dashboard by consuming this raw data.
-* [Backend health](#backend-health): Application Gateway provides the capability to monitor the health of the servers in the backend pools through the Azure portal and through PowerShell. You can also find the health of the backend pools through the performance diagnostic logs.
+Logs are available for all resources of Application Gateway; however, to consume them, you must enable their collection in a storage location of your choice. Logging in Azure Application Gateway is enabled by the Azure Monitor service. We recommend using a Log Analytics workspace because you can readily use its predefined queries and set alerts based on specific log conditions.
-* [Logs](#diagnostic-logging): Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
+## <a name="diagnostic-logging"></a>Types of Diagnostic logs
-* [Metrics](application-gateway-metrics.md): Application Gateway has several metrics that help you verify your system is performing as expected.
--
-## Backend health
-
-Application Gateway provides the capability to monitor the health of individual members of the backend pools through the portal, PowerShell, and the command-line interface (CLI). You can also find an aggregated health summary of backend pools through the performance diagnostic logs.
-
-The backend health report reflects the output of the Application Gateway health probe to the backend instances. When probing is successful and the back end can receive traffic, it's considered healthy. Otherwise, it's considered unhealthy.
-
-> [!IMPORTANT]
-> If there is a network security group (NSG) on an Application Gateway subnet, open port ranges 65503-65534 for v1 SKUs, and 65200-65535 for v2 SKUs on the Application Gateway subnet for inbound traffic. This port range is required for Azure infrastructure communication. They are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, won't be able to initiate any changes on those endpoints.
--
-### View backend health through the portal
-
-In the portal, backend health is provided automatically. In an existing application gateway, select **Monitoring** > **Backend health**.
-
-Each member in the backend pool is listed on this page (whether it's a NIC, IP, or FQDN). Backend pool name, port, backend HTTP settings name, and health status are shown. Valid values for health status are **Healthy**, **Unhealthy**, and **Unknown**.
-
-> [!NOTE]
-> If you see a backend health status of **Unknown**, ensure that access to the back end is not blocked by an NSG rule, a user-defined route (UDR), or a custom DNS in the virtual network.
-
-![Backend health][10]
-
-### View backend health through PowerShell
-
-The following PowerShell code shows how to view backend health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
-
-```powershell
-Get-AzApplicationGatewayBackendHealth -Name ApplicationGateway1 -ResourceGroupName Contoso
-```
-
-### View backend health through Azure CLI
-
-```azurecli
-az network application-gateway show-backend-health --resource-group AdatumAppGatewayRG --name AdatumAppGateway
-```
-
-### Results
-
-The following snippet shows an example of the response:
-
-```json
-{
-"BackendAddressPool": {
- "Id": "/subscriptions/00000000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/applicationGateways/applicationGateway1/backendAddressPools/appGatewayBackendPool"
-},
-"BackendHttpSettingsCollection": [
- {
- "BackendHttpSettings": {
- "Id": "/00000000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/applicationGateways/applicationGateway1/backendHttpSettingsCollection/appGatewayBackendHttpSettings"
- },
- "Servers": [
- {
- "Address": "hostname.westus.cloudapp.azure.com",
- "Health": "Healthy"
- },
- {
- "Address": "hostname.westus.cloudapp.azure.com",
- "Health": "Healthy"
- }
- ]
- }
-]
-}
-```
-
-## <a name="diagnostic-logging"></a>Diagnostic logs
-
-You can use different types of logs in Azure to manage and troubleshoot application gateways. You can access some of these logs through the portal. All logs can be extracted from Azure Blob storage and viewed in different tools, such as [Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics), Excel, and Power BI. You can learn more about the different types of logs from the following list:
+You can use different types of logs in Azure to manage and troubleshoot application gateways. The following list describes each log type:
* **Activity log**: You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal.
* **Access log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.
You can use different types of logs in Azure to manage and troubleshoot applicat
> [!NOTE]
> Logs are available only for resources deployed in the Azure Resource Manager deployment model. You cannot use logs for resources in the classic deployment model. For a better understanding of the two models, see the [Understanding Resource Manager deployment and classic deployment](../azure-resource-manager/management/deployment-models.md) article.
-You have three options for storing your logs:
+## Storage locations
+
+You have the following options to store the logs in your preferred location.
+
+1. **Log Analytics workspace**: Recommended because you can readily use its predefined queries and visualizations, and set alerts based on specific log conditions.
+1. **Azure Storage account**: Storage accounts are best suited for logs that are retained for a longer duration and reviewed only when needed.
+1. **Azure Event Hubs**: Event hubs are a great option for integrating with other security information and event management (SIEM) tools to get alerts on your resources.
+1. **Azure Monitor partner integrations**
-* **Storage account**: Storage accounts are best used for logs when logs are stored for a longer duration and reviewed when needed.
-* **Event hubs**: Event hubs are a great option for integrating with other security information and event management (SIEM) tools to get alerts on your resources.
-* **Azure Monitor logs**: Azure Monitor logs is best used for general real-time monitoring of your application or looking at trends.
+[Learn more](../azure-monitor/essentials/diagnostic-settings.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring&tabs=portal#destinations) about Azure Monitor diagnostic settings destinations.
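As a rough sketch (not taken from the article; the resource IDs, setting name, and log category below are placeholders, and the cmdlets assume a recent Az.Monitor module), routing the access log to a Log Analytics workspace can look like this:

```powershell
# Placeholders: substitute your own Application Gateway and Log Analytics workspace resource IDs.
$gatewayId   = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<gateway-name>"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Enable collection of the access log category for the gateway.
$accessLog = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category "ApplicationGatewayAccessLog"

# Create the diagnostic setting that sends the log to the workspace.
New-AzDiagnosticSetting -Name "appgw-diagnostics" -ResourceId $gatewayId -WorkspaceId $workspaceId -Log $accessLog
```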
### Enable logging through PowerShell
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
-# Application Gateway health monitoring overview
+# Application Gateway health probes overview
Azure Application Gateway monitors the health of all the servers in its backend pool and automatically stops sending traffic to any server it considers unhealthy. The probes continue to monitor such an unhealthy server, and the gateway starts routing the traffic to it once again as soon as the probes detect it as healthy.
A gateway starts firing probes immediately after you configure a Rule by associa
:::image type="content" source="media/application-gateway-probe-overview/appgatewayprobe.png" alt-text="Diagram showing Application Gateway initiating health probes to individual backend targets within a backend pool":::
-The required probes are determined based on the unique combination of the Backend Server and Backend Setting. For example, consider a gateway with a single backend pool with two servers and two backend settings, each having different port numbers. When these distinct backend settings are associated with the same backend pool using their respective rules, the gateway creates probes for each server and the combination of the backend setting. You can view this on the [Backend health page](./application-gateway-diagnostics.md#backend-health).
+The required probes are determined based on the unique combination of the Backend Server and Backend Setting. For example, consider a gateway with a single backend pool with two servers and two backend settings, each having different port numbers. When these distinct backend settings are associated with the same backend pool using their respective rules, the gateway creates probes for each server and the combination of the backend setting. You can view this on the [Backend health page](application-gateway-backend-health.md).
:::image type="content" source="media/application-gateway-probe-overview/multiple-be-settings.png" alt-text="Diagram showing health probes report on the Backend Health page":::
For example:
```powershell
$match = New-AzApplicationGatewayProbeHealthResponseMatch -StatusCode 200-399
$match = New-AzApplicationGatewayProbeHealthResponseMatch -Body "Healthy"
```
-Once the match criteria is specified, it can be attached to probe configuration using a `-Match` parameter in PowerShell.
+Match criteria can be attached to the probe configuration by using the `-Match` parameter in PowerShell.
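For illustration, a rough sketch of attaching the match criteria to a custom probe (the gateway name, resource group, and probe settings below are placeholders):

```powershell
# Placeholders: substitute your own gateway and resource group names.
$gateway = Get-AzApplicationGateway -Name "ApplicationGateway1" -ResourceGroupName "Contoso"

# Add a custom probe and attach the match criteria through the -Match parameter.
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $gateway `
    -Name "customProbe" `
    -Protocol Http `
    -HostName "contoso.com" `
    -Path "/health" `
    -Interval 30 `
    -Timeout 30 `
    -UnhealthyThreshold 3 `
    -Match $match

# Persist the updated configuration to the gateway.
Set-AzApplicationGateway -ApplicationGateway $gateway
```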
### Some use cases for Custom probes

- If a backend server allows access to only authenticated users, the application gateway probes will receive a 403 response code instead of 200. As the clients (users) are bound to authenticate themselves for the live traffic, you can configure the probe traffic to accept 403 as an expected response.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Fine grain control over the Application Gateway subnet via Route Table rules is
With current functionality there are some restrictions:

> [!IMPORTANT]
-> Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](./application-gateway-diagnostics.md#backend-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
+> Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](application-gateway-backend-health.md) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
- **v1**
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
Previously updated : 09/25/2023 Last updated : 10/03/2023
spec:
value: /shop - filters: - type: URLRewrite
- URLRewrite:
+ urlRewrite:
path: type: ReplacePrefixMatch replacePrefixMatch: /ecommerce
application-gateway Session Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/session-affinity.md
+
+ Title: Session affinity overview for Azure Application Gateway for Containers
+description: Learn how to configure session affinity for Azure Application Gateway for Containers.
+++++ Last updated : 10/02/2023+++
+# Application Gateway for Containers session affinity overview
+
+Session affinity, also known as *session persistence* or *sticky sessions*, is a technique used in load balancing to ensure a client's requests are always sent to the same server. This is important for applications that store user data in session variables or in a local cache on a particular server (commonly referred to as a stateful application).
+
+With session affinity, Application Gateway for Containers presents a cookie in the **Set-Cookie** header of the first response. If the client presents the cookie in future requests, Application Gateway for Containers recognizes the cookie and forwards traffic to the same backend target. See the following example scenario:
+
+ ![A diagram depicting Application Gateway for Containers session affinity.](./media/session-affinity/session-affinity.png)
+
+The following steps are depicted in the previous diagram:
+1. A client initiates a request to an Application Gateway for Containers (AGC) frontend.
+2. AGC selects one of the many available pods to load balance the request to. In this example, we assume Pod C is selected out of the four available pods.
+3. Pod C returns a response to AGC.
+4. In addition to the backend response from Pod C, AGC adds a Set-Cookie header containing a uniquely generated hash used for routing.
+5. The client sends another request to AGC along with the session affinity cookie set in the previous step.
+6. AGC detects the cookie and selects Pod C to serve the request.
+7. Pod C responds to AGC.
+8. AGC returns the response to the client.
+
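+To observe the affinity cookie end to end, here's a minimal sketch (the frontend FQDN is a placeholder; any HTTP client that preserves cookies works):
+
+```powershell
+# Placeholder: substitute the FQDN of your Application Gateway for Containers frontend.
+$frontendFqdn = "frontend.example.alb.azure.com"
+
+# First request: the response is expected to carry the affinity cookie in a Set-Cookie header
+# (AGCAffinity for managed-cookie, or your configured cookie name for application-cookie).
+$response = Invoke-WebRequest -Uri "http://$frontendFqdn/" -SessionVariable agcSession
+$response.Headers["Set-Cookie"]
+
+# Later requests reuse the same web session, so the cookie is presented again and
+# Application Gateway for Containers routes them to the same backend pod.
+Invoke-WebRequest -Uri "http://$frontendFqdn/" -WebSession $agcSession
+```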
+## Usage details
+
+[Session affinity](api-specification-kubernetes.md#alb.networking.azure.io/v1.SessionAffinity) is defined by the following properties and characteristics:
+
+| Name | Description |
+| - | -- |
+| affinityType | Valid values are `application-cookie` or `managed-cookie`. |
+| cookieName | Required if affinityType is `application-cookie`. This is the name of the cookie. |
+| cookieDuration | Required if affinityType is `application-cookie`. This is the duration (lifetime) of the cookie in seconds. |
+
+With the managed-cookie affinity type, Application Gateway for Containers uses predefined values when the cookie is offered to the client:
+- The name of the cookie is: `AGCAffinity`.
+- The duration (lifetime) of the cookie is 86,400 seconds (one day).
+- The `cookieName` and `cookieDuration` properties and values are discarded.
+
+With the application-cookie affinity type, the cookie name and duration (lifetime) must be explicitly defined.
+
+## How to configure session affinity
+
+# [Gateway API](#tab/session-affinity-gateway-api)
+
+Session affinity can be defined in a [RoutePolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.RoutePolicy) resource, which targets a defined HTTPRoute. You must specify `sessionAffinity` with an `affinityType` of either `application-cookie` or `managed-cookie`. In this example, we use `application-cookie` as the affinityType and explicitly define a cookie name and lifetime.
+
+The following example command creates a new RoutePolicy with a cookie named `nomnom` and a lifetime of 3,600 seconds (1 hour).
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: RoutePolicy
+metadata:
+ name: session-affinity-route-policy
+spec:
+ targetRef:
+ kind: HTTPRoute
+ name: http-route
+ namespace: test-infra
+ group: ""
+ default:
+ sessionAffinity:
+ affinityType: "application-cookie"
+ cookieName: "nomnom"
+ cookieDuration: 3600
+EOF
+```
+
+# [Ingress API](#tab/session-affinity-ingress-api)
+
+Session affinity can be defined in an [IngressExtension](api-specification-kubernetes.md#alb.networking.azure.io/v1.IngressExtensionSpec) resource. You must specify `sessionAffinity` with an `affinityType` of either `application-cookie` or `managed-cookie`. In this example, we use `application-cookie` as the affinityType and explicitly define a cookie name and lifetime.
+
+The following example command creates a new IngressExtension with a cookie named `nomnom` and a lifetime of 3,600 seconds (1 hour).
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: IngressExtension
+metadata:
+ name: session-affinity-ingress-extension
+ namespace: test-infra
+spec:
+ backendSettings:
+ - service: echo
+ sessionAffinity:
+ affinityType: "application-cookie"
+ cookieName: "nomnom"
+ cookieDuration: 3600
+EOF
+```
++
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
installed in an environment with no pre-existing components.
We recommend the use of [Azure Cloud Shell](https://shell.azure.com/) for all command-line operations below. Launch your shell from shell.azure.com or by clicking the link:
-[![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+[![Embed launch](./media/launch-cloud-shell/launch-cloud-shell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
Alternatively, launch Cloud Shell from Azure portal using the following icon:
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
[Azure Cloud Shell](https://shell.azure.com/) is the most convenient way to troubleshoot any problems with your AKS and AGIC installation. Launch your shell from [shell.azure.com](https://shell.azure.com/) or by selecting the link:
-[![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+[![Embed launch](./media/launch-cloud-shell/launch-cloud-shell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
> [!TIP] > Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview.
automation Automation Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-faq.md
description: This article gives answers to frequently asked questions about Azur
Previously updated : 08/25/2021 Last updated : 10/03/2023 #Customer intent: As an implementer, I want answers to various questions.
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md
Title: Author graphical runbooks in Azure Automation
description: This article tells how to author a graphical runbook without working with code. Previously updated : 04/25/2023 Last updated : 10/03/2023
# Author graphical runbooks in Azure Automation

> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](automation-security-overview.md#managed-identities). You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts).
All runbooks in Azure Automation are Windows PowerShell workflows. Graphical runbooks and graphical PowerShell Workflow runbooks generate PowerShell code that the Automation workers run but that you cannot view or modify. You can convert a graphical runbook to a graphical PowerShell Workflow runbook, and vice versa. However, you can't convert these runbooks to a textual runbook. Additionally, the Automation graphical editor can't import a textual runbook.
You have the option to revert to the Published version of a runbook. This operat
* To get started with graphical runbooks, see [Tutorial: Create a graphical runbook](./learn/powershell-runbook-managed-identity.md). * To know more about runbook types and their advantages and limitations, see [Azure Automation runbook types](automation-runbook-types.md).
-* To understand how to authenticate using the Automation Run As account, see [Run As account](automation-security-overview.md#run-as-account).
* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation/#automation).
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Title: Azure Automation security guidelines, security best practices Automation
description: This article helps you with the guidelines that Azure Automation offers to ensure a secured configuration of Automation account, Hybrid Runbook worker role, authentication certificate and identities, network isolation and policies. Previously updated : 02/16/2022 Last updated : 10/03/2023

# Security best practices in Azure Automation

> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](automation-security-overview.md#managed-identities). You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts).
This article details the best practices for securely executing automation jobs. [Azure Automation](./overview.md) provides a platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments.
This section guides you in configuring your Automation account securely.
Follow the [Managed identity best practice recommendations](../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#choosing-system-or-user-assigned-managed-identities) for more details.
-1. If you use Run As accounts as the authentication mechanism for your runbooks, ensure the following:
- - Track the service principals in your inventory. Service principals often have elevated permissions.
- - Delete any unused Run As accounts to minimize your exposed attack surface.
- - [Renew the Run As certificate](./manage-runas-account.md#cert-renewal) periodically.
- - Follow the RBAC guidelines to limit the permissions assigned to Run As account using this [script](./manage-runas-account.md#limit-run-as-account-permissions). Do not assign high privilege permissions like Contributor, Owner and so on.
- 1. Rotate the [Azure Automation keys](./automation-create-standalone-account.md?tabs=azureportal#manage-automation-account-keys) periodically. The key regeneration prevents future DSC or hybrid worker node registrations from using previous keys. We recommend using the [Extension based hybrid workers](./automation-hybrid-runbook-worker.md) that use Azure AD authentication instead of Automation keys. Azure AD centralizes the control and management of identities and resource credentials.

### Data security
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 04/12/2023 Last updated : 10/04/2023
# Azure Automation account authentication overview

> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](automation-security-overview.md#managed-identities). You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts).
Azure Automation allows you to automate tasks against resources in Azure, on-premises, and with other cloud providers such as Amazon Web Services (AWS). You can use runbooks to automate your tasks, or a Hybrid Runbook Worker if you have business or operational processes to manage outside of Azure. Working in any one of these environments requires permissions to securely access the resources with the minimal rights required.
Managed identities are the recommended way to authenticate in your runbooks, and
Here are some of the benefits of using managed identities: -- Using a managed identity instead of the Automation Run As account simplifies management. You don't have to renew the certificate used by a Run As account.-
+- Using a managed identity instead of the Automation Run As account simplifies management.
- Managed identities can be used without any additional cost. -- You don't have to specify the Run As connection object in your runbook code. You can access resources using your Automation account's managed identity from a runbook without creating certificates, connections, Run As accounts, etc.
+- You don't have to specify the Run As connection object in your runbook code. You can access resources using your Automation account's managed identity from a runbook without creating certificates, connections, etc.
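As a rough illustration (not from the article; the resource group name is a placeholder), a runbook can authenticate with the account's system-assigned managed identity like this:

```powershell
# Sign in as the Automation account's system-assigned managed identity.
# No certificate or Run As connection asset is needed.
Connect-AzAccount -Identity

# Example operation: list VMs in a resource group that the identity has been granted access to.
Get-AzVM -ResourceGroupName "ContosoRG" | Select-Object Name, ProvisioningState
```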
An Automation account can authenticate using two types of managed identities:
An Automation account can authenticate using two types of managed identities:
For details on using managed identities, see [Enable managed identity for Azure Automation](enable-managed-identity-for-automation.md).
-## Run As accounts
-
-> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
-
-Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
-- Azure Run As Account-- Azure Classic Run As Account-
-To renew a Run As account, permissions are needed at three levels:
--- Subscription,-- Azure Active Directory (Azure AD), and-- Automation account-- ### Subscription permissions You need the `Microsoft.Authorization/*/Write` permission. This permission is obtained through membership of one of the following Azure built-in roles:
You need the `Microsoft.Authorization/*/Write` permission. This permission is ob
- [Owner](../role-based-access-control/built-in-roles.md#owner) - [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator)
-To renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
+To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
### Azure AD permissions
To learn more about the Azure Resource Manager and Classic deployment models, se
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWwtF3]
-### Run As account
-
-Run As Account consists of the following components:
-- An Azure AD application with a self-signed certificate, and a service principal account for the application in Azure AD, which is assigned the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md).-- An Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses.-- An Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint.-
-### Azure Classic Run As account
-
-Azure Classic Run As Account consists of the following components:
-- A management certificate in the subscription.-- An Automation certificate asset named `AzureClassicRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate.-- An Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name.-
-> [!NOTE]
-> You must be a co-administrator on the subscription to renew this type of Run As account.
-
-## Service principal for Run As account
-
-The service principal for a Run As account doesn't have permissions to read Azure AD by default. If you want to add permissions to read or manage Azure AD, you must grant the permissions on the service principal under **API permissions**. To learn more, see [Add permissions to access your web API](../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).
-
-## <a name="permissions"></a>Run As account permissions
-
-This section defines permissions for both regular Run As accounts and Classic Run As accounts.
-
-* To create or update or delete a Run As account, an Application administrator in Azure Active Directory and an Owner in the subscription can complete all the tasks.
-* To configure or renew or delete a Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
-
-In a situation where you have separation of duties, the following table shows a listing of the tasks, the equivalent cmdlet, and permissions needed:
-
-|Task|Cmdlet |Minimum Permissions |Where you set the permissions|
-|||||
-|Create Azure AD Application|[New-AzADApplication](/powershell/module/az.resources/new-azadapplication) | Application Developer role<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations |
-|Add a credential to the application.|[New-AzADAppCredential](/powershell/module/az.resources/new-azadappcredential) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
-|Create and get an Azure AD service principal|[New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal)</br>[Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) | Application Administrator or Global Administrator<sup>1</sup> |[Azure AD](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)</br>Home > Azure AD > App Registrations|
-|Assign or get the Azure role for the specified principal|[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment)</br>[Get-AzRoleAssignment](/powershell/module/Az.Resources/Get-AzRoleAssignment) | User Access Administrator or Owner, or have the following permissions:</br></br><code>Microsoft.Authorization/Operations/read</br>Microsoft.Authorization/permissions/read</br>Microsoft.Authorization/roleDefinitions/read</br>Microsoft.Authorization/roleAssignments/write</br>Microsoft.Authorization/roleAssignments/read</br>Microsoft.Authorization/roleAssignments/delete</code></br></br> | [Subscription](../role-based-access-control/role-assignments-portal.md)</br>Home > Subscriptions > \<subscription name\> - Access Control (IAM)|
-|Create or remove an Automation certificate|[New-AzAutomationCertificate](/powershell/module/Az.Automation/New-AzAutomationCertificate)</br>[Remove-AzAutomationCertificate](/powershell/module/az.automation/remove-azautomationcertificate) | Contributor on resource group |Automation account resource group|
-|Create or remove an Automation connection|[New-AzAutomationConnection](/powershell/module/az.automation/new-azautomationconnection)</br>[Remove-AzAutomationConnection](/powershell/module/az.automation/remove-azautomationconnection)|Contributor on resource group |Automation account resource group|
-
-<sup>1</sup> Non-administrator users in your Azure AD tenant can [register AD applications](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app) if the Azure AD tenant's **Users can register applications** option on the **User settings** page is set to **Yes**. If the application registration setting is **No**, the user performing this action must be as defined in this table.
-
-If you aren't a member of the subscription's Active Directory instance before you're added to the Global Administrator role of the subscription, you're added as a guest. In this situation, you receive a `You do not have permissions to create…` warning on the **Add Automation account** page.
-
-To verify that the situation producing the error message has been remedied:
-
-1. From the Azure Active Directory pane in the Azure portal, select **Users and groups**.
-2. Select **All users**.
-3. Choose your name, then select **Profile**.
-4. Ensure that the value of the **User type** attribute under your user's profile isn't set to **Guest**.
- ## Role-based access control Role-based access control is available with Azure Resource Manager to grant permitted actions to an Azure AD user account and Run As account, and authenticate the service principal. Read [Role-based access control in Azure Automation article](automation-role-based-access-control.md) for further information to help develop your model for managing Automation permissions.
automation Delete Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-run-as-account.md
Title: Delete an Azure Automation Run As account
description: This article tells how to delete a Run As account with PowerShell or from the Azure portal. Previously updated : 09/01/2023 Last updated : 10/02/2023

# Delete an Azure Automation Run As account

> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](automation-security-overview.md#managed-identities). You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to managed identity](migrate-run-as-accounts-managed-identity.md?tabs=run-as-account#sample-scripts).
Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article describes how to delete a Run As or Classic Run As account. When you perform this action, the Automation account is retained. After you delete the Run As account, you can re-create it in the Azure portal or with the provided PowerShell script.
To configure or update or delete a Run As account and a Classic Run As accounts,
- Cloud Application Administrator - Global Administrator
-To learn more about permissions, see [Run As account permissions](automation-security-overview.md#permissions).
-- ## Delete a Run As or Classic Run As account 1. In the Azure portal, open the Automation account.
automation Manage Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md
- Title: Manage an Azure Automation Run As account
-description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal.
- Previously updated : 05/05/2023---
-# Manage an Azure Automation Run As account
-
-> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
---
-Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features.
-
-In this article we cover how to manage a Run as or Classic Run As account, including:
-
- * How to renew a self-signed certificate
- * How to renew a certificate from an enterprise or third-party certificate authority (CA)
- * Manage permissions for the Run As account
-
-To learn more about Azure Automation account authentication, permissions required to manage the Run as account, and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
-
-## <a name="cert-renewal"></a>Renew a self-signed certificate
-
-The self-signed certificate that you have created for the Run As account expires one month from the date of creation. At some point before your Run As account expires, you must renew the certificate. You can renew it any time before it expires.
-
-When you renew the self-signed certificate, the current valid certificate is retained to ensure that any runbooks that are queued up or actively running, and that authenticate with the Run As account, aren't negatively affected. The certificate remains valid until its expiration date.
-
->[!NOTE]
->If you think that the Run As account has been compromised, you can delete and re-create the self-signed certificate.
-
->[!NOTE]
->If you have configured your Run As account to use a certificate issued by your enterprise or third-party CA and you use the option to renew a self-signed certificate option, the enterprise certificate is replaced by a self-signed certificate. To renew your certificate in this case, see [Renew an enterprise or third-party certificate](#renew-an-enterprise-or-third-party-certificate).
-
-Use the following steps to renew the self-signed certificate.
-
-1. Sign-in to the [Azure portal](https://portal.azure.com).
-
-1. Go to your Automation account and select **Run As Accounts** in the account settings section.
-
- :::image type="content" source="media/manage-run-as-account/automation-account-properties-pane.png" alt-text="Automation account properties pane.":::
-
-1. On the **Run As Accounts** properties page, select either **Run As Account** or **Classic Run As Account** depending on which account you need to renew the certificate for.
-
-1. On the **Properties** page for the selected account, select **Renew certificate**.
-
- :::image type="content" source="media/manage-run-as-account/automation-account-renew-run-as-certificate.png" alt-text="Renew certificate for Run As account.":::
-
-1. While the certificate is being renewed, you can track the progress under **Notifications** from the menu.
-
-## Renew an enterprise or third-party certificate
-
-Every certificate has a built-in expiration date. If the certificate you assigned to the Run As account was issued by a certification authority (CA), you need to perform a different set of steps to configure the Run As account with the new certificate before it expires. You can renew it any time before it expires.
-
-1. Import the renewed certificate following the steps for [Create a new certificate](./shared-resources/certificates.md#create-a-new-certificate). Automation requires the certificate to have the following configuration:
-
- * Specify the provider **Microsoft Enhanced RSA and AES Cryptographic Provider**
- * Marked as exportable
- * Configured to use the SHA256 algorithm
- * Saved in the `*.pfx` or `*.cer` format.
-
- After you import the certificate, note or copy the certificate **Thumbprint** value. This value is used to update the Run As connection properties with the new certificate.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Automation Accounts**.
-
-1. On the Automation Accounts page, select your Automation account from the list.
-
-1. In the left pane, select **Connections**.
-
-1. On the **Connections** page, select **AzureRunAsConnection** and update the **Certificate Thumbprint** with the new certificate thumbprint.
-
-1. Select **Save** to commit your changes.
-
-## Grant Run As account permissions in other subscriptions
-
-Azure Automation supports using a single Automation account from one subscription, and executing runbooks against Azure Resource Manager resources across multiple subscriptions. This configuration does not support the Azure Classic deployment model.
-
-You assign the Run As account service principal the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in the other subscription, or more restrictive permissions. For more information, see [Role-based access control](automation-role-based-access-control.md) in Azure Automation. To assign the Run As account to the role in the other subscription, the user account performing this task needs to be a member of the **Owner** role in that subscription.
-
-> [!NOTE]
-> This configuration only supports multiple subscriptions of an organization using a common Azure AD tenant.
-
-Before granting the Run As account permissions, you need to first note the display name of the service principal to assign.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. From your Automation account, select **Run As Accounts** under **Account Settings**.
-1. Select **Azure Run As Account**.
-1. Copy or note the value for **Display Name** on the **Azure Run As Account** page.
-
-For detailed steps for how to add role assignments, check out the following articles depending on the method you want to use.
-
-* [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
-* [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
-* [Assign Azure roles using the Azure CLI](../role-based-access-control/role-assignments-cli.md)
-* [Assign Azure roles using the REST API](..//role-based-access-control/role-assignments-rest.md)
-
-After assigning the Run As account to the role, in your runbook specify `Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"` to set the subscription context to use. For more information, see [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
-
-## Check role assignment for Azure Automation Run As account
-
-To check the role assigned to the Automation Run As account Azure AD, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Go to your Automation account and in **Account Settings**, select **Run as accounts**.
-1. Select **Azure Run as Account** to view the **Application ID**.
-
- :::image type="content" source="media/manage-run-as-account/automation-run-as-app-id.png" alt-text="Screenshot that describes on how to copy application ID.":::
-
-1. Go to Azure portal and search for **Azure Active Directory**.
-1. On the **Active Directory Overview** page, **Overview** tab, in the search box, enter the Application ID.
-
- :::image type="content" source="media/manage-run-as-account/active-directory-app-id-inline.png" alt-text="Screenshot that describes application ID copied in the Overview tab." lightbox="media/manage-run-as-account/active-directory-app-id-expanded.png":::
-
- In the **Enterprise applications** section, you will see the display name of your Run As Account.
-
-1. Select the application ID and in the properties page of that ID, go to **Overview** blade, **Properties**, and copy the name of the Enterprise application.
-1. Go to Azure portal and search for your **Subscription** and select your subscription.
-1. Go to **Access Control (IAM)**, **Role Assignment** and paste the name of the Enterprise application in the search box to view the App along with the role and scope assigned to it.
-For example: in the screenshot below, the Run As Account Azure AD App has the Contributor access at the subscription level.
-
- :::image type="content" source="media/manage-run-as-account/check-role-assignments-inline.png" alt-text="Screenshot that describes how to view the role and scope assigned to the enterprise application." lightbox="media/manage-run-as-account/check-role-assignments-expanded.png":::
--
-## Limit Run As account permissions
-
-To control the targeting of Automation against resources in Azure, you can run the [Update-AutomationRunAsAccountRoleAssignments.ps1](https://aka.ms/AA5hug8) script. This script changes your existing Run As account service principal to create and use a custom role definition. The role has permissions for all resources except [Key Vault](../key-vault/index.yml).
-
->[!IMPORTANT]
->After you run the **Update-AutomationRunAsAccountRoleAssignments.ps1** script, runbooks that access Key Vault through the use of Run As accounts no longer work. Before running the script, you should review runbooks in your account for calls to Azure Key Vault. To enable access to Key Vault from Azure Automation runbooks, you must [add the Run As account to Key Vault's permissions](#add-permissions-to-key-vault).
-
-If you need to further restrict what the Run As service principal can do, you can add other resource types to the `NotActions` element of the custom role definition. The following example restricts access to `Microsoft.Compute/*`. If you add this resource type to `NotActions` for the role definition, the role will not be able to access any Compute resource. To learn more about role definitions, see [Understand role definitions for Azure resources](../role-based-access-control/role-definitions.md).
-
-```powershell
-$roleDefinition = Get-AzRoleDefinition -Name 'Automation RunAs Contributor'
-$roleDefinition.NotActions.Add("Microsoft.Compute/*")
-$roleDefinition | Set-AzRoleDefinition
-```
-
-You can determine if the service principal used by your Run As account assigned the **Contributor** role or a custom one.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to your Automation account and select **Run As Accounts** in the account settings section.
-1. Select **Azure Run As Account**.
-1. Select **Role** to locate the role definition that is being used.
--
-You can also determine the role definition used by the Run As accounts for multiple subscriptions or Automation accounts. Do this by using the [Check-AutomationRunAsAccountRoleAssignments.ps1](https://aka.ms/AA5hug5) script in the PowerShell Gallery.
-
-### Add permissions to Key Vault
-
-You can allow Azure Automation to verify if Key Vault and your Run As account service principal are using a custom role definition. You must:
-
-* Grant permissions to Key Vault.
-* Set the access policy.
-
-You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to grant your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault.
--
-## Next steps
-
-* [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md).
-* [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).
-* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate from a Run As account to Managed identities
description: This article describes how to migrate from a Run As account to managed identities in Azure Automation. Previously updated : 08/04/2023 Last updated : 10/03/2023
# Migrate from an existing Run As account to Managed identities

> [!IMPORTANT]
-> Azure Automation Run As accounts will retire on *30 September 2023* and completely move to [Managed Identities](automation-security-overview.md#managed-identities). All runbook executions using RunAs accounts, including Classic Run As accounts wouldn't be supported after this date. Starting 01 April 2023, the creation of **new** Run As accounts in Azure Automation will not be possible.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](automation-security-overview.md#managed-identities). You can no longer create or renew Run As accounts through the Azure portal.
For more information about migration cadence and the support timeline for Run As account creation and certificate renewal, see the [frequently asked questions](automation-managed-identity-faq.md).
Before you migrate from a Run As account or Classic Run As account to a managed
> - There are two ways to use managed identities in hybrid runbook worker scripts: either the system-assigned managed identity for the Automation account *or* the virtual machine (VM) managed identity for an Azure VM running as a hybrid runbook worker. > - The VM's user-assigned managed identity and the VM's system-assigned managed identity will *not* work in an Automation account that's configured with an Automation account's managed identity. When you enable the Automation account's managed identity, you can use only the Automation account's system-assigned managed identity and not the VM managed identity. For more information, see [Use runbook authentication with managed identities](automation-hrw-run-runbooks.md).
-1. Assign the same role to the managed identity to access the Azure resources that match the Run As account. Follow the steps in [Check the role assignment for the Azure Automation Run As account](manage-run-as-account.md#check-role-assignment-for-azure-automation-run-as-account). Use this [script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/AssignMIRunAsRoles.ps1) to enable the System assigned identity in an Automation account and assign the same set of permissions present in Azure Automation Run as account to System Assigned identity of the Automation account.
+1. Assign the managed identity the same role that the Run As account uses to access the Azure resources. Use this [script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/AssignMIRunAsRoles.ps1) to enable the system-assigned identity in an Automation account and assign it the same set of permissions that the Azure Automation Run As account holds. A minimal sketch of these steps also follows this list.
- Ensure that you don't assign high-privilege permissions like contributor or owner to the Run As account. Follow the role-based access control (RBAC) guidelines to limit the permissions from the default contributor permissions assigned to a Run As account by using [this script](manage-run-as-account.md#limit-run-as-account-permissions).
-
- For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities).
+ For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities).
1. If you're using Classic Run As accounts, ensure that you have [migrated](../virtual-machines/classic-vm-deprecation.md) resources deployed through classic deployment model to Azure Resource Manager.
1. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it has the built-in contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition.
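A minimal sketch of enabling the system-assigned identity and granting it a role (the resource names, role, and scope are placeholders; the script referenced above automates the same steps):

```powershell
# Placeholders: substitute your own resource group and Automation account names.
$rg   = "ContosoRG"
$acct = "ContosoAutomation"

# Enable the system-assigned managed identity on the Automation account.
Set-AzAutomationAccount -ResourceGroupName $rg -Name $acct -AssignSystemIdentity

# Look up the identity's service principal; its display name matches the Automation account name.
$principal = Get-AzADServicePrincipal -DisplayName $acct

# Grant only the permissions your runbooks need; Reader over one resource group is shown as an example.
New-AzRoleAssignment -ObjectId $principal.Id `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/$rg"
```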
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
A runbook running on a Hybrid Runbook Worker fails with the following error mess
#### Cause
-This error occurs when you attempt to use a [Run As account](../automation-security-overview.md#run-as-accounts) in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
+This error occurs when you attempt to use a Run As account in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
#### Resolution
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
A runbook running on a Hybrid Runbook Worker fails with the following error mess
#### Cause
-This error occurs when you attempt to use a [Run As account](../automation-security-overview.md#run-as-accounts) in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
+This error occurs when you attempt to use a Run As account in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly.
#### Resolution
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
When you receive errors during runbook execution in Azure Automation, you can us
1. If your runbook is suspended or unexpectedly fails:
- * [Renew the certificate](../manage-runas-account.md#cert-renewal) if the Run As account has expired.
* [Renew the webhook](../automation-webhooks.md#update-a-webhook) if you're trying to use an expired webhook to start the runbook.
* [Check job statuses](../automation-runbook-execution.md#job-statuses) to determine current runbook statuses and some possible causes of the issue.
* [Add additional output](../automation-runbook-output-and-messages.md#working-with-message-streams) to the runbook to identify what happens before the runbook is suspended.
Run Login-AzureRMAccount to login.
### Cause
-This error can occur when you're not using a Run As account or the Run As account has expired. For more information, see [Azure Automation Run As accounts overview](../automation-security-overview.md#run-as-accounts).
+This error can occur when you're not using a Run As account or the Run As account has expired.
This error has two primary causes:
Follow [Step 5 - Add authentication to manage Azure resources](../learn/powershe
#### Insufficient permissions
-[Add permissions to Key Vault](../manage-runas-account.md#add-permissions-to-key-vault) to ensure that your Run As account has sufficient permissions to access Key Vault.
+Add permissions to Key Vault to ensure that your Run As account has sufficient permissions to access Key Vault.
## Scenario: Runbook fails with "Parameter length exceeded" error
automation Shared Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/shared-resources.md
$Body = @"
"@ ```
-## Run As accounts
-
-### <a name="unable-create-update"></a>Scenario: You're unable to create or update a Run As account
-
-#### Issue
-
-When you try to create or update a Run As account, you receive an error similar to the following:
-
-```error
-You do not have permissions to create…
-```
-#### Cause
-
-You don't have the permissions that you need to create or update the Run As account, or the resource is locked at a resource group level.
-
-#### Resolution
-
-To create or update a Run As account, you must have appropriate [permissions](../automation-security-overview.md#permissions) to the various resources used by the Run As account.
-
-If the problem is because of a lock, verify that the lock can be removed. Then go to the resource that is locked in Azure portal, right-click the lock, and select **Delete**.
+## Run As accounts
> [!NOTE]
-> Azure Automation Run As account will retire on **September 30, 2023** and will be replaced with Managed Identities. Ensure that you start migrating your runbooks to use [managed identities](../automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](../migrate-run-as-accounts-managed-identity.md#sample-scripts) to start migrating the runbooks from Run As accounts to managed identities before **September 30, 2023**.
+> Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with [Managed Identities](../automation-security-overview.md#managed-identities).
+> You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to a managed identity](../migrate-run-as-accounts-managed-identity.md#sample-scripts).
### <a name="iphelper"></a>Scenario: You receive the error "Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iplpapi.dll'" when executing a runbook
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
Review the following list for potential resolutions:
* **ScheduledStartStop_Parent** * **SequencedStartStop_Parent**
-* Verify that your [Run As account](../automation-security-overview.md#run-as-accounts) has proper permissions to the VMs you're trying to start or stop. To learn how to check the permissions on a resource, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You'll need to provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings**, and select the appropriate Run As account.
+* To learn how to check the permissions on a resource, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You'll need to provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings**, and select the appropriate Run As account.
* VMs might not be started or stopped if they're being explicitly excluded. Excluded VMs are set in the `External_ExcludeVMNames` variable in the Automation account to which the feature is deployed. The following example shows how you can query that value with PowerShell.
This issue can be caused by an improperly configured or expired Run As account.
To verify that your Run As account is properly configured, go to your Automation account in the Azure portal and select **Run as accounts** under **Account Settings**. If a Run As account is improperly configured or expired, the status shows the condition.
-If your Run As account is misconfigured, delete and re-create your Run As account. For more information, see [Azure Automation Run As accounts](../automation-security-overview.md#run-as-accounts).
-
-If the certificate is expired for your Run As account, follow the steps in [Self-signed certificate renewal](../manage-runas-account.md#cert-renewal) to renew the certificate.
+If your Run As account is misconfigured, delete and re-create your Run As account.
If there are missing permissions, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You must provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings**, and select the appropriate Run As account.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 09/17/2023 Last updated : 10/03/2023
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## October 2023
+
+### Retirement of Run As accounts
+
+**Type: Retirement**
+
+Azure Automation Run As accounts, including Classic Run As accounts, retired on **30 September 2023** and were replaced with Managed Identities. You can no longer create or renew Run As accounts through the Azure portal. For more information, see [Migrate from an existing Run As account to a managed identity](migrate-run-as-accounts-managed-identity.md).
## May 2023
azure-app-configuration Enable Dynamic Configuration Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-python.md
In this tutorial, you learn how to:
> * Set up your app to update its configuration in response to changes in an App Configuration store. > [!NOTE]
-> Requires [azure-appconfiguration-provider](https://pypi.org/project/azure-appconfiguration-provider/1.1.0b1/) package version 1.1.0b1 or later.
+> Requires [azure-appconfiguration-provider](https://pypi.org/project/azure-appconfiguration-provider/1.1.0b2/) package version 1.1.0b2 or later.
## Prerequisites
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The following parameters are used by the Azure App Configuration task:
- **Azure subscription**: A drop-down containing your available Azure service connections. To update and refresh your list of available Azure service connections, press the **Refresh Azure subscription** button to the right of the textbox. - **App Configuration Endpoint**: A drop-down that loads your available configuration stores endpoints under the selected subscription. To update and refresh your list of available configuration stores endpoints, press the **Refresh App Configuration Endpoint** button to the right of the textbox.
+- **Selection Mode**: Specifies how the key-values read from a configuration store are selected. The 'Default' selection mode allows the use of key and label filters. The 'Snapshot' selection mode allows key-values to be selected from a snapshot. Default value is **Default**.
- **Key Filter**: The filter can be used to select what key-values are requested from Azure App Configuration. A value of * will select all key-values. For more information on, see [Query key values](concept-key-value.md#query-key-values). - **Label**: Specifies which label should be used when selecting key-values from the App Configuration store. If no label is provided, then key-values with the no label will be retrieved. The following characters are not allowed: , *.
+- **Snapshot Name**: Specifies the snapshot from which key-values should be retrieved in Azure App Configuration.
- **Trim Key Prefix**: Specifies one or more prefixes that should be trimmed from App Configuration keys before setting them as variables. Multiple prefixes can be separated by a new-line character. - **Suppress Warning For Overridden Keys**: Default value is unchecked. Specifies whether to show warnings when existing keys are overridden. Enable this option when it is expected that the key-values downloaded from App Configuration have overlapping keys with what exists in pipeline variables.
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
+
+ Title: Upgrade Arc resource bridge (preview)
+description: Learn how to upgrade Arc resource bridge (preview) using either cloud-managed upgrade or manual upgrade.
Last updated : 10/02/2023+++
+# Upgrade Arc resource bridge (preview)
+
+This article describes how Arc resource bridge (preview) is upgraded and the two ways upgrade can be performed, using cloud-managed upgrade or manual upgrade.
+
+> [!IMPORTANT]
+> Currently, you must request access in order to use cloud-managed upgrade. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade.
+
+## Prerequisites
+
+In order to upgrade resource bridge, its status must be online and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid.
+
+There must be sufficient space on the management machine and appliance VM to download required images (~3.5 GB). For VMware, a new template is created.
+
+Currently, in order to upgrade Arc resource bridge, you must enable outbound connection from the Appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443. Be sure the full list of [required endpoints for Arc resource bridge](network-requirements.md) are also enabled.
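+
+As a rough reachability check, and assuming you can run `curl` from a machine on the same network segment as the appliance VM IPs, you can confirm that the download endpoint resolves and is reachable over port 443 (any HTTP response, even an error status, means the connection itself succeeded):
+
+```azurecli
+# Hypothetical spot check only: attempt an HTTPS connection to the image download endpoint.
+curl -I https://msk8s.sb.tlu.dl.delivery.mp.microsoft.com
+```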
+
+Arc resource bridges configured with DHCP can't be upgraded and won't be supported in production. A new Arc resource bridge should be deployed using [static IP configuration](system-requirements.md#static-ip-configuration).
+
+## Overview
+
+The upgrade process deploys a new resource bridge using the reserved appliance VM IP (`k8snodeippoolend` IP, VM IP 2). Once the new resource bridge is up, it becomes the active resource bridge. The old resource bridge is deleted, and its appliance VM IP (`k8snodeippoolstart`, VM IP 1) becomes the new reserved appliance VM IP that will be used in the next upgrade.
+
+Deploying a new resource bridge consists of downloading the appliance image (~3.5 GB) from the cloud, using the image to deploy a new appliance VM, verifying the new resource bridge is running, connecting it to Azure, deleting the old appliance VM, and reserving the old IP to be used for a future upgrade.
+
+Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime may happen during the handoff between the old Arc resource bridge to the new Arc resource bridge. Additional downtime may occur if prerequisites are not met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's ability to communicate.
+
+There are two ways to upgrade Arc resource bridge: cloud-managed upgrades managed by Microsoft, or manual upgrades where Azure CLI commands are performed by an admin.
+
+## Cloud-managed upgrade
+
+Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version.
+
+> [!IMPORTANT]
+> As noted earlier, cloud-managed upgrades are currently available only to customers who request access by opening a support request.
+
+Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status may switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`.
+
+To check the status of a cloud-managed upgrade, check the Azure resource in ARM or run the following Azure CLI command from the management machine:
+
+```azurecli
+az arcappliance show --resource-group [REQUIRED] --name [REQUIRED]
+```
+
+## Manual upgrade
+
+Arc resource bridge can be manually upgraded from the management machine. The management machine must have the kubeconfig and appliance configuration files stored locally. Manual upgrade generally takes between 30-90 minutes, depending on network speeds.
+
+To manually upgrade your Arc resource bridge, make sure you have installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine:
+
+```azurecli
+az extension add --upgrade --name arcappliance
+```
+
+To manually upgrade your resource bridge, use the following command:
+
+```azurecli
+az arcappliance upgrade <private cloud> --config-file <file path to ARBname-appliance.yaml>
+```
+
+For example: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml`
+
+## Private cloud providers
+
+Partner products that use Arc resource bridge may choose to handle upgrades differently, including enabling cloud-managed upgrade by default. This article will be updated to reflect any such changes.
+
+[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) handles upgrades across all components as a "validated recipe" package, and upgrades are applied using the LCM tool. You must manually apply the packaged upgrade using the LCM tool.
+
+## Version releases
+
+The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. An updated extension is typically released on a monthly cadence at the end of the month. For detailed release info, refer to the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
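+
+If you want to confirm which `arcappliance` extension version is installed locally, one option (assuming the standard `az extension show` output, which includes a `version` field) is:
+
+```azurecli
+# Show the locally installed arcappliance CLI extension version.
+az extension show --name arcappliance --query version --output tsv
+```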
+
+## Notification and upgrade availability
+
+If your Arc resource bridge is at n-3 version, then you may receive an email notification letting you know that your resource bridge may soon be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge.
+
+To check if your Arc resource bridge has an upgrade available, run the command:
+
+```azurecli
+az arcappliance get-upgrades --resource-group [REQUIRED] --name [REQUIRED]
+```
+
+To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource.
+
+To find the latest released version of Arc resource bridge, check the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+
+## Supported versions
+
+Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.10, then the typical n-3 supported versions are:
+
+- Current version: 1.0.10
+- n-1 version: 1.0.9
+- n-2 version: 1.0.8
+- n-3 version: 1.0.7
+
+There may be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8.
+
+Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays may occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+
+If a resource bridge is not upgraded to one of the supported versions (n-3), then it will fall outside the support window and be unsupported. If this happens, it may not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge may no longer be compatible. In addition, the unsupported resource bridge may not be able to provide reliable monitoring and health metrics.
+
+If an Arc resource bridge is unable to be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there may be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation.
+
+## Next steps
+
+- Learn about [Arc resource bridge maintenance operations](maintenance.md).
+- Learn about [troubleshooting Arc resource bridge](troubleshoot-resource-bridge.md).
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
Sometimes, the package might have been integrated into [Python Standard Library]
However, if you're finding that the issue hasn't been fixed, and you're on a deadline, we encourage you to do some research to find a similar package for your project. Usually, the Python community will provide you with a wide variety of similar libraries that you can use.
+#### Disable dependency isolation flag
+
+Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies) to a value of `0`.
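+
+For example, if you prefer to set it from the Azure CLI, a minimal sketch (the app and resource group names are placeholders) looks like this:
+
+```azurecli
+# Disable worker dependency isolation for the function app.
+az functionapp config appsettings set \
+    --name <function-app-name> \
+    --resource-group <resource-group-name> \
+    --settings PYTHON_ISOLATE_WORKER_DEPENDENCIES=0
+```
+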
+ ## Troubleshoot cannot import 'cygrpc'
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Windows 8 Enterprise and Pro<br>(Server scenarios only) | | X<sup>1</sup> | | | Windows 7 SP1<br>(Server scenarios only) | | X<sup>1</sup> | | | Azure Stack HCI | X | X | |
+| Windows IoT Enterprise | X | | |
<sup>1</sup> Running the OS on server hardware that is always connected, always on.<br> <sup>2</sup> Using the Azure Monitor agent [client installer](./azure-monitor-agent-windows-client.md).<br>
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
These plugins provide extra functionality and integration with the specific fram
### [React](#tab/react)
-None.
+- Make sure the version of the React plugin that you want to install is compatible with your version of Application Insights. For more information, see [Compatibility Matrix for the React plugin](https://github.com/microsoft/applicationinsights-react-js#compatibility-matrix).
### [React Native](#tab/reactnative)
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
The following table describes the settings you can configure to control data col
| `[log_collection_settings.env_var] enabled =` | Boolean | True or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in the ConfigMap.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to `False` either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the `env:` section.<br> If collection of environment variables is globally disabled, you can't enable collection for a specific container. The only override that can be applied at the container level is to disable collection when it's already enabled globally. | | `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | True or false | This setting controls container log enrichment to populate the `Name` and `Image` property values<br> for every log record written to the **ContainerLog** table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in the ConfigMap. | | `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | True or false | This setting allows the collection of Kube events of all types.<br> By default, the Kube events with type **Normal** aren't collected. When this setting is set to `true`, the **Normal** events are no longer filtered, and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap. |
+| `[log_collection_settings.enable_multiline_logs] enabled =` | Boolean | True or False | This setting controls whether multiline container logs are enabled. They are disabled by default. See [Multi-line logging in Container Insights](./container-insights-logging-v2.md) to learn more. |
### Metric collection settings
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
# Configure hybrid Kubernetes clusters with Container insights
-Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
+Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS). This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
## Supported configurations
The following configurations are officially supported with Container insights. I
- Environments: - Kubernetes on-premises.
- - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
- [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4 and higher, on-premises or in other cloud environments. - Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md). - The following container runtimes are supported: Moby and CRI compatible runtimes such CRI-O and ContainerD.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
- [Azure Kubernetes Service (AKS)](../../aks/index.yml) - [Azure Arc-enabled Kubernetes cluster](../../azure-arc/kubernetes/overview.md) - [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises
- - [AKS engine](https://github.com/Azure/aks-engine)
- [Red Hat OpenShift](https://docs.openshift.com/container-platform/latest/welcome/https://docsupdatetracker.net/index.html) version 4.x The versions of Kubernetes and support policy are the same as those versions [supported in AKS](../../aks/supported-kubernetes-versions.md).
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
This article describes the requirements and considerations you need to be aware
You need to be aware of several requirements and considerations before using Azure NetApp Files backup: >[!IMPORTANT]
->All backups require a backup vault. If you have existing backups, you must migrate backups to a backup vault before you can perform any operation with a backup. For more information about this procedure, see [Manage backup vaults](backup-vault-manage.md).
+>All backups require a backup vault. If you have existing backups, you must migrate backups to a backup vault before you can perform any operation with a backup. For more information, see [Manage backup vaults](backup-vault-manage.md).
* Azure NetApp Files backup is available in the regions associated with your Azure NetApp Files subscription. Azure NetApp Files backup in a region can only protect an Azure NetApp Files volume located in that same region. For example, backups created by the service in West US 2 for a volume located in West US 2 are sent to Azure storage also located in West US 2. Azure NetApp Files doesn't support backups or backup replication to a different region.
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
Restoring a backup creates a new volume with the same protocol type. This articl
* Restoring a backup to a new volume is not dependent on the networking type used by the source volume. You can restore the backup of a volume configured with Basic networking to a volume configured with Standard networking and vice versa.
+* In the Volume overview page, refer to the **Originated from** field to see the name of the snapshot used to create the volume.
+ * See [Restoring volume backups from vaulted snapshots](snapshots-introduction.md#restoring-volume-backups-from-vaulted-snapshots) for more information.
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
![Screenshot that shows the Create a Volume page.](../media/azure-netapp-files/backup-restore-create-volume.png)
+4. The Volumes page displays the new volume. In the Volumes page, the **Originated from** field identifies the name of the snapshot used to create the volume.
+ ## Next steps * [Understand Azure NetApp Files backup](backup-introduction.md)
azure-netapp-files Snapshots Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-introduction.md
The following diagram shows volume restoration (cloning) by using DR target volu
[![Diagram that shows volume restoration using DR target volume snapshot](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png)](../media/azure-netapp-files/snapshot-restore-clone-target-volume.png#lightbox)
-See [Restore a snapshot to a new volume](snapshots-restore-new-volume.md) about volume restore operations.
+When you restore a snapshot to a new volume, the Volume overview page displays the name of the snapshot used to create the new volume in the **Originated from** field. See [Restore a snapshot to a new volume](snapshots-restore-new-volume.md) about volume restore operations.
### Restoring (reverting) an online snapshot in-place
azure-netapp-files Snapshots Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-restore-new-volume.md
:::image type="content" source="../media/azure-netapp-files/snapshot-restore-new-volume.png" alt-text="Screenshot showing the Create a Volume window for restoring a volume from a snapshot."::: 4. Select **Review+create**. Select **Create**.
- The Volumes page displays the new volume that the snapshot restores to.
-
+ The Volumes page displays the new volume to which the snapshot restores. Refer to the **Originated from** field to see the name of the snapshot used to create the volume.
## Next steps
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
This article provides recommendations to help you develop an AD DS deployment st
Before you deploy Azure NetApp Files volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. _Incorrect or incomplete AD DS integration with Azure NetApp Files might cause client access interruptions or outages for SMB, dual-protocol, or Kerberos NFSv4.1 volumes_.
+### Supported authentication scenarios
+
+Azure NetApp Files supports identity-based authentication over SMB through the following methods.
+
+* **AD DS authentication**: AD DS-joined Windows machines can access Azure NetApp Files shares with Active Directory credentials over SMB. Your client must have line of sight to your AD DS. If you already have AD DS set up on-premises or on a VM in Azure where your devices are domain-joined to your AD DS, you should use AD DS for Azure NetApp Files file share authentication.
+* **Azure AD DS authentication**: Cloud-based, Azure AD DS-joined Windows VMs can access Azure NetApp Files file shares with Azure AD DS credentials. In this solution, Azure AD DS runs a traditional Windows Server AD domain on behalf of the customer.
+* **Azure AD Kerberos for hybrid identities**: Using Azure AD for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure NetApp Files file shares using Kerberos authentication. This means your end users can access Azure NetApp Files file shares without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined Windows or Linux virtual machines. *Cloud-only identities aren't currently supported.*
+* **AD Kerberos authentication for Linux clients**: Linux clients can use Kerberos authentication over SMB for Azure NetApp Files using AD DS.
++ ### <a name="network-requirements"></a>Network requirements Azure NetApp Files SMB, dual-protocol, and Kerberos NFSv4.1 volumes require reliable and low-latency network connectivity (less than 10 ms RTT) to AD DS domain controllers. Poor network connectivity or high network latency between Azure NetApp Files and AD DS domain controllers can cause client access interruptions or client timeouts.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## October 2023
+
+* [Snapshot manageability enhancement: Identify parent snapshot](snapshots-restore-new-volume.md)
+
+ You can now see the name of the snapshot used to create a new volume. In the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
+ ## September 2023 * [Azure NetApp Files customer-managed keys for Azure NetApp Files volume encryption is now available in select US Gov regions (Preview)](configure-customer-managed-keys.md#supported-regions)
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 08/16/2023 Last updated : 10/03/2023 # Azure Bastion FAQ
Azure Bastion offers support for file transfer between your target VM and local
This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Sign in to a Windows virtual machine in Azure by using Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
-### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
+### <a name="rdscal-compatibility"></a>Is Bastion compatible with VMs set up as RDS session hosts?
-No, access to Windows Server VMs by Azure Bastion doesn't require an [RDS CAL](https://www.microsoft.com/p/windows-server-remote-desktop-services-cal/dg7gmgf0dvsv?activetab=pivot:overviewtab) when used solely for administrative purposes.
+Bastion does not support connecting to a VM that is set up as an RDS session host.
### <a name="keyboard"></a>Which keyboard layouts are supported during the Bastion remote session?
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
description: Learn how to deploy Bastion with default settings from the Azure po
Previously updated : 06/08/2023 Last updated : 10/03/2023
In this quickstart, you'll learn how to deploy Azure Bastion with default settings to your virtual network using the Azure portal. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) :::image type="content" source="./media/create-host/host-architecture.png" alt-text="Diagram showing Azure Bastion architecture." lightbox="./media/create-host/host-architecture.png":::
-
+ The steps in this article help you do the following: * Deploy Bastion with default settings from your VM resource using the Azure portal. When you deploy using default settings, the settings are based on the virtual network to which Bastion will be deployed.
When you create Azure Bastion using default settings, the settings are configure
1. Bastion begins deploying. This can take around 10 minutes to complete.
+ > [!NOTE]
+ > [!INCLUDE [Bastion failed subnet](../../includes/bastion-failed-subnet.md)]
+ >
+ ## <a name="connect"></a>Connect to a VM When the Bastion deployment is complete, the screen changes to the **Connect** page.
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
description: Learn how to deploy Bastion using settings that you specify - Azure
Previously updated : 06/08/2023 Last updated : 10/03/2023
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. On the **Subnets** page, select **+Subnet** to open the **Add subnet** page.
-1. On the **Add subnet page**, create the 'AzureBastionSubnet' subnet using the following values. Leave the other values as default.
+1. On the **Add subnet** page, create the 'AzureBastionSubnet' subnet using the following values. Leave the other values as default.
* The subnet name must be **AzureBastionSubnet**. * The subnet must be at least **/26 or larger** (/26, /25, /24 etc.) to accommodate features available with the Standard SKU.
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
description: Walkthrough of how Azure Cloud Shell persists files. ms.contributor: jahelmic Previously updated : 09/29/2023 Last updated : 10/03/2023 tags: azure-resource-manager
filter for locally redundant storage (LRS), geo-redundant storage (GRS), and zon
## Securing storage access For security, each user should create their own storage account. For Azure role-based access control
-(Azure RBAC), users must have contributor access or above at the storage account level.
+(Azure RBAC), users must have contributor access or higher at the storage account level.
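
If you need to grant that access explicitly, a minimal sketch using the Azure CLI (principal, subscription, and storage account names are placeholders) is:

```azurecli
# Assign the Contributor role scoped to the storage account used by Cloud Shell.
az role assignment create \
    --assignee <user-or-group-object-id> \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```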
Cloud Shell uses an Azure fileshare in a storage account, inside a specified subscription. Due to inherited permissions, users with sufficient access rights to the subscription can access all the
shm 65536 0 65536
You can update the fileshare that's associated with Cloud Shell using the `clouddrive mount` command.
-If you mount an existing fileshare, the storage accounts must be located in your select Cloud Shell
-region. Retrieve the location by running `env` and checking the `ACC_LOCATION`.
-
-#### The `clouddrive mount` command
- > [!NOTE] > If you're mounting a new fileshare, a new user image is created for your `$HOME` directory. Your > previous `$HOME` image is kept in your previous fileshare.
file storage GUI when you refresh the blade.
![Screenshot of the download dialog box in Cloud Shell.][10] You can only download files located under your `$HOME` folder.
-1. Click the **Download** button.
+1. Select the **Download** button.
### Upload files
You should now see the files that are accessible in your `clouddrive` directory
### Upload files in Azure Cloud Shell 1. In an Azure Cloud Shell session, select the **Upload/Download files** icon and select the
- **Upload** option.
-1. Your browser will open a file dialog. Select the file you want to upload then click the **Open**
- button.
+ **Upload** option. Your browser opens a file dialog box.
+1. Choose the file you want to upload then select the **Open** button.
The file is uploaded to the root of your `$HOME` folder. You can move the file after it's uploaded.
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
If Mutual TLS (MTLS) support is enabled for the direct routing connection on the
The connection points for Communication Services direct routing are the following three FQDNs: -- **sip.pstnhub.microsoft.com - Global FQDN - must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
+- **sip.pstnhub.microsoft.com** - Global FQDN - must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
- **sip2.pstnhub.microsoft.com** - Secondary FQDN - geographically maps to the second priority region. - **sip3.pstnhub.microsoft.com** - Tertiary FQDN - geographically maps to the third priority region.
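
To see which regional IP address is returned for the primary FQDN from your SBC's network, you can run a quick lookup (a simple illustration, not a required configuration step):

```console
# Resolve the global direct routing FQDN; the answer reflects the Azure datacenter assigned to the SBC.
nslookup sip.pstnhub.microsoft.com
```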
communication-services Data Channel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/data-channel.md
These measures are in place to prevent flooding when a significant number of par
## Next steps For more information, see the following articles: -- Learn about [QuickStart - Add messaging to your calling app](../../quickstarts/voice-video-calling/get-started-data-channel.md)
+- Learn about [QuickStart - Add data channel to your calling app](../../quickstarts/voice-video-calling/get-started-data-channel.md)
- Learn more about [Calling SDK capabilities](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Network Security Groups (NSGs) needed to configure virtual networks closely rese
You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container Apps environment at the subscription level.
-In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. Learn more in the [networking concepts document](./networking.md#user-defined-routes-udr).
+In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. When using an external workload profiles environment, inbound traffic to Container Apps that use external ingress routes through the public IP that exists in the [managed resource group](./networking.md#workload-profiles-environment-1) rather than through your subnet. This means that locking down inbound traffic via NSG or Firewall on an external workload profiles environment is not supported. For more information, see [Networking in Azure Container Apps environments](./networking.md#user-defined-routes-udr).
In the Consumption only environment, custom user-defined routes (UDRs) and ExpressRoutes aren't supported.
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Different environment types have different subnet requirements:
- Your subnet must be delegated to `Microsoft.App/environments`.
+- When using an external environment with external ingress, inbound traffic routes through the infrastructure's public IP rather than through your subnet.
+ - Container Apps automatically reserves 11 IP addresses for integration with the subnet. When your apps are running in a workload profiles environment, the number of IP addresses required for infrastructure integration doesn't vary based on the scale demands of the environment. More IP addresses are allocated depending on your environment's workload profile: - When you're using the [Dedicated workload profile](workload-profiles-overview.md#profile-types) for your container app, each node has one IP address assigned.
container-registry Container Registry Tasks Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md
Scheduling a task is useful for scenarios like the following:
First, populate the following shell environment variable with a value appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate the environment variable, you must manually replace each value wherever it appears in the example commands.
-[![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
+[![Embed launch](./media/launch-cloud-shell/launch-cloud-shell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
```console ACR_NAME=<registry-name> # The name of your Azure container registry
cosmos-db Custom Partitioning Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md
Title: Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)
+ Title: Custom partitioning in Azure Synapse Link for Azure Cosmos DB
description: Custom partitioning enables you to partition the analytical store data on fields that are commonly used as filters in analytical queries resulting in improved query performance.
-# Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)
+# Custom partitioning in Azure Synapse Link for Azure Cosmos DB
[!INCLUDE[NoSQL](includes/appliesto-nosql.md)] Custom partitioning enables you to partition analytical store data, on fields that are commonly used as filters in analytical queries, resulting in improved query performance. In this article, you'll learn how to partition your data in Azure Cosmos DB analytical store using keys that are critical for your analytical workloads. It also explains how to take advantage of the improved query performance with partition pruning. You'll also learn how the partitioned store helps to improve the query performance when your workloads have a significant number of updates or deletes.
-> [!IMPORTANT]
-> Custom partitioning feature is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- > [!NOTE]
-> Azure Cosmos DB accounts should have [Azure Synapse Link](synapse-link.md) enabled to take advantage of custom partitioning.
+> Azure Cosmos DB accounts and containers should have [Azure Synapse Link](synapse-link.md) enabled to take advantage of custom partitioning.
## How does it work?
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/diagnostic-queries.md
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Resource-specific](#tab/resource-specific) ```Kusto
- let topRequestsByRUcharge = CDBDataPlaneRequests
- | where TimeGenerated > ago(24h)
- | project RequestCharge , TimeGenerated, ActivityId;
CDBGremlinRequests | project PIICommandText, ActivityId, DatabaseName , CollectionName | join kind=inner topRequestsByRUcharge on ActivityId
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto
- let topRequestsByRUcharge = AzureDiagnostics
- | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
- | project requestCharge_s , TimeGenerated, activityId_g;
AzureDiagnostics | where Category == "GremlinRequests" | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Resource-specific](#tab/resource-specific) ```Kusto
- let throttledRequests = CDBDataPlaneRequests
- | where StatusCode == "429"
- | project OperationName , TimeGenerated, ActivityId;
CDBGremlinRequests | project PIICommandText, ActivityId, DatabaseName , CollectionName | join kind=inner throttledRequests on ActivityId
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto
- let throttledRequests = AzureDiagnostics
- | where Category == "DataPlaneRequests"
- | where statusCode_s == "429"
- | project OperationName , TimeGenerated, activityId_g;
AzureDiagnostics | where Category == "GremlinRequests" | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Resource-specific](#tab/resource-specific) ```Kusto
- let operationsbyUserAgent = CDBDataPlaneRequests
- | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
CDBGremlinRequests //specify collection and database //| where DatabaseName == "DB NAME" and CollectionName == "COLLECTIONNAME"
Common queries are shown in the resource-specific and Azure Diagnostics tables.
#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto
- let operationsbyUserAgent = AzureDiagnostics
- | where Category=="DataPlaneRequests"
- | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
AzureDiagnostics | where Category == "GremlinRequests" //| where databasename_s == "DB NAME" and collectioname_s == "COLLECTIONNAME"
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
principalId='<aadPrincipalId>'
az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId ```
-### Using Azure Resource Manager templates
+### Using Bicep/Azure Resource Manager templates
+
+For a built-in assignment using a Bicep template:
+
+```bicep
+resource sqlRoleAssignment 'Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments@2023-04-15' = {
+  name: guid(<roleDefinitionId>, <aadPrincipalId>, <databaseAccountResourceId>)
+  parent: databaseAccount
+  properties: {
+    principalId: <aadPrincipalId>
+    roleDefinitionId: '${subscription().id}/resourceGroups/<databaseAccountResourceGroup>/providers/Microsoft.DocumentDB/databaseAccounts/<myCosmosAccount>/sqlRoleDefinitions/<roleDefinitionId>'
+    scope: <databaseAccountResourceId>
+  }
+}
+```
For a reference and examples of using Azure Resource Manager templates to create role assignments, see [``Microsoft.DocumentDB`` ``databaseAccounts/sqlRoleAssignments``](/azure/templates/microsoft.documentdb/2021-10-15/databaseaccounts/sqlroleassignments).
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Once published, the ``DefaultAzureCredential`` class will use credentials from t
## Next steps -- [Certificate-based authentication with Azure Cosmos DB and Azure Active Directory](certificate-based-authentication.md) - [Secure Azure Cosmos DB keys using Azure Key Vault](store-credentials-key-vault.md) - [Security baseline for Azure Cosmos DB](security-baseline.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
To perform a vector search, use the `$search` aggregation pipeline stage in a Mo
"vector": <vector_to_search>, "path": "<path_to_property>", "k": <num_results_to_return>
- }
- ...
+ },
+ "returnStoredSource": True }},
+ {
+ "$project": { "<custom_name_for_similarity_score>": {
+ "$meta": "searchScore" },
+ "document" : "$$ROOT"
+ }
} } ```
+To retrieve the similarity score (`searchScore`) along with the documents found by the vector search, use the `$project` operator to include `searchScore` and rename it as `<custom_name_for_similarity_score>` in the results. The document itself is also projected as a nested object. Note that the similarity score is calculated using the metric defined in the vector index.
### Query a vector index by using $search
db.exampleCollection.aggregate([
"path": "vectorContent", "k": 2 },
- "returnStoredSource": true
- }
+ "returnStoredSource": true }},
+ {
+ "$project": { "similarityScore": {
+ "$meta": "searchScore" },
+ "document" : "$$ROOT"
+ }
} ]); ```
In this example, a vector search is performed by using `queryVector` as an input
```javascript [ {
- _id: ObjectId("645acb54413be5502badff94"),
- name: 'Eugenia Lopez',
- bio: 'Eugenia is the CEO of AdvenureWorks.',
- vectorContent: [ 0.51, 0.12, 0.23 ]
+ similarityScore: 0.9465376,
+ document: {
+ _id: ObjectId("645acb54413be5502badff94"),
+ name: 'Eugenia Lopez',
+ bio: 'Eugenia is the CEO of AdvenureWorks.',
+ vectorContent: [ 0.51, 0.12, 0.23 ]
+ }
}, {
- _id: ObjectId("645acb54413be5502badff97"),
- name: 'Rory Nguyen',
- bio: 'Rory Nguyen is the founder of AdventureWorks and the president of the Our Planet initiative.',
- vectorContent: [ 0.91, 0.76, 0.83 ]
+ similarityScore: 0.9006955,
+ document: {
+ _id: ObjectId("645acb54413be5502badff97"),
+ name: 'Rory Nguyen',
+ bio: 'Rory Nguyen is the founder of AdventureWorks and the president of the Our Planet initiative.',
+ vectorContent: [ 0.91, 0.76, 0.83 ]
+ }
} ] ```
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Here, we walk through the process of creating diagnostic settings for your accou
| Category | API | Definition | Key Properties | | | | | | | **DataPlaneRequests** | All APIs | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- | **MongoRequests** | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- | **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- | **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+ | **MongoRequests** | API for MongoDB | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
+ | **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
| **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
- | **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
+ | **PartitionKeyRUConsumption** | API for NoSQL or API for Apache Gremlin | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
| **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` | | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
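
As a hedged example, the following Azure CLI sketch enables one category from this table and sends it to a Log Analytics workspace using resource-specific tables (the resource IDs and setting name are placeholders; adjust the category list to match your API):

```azurecli
# Create a diagnostic setting for an Azure Cosmos DB account with a single log category enabled.
az monitor diagnostic-settings create \
    --name cosmos-diagnostics \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
    --workspace "<log-analytics-workspace-resource-id>" \
    --export-to-resource-specific true \
    --logs '[{"category":"QueryRuntimeStatistics","enabled":true}]'
```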
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
- Title: Certificate-based authentication with Azure Cosmos DB and Active Directory
-description: Learn how to configure an Azure AD identity for certificate-based authentication to access keys from Azure Cosmos DB.
---- Previously updated : 06/11/2019-----
-# Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
-
-Certificate-based authentication enables your client application to be authenticated by using Azure Active Directory (Azure AD) with a client certificate. You can perform certificate-based authentication on a machine where you need an identity, such as an on-premises machine or virtual machine in Azure. Your application can then read Azure Cosmos DB keys without having the keys directly in the application. This article describes how to create a sample Azure AD application, configure it for certificate-based authentication, sign in to Azure using the new application identity, and then retrieve the keys from your Azure Cosmos DB account. This article uses Azure PowerShell to set up the identities and provides a C# sample app that authenticates and accesses keys from your Azure Cosmos DB account.
-
-## Prerequisites
-
-* Install the [latest version](/powershell/azure/install-azure-powershell) of Azure PowerShell.
-
-* If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-
-## Register an app in Azure AD
-
-In this step, you will register a sample web application in your Azure AD account. This application is later used to read the keys from your Azure Cosmos DB account. Use the following steps to register an application:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and select **New registration**.
-
- :::image type="content" source="./media/certificate-based-authentication/new-app-registration.png" alt-text="New application registration in Active Directory":::
-
-1. Fill the **Register an application** form with the following details:
-
- * **Name** – Provide a name for your application. It can be any name, such as "sampleApp".
- * **Supported account types** – Choose **Accounts in this organizational directory only (Default Directory)** to allow resources in your current directory to access this application.
- * **Redirect URL** – Choose an application of type **Web** and provide a URL where your application is hosted. It can be any URL. For this example, you can provide a test URL such as `https://sampleApp.com`; it's okay even if the app doesn't exist.
-
- :::image type="content" source="./media/certificate-based-authentication/register-sample-web-app.png" alt-text="Registering a sample web application":::
-
-1. Select **Register** after you fill the form.
-
-1. After the app is registered, make a note of the **Application (client) ID** and **Object ID**; you will use these details in the next steps.
-
- :::image type="content" source="./media/certificate-based-authentication/get-app-object-ids.png" alt-text="Get the application and object IDs":::
-
-## Install the AzureAD module
-
-In this step, you will install the Azure AD PowerShell module. This module is required to get the ID of the application you registered in the previous step and associate a self-signed certificate to that application.
-
-1. Open Windows PowerShell ISE with administrator rights. If you haven't already done so, install the Az PowerShell module and connect to your subscription. If you have multiple subscriptions, you can set the context of the current subscription as shown in the following commands:
-
- ```powershell
- Install-Module -Name Az -AllowClobber
- Connect-AzAccount
-
- Get-AzSubscription
- $context = Get-AzSubscription -SubscriptionId <Your_Subscription_ID>
- Set-AzContext $context
- ```
-
-1. Install and import the [AzureAD](/powershell/module/azuread/) module
-
- ```powershell
- Install-Module AzureAD
- Import-Module AzureAD
- # On PowerShell 7.x, use the -UseWindowsPowerShell parameter
- # Import-Module AzureAD -UseWindowsPowerShell
- ```
-
-## Sign in to your Azure AD
-
-Sign in to the Azure AD tenant where you registered the application. Use the Connect-AzureAD command to sign in to your account, and enter your Azure account credentials in the pop-up window.
-
-```powershell
-Connect-AzureAD
-```
-
-## Create a self-signed certificate
-
-Open another instance of Windows PowerShell ISE, and run the following commands to create a self-signed certificate and read the key associated with the certificate:
-
-```powershell
-$cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\CurrentUser\My" -Subject "CN=sampleAppCert" -KeySpec KeyExchange
-$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
-```
-
-## Create the certificate-based credential
-
-Next, run the following commands to get the object ID of your application and create the certificate-based credential. In this example, we set the certificate to expire after a year; you can set it to any required end date.
-
-```powershell
-$application = Get-AzureADApplication -ObjectId <Object_ID_of_Your_Application>
-
-New-AzureADApplicationKeyCredential -ObjectId $application.ObjectId -CustomKeyIdentifier "Key1" -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate "2020-01-01"
-```
-
-The above command returns the details of the new key credential.
--
-## Configure your Azure Cosmos DB account to use the new identity
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to your Azure Cosmos DB account.
-
-1. Assign the Contributor role to the sample app you created in the previous section.
-
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## Register your certificate with Azure AD
-
-You can associate the certificate-based credential with the client application in Azure AD from the Azure portal. To associate the credential, you must upload the certificate file with the following steps:
-
-In the Azure app registration for the client application:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Open the Azure **Active Directory** pane, go to the **App registrations** pane, and open the sample app you created in the previous step.
-
-1. Select **Certificates & secrets** and then **Upload certificate**. Browse to the certificate file you created earlier and upload it.
-
-1. Select **Add**. After the certificate is uploaded, the thumbprint, start date, and expiration values are displayed.
-
-## Access the keys from PowerShell
-
-In this step, you will sign in to Azure by using the application and the certificate you created, and then access your Azure Cosmos DB account's keys.
-
-1. First, clear the Azure account credentials that you used to sign in to your account. You can clear credentials by using the following command:
-
- ```powershell
- Disconnect-AzAccount -Username <Your_Azure_account_email_id>
- ```
-
-1. Next, validate that you can sign in to Azure by using the application's credentials and access the Azure Cosmos DB keys:
-
- ```powershell
- Login-AzAccount -ApplicationId <Your_Application_ID> -CertificateThumbprint $cert.Thumbprint -ServicePrincipal -Tenant <Tenant_ID_of_your_application>
-
- Get-AzCosmosDBAccountKey `
- -ResourceGroupName "<Resource_Group_Name_of_your_Azure_Cosmos_account>" `
- -Name "<Your_Azure_Cosmos_Account_Name>" `
- -Type "Keys"
- ```
-
-The previous command displays the primary and secondary keys of your Azure Cosmos DB account. You can view the Activity log of your Azure Cosmos DB account to validate that the get keys request succeeded and that the event was initiated by the "sampleApp" application.
--
-## Access the keys from a C# application
-
-You can also validate this scenario by accessing keys from a C# application. The following C# console application accesses Azure Cosmos DB keys by using the app registered in Active Directory. Make sure to update the tenant ID, client ID, certificate name, resource group name, subscription ID, and Azure Cosmos DB account name before you run the code.
-
-```csharp
-using System;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-using System.Linq;
-using System.Net.Http;
-using System.Security.Cryptography.X509Certificates;
-using System.Threading;
-using System.Threading.Tasks;
-
-namespace TodoListDaemonWithCert
-{
- class Program
- {
- private static string aadInstance = "https://login.windows.net/";
- private static string tenantId = "<Your_Tenant_ID>";
- private static string clientId = "<Your_Client_ID>";
- private static string certName = "<Your_Certificate_Name>";
-
- private static int errorCode = 0;
- static int Main(string[] args)
- {
- MainAsync().Wait();
- Console.ReadKey();
-
- return 0;
- }
-
- static async Task MainAsync()
- {
- string authContextURL = aadInstance + tenantId;
- AuthenticationContext authContext = new AuthenticationContext(authContextURL);
- X509Certificate2 cert = ReadCertificateFromStore(certName);
-
- ClientAssertionCertificate credential = new ClientAssertionCertificate(clientId, cert);
- AuthenticationResult result = await authContext.AcquireTokenAsync("https://management.azure.com/", credential);
- if (result == null)
- {
- throw new InvalidOperationException("Failed to obtain the JWT token");
- }
-
- string token = result.AccessToken;
- string subscriptionId = "<Your_Subscription_ID>";
- string rgName = "<ResourceGroup_of_your_Cosmos_account>";
- string accountName = "<Your_Cosmos_account_name>";
- string cosmosDBRestCall = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2015-04-08";
-
- Uri restCall = new Uri(cosmosDBRestCall);
- HttpClient httpClient = new HttpClient();
- httpClient.DefaultRequestHeaders.Remove("Authorization");
- httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
- HttpResponseMessage response = await httpClient.PostAsync(restCall, null);
-
- Console.WriteLine("Got result {0} and keys {1}", response.StatusCode.ToString(), response.Content.ReadAsStringAsync().Result);
- }
-
- /// <summary>
- /// Reads the certificate
- /// </summary>
- private static X509Certificate2 ReadCertificateFromStore(string certName)
- {
- X509Certificate2 cert = null;
- X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
- store.Open(OpenFlags.ReadOnly);
- X509Certificate2Collection certCollection = store.Certificates;
-
- // Find unexpired certificates.
- X509Certificate2Collection currentCerts = certCollection.Find(X509FindType.FindByTimeValid, DateTime.Now, false);
-
- // From the collection of unexpired certificates, find the ones with the correct name.
- X509Certificate2Collection signingCert = currentCerts.Find(X509FindType.FindBySubjectName, certName, false);
-
- // Return the most recent certificate in the collection that has the right name and is currently valid.
- cert = signingCert.OfType<X509Certificate2>().OrderByDescending(c => c.NotBefore).FirstOrDefault();
- store.Close();
- return cert;
- }
- }
-}
-```
-
-This application outputs the primary and secondary keys of your Azure Cosmos DB account.
--
-Similar to the previous section, you can view the Activity log of your Azure Cosmos DB account to validate that the get keys request event is initiated by the "sampleApp" application.
--
-## Next steps
-
-* [Secure Azure Cosmos DB keys using Azure Key Vault](../store-credentials-key-vault.md)
-
-* [Security baseline for Azure Cosmos DB](../security-baseline.md)
cosmos-db Concepts Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-authentication.md
Previously updated : 08/02/2023 Last updated : 09/19/2023 # Azure Active Directory and PostgreSQL authentication with Azure Cosmos DB for PostgreSQL
Last updated 08/02/2023
> for production workloads. Certain features might not be supported or might have constrained > capabilities. >
-> [Contact us](mailto:askcosmosdb4postgres@microsoft.com) if you're interested in participating in Azure Active Directory authentication
-> for Azure Cosmos DB for PostgreSQL preview.
->
> You can see a complete list of other new features in [preview features](product-updates.md#features-in-preview). Azure Cosmos DB for PostgreSQL supports PostgreSQL authentication and integration with Azure Active Directory (Azure AD). Each Azure Cosmos DB for PostgreSQL cluster is created with native PostgreSQL authentication enabled and one built-in PostgreSQL role named `citus`. You can add more native PostgreSQL roles after cluster provisioning is completed.
Once you've authenticated against the Active Directory, you then retrieve a toke
## Next steps -- To learn how to configure authentication for Azure Cosmos DB for PostgreSQL clusters, see [Use Azure Active Directory and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL](./how-to-configure-authentication.md).-- To set up private network access to the cluster nodes, see [Manage private access](./howto-private-access.md).-- To set up public network access to the cluster nodes, see [Manage public access](./howto-manage-firewall-using-portal.md).
+- Check out [Azure AD limits and limitations in Azure Cosmos DB for PostgreSQL](./reference-limits.md#azure-active-directory-authentication)
+- [Learn how to configure authentication for Azure Cosmos DB for PostgreSQL clusters](./how-to-configure-authentication.md)
+- To set up private network access to the cluster nodes, see [Manage private access](./howto-private-access.md)
+- To set up public network access to the cluster nodes, see [Manage public access](./howto-manage-firewall-using-portal.md)
cosmos-db Concepts Row Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-row-level-security.md
Previously updated : 01/30/2023 Last updated : 10/02/2023 # Row-level security in Azure Cosmos DB for PostgreSQL
security policies can compare the role name to values in the `tenant_id`
distribution column to decide whether to allow access. Here's how to apply the approach on a simplified events table distributed by
-`tenant_id`. First [create the roles](howto-create-users.md) `tenant1` and
+`tenant_id`. First [create the roles](./how-to-configure-authentication.md#configure-native-postgresql-authentication) `tenant1` and
`tenant2`. Then run the following SQL commands as the `citus` administrator user:
ERROR: new row violates row-level security policy for table "events_102055"
## Next steps
-Learn how to [create roles](howto-create-users.md) in a
-cluster.
+- Learn how to [create roles](./how-to-configure-authentication.md#configure-native-postgresql-authentication) in a cluster.
+- Check out [security concepts in Azure Cosmos DB for PostgreSQL](./concepts-security-overview.md)
cosmos-db How To Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-configure-authentication.md
Title: Use Azure Active Directory and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL
-description: Learn how to set up Azure Active Directory (Azure AD) and add native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL.
+description: Learn how to set up Azure Active Directory (Azure AD) and add native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL
Previously updated : 08/01/2023 Last updated : 09/19/2023 # Use Azure Active Directory and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL
Last updated 08/01/2023
> for production workloads. Certain features might not be supported or might have constrained > capabilities. >
-> [Contact us](mailto:askcosmosdb4postgres@microsoft.com) if you're interested in participating in Azure Active Directory authentication
-> for Azure Cosmos DB for PostgreSQL preview.
->
> You can see a complete list of other new features in [preview features](product-updates.md#features-in-preview). In this article, you configure authentication methods for Azure Cosmos DB for PostgreSQL. You manage Azure Active Directory (Azure AD) admin users and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL. You also learn how to use an Azure AD token with Azure Cosmos DB for PostgreSQL.
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "user@tenant.onmicrosoft.com";
## Next steps -- Learn about [authentication in Azure Cosmos DB for PostgreSQL](./concepts-authentication.md).-- Review [Azure Active Directory fundamentals](./../../active-directory/fundamentals/active-directory-whatis.md).-- [Learn more about SQL GRANT in PostgreSQL](https://www.postgresql.org/docs/current/sql-grant.html).
+- Learn about [authentication in Azure Cosmos DB for PostgreSQL](./concepts-authentication.md)
+- Check out [Azure AD limits and limitations in Azure Cosmos DB for PostgreSQL](./reference-limits.md#azure-active-directory-authentication)
+- Review [Azure Active Directory fundamentals](./../../active-directory/fundamentals/active-directory-whatis.md)
+- [Learn more about SQL GRANT in PostgreSQL](https://www.postgresql.org/docs/current/sql-grant.html)
cosmos-db Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md
Previously updated : 09/17/2023 Last updated : 10/02/2023 # Backup and point-in-time restore of a cluster in Azure Cosmos DB for PostgreSQL
back up and running:
and client applications to the new cluster. * Ensure appropriate [networking settings for private or public access](./concepts-security-overview.md#network-security) are in place for users to connect. These settings aren't copied from the original cluster.
-* Ensure appropriate [logins](./howto-create-users.md) and database level permissions are in place.
+* Ensure appropriate [logins](./how-to-configure-authentication.md#configure-native-postgresql-authentication) and database level permissions are in place.
* Configure [alerts](./howto-alert-on-metric.md#suggested-alerts), as appropriate. ## Next steps
cosmos-db Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-connect-psql.md
Previously updated : 06/07/2023 Last updated : 10/02/2023 # Connect to a cluster with psql - Azure Cosmos DB for PostgreSQL
Your cluster has a default database named `citus`. To connect to the database, y
:::image type="content" source="media/quickstart-connect-psql/get-connection-string.png" alt-text="Screenshot that shows copying the psql connection string.":::
- The **psql** string is of the form `psql "host=c-<cluster>.<uniqueID>.postgres.cosmos.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`. Notice that the host name starts with a `c.`, for example `c-mycluster.12345678901234.postgres.cosmos.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` is `citus` and can be changed only at cluster provisioning time. The `user` can be any valid [Postgres role](./howto-create-users.md) on your cluster.
+ The **psql** string is of the form `psql "host=c-<cluster>.<uniqueID>.postgres.cosmos.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`. Notice that the host name starts with a `c-` prefix, for example `c-mycluster.12345678901234.postgres.cosmos.azure.com`. This prefix indicates the coordinator node of the cluster. The default `dbname` is `citus` and can be changed only at cluster provisioning time. The `user` can be any valid [Postgres role](./how-to-configure-authentication.md#configure-native-postgresql-authentication) on your cluster.
1. Open Azure Cloud Shell by selecting the **Cloud Shell** icon on the top menu bar.
cosmos-db Store Credentials Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/store-credentials-key-vault.md
Last updated 11/07/2022
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] > [!IMPORTANT]
-> It's recommended to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [certificate-based authentication](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the Azure Key vault solution in this article.
+> The recommended way to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If neither the managed identity solution nor a certificate-based solution meets your needs, please use the Azure Key Vault solution in this article.
If you're using Azure Cosmos DB as your database, you connect to databases, container, and items by using an SDK, the API endpoint, and either the primary or secondary key.
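One way to apply the Key Vault approach is to copy the account key into a Key Vault secret once and have your application read the secret at runtime instead of holding the key in configuration. The following Azure PowerShell snippet is a minimal sketch only; it assumes the Az.CosmosDB and Az.KeyVault modules, placeholder resource names, and that the signed-in identity can list account keys and set secrets.

```powershell
# Minimal sketch (assumptions noted above): copy the Cosmos DB primary key into a Key Vault secret.
$keys = Get-AzCosmosDBAccountKey `
    -ResourceGroupName "<resource-group>" `
    -Name "<cosmos-account-name>" `
    -Type "Keys"

# The property name is assumed here; check the returned object for your Az.CosmosDB version.
$secretValue = ConvertTo-SecureString -String $keys.PrimaryMasterKey -AsPlainText -Force

Set-AzKeyVaultSecret `
    -VaultName "<key-vault-name>" `
    -Name "cosmos-primary-key" `
    -SecretValue $secretValue
```

Your application then retrieves the secret from Key Vault at runtime rather than storing the key alongside the code.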
dev-box How To Troubleshoot Repair Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-troubleshoot-repair-dev-box.md
+
+ Title: Troubleshoot and Repair Dev Box RDP Connectivity Issues
+description: Having problems connecting to your dev box remotely? Learn how to troubleshoot and resolve connectivity issues to your dev box with developer portal tools.
++++ Last updated : 09/25/2023 +
+#CustomerIntent: As a dev box user, I want to be able to troubleshoot and repair connectivity issues with my dev box so that I don't lose development time.
++
+# Troubleshoot and resolve dev box remote desktop connectivity issues
+
+In this article, you learn how to troubleshoot and resolve remote desktop connectivity (RDC) issues with your dev box. Because RDC issues with your dev box can be time-consuming to resolve manually, use the *Troubleshoot & repair* tool in the developer portal to diagnose and repair some common dev box connectivity issues.
++
+When you run the *Troubleshoot & repair* tool, your dev box and its backend services in the Azure infrastructure are scanned for issues. If an issue is detected, *Troubleshoot & repair* fixes the issue so you can connect to your dev box.
+
+## Prerequisites
+
+- Access to the developer portal.
+- The dev box you want to troubleshoot must be running.
+
+## Run Troubleshoot and repair
+
+If you're unable to connect to your dev box using an RDP client, use the *Troubleshoot & repair* tool.
+
+The *Troubleshoot & repair* process takes between 10 and 40 minutes to complete. During this time, you can't use your dev box. The tool scans a list of critical components that relate to RDP connectivity, including but not limited to:
+- Domain join check
+- SxS stack listener readiness
+- URL accessibility check
+- VM power status check
+- Azure resource availability check
+- VM extension check
+- Windows Guest OS readiness
+
+> [!WARNING]
+> Running *Troubleshoot & repair* may effectively restart your dev box. Any unsaved data on your dev box will be lost.
+
+To run *Troubleshoot & repair* on your dev box, follow these steps:
+
+1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
+
+1. Check that the dev box you want to troubleshoot is running.
+
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-running-tile.png" alt-text="Screenshot showing the dev box tile with the status Running.":::
+
+1. If the dev box isn't running, start it, and check whether you can connect to it with RDP.
+
+1. If your dev box is running and you still can't connect to it with RDP, on the Actions menu, select **Troubleshoot & repair**.
+
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-actions-troubleshoot-repair.png" alt-text="Screenshot showing the Troubleshoot and repair option for a dev box.":::
+
+1. In the Troubleshoot and repair connectivity message box, select *Yes, I want to troubleshoot this dev box*, and then select **Troubleshoot**.
+
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshooting-confirm.png" alt-text="Screenshot showing the Troubleshoot and repair connectivity confirmation message with Yes, I want to troubleshoot this dev box highlighted.":::
+
+ While waiting for the process to complete, you can leave your dev portal as is, or close it and come back. The process continues in the background.
+
+1. After the RDP connectivity issue is resolved, you can connect to your dev box again through [a browser](/azure/dev-box/quickstart-create-dev-box#connect-to-a-dev-box) or [a Remote Desktop client](/azure/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app?tabs=windows).
+
+## Troubleshoot & repair results
+
+When the *Troubleshoot & repair* process finishes, it lists the results of the checks it ran:
+
+|Outcome |Description |
+|||
+|An issue was resolved. |An issue was detected and fixed. You can try to connect to your dev box again. |
+|No issue detected. |None of the checks discovered an issue with the dev box. |
+|An issue was detected but could not be fixed automatically. |There is an issue with the dev box, but this action couldn't fix it. You can select **view details** to see what the issue was and how to fix it manually. |
+
+## Related content
+
+ - [Tutorial: Use a Remote Desktop client to connect to a dev box](tutorial-connect-to-dev-box-with-remote-desktop-app.md)
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
Authorization: BEARER SlAV32hkKG...
``` > [!NOTE]
-> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
+> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
### Service Bus example Azure Service Bus supports the use of the following message properties when sending single messages.
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/delivery-retry.md
By default, Event Grid on Kubernetes delivers one event at a time to the subscri
[!INCLUDE [preview-feature-note.md](../includes/preview-feature-note.md)] > [!NOTE]
-> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update).
+> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update).
## Retry schedule
There are two configurations that determine retry policy. They are:
An event is dropped if either of the limits of the retry policy is reached. Configuration of these limits is done on a per-subscription basis. The following section describes each one in further detail. ### Configuring defaults per subscriber
-You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
+You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
The following example sets up a Web hook subscription with `maxNumberOfAttempts` to 3 and `eventTimeToLiveInMinutes` to 30 minutes.
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/event-handlers.md
# Event handlers destinations in Event Grid on Kubernetes An event handler is any system that exposes an endpoint and is the destination for events sent by Event Grid. An event handler receiving an event acts upon it and uses the event payload to execute some logic, which might lead to the occurrence of new events.
-The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create), [management SDK](../sdk-overview.md#management-sdks), or using direct HTTPs calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update) version.
+The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create), [management SDK](../sdk-overview.md#management-sdks), or by making direct HTTPS calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update) version.
In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-premises or anywhere that Event Grid can reach.
In addition to Webhooks, Event Grid on Kubernetes can send events to the followi
## Feature parity
-Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
+Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
-1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions).
+1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-preview/event-subscriptions).
2. [Azure Event Grid trigger for Azure Functions](../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=csharp%2Cconsole) isn't supported. You can use a WebHook destination type to deliver events to Azure Functions. 3. There's no [dead letter location](../manage-event-delivery.md#set-dead-letter-location) support. That means that you can't use ``properties.deadLetterDestination`` in your event subscription payload. 4. Azure Relay's Hybrid Connections as a destination isn't supported yet.
-5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
-6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they aren't available.
-7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
+5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
+6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they aren't available.
+7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
8. [Destination endpoint validation](../webhook-event-delivery.md#endpoint-validation-with-event-grid-events) isn't supported yet. ## Event filtering in event subscriptions
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/features.md
# Event Grid on Kubernetes with Azure Arc features
-Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/controlplane-version2023-06-01-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
+Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/controlplane-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
Although Event Grid on Kubernetes and Azure Event Grid share many features and the goal is to provide the same user experience, there are some differences given the unique requirements they seek to meet and the stage in which they are on their software lifecycle. For example, the only type of topic available in Event Grid on Kubernetes are Event Grid topics that sometimes are also referred as custom topics. Other types of topics are either not applicable or support for them isn't yet available. The main differences between the two editions of Event Grid are presented in the following table.
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Feature | Event Grid on Kubernetes | Azure Event Grid | |:--|:-:|:-:|
-| [Event Grid topics](/rest/api/eventgrid/controlplane-version2023-06-01-preview/topics) | ✔ | ✔ |
+| [Event Grid topics](/rest/api/eventgrid/controlplane-preview/topics) | ✔ | ✔ |
| [CNCF Cloud Events schema](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md) | ✔ | ✔ |
| Event Grid and custom schemas | ✘* | ✔ |
| Reliable delivery | ✔ | ✔ |
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Azure Relay's Hybrid Connections as a destination | ✘ | ✔ |
| [Advanced filtering](filter-events.md) | ✔*** | ✔ |
| [Webhook AuthN/AuthZ with Azure AD](../secure-webhook-delivery.md) | ✘ | ✔ |
-| [Event delivery with resource identity](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
+| [Event delivery with resource identity](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
| Same set of data plane SDKs | ✔ | ✔ |
| Same set of management SDKs | ✔ | ✔ |
| Same Event Grid CLI | ✔ | ✔ |
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/overview.md
Event Grid on Kubernetes supports various event-driven integration scenarios. Ho
"As an owner of a system deployed to a Kubernetes cluster, I want to communicate my system's state changes by publishing events and configuring routing of those events so that event handlers, under my control or otherwise, can process my system's events in a way they see fit."
-**Feature** that helps you realize above requirement: [Event Grid topics](/rest/api/eventgrid/controlplane-version2023-06-01-preview/topics).
+**Feature** that helps you realize above requirement: [Event Grid topics](/rest/api/eventgrid/controlplane-preview/topics).
### Event Grid on Kubernetes at a glance From the user perspective, Event Grid on Kubernetes is composed of the following resources in blue:
With Event Grid on Kubernetes, you can forward events to Azure for further proce
Event handler destinations can be any HTTPS or HTTP endpoint to which Event Grid can reach through the network, public or private, and has access (not protected with some authentication mechanism). You define event delivery destinations when you create an event subscription. For more information, see [event handlers](event-handlers.md). ## Features
-Event Grid on Kubernetes supports [Event Grid topics](/rest/api/eventgrid/controlplane-version2023-06-01-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
+Event Grid on Kubernetes supports [Event Grid topics](/rest/api/eventgrid/controlplane-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
Some of the capabilities you get with Azure Event Grid on Kubernetes are:
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
You have two options:
* [Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/eventgrid/resource-manager/Microsoft.EventGrid) * [ARM template](/azure/templates/microsoft.eventgrid/allversions) * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/main/schemas/2022-06-15/Microsoft.EventGrid.json)
- * [REST APIs](/rest/api/eventgrid/controlplane-version2023-06-01-preview/partner-namespaces)
+ * [REST APIs](/rest/api/eventgrid/controlplane-preview/partner-namespaces)
* [CLI extension](/cli/azure/eventgrid) ### SDKs
event-grid Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/sdk-overview.md
The management SDKs enable you to create, update, and delete Event Grid topics a
| SDK | Package | Reference documentation | Samples | | -- | - | -- | - |
-| REST API | | [REST reference](/rest/api/eventgrid/controlplane-version2023-06-01-preview/ca-certificates) | |
+| REST API | | [REST reference](/rest/api/eventgrid/controlplane-preview/ca-certificates) | |
| .NET | [Azure.ResourceManager.EventGrid](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/) | [.NET reference](/dotnet/api/overview/azure/resourcemanager.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.ResourceManager.EventGrid/samples) | | Java | [azure-resourcemanager-eventgrid](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-eventgrid/) | [Java reference](/java/api/overview/azure/resourcemanager-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-resourcemanager-eventgrid/src/samples) | | JavaScript | [@azure/arm-eventgrid](https://www.npmjs.com/package/@azure/arm-eventgrid) | [JavaScript reference](/javascript/api/overview/azure/arm-eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/arm-eventgrid) |
The data plane SDKs enable you to post events to topics by taking care of authen
| Programming language | Package | Reference documentation | Samples | | -- | - | - | -- |
-| REST API | | [REST reference](/rest/api/eventgrid/dataplanepreview-version2023-06-01/publish-cloud-events) |
+| REST API | | [REST reference](/rest/api/eventgrid/dataplane-preview/publish-cloud-events) |
| .NET | [Azure.Messaging.EventGrid](https://www.nuget.org/packages/Azure.Messaging.EventGrid/) | [.NET reference](/dotnet/api/overview/azure/messaging.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | |Java | [azure-messaging-eventgrid](https://central.sonatype.com/artifact/com.azure/azure-messaging-eventgrid/) | [Java reference](/java/api/overview/azure/messaging-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-messaging-eventgrid/src/samples/java) | | JavaScript | [@azure/eventgrid](https://www.npmjs.com/package/@azure/eventgrid) | [JavaScript reference](/javascript/api/overview/azure/eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid) |
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
System topics are visible as Azure resources and provide the following capabilit
## Lifecycle of system topics You can create a system topic in two ways: -- Create an [event subscription on an Azure resource as an extension resource](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update), which automatically creates a system topic with the name in the format: `<Azure resource name>-<GUID>`. The system topic created in this way is automatically deleted when the last event subscription for the topic is deleted.
+- Create an [event subscription on an Azure resource as an extension resource](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update), which automatically creates a system topic with the name in the format: `<Azure resource name>-<GUID>`. The system topic created in this way is automatically deleted when the last event subscription for the topic is deleted.
- Create a system topic for an Azure resource, and then create an event subscription for that system topic. When you use this method, you can specify a name for the system topic. The system topic isn't deleted automatically when the last event subscription is deleted. You need to manually delete it. When you use the Azure portal, you're always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
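To illustrate the first option, the following Azure PowerShell snippet is a minimal sketch only. It assumes the Az.EventGrid module (the parameter names shown follow the older 1.x cmdlet surface and may differ in newer module versions) and uses a placeholder storage account resource ID and webhook endpoint. Creating the subscription directly on the resource makes Event Grid create the backing system topic for you.

```powershell
# Minimal sketch (assumptions noted above): subscribing directly on a resource ID
# implicitly creates a system topic named "<Azure resource name>-<GUID>".
$storageId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

New-AzEventGridSubscription `
    -ResourceId $storageId `
    -EventSubscriptionName "blob-events-to-webhook" `
    -Endpoint "https://contoso.example.com/api/events"
```

Deleting the last event subscription created this way also removes the automatically created system topic, as described above.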
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-custom-route-alert.md
# Configure custom alerts to monitor advertised routes
-This article helps you use Azure Automation and Logic Apps to constantly monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks. Monitoring can help prevent hitting the 1000 routes limit](expressroute-faqs.md#how-many-prefixes-can-be-advertised-from-a-vnet-to-on-premises-on-expressroute-private-peering).
+This article helps you use Azure Automation and Logic Apps to constantly monitor the number of routes advertised from the ExpressRoute gateway to on-premises networks. Monitoring can help prevent hitting the 1000 [routes limit](expressroute-faqs.md#how-many-prefixes-can-be-advertised-from-a-virtual-network-to-on-premises-on-expressroute-private-peering).
**Azure Automation** allows you to automate the execution of a custom PowerShell script stored in a *runbook*. When using the configuration in this article, the runbook contains a PowerShell script that queries one or more ExpressRoute gateways. It collects a dataset containing the resource group, ExpressRoute gateway name, and number of network prefixes advertised on-premises.
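The core of such a runbook can be sketched in a few lines of Azure PowerShell. The following is a minimal sketch only, assuming the Az.Network module, placeholder gateway and resource group names, and that the Automation identity can read the gateway; the actual runbook described in this article queries one or more gateways and may gather the data differently.

```powershell
# Minimal sketch (assumptions noted above): count the prefixes a gateway advertises to its first BGP peer.
$resourceGroup = "<resource-group>"
$gatewayName   = "<expressroute-gateway-name>"

# Each BGP peer of the gateway receives the prefixes advertised toward on-premises.
$peers = Get-AzVirtualNetworkGatewayBgpPeerStatus -ResourceGroupName $resourceGroup -VirtualNetworkGatewayName $gatewayName

$advertised = Get-AzVirtualNetworkGatewayAdvertisedRoute -ResourceGroupName $resourceGroup `
    -VirtualNetworkGatewayName $gatewayName -Peer $peers[0].Neighbor

# Dataset entry in the shape described above: resource group, gateway name, advertised prefix count.
[pscustomobject]@{
    ResourceGroup        = $resourceGroup
    GatewayName          = $gatewayName
    AdvertisedRouteCount = ($advertised | Measure-Object).Count
}
```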
Verify that you have met the following criteria before beginning your configurat
* You have at least one ExpressRoute gateway in your deployment.
-* You have a basic understanding of [Run As accounts](../automation/manage-runas-account.md) in Azure Automation.
- * You are familiar with [Azure Logic Apps](../logic-apps/logic-apps-overview.md). * You are familiar with using Azure PowerShell. Azure PowerShell is required to collect the network prefixes in ExpressRoute gateway. For more information about Azure PowerShell in general, see the [Azure PowerShell documentation](/powershell/azure/).
Verify that you have met the following criteria before beginning your configurat
## <a name="accounts"></a>Create and configure accounts
-When you create an Automation account in the Azure portal, a [Run As](../automation/automation-security-overview.md#run-as-accounts) account is automatically created. This account takes following actions:
+When you create an Automation account in the Azure portal, a Run As account is automatically created. This account takes the following actions:
* Creates an Azure Active Directory (Azure AD) application with a self-signed certificate. The Run As account itself has a certificate that needs to be renewed by default every year.
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Last updated 11/07/2022
# What is Azure Firewall?
-Azure Firewall is a cloud-native and intelligent network firewall security service that provides the best of breed threat protection for your cloud workloads running in Azure. It's a fully stateful, firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection. To learn what's east-west and north-south traffic, see [East-west and north-south traffic](/azure/architecture/framework/security/design-network-flow#east-west-and-north-south-traffic).
+Azure Firewall is a cloud-native and intelligent network firewall security service that provides the best of breed threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection. To learn what's east-west and north-south traffic, see [East-west and north-south traffic](/azure/architecture/framework/security/design-network-flow#east-west-and-north-south-traffic).
Azure Firewall is offered in three SKUs: Standard, Premium, and Basic.
To learn about Firewall Standard features, see [Azure Firewall Standard features
## Azure Firewall Premium
- Azure Firewall Premium provides advanced capabilities include signature-based IDPS to allow rapid detection of attacks by looking for specific patterns. These patterns can include byte sequences in network traffic, or known malicious instruction sequences used by malware. There are more than 67,000 signatures in over 50 categories that are updated in real time to protect against new and emerging exploits. The exploit categories include malware, phishing, coin mining, and Trojan attacks.
+ Azure Firewall Premium provides advanced capabilities that include signature-based IDPS to allow rapid detection of attacks by looking for specific patterns. These patterns can include byte sequences in network traffic or known malicious instruction sequences used by malware. There are more than 67,000 signatures in over 50 categories that are updated in real time to protect against new and emerging exploits. The exploit categories include malware, phishing, coin mining, and Trojan attacks.
![Firewall Premium overview](media/overview/firewall-premium.png)
To learn about Firewall Premium features, see [Azure Firewall Premium features](
## Azure Firewall Basic
-Azure Firewall Basic is intended for small and medium size (SMB) customers to secure their Azure cloud
+Azure Firewall Basic is intended for small and medium size (SMB) customers to secure their Azure cloud.
environments. It provides the essential protection SMB customers need at an affordable price point. :::image type="content" source="media/overview/firewall-basic-diagram.png" alt-text="Diagram showing Firewall Basic.":::
-Azure Firewall Basic is similar to Firewall Standard, but has the following main limitations:
+Azure Firewall Basic is like Firewall Standard, but has the following main limitations:
- Supports Threat Intel *alert mode* only - Fixed scale unit to run the service on two virtual machine backend instances
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
Previously updated : 08/15/2022 Last updated : 10/02/2023 #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
# Quickstart: Create an Azure Front Door profile - Azure portal
-In this quickstart, you'll learn how to create an Azure Front Door profile using the Azure portal. You can create an Azure Front Door profile through *Quick create* with basic configurations or through the *Custom create* which allows a more advanced configuration.
+This quickstart guides you through the process of creating an Azure Front Door profile using the Azure portal. You have two options to create an Azure Front Door profile: Quick create and Custom create. The Quick create option allows you to configure the basic settings of your profile, while the Custom create option enables you to customize your profile with more advanced settings.
-With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origin. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname.
+In this quickstart, you use the Custom create option to create an Azure Front Door profile. You first deploy two App services as your origin servers. Then, you configure the Azure Front Door profile to route traffic to your App services based on certain rules. Finally, you test the connectivity to your App services by accessing the Azure Front Door frontend hostname.
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using the Azure portal." border="false":::
An Azure account with an active subscription. [Create an account for free](https
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door and CDN profiles*. Then select **Create**.
+1. To create a new resource for Front Door and CDN profiles, navigate to the home page or the Azure menu and select the **+ Create a resource** button. Then, enter *Front Door and CDN profiles* in the search box and select **Create**.
1. On the **Compare offerings** page, select **Quick create**. Then select **Continue to create a Front Door**. :::image type="content" source="./media/create-front-door-portal/front-door-quick-create.png" alt-text="Screenshot of compare offerings.":::
-1. On the **Create a Front Door profile** page, enter, or select the following settings.
+1. On the **Create a Front Door profile** page, provide the following information for the required settings.
:::image type="content" source="./media/create-front-door-portal/front-door-quick-create-2.png" alt-text="Screenshot of Front Door quick create page.":::
An Azure account with an active subscription. [Create an account for free](https
In the previous tutorial, you created an Azure Front Door profile through *Quick create*, which created your profile with basic configurations.
-You'll now create an Azure Front Door profile using *Custom create* and deploy two App services that your Azure Front Door profile will use as your origin.
+You create an Azure Front Door profile using *Custom create* and deploy two App services that your Azure Front Door profile uses as your origins.
### Create two Web App instances If you already have services to use as an origin, skip to [create a Front Door for your application](#create-a-front-door-for-your-application).
-In this example, we create two Web App instances that are deployed in two different Azure regions. Both web application instances will run in *Active/Active* mode, so either one can service incoming traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover.
+This example demonstrates how to create two Web App instances that are deployed in two different Azure regions. Both web application instances operate in Active/Active mode, which means that they can both handle incoming traffic. This configuration is different from an Active/Stand-By configuration, where one instance serves as a backup for the other.
-Use the following steps to create two Web Apps used in this example.
+To create the two Web Apps for this example, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the top left-hand side of the portal, select **+ Create a resource**. Then search for **Web App**. Select **Create** to begin configuring the first Web App.
+1. To start creating the first Web App, select the **+ Create a resource** button in the top left corner of the portal. Then, type *Web App* in the search box and select **Create** to proceed with the configuration.
-1. On the **Basics** tab of **Create Web App** page, enter, or select the following information.
+1. On the **Create Web App** page, fill in the required information on the **Basics** tab.
:::image type="content" source="./media/create-front-door-portal/create-web-app.png" alt-text="Quick create Azure Front Door premium tier in the Azure portal.":::
Use the following steps to create two Web Apps used in this example.
| **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. | | **Sku and size** | Select **Standard S1 100 total ACU, 1.75-GB memory**. |
-1. Select **Review + create**, review the summary, and then select **Create**. Deployment of the Web App can take up to a minute.
+1. To complete the creation of the Web App, select the **Review + create** button and verify the summary of the settings. Then, select the **Create** button to start the deployment process, which may take up to a minute.
-1. After you create the first Web App, create a second Web App. Use the same settings as above, except for the following settings:
+1. To create a second Web App, follow the same steps as for the first Web App, but make the following changes in the settings:
| Setting | Description | |--|--|
Use the following steps to create two Web Apps used in this example.
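If you prefer to script this setup, the following Azure CLI commands are a minimal sketch of deploying the two Web App instances in Central US and East US. The resource group, plan, and app names mirror this example and are assumptions; web app names must be globally unique, so adjust them before running.

```azurecli
# Create a resource group and an App Service plan in Central US (example names).
az group create --name myAppResourceGroup --location centralus
az appservice plan create --name myAppServicePlanCentralUS --resource-group myAppResourceGroup --sku S1

# Create the first web app on that plan.
az webapp create --name webapp-contoso-001 --resource-group myAppResourceGroup --plan myAppServicePlanCentralUS

# Repeat in East US for the second instance.
az group create --name myAppResourceGroup2 --location eastus
az appservice plan create --name myAppServicePlanEastUS --resource-group myAppResourceGroup2 --sku S1
az webapp create --name webapp-contoso-002 --resource-group myAppResourceGroup2 --plan myAppServicePlanEastUS
```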
### Create a Front Door for your application
-Configure Azure Front Door to direct user traffic based on lowest latency between the two Web Apps origins. You'll also secure your Azure Front Door with a Web Application Firewall (WAF) policy.
+In this step, you set up Azure Front Door to route user traffic to the nearest Web App origin based on the latency. You apply a Web Application Firewall (WAF) policy to protect your Azure Front Door from malicious attacks.
1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door and CDN profiles*. Then select **Create**.
-1. On the **Compare offerings** page, select **Custom create**. Then select **Continue to create a Front Door**.
+1. Select **+ Create a resource** from the home page or the Azure menu, search for *Front Door and CDN profiles*, and select **Create**.
+
+1. Select **Custom create** on the *Compare offerings* page, and then select **Continue to create a Front Door**.
1. On the **Basics** tab, enter or select the following information, and then select **Next: Secret**.
Configure Azure Front Door to direct user traffic based on lowest latency betwee
| **Name** | Enter a unique name in this subscription **Webapp-Contoso-AFD** | | **Tier** | Select **Premium**. |
-1. *Optional*: **Secrets**. If you plan to use managed certificates, this step is optional. If you have an existing Key Vault in Azure that you plan to use to Bring Your Own Certificate for a custom domain, then select **Add a certificate**. You can also add a certificate in the management experience after creation.
+1. *Optional*: **Secrets**. You can skip this step if you plan to use managed certificates. If you have an existing Key Vault in Azure that contains a certificate for a custom domain, you can select **Add a certificate**. You can also add a certificate later in the management experience.
> [!NOTE]
- > You need to have the right permission to add the certificate from Azure Key Vault as a user.
+ > To add a certificate from Azure Key Vault as a user, you must have the appropriate permission.
:::image type="content" source="./media/create-front-door-portal/front-door-custom-create-secret.png" alt-text="Screenshot of add a secret in custom create.":::
-1. In the **Endpoint** tab, select **Add an endpoint** and give your endpoint a globally unique name. You can create more endpoints in your Azure Front Door profile after you complete the deployment. This example uses *contoso-frontend*. Select **Add** to add the endpoint.
+1. In the *Endpoint* tab, select **Add an endpoint**, enter a globally unique name (this example uses *contoso-frontend*), and select **Add**. You can create more endpoints after the deployment.
:::image type="content" source="./media/create-front-door-portal/front-door-custom-create-add-endpoint.png" alt-text="Screenshot of add an endpoint.":::
-1. Next, select **+ Add a route** to configure routing to your Web App origin.
+1. To configure routing to your Web App origin, select **+ Add a route**.
:::image type="content" source="./media/create-front-door-portal/add-route.png" alt-text="Screenshot of add a route from the endpoint page." lightbox="./media/create-front-door-portal/add-route-expanded.png":::
-1. On the **Add a route** page, enter, or select the following information, select **Add** to add the route to the endpoint configuration.
+1. Enter or select the following information on the **Add a route** page and select **Add** to add the route to the endpoint configuration.
:::image type="content" source="./media/create-front-door-portal/add-route-page.png" alt-text="Screenshot of add a route configuration page." lightbox="./media/create-front-door-portal/add-route-page-expanded.png"::: | Setting | Description | |--|--|
- | Name | Enter a name to identify the mapping between domains and origin group. |
- | Domains | A domain name has been auto-generated for you to use. If you want to add a custom domain, select **Add a new domain**. This example will use the default. |
- | Patterns to match | Set all the URLs this route will accept. This example will use the default, and accept all URL paths. |
- | Accepted protocols | Select the protocol the route will accept. This example will accept both HTTP and HTTPS requests. |
- | Redirect | Enable this setting to redirect all HTTP traffic to the HTTPS endpoint. |
- | Origin group | Select **Add a new origin group**. For the origin group name, enter **myOriginGroup**. Then select **+ Add an origin**. For the first origin, enter **WebApp1** for the *Name* and then for the *Origin Type* select **App services**. In the *Host name*, select **webapp-contoso-001.azurewebsites.net**. Select **Add** to add the origin to the origin group. Repeat the steps to add the second Web App as an origin. For the origin *Name*, enter **WebApp2**. The *Host name* is **webapp-contoso-002.azurewebsites.net**. Choose a priority, the lowest number has the highest priority, a priority of 1 if both origins are needed to be served by Azure Front Door. Choose a weight appropriately for traffic routing, equal weights of 1000 if the traffic needs to be routed to both origins equally. Once both Web App origins have been added, select **Add** to save the origin group configuration. |
- | Origin path | Leave blank. |
- | Forwarding protocol | Select the protocol that will be forwarded to the origin group. This example will match the incoming requests to origins. |
- | Caching | Select the check box if you want to cache contents closer to your users globally using Azure Front Door's edge POPs and the Microsoft network. |
- | Rules | Once you've deployed the Azure Front Door profile, you can configure Rules to apply to your route. |
+ | Name | Provide a name that identifies the mapping between domains and origin group. |
+ | Domains | The system has generated a domain name for you to use. To add a custom domain, select **Add a new domain**. This example uses the default domain name. |
+ | Patterns to match | Specify the URLs that this route accepts. This example uses the default setting, which accepts all URL paths. |
+ | Accepted protocols | Choose the protocol that the route accepts. This example accepts both HTTP and HTTPS requests. |
+ | Redirect | Turn on this setting to redirect all HTTP requests to the HTTPS endpoint. |
+ | Origin group | To create a new origin group, select **Add a new origin group** and enter *myOriginGroup* as the origin group name. Then select **+ Add an origin** and enter *WebApp1* for the **Name** and *App services* for the **Origin Type**. In the **Host name**, select *webapp-contoso-001.azurewebsites.net* and select **Add** to add the origin to the origin group. Repeat the steps to add the second Web App as an origin with *WebApp2* as the **Name** and *webapp-contoso-002.azurewebsites.net* as the **Host name**. Choose a **priority** for each origin, with the lowest number having the highest priority. If you need Azure Front Door to serve both origins, use a priority of 1. Choose a weight for each origin, with the weight determining how traffic is routed to the origins. Use equal weights of 1000 if the traffic needs to be routed to both origins equally. Once both Web App origins have been added, select **Add** to save the origin group configuration. |
+ | Origin path | Don't enter any value. |
+ | Forwarding protocol | Choose the protocol that the origin group receives. This example uses the same protocol as the incoming requests. |
+ | Caching | Mark the check box if you want to use Azure Front Door's edge POPs and the Microsoft network to cache contents closer to your users globally. |
+ | Rules | After deploying the Azure Front Door profile, you can use Rules to customize your route. |
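The origin group and route settings in the preceding table can also be approximated with the Azure CLI. The following sketch assumes the example names used in this tutorial (*myAFDResourceGroup*, *Webapp-Contoso-AFD*, *contoso-frontend*, *myOriginGroup*) and basic health probe values; adjust them to match your environment.

```azurecli
# Create the Front Door Premium profile.
az afd profile create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD --sku Premium_AzureFrontDoor

# Add an endpoint with a globally unique name.
az afd endpoint create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --endpoint-name contoso-frontend --enabled-state Enabled

# Create the origin group with a basic health probe.
az afd origin-group create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --origin-group-name myOriginGroup --probe-request-type GET --probe-protocol Http \
    --probe-interval-in-seconds 60 --probe-path / --sample-size 4 --successful-samples-required 3 \
    --additional-latency-in-milliseconds 50

# Add both Web Apps as origins with equal priority and weight.
az afd origin create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --origin-group-name myOriginGroup --origin-name WebApp1 \
    --host-name webapp-contoso-001.azurewebsites.net --origin-host-header webapp-contoso-001.azurewebsites.net \
    --priority 1 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443
az afd origin create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --origin-group-name myOriginGroup --origin-name WebApp2 \
    --host-name webapp-contoso-002.azurewebsites.net --origin-host-header webapp-contoso-002.azurewebsites.net \
    --priority 1 --weight 1000 --enabled-state Enabled --http-port 80 --https-port 443

# Map the default domain to the origin group, accept HTTP and HTTPS, and redirect HTTP to HTTPS.
az afd route create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --endpoint-name contoso-frontend --route-name default-route --origin-group myOriginGroup \
    --supported-protocols Http Https --https-redirect Enabled --forwarding-protocol MatchRequest \
    --patterns-to-match "/*" --link-to-default-domain Enabled
```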
1. Select **+ Add a policy** to apply a Web Application Firewall (WAF) policy to one or more domains in the Azure Front Door profile. :::image type="content" source="./media/create-front-door-portal/add-policy.png" alt-text="Screenshot of add a policy from endpoint page." lightbox="./media/create-front-door-portal/add-policy-expanded.png":::
-1. On the **Add security policy** page, enter a name to identify this security policy. Then select domains you want to associate the policy with. For *WAF Policy*, you can select a previously created policy or select **Create New** to create a new policy. Select **Save** to add the security policy to the endpoint configuration.
+1. To create a security policy, provide a name that uniquely identifies it. Next, choose the domains that you want to apply the policy to. You can also select an existing WAF policy or create a new one. To finish, select **Save** to add the security policy to the endpoint configuration.
:::image type="content" source="./media/create-front-door-portal/add-security-policy.png" alt-text="Screenshot of add security policy page.":::
-1. Select **Review + Create**, and then **Create** to deploy the Azure Front Door profile. It will take a few minutes for configurations to be propagated to all edge locations.
+1. To deploy the Azure Front Door profile, select **Review + Create** and then **Create**. The configurations propagate to all edge locations in a few minutes.
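The security policy step can also be scripted once the profile exists. The sketch below creates a WAF policy in prevention mode and attaches it to the endpoint's default domain; the policy and security policy names are assumptions for this example.

```azurecli
# Create a WAF policy in prevention mode (example name).
az network front-door waf-policy create --resource-group myAFDResourceGroup \
    --name contosoWafPolicy --sku Premium_AzureFrontDoor --mode Prevention

# Look up the resource IDs of the endpoint and the WAF policy.
endpointId=$(az afd endpoint show --resource-group myAFDResourceGroup \
    --profile-name Webapp-Contoso-AFD --endpoint-name contoso-frontend --query id -o tsv)
wafPolicyId=$(az network front-door waf-policy show --resource-group myAFDResourceGroup \
    --name contosoWafPolicy --query id -o tsv)

# Attach the WAF policy to the endpoint through a security policy.
az afd security-policy create --resource-group myAFDResourceGroup --profile-name Webapp-Contoso-AFD \
    --security-policy-name contosoSecurityPolicy --domains $endpointId --waf-policy $wafPolicyId
```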
## Verify Azure Front Door
-When you create the Azure Front Door profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, enter the endpoint hostname. For example `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
-
-If you created these apps in this quickstart, you'll see an information page.
+The global deployment of the Azure Front Door profile takes a few minutes to complete. After that, you can access the frontend host that you created by entering its endpoint hostname in a browser. For example, `contoso-frontend.z01.azurefd.net`. The request is automatically routed to the closest server among the specified servers in the origin group.
-To test instant global failover, do the following steps:
+If you created the apps in this quickstart, you see an information page with the app details. To test the instant global failover feature, follow these steps:
-1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
+1. To access the frontend host, enter its endpoint hostname in a browser as described previously. For example, `contoso-frontend.z01.azurefd.net`.
-1. In the Azure portal, search and select *App services*. Scroll down to find one of your Web Apps, **WebApp-Contoso-001** in this example.
+1. In the Azure portal, find and select *App services* from the search bar. Locate one of your Web Apps from the list, such as **WebApp-Contoso-001**.
-1. Select your web app, and then select **Stop**, and **Yes** to verify.
+1. To stop your web app, select it from the list and then select **Stop**. Confirm your action by selecting **Yes**.
-1. Refresh your browser. You should see the same information page.
+1. Reload the browser to see the information page again.
> [!TIP]
- > There is a delay between when the traffic will be directed to the second Web app. You may need to refresh again.
+ > The traffic may take some time to switch to the second Web app. You may need to reload the browser again.
-1. Go to the second Web app, and stop that one as well.
+1. To stop the second Web app, select it from the list and then choose **Stop**. Confirm your action by selecting **Yes**.
-1. Refresh your browser. This time, you should see an error message.
+1. Reload the web page. You should encounter an error message after the refresh.
:::image type="content" source="./media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped"::: ## Clean up resources
-After you're done, you can remove all the items you created. Deleting a resource group also deletes its contents. If you don't intend to use this Azure Front Door, you should remove these resources to avoid unnecessary charges.
+Once you have completed the task, you can delete all the resources you created. Removing a resource group also eliminates its contents. To avoid incurring unnecessary charges, we recommend that you delete these resources if you don't plan to use this Azure Front Door.
-1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
+1. In the Azure portal, locate and select **Resource groups** by using the search bar, or navigate to **Resource groups** from the Azure portal menu.
-1. Filter or scroll down to find a resource group, such as **myAFDResourceGroup**, **myAppResourceGroup** or **myAppResourceGroup2**.
+1. Use the filter option or scroll down the list to locate a resource group, such as **myAFDResourceGroup**, **myAppResourceGroup** or **myAppResourceGroup2**.
-1. Select the resource group, then select **Delete resource group**.
+1. Choose the resource group, then select **Delete resource group**.
> [!WARNING]
- > Once a resource group has been deleted, there is no way to recover the resources.
+ > Deleting a resource group is an irreversible action. The resources within the resource group won't be recoverable once they are deleted.
-1. Type the resource group name to verify, and then select **Delete**.
+1. Enter the name of the resource group to confirm, and then select the **Delete** button.
-1. Repeat the procedure for the other two resource groups.
+1. Follow the same steps for the remaining two resource groups.
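If you'd rather clean up from the command line, the following Azure CLI sketch deletes the three example resource groups; it assumes the names used in this tutorial, and the deletions can't be undone.

```azurecli
# Deleting a resource group removes everything inside it.
az group delete --name myAFDResourceGroup --yes --no-wait
az group delete --name myAppResourceGroup --yes --no-wait
az group delete --name myAppResourceGroup2 --yes --no-wait
```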
## Next steps
-Advance to the next article to learn how to add a custom domain to your Front Door.
+Proceed to the next article to learn how to configure a custom domain for your Azure Front Door.
> [!div class="nextstepaction"] > [Add a custom domain](standard-premium/how-to-add-custom-domain.md)
frontdoor Front Door Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-ddos.md
Previously updated : 10/31/2022 Last updated : 10/05/2023 # DDoS protection on Front Door
-Azure Front Door has several features and characteristics that can help to prevent distributed denial of service (DDoS) attacks. These features can prevent attackers from reaching your application and affecting your application's availability and performance.
+By using Azure Front Door, you can protect your application from distributed denial of service (DDoS) attacks. Azure Front Door offers several features and characteristics that can block attackers from reaching your application and affecting its availability and performance.
## Infrastructure DDoS protection
-Front Door is protected by the default Azure infrastructure DDoS protection. The full scale and capacity of Front Door's globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. This infrastructure DDoS protection has a proven track record in protecting Microsoft's enterprise and consumer services from large-scale attacks.
+Azure Front Door benefits from the default Azure infrastructure DDoS protection. This protection monitors and mitigates network layer attacks in real time by using the global scale and capacity of Front Door's network. This protection has a proven track record in safeguarding Microsoft's enterprise and consumer services from large-scale attacks.
## Protocol blocking
-Front Door only accepts traffic on the HTTP and HTTPS protocols, and will only process valid requests with a known `Host` header. This behavior helps to mitigate some common DDoS attack types including volumetric attacks that are spread across a range of protocols and ports, DNS amplification attacks, and TCP poisoning attacks.
+Azure Front Door supports only the HTTP and HTTPS protocols, and requires a valid `Host` header for each request. This behavior helps to prevent some common DDoS attack types such as volumetric attacks that use various protocols and ports, DNS amplification attacks, and TCP poisoning attacks.
## Capacity absorption
-Front Door is a large scaled, globally distributed service. We have many customers, including Microsoft's own large-scale cloud products that receive hundreds of thousands of requests each second. Front Door is located at the edge of Azure's network, absorbing and geographically isolating large volume attacks. This can prevent malicious traffic from going any further than the edge of the Azure network.
+Azure Front Door is a large-scale, globally distributed service. It serves many customers, including Microsoft's own cloud products that handle hundreds of thousands of requests per second. Front Door is situated at the edge of Azure's network, where it can intercept and geographically isolate large volume attacks. Therefore, Front Door can prevent malicious traffic from reaching beyond the edge of the Azure network.
## Caching
-[Front Door's caching capabilities](./front-door-caching.md) can be used to protect backends from large traffic volumes generated by an attack. Cached resources will be returned from the Front Door edge nodes so they don't get forwarded to your backend. Even short cache expiry times (seconds or minutes) on dynamic responses can greatly reduce load on backend services. For more information about caching concepts and patterns, see [Caching considerations](/azure/architecture/best-practices/caching) and [Cache-aside pattern](/azure/architecture/patterns/cache-aside).
+You can use [Front Door's caching capabilities](./front-door-caching.md) to protect your backends from large traffic volumes generated by an attack. Front Door edge nodes return cached resources and avoid forwarding them to your backend. Even short cache expiry times (seconds or minutes) on dynamic responses can significantly reduce the load on your backend services. For more information about caching concepts and patterns, see [Caching considerations](/azure/architecture/best-practices/caching) and [Cache-aside pattern](/azure/architecture/patterns/cache-aside).
## Web Application Firewall (WAF)
-[Front Door's Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md) can be used to mitigate many different types of attacks:
+You can use [Front Door's Web Application Firewall (WAF)](../web-application-firewall/afds/afds-overview.md) to mitigate many different types of attacks:
-* Using the managed rule set provides protection against many common attacks. For more information, see [Managed rules](../web-application-firewall/afds/waf-front-door-drs.md).
-* Traffic from outside a defined geographic region, or within a defined region, can be blocked or redirected to a static webpage. For more information, see [Geo-filtering](../web-application-firewall/afds/waf-front-door-geo-filtering.md).
-* IP addresses and ranges that you identify as malicious can be blocked. For more information, see [IP restrictions](../web-application-firewall/afds/waf-front-door-configure-ip-restriction.md).
-* Rate limiting can be applied to prevent IP addresses from calling your service too frequently. For more information, see [Rate limiting](../web-application-firewall/afds/waf-front-door-rate-limit.md).
-* You can create [custom WAF rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures.
-* Using the bot protection managed rule set provides protection against known bad bots. For more information, see [Configuring bot protection](../web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md).
+* The managed rule set protects your application from many common attacks. For more information, see [Managed rules](../web-application-firewall/afds/waf-front-door-drs.md).
+* You can block or redirect traffic from outside or inside a specific geographic region to a static webpage. For more information, see [Geo-filtering](../web-application-firewall/afds/waf-front-door-geo-filtering.md).
+* You can block IP addresses and ranges that you identify as malicious. For more information, see [IP restrictions](../web-application-firewall/afds/waf-front-door-configure-ip-restriction.md).
+* You can apply rate limiting to prevent IP addresses from calling your service too frequently. For more information, see [Rate limiting](../web-application-firewall/afds/waf-front-door-rate-limit.md).
+* You can create [custom WAF rules](../web-application-firewall/afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures.
+* The bot protection managed rule set protects your application from known bad bots. For more information, see [Configuring bot protection](../web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md).
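As one illustration of the rate limiting and custom rule options above, the following Azure CLI sketch adds a rate-limit rule to an existing Front Door WAF policy. The policy name, resource group, threshold, and IP range are assumptions for this example.

```azurecli
# Create a rate-limit rule; --defer postpones the update until a match condition is added.
az network front-door waf-policy rule create --resource-group myResourceGroup \
    --policy-name contosoWafPolicy --name RateLimitRule --priority 1 \
    --rule-type RateLimitRule --rate-limit-duration 1 --rate-limit-threshold 1000 \
    --action Block --defer

# Apply the rule to requests from a sample address range.
az network front-door waf-policy rule match-condition add --resource-group myResourceGroup \
    --policy-name contosoWafPolicy --name RateLimitRule \
    --match-variable RemoteAddr --operator IPMatch --values "192.0.2.0/24"
```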
Refer to [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md) for guidance on how to use Azure WAF to protect against DDoS attacks.
-## Protect VNet origins
+## Protect virtual network origins
-Enable [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) on the origin VNet to protect your public IPs against DDoS attacks. DDoS Protection customers receive extra benefits including cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate help during an attack.
+To protect your public IPs from DDoS attacks, enable [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md) on the origin virtual network. DDoS Protection customers receive extra benefits such as cost protection, SLA guarantee, and access to experts from the DDoS Rapid Response Team for immediate assistance during an attack.
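A minimal Azure CLI sketch of this recommendation, assuming example names for the plan, virtual network, and resource group:

```azurecli
# Create a DDoS protection plan.
az network ddos-protection create --resource-group myResourceGroup --name myDdosPlan

# Enable the plan on the origin virtual network.
az network vnet update --resource-group myResourceGroup --name myOriginVnet \
    --ddos-protection true --ddos-protection-plan myDdosPlan
```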
## Next steps -- Learn how to configure a [WAF policy for Azure Front Door](front-door-waf.md).
+- Learn how to set up a [WAF policy for Azure Front Door](front-door-waf.md).
- Learn how to [create an Azure Front Door profile](quickstart-create-front-door.md). - Learn [how Azure Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-security-headers.md
Previously updated : 10/28/2022 Last updated : 10/05/2023 # Customer intent: As an IT admin, I want to learn about Front Door and how to configure a security header via Rules Engine.
In this tutorial, you learn how to:
> [!NOTE] > Header values are limited to 640 characters.
-5. Once you've added all of the rules you'd like to your configuration, don't forget to go to your preferred route and associate your Rules engine configuration to the Route Rule. This step is required to enable the rule to work.
+5. After you have completed adding the rules to your configuration, make sure to associate your Rules engine configuration with the Route Rule of your chosen route. This step is required to enable the rule to work.
:::image type="content" source="./media/front-door-security-headers/front-door-associate-routing-rule.png" alt-text="Screenshot showing how to associate a routing rule.":::
frontdoor Origin Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin-security.md
Title: Secure traffic to origins
-description: This article explains how to restrict traffic to your origins to ensure it's been processed by Azure Front Door.
+description: This article explains how to ensure that your origins receive traffic only from Azure Front Door.
Previously updated : 10/25/2022 Last updated : 10/02/2023 zone_pivot_groups: front-door-tiers
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
Title: 'Quickstart: Set up high availability with Azure Front Door Service - Azure portal'
-description: This quickstart shows how to use Azure Front Door Service for your highly available and high-performance global web application by using the Azure portal.
+ Title: 'Quickstart: How to use Azure Front Door Service to enable high availability - Azure portal'
+description: In this quickstart, you learn how to use the Azure portal to set up Azure Front Door Service for your web application that requires high availability and high performance across the globe.
Previously updated : 10/28/2022 Last updated : 10/02/2023
-#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
+#Customer intent: As an IT admin, I want to manage user traffic to ensure high availability of web applications.
# Quickstart: Create a Front Door for a highly available global web application
-Get started with Azure Front Door by using the Azure portal to set up high availability for a web application.
-
-In this quickstart, Azure Front Door pools two instances of a web application that run in different Azure regions. You create a Front Door configuration based on equal weighted and same priority backends. This configuration directs traffic to the nearest site that runs the application. Azure Front Door continuously monitors the web application. The service provides automatic failover to the next available site when the nearest site is unavailable.
+This quickstart shows you how to use the Azure portal to set up high availability for a web application with Azure Front Door. You create a Front Door configuration that distributes traffic across two instances of a web application running in different Azure regions. The configuration uses equal weighted and same priority backends, which means that Azure Front Door directs traffic to the closest available site that hosts the application. Azure Front Door also monitors the health of the web application and performs automatic failover to the next nearest site if the closest site is down.
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using the Azure portal." border="false":::
In this quickstart, Azure Front Door pools two instances of a web application th
## Create two instances of a web app
-This quickstart requires two instances of a web application that run in different Azure regions. Both the web application instances run in *Active/Active* mode, so either one can take traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover.
+To complete this quickstart, you need two instances of a web application running in different Azure regions. The web application instances operate in *Active/Active* mode, which means that they can both handle traffic simultaneously. This setup is different from *Active/Stand-By* mode, where one instance serves as a backup for the other.
-If you don't already have a web app, use the following steps to set up example web apps.
+To follow this quickstart, you need two web apps that run in different Azure regions. If you don't have them already, you can use these steps to create example web apps.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the top left-hand side of the screen, select **Create a resource** > **Web App**.
+1. On the top left corner of the screen, select **+ Create a resource** and then search for **Web App**.
:::image type="content" source="media/quickstart-create-front-door/front-door-create-web-app.png" alt-text="Create a web app in the Azure portal." lightbox="./media/quickstart-create-front-door/front-door-create-web-app.png":::
-1. In the **Basics** tab of **Create Web App** page, enter or select the following information.
+1. On the **Basics** tab of the **Create Web App** page, provide or select the following details.
- | Setting | Value |
- | | |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg1* in the text box.|
- | **Name** | Enter a unique **Name** for your web app. This example uses *WebAppContoso-1*. |
+ | Setting | Value |
+ |--|--|
+ | **Subscription** | Choose your subscription. |
+ | **Resource group** | Select **Create new** and type *FrontDoorQS_rg1* in the text box. |
+ | **Name** | Type a unique **Name** for your web app. For example, *WebAppContoso-1*. |
| **Publish** | Select **Code**. |
- | **Runtime stack** | Select **.NET Core 3.1 (LTS)**. |
- | **Operating System** | Select **Windows**. |
- | **Region** | Select **Central US**. |
- | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
+ | **Runtime stack** | Select **.NET Core 3.1 (LTS)**. |
+ | **Operating System** | Select **Windows**. |
+ | **Region** | Select **Central US**. |
+ | **Windows Plan** | Select **Create new** and type *myAppServicePlanCentralUS* in the text box. |
| **Sku and size** | Select **Standard S1 100 total ACU, 1.75 GB memory**. |
-1. Select **Review + create**, review the **Summary**, and then select **Create**. It might take several minutes for the deployment to complete.
+1. Select **Review + create** and verify the summary details. Then, select **Create** to initiate the deployment process. The deployment may take several minutes to complete.
:::image type="content" source="media/quickstart-create-front-door/create-web-app.png" alt-text="Screenshot showing Create Web App page." lightbox="./media/quickstart-create-front-door/create-web-app.png":::
-After your deployment is complete, create a second web app. Use the same procedure with the same values, except for the following values:
+Once you have successfully deployed your first web app, proceed to create another one. Follow the same steps and enter the same values as before, except for the following values:
-| Setting | Value |
-| | |
-| **Resource group** | Select **Create new** and enter *FrontDoorQS_rg2* |
-| **Name** | Enter a unique name for your Web App, in this example, *WebAppContoso-2* |
-| **Region** | A different region, in this example, *East US* |
-| **App Service plan** > **Windows Plan** | Select **New** and enter *myAppServicePlanEastUS*, and then select **OK** |
+| Setting | Value |
+|--|--|
+| **Resource group** | Select **Create new** and type *FrontDoorQS_rg2* |
+| **Name** | Type a unique name for your Web App, for example, *WebAppContoso-2* |
+| **Region** | Select a different region than the first Web App, for example, *East US* |
+| **App Service plan** > **Windows Plan** | Select **New** and type *myAppServicePlanEastUS*, and then select **OK** |
## Create a Front Door for your application
-Configure Azure Front Door to direct user traffic based on lowest latency between the two web apps servers. To begin, add a frontend host for Azure Front Door.
+Set up Azure Front Door to route user traffic based on the lowest latency between the two web app servers. Start by adding a frontend host for Azure Front Door.
+
+1. From the home page or the Azure menu, select **+ Create a resource**. Select **Networking** > **Front Door and CDN profiles**.
-1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door and CDN profiles**.
-1. On the Compare offerings page, select **Explore other offerings**. Then select **Azure Front Door (classic)**. Then select **Continue**.
-1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**.
+1. On the *Compare offerings* page, select **Explore other offerings**. Then select **Azure Front Door (classic)**. Then select **Continue**.
+
+1. On the **Basics** tab of the *Create a Front Door* page, provide or select the following information, and then select **Next: Configuration**.
| Setting | Value | | | | | **Subscription** | Select your subscription. |
- | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg0* in the text box.|
+ | **Resource group** | Select **Create new** and type *FrontDoorQS_rg0* in the text box.|
| **Resource group location** | Select **Central US**. |
-1. In **Frontends/domains**, select **+** to open **Add a frontend host**.
+1. In **Frontends/domains**, select **+** to open the **Add a frontend host** page.
-1. For **Host name**, enter a globally unique hostname. This example uses *contoso-frontend*. Select **Add**.
+1. For **Host name**, type a globally unique hostname. For example, *contoso-frontend*. Select **Add**.
:::image type="content" source="media/quickstart-create-front-door/add-frontend-host-azure-front-door.png" alt-text="Add a frontend host for Azure Front Door." lightbox="./media/quickstart-create-front-door/add-frontend-host-azure-front-door.png":::
-Next, create a backend pool that contains your two web apps.
+Next, set up a backend pool that includes your two web apps.
-1. Still in **Create a Front Door**, in **Backend pools**, select **+** to open **Add a backend pool**.
+1. Still in **Create a Front Door**, in **Backend pools**, select **+** to open the **Add a backend pool** page.
-1. For **Name**, enter *myBackendPool*, then select **Add a backend**.
+1. For **Name**, type *myBackendPool*, then select **Add a backend**.
:::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool.png" alt-text="Add a backend pool." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool.png":::
-1. In the **Add a backend** pane, select the following information and select **Add**.
+1. Provide or select the following information in the *Add a backend* pane and select **Add**.
| Setting | Value | | | | | **Backend host type** | Select **App service**. | | **Subscription** | Select your subscription. |
- | **Backend host name** | Select the first web app you created. In this example, the web app was *WebAppContoso-1*. |
+ | **Backend host name** | Select the first web app you created. For example, *WebAppContoso-1*. |
- **Leave all other fields default.**
+ **Keep all other fields default.**
:::image type="content" source="media/quickstart-create-front-door/front-door-add-a-backend.png" alt-text="Add a backend host to your Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-backend.png":::
-1. Select **Add a backend** again. select the following information and select **Add**.
+1. Select **Add a backend** again. Provide or select the following information and select **Add**.
| Setting | Value | | | | | **Backend host type** | Select **App service**. | | **Subscription** | Select your subscription. |
- | **Backend host name** | Select the second web app you created. In this example, the web app was *WebAppContoso-2*. |
+ | **Backend host name** | Select the second web app you created. For example, *WebAppContoso-2*. |
- **Leave all other fields default.**
+ **Keep all other fields default.**
-1. Select **Add** on the **Add a backend pool** pane to complete the configuration of the backend pool.
+1. Select **Add** on the *Add a backend pool* page to finish the configuration of the backend pool.
:::image type="content" source="media/quickstart-create-front-door/front-door-add-backend-pool-complete.png" alt-text="Add a backend pool for Azure Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-backend-pool-complete.png":::
-Finally, add a routing rule. A routing rule maps your frontend host to the backend pool. The rule forwards a request for `contoso-frontend.azurefd.net` to **myBackendPool**.
+Lastly, create a routing rule. A routing rule links your frontend host to the backend pool. The rule routes a request for `contoso-frontend.azurefd.net` to **myBackendPool**.
-1. Still in **Create a Front Door**, in **Routing rules**, select **+** to configure a routing rule.
+1. Still in *Create a Front Door*, in *Routing rules*, select **+** to set up a routing rule.
-1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
+1. In *Add a rule*, for **Name**, type *LocationRule*. Keep all the default values, then select **Add** to create the routing rule.
:::image type="content" source="media/quickstart-create-front-door/front-door-add-a-rule.png" alt-text="Screenshot showing Add a rule when creating Front Door." lightbox="./media/quickstart-create-front-door/front-door-add-a-rule.png":::
- >[!WARNING]
- > You **must** ensure that each of the frontend hosts in your Front Door has a routing rule with a default path (`/*`) associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path (`/*`). Failing to do so may result in your end-user traffic not getting routed correctly.
+ > [!WARNING]
+ > It's essential that you associate each of the frontend hosts in your Azure Front Door with a routing rule that has a default path `/*`. This means that you need to have at least one routing rule for each of your frontend hosts at the default path `/*` among all of your routing rules. Otherwise, your end-user traffic may not be routed properly.
-1. Select **Review + Create**, and then **Create**.
+1. Select **Review + create** and verify the details. Then, select **Create** to start the deployment.
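If you want to script the classic profile instead of using the portal, a rough Azure CLI sketch looks like the following. It assumes the example names from this quickstart; `az network front-door create` adds the first backend to an automatically created backend pool, and the pool name used in the last command is an assumption you should confirm with the list command.

```azurecli
# Create a classic Front Door with the first web app as a backend.
az network front-door create --resource-group FrontDoorQS_rg0 --name contoso-frontend \
    --backend-address webappcontoso-1.azurewebsites.net

# Inspect the automatically created backend pool to confirm its name.
az network front-door backend-pool list --resource-group FrontDoorQS_rg0 \
    --front-door-name contoso-frontend -o table

# Add the second web app to that pool (pool name below is assumed).
az network front-door backend-pool backend add --resource-group FrontDoorQS_rg0 \
    --front-door-name contoso-frontend --pool-name DefaultBackendPool \
    --address webappcontoso-2.azurewebsites.net
```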
:::image type="content" source="media/quickstart-create-front-door/configuration-azure-front-door.png" alt-text="Configured Azure Front Door." lightbox="./media/quickstart-create-front-door/configuration-azure-front-door.png"::: ## View Azure Front Door in action
-Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. Once complete, access the frontend host you created. In a browser, go to your frontend host address. Your request will automatically get routed to the nearest server to you from the specified servers in the backend pool.
+Once you create a Front Door, it takes a few minutes for the configuration to be deployed globally. When the deployment completes, access the frontend host you created. In the browser, go to your frontend host address. Your requests automatically get routed to the nearest server from the specified servers in the backend pool.
-If you created these apps in this quickstart, you'll see an information page.
+If you followed this quickstart to create these apps, you see an information page.
-To test instant global failover in action, try the following steps:
+To test the instant global failover feature, try the following steps:
-1. Open the resource group **FrontDoorQS_rg0** and select the frontend service.
+1. Navigate to the resource group **FrontDoorQS_rg0** and select the Front Door service.
:::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-service.png" alt-text="Screenshot of frontend service." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-service.png":::
To test instant global failover in action, try the following steps:
:::image type="content" source="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png" alt-text="Screenshot of frontend host address." lightbox="./media/quickstart-create-front-door/front-door-view-frontend-host-address.png":::
-1. Open a browser, as described above, and go to your frontend address.
+1. Open the browser, as described previously, and go to your frontend address.
-1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-1** in this example.
+1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, for example, *WebAppContoso-1*.
-1. Select your web app, and then select **Stop**, and **Yes** to verify.
+1. Select your web app, and then select **Stop**, and **Yes** to confirm.
1. Refresh your browser. You should see the same information page.
- >[!TIP]
- >There is a little bit of delay for these actions. You might need to refresh again.
+ > [!TIP]
+ > These actions may take some time to take effect. You may need to refresh the browser again.
-1. Find the other web app, and stop it as well.
+1. Locate the other web app, and stop it as well.
1. Refresh your browser. This time, you should see an error message.
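To simulate the failover from a terminal instead of the portal, you can stop (and later restart) the web apps with the Azure CLI. The sketch below assumes the names used in this quickstart.

```azurecli
# Stop the first web app; Front Door fails over to the remaining healthy backend.
az webapp stop --name WebAppContoso-1 --resource-group FrontDoorQS_rg1

# Stop the second web app to see the error response, then restart both when done.
az webapp stop --name WebAppContoso-2 --resource-group FrontDoorQS_rg2
az webapp start --name WebAppContoso-1 --resource-group FrontDoorQS_rg1
az webapp start --name WebAppContoso-2 --resource-group FrontDoorQS_rg2
```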
To test instant global failover in action, try the following steps:
## Clean up resources
-After you're done, you can remove all the items you created. Deleting a resource group also deletes its contents. If you don't intend to use this Front Door, you should remove resources to avoid unnecessary charges.
+After you're done, you can delete all the items you created. Deleting the resource group also deletes its contents. If you don't intend to use this Front Door, you should delete the resources to avoid incurring unnecessary charges.
-1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
+1. In the Azure portal, search for and select **Resource groups**, or choose **Resource groups** from the Azure portal menu.
-1. Filter or scroll down to find a resource group, such as **FrontDoorQS_rg0**.
+1. Filter or scroll down to find a resource group, for example, *FrontDoorQS_rg0*.
-1. Select the resource group, then select **Delete resource group**.
+1. Choose the resource group, then select **Delete resource group**.
- >[!WARNING]
- >This action is irreversible.
+ > [!WARNING]
+ > This action can't be undone.
-1. Type the resource group name to verify, and then select **Delete**.
+1. Enter the name of the resource group that you want to delete, and then select **Delete**.
-Repeat the procedure for the other two groups.
+1. Repeat these steps for the remaining two groups.
## Next steps
-Advance to the next article to learn how to add a custom domain to your Front Door.
+Proceed to the next article to learn how to configure a custom domain for your Front Door.
+ > [!div class="nextstepaction"] > [Add a custom domain](front-door-custom-domain.md)
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md
Title: Azure HDInsight configuration settings reference
description: Introduce the configuration of Azure HDInsight extension. Previously updated : 08/30/2022 Last updated : 09/19/2023
For general information about working with settings in VS Code, refer to [User a
| HDInsight: Azure Environment | Azure | Azure environment | | HDInsight: Disable Open Survey Link | Checked | Enable/Disable opening HDInsight survey | | HDInsight: Enable Skip Pyspark Installation | Unchecked | Enable/Disable skipping pyspark installation |
-| HDInsight: Login Tips Enable | Unchecked | When this option is checked, there will be a prompt when logging in to Azure |
+| HDInsight: Login Tips Enable | Unchecked | When this option is checked, a prompt appears when you sign in to Azure |
| HDInsight: Previous Extension Version | Display the version number of the current extension | Show the previous extension version| | HDInsight: Results Font Family | -apple-system,BlinkMacSystemFont,Segoe WPC,Segoe UI,HelveticaNeue-Light,Ubuntu,Droid Sans,sans-serif | Set the font family for the results grid; set to blank to use the editor font | | HDInsight: Results Font Size | 13 |Set the font size for the results gird; set to blank to use the editor size | | HDInsight Cluster: Linked Cluster | -- | Linked clusters urls. Also can edit the JSON file to set |
-| HDInsight Hive: Apply Localization | Unchecked | [Optional] Configuration options for localizing into VSCode's configured locale (must restart VSCode for settings to take effect)|
+| HDInsight Hive: Apply Localization | Unchecked | [Optional] Configuration options for localizing into Visual Studio Code's configured locale (must restart Visual Studio Code for settings to take effect)|
| HDInsight Hive: Copy Include Headers | Unchecked | [Optional] Configuration option for copying results from the Results View | | HDInsight Hive: Copy Remove New Line | Checked | [Optional] Configuration options for copying multi-line results from the Results View | | HDInsight Hive › Format: Align Column Definitions In Columns | Unchecked | Should column definition be aligned | | HDInsight Hive › Format: Datatype Casing | none | Should data types be formatted as UPPERCASE, lowercase, or none (not formatted) | | HDInsight Hive › Format: Keyword Casing | none | Should keywords be formatted as UPPERCASE, lowercase, or none (not formatted) | | HDInsight Hive › Format: Place Commas Before Next Statement | Unchecked | Should commas be placed at the beginning of each statement in a list for example ', mycolumn2' instead of at the end 'mycolumn1,'|
-| HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Should references to objects in a select statement be split into separate lines? For example, for 'SELECT C1, C2 FROM T1' both C1 and C2 will be on separate lines
+| HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Should references to objects in a SELECT statement be split into separate lines? For example, for 'SELECT C1, C2 FROM T1' both C1 and C2 are on separate lines
| HDInsight Hive: Log Debug Info | Unchecked | [Optional] Log debug output to the VS Code console (Help -> Toggle Developer Tools) | HDInsight Hive: Messages Default Open | Checked | True for the messages pane to be open by default; false for closed|
-| HDInsight Hive: Results Font Family | -apple-system,BlinkMacSystemFont,Segoe WPC,Segoe UI,HelveticaNeue-Light,Ubuntu,Droid Sans,sans-serif | Set the font family for the results grid; set to blank to use the editor font |
+| HDInsight Hive: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC,Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font |
| HDInsight Hive: Results Font Size | 13 | Set the font size for the results grid; set to blank to use the editor size |
-| HDInsight Hive › Save As Csv: Include Headers | Checked | [Optional] When true, column headers are included when saving results as CSV |
+| HDInsight Hive › Save as `csv`: Include Headers | Checked | [Optional] When true, column headers are included when saving results as CSV |
| HDInsight Hive: Shortcuts | -- | Shortcuts related to the results window | | HDInsight Hive: Show Batch Time| Unchecked | [Optional] Should execution time is shown for individual batches | | HDInsight Hive: Split Pane Selection | next | [Optional] Configuration options for which column new result panes should open in |
-| HDInsight Job Submission: Cluster Conf | -- | Cluster Configuration |
-| HDInsight Job Submission: Livy Conf | -- | Livy Configuration. POST/batches |
-| HDInsight Jupyter: Append Results| Checked | Whether to append the results to results window, else clear and display. |
+| HDInsight Job Submission: Cluster `Conf` | -- | Cluster Configuration |
+| HDInsight Job Submission: Livy `Conf` | -- | Livy Configuration. POST/batches |
+| HDInsight Jupyter: Append Results| Checked | Whether to append the results to the results window or to clear and display them. |
| HDInsight Jupyter: Languages | -- | Default settings per language. |
-| HDInsight Jupyter › Log: Verbose | Unchecked | If enable verbose logging |
-| HDInsight Jupyter › Notebook: Startup Args | Can add item | 'jupyter notebook' command-line arguments. Each argument is a separate item in the array. For a full list type 'jupyter notebook--help' in a terminal window. |
+| HDInsight Jupyter › Log: Verbose | Unchecked | Whether to enable verbose logging. |
+| HDInsight Jupyter › Notebook: Startup Args | Can add item | `jupyter notebook` command-line arguments. Each argument is a separate item in the array. For a full list, type `jupyter notebook --help` in a terminal window. |
| HDInsight Jupyter › Notebook: Startup Folder | ${workspaceRoot} |-- |
-| HDInsight Jupyter: Python Extension Enabled | Checked | Use Python-Interactive-Window of ms-python extension when submitting pySpark Interactive jobs. Otherwise, use our own jupyter window |
+| HDInsight Jupyter: Python Extension Enabled | Checked | Use Python-Interactive-Window of ms-python extension when submitting pySpark Interactive jobs. Otherwise, use our own `jupyter` window. |
| HDInsight Spark.NET: 7z | C:\Program Files\7-Zip | <Path to 7z.exe> | | HDInsight Spark.NET: HADOOP_HOME | D:\winutils | <Path to bin\winutils.exe> windows OS only | | HDInsight Spark.NET: JAVA_HOME | C:\Program Files\Java\jdk1.8.0_201\ | Path to Java Home|
For general information about working with settings in VS Code, refer to [User a
| HDInsight Spark.NET: SPARK_HOME | D:\spark-2.3.3-bin-hadoop2.7\ | Path to Spark Home | | Hive: Persist Query Result Tabs | Unchecked | Hive PersistQueryResultTabs | | Hive: Split Pane Selection | next | [Optional] Configuration options for which column new result panes should open in |
-| Hive Interactive: Copy Executable Folder | Unchecked | If copy the hive interactive service runtime folder to user's tmp folder |
+| Hive Interactive: Copy Executable Folder | Unchecked | Whether to copy the Hive interactive service runtime folder to the user's tmp folder. |
| Hql Interactive Server: Wrapper Port | 13424 | Hive interactive service port | | Hql Language Server: Language Wrapper Port | 12342 | Hive language service port servers listen to. | | Hql Language Server: Max Number Of Problems | 100 | Controls the maximum number of problems produced by the server. | | Synapse Spark Compute: Synapse Spark Compute Azure Environment | blank | synapse Spark Compute Azure environment |
-| Synapse Spark pool Job Submission: Livy Conf | -- | Livy Configuration. POST/batches
-| Synapse Spark pool Job Submission: Synapse Spark Pool Cluster Conf | -- | Synapse Spark Pool Configuration |
+| Synapse Spark pool Job Submission: `Livy Conf` | -- | Livy Configuration. POST/batches
+| Synapse Spark pool Job Submission: `Synapse Spark Pool Cluster Conf` | -- | Synapse Spark Pool Configuration |
## Next steps -- For information about Azure HDInsight for VSCode, see [Spark & Hive for Visual Studio Code Tools](/sql/big-data-cluster/spark-hive-tools-vscode).
+- For information about Azure HDInsight for Visual Studio Code, see [Spark & Hive for Visual Studio Code Tools](/sql/big-data-cluster/spark-hive-tools-vscode).
- For a video that demonstrates using Spark & Hive for Visual Studio Code, see [Spark & Hive for Visual Studio Code](https://go.microsoft.com/fwlink/?linkid=858706).
healthcare-apis Overview Of Device Data Processing Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md
If no Device resource for a given device identifier exists in the FHIR service,
> [!NOTE] > The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required.
-The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes. When there's fewer than 300 normalized messages to be processed, there may be a delay of approximately five minutes before FHIR Observations are created or updated in the FHIR service.
+The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes.
> [!NOTE] > When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
internet-peering Howto Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-portal.md
Title: Create or modify an Exchange peering - Azure portal
-description: Create or modify an Exchange peering using the Azure portal.
+
+description: Learn how to create or modify an Exchange peering using the Azure portal.
+ Previously updated : 01/23/2023-- Last updated : 10/03/2023 # Create or modify an Exchange peering using the Azure portal
As an Internet Exchange Provider, you can create an exchange peering request by
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
- >[!NOTE]
- >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
- * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
internet-peering Howto Legacy Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-portal.md
 Title: Convert a legacy Direct peering to an Azure resource - Azure portal
-description: Convert a legacy Direct peering to an Azure resource using the Azure portal.
-+
+description: Learn how to convert a legacy Direct peering to an Azure resource using the Azure portal.
+ Previously updated : 01/23/2023-- Last updated : 10/03/2023 # Convert a legacy Direct peering to an Azure resource using the Azure portal
As an Internet Service Provider, you can convert legacy direct peering connectio
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
- >[!NOTE]
- >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
- * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
internet-peering Howto Legacy Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-portal.md
 Title: Convert a legacy Exchange peering to an Azure resource - Azure portal
-description: Convert a legacy Exchange peering to an Azure resource using the Azure portal.
-
+description: Learn how to convert a legacy Exchange peering to an Azure resource using the Azure portal.
+ + Previously updated : 01/23/2023-- Last updated : 10/03/2023 # Convert a legacy Exchange peering to an Azure resource using the Azure portal
As an Internet Exchange Provider, you can create an exchange peering request by
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
- >[!NOTE]
- >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
- * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
To view the details of a configuration and monitor the devices running it, use t
1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
-2. Select **Configurations ** in Device management.
+2. Select **Configurations** in Device management.
3. Inspect the configuration list. For each configuration, you can view the following details:
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
Event Grid enables [filtering](../event-grid/event-filtering.md) on event types,
* Subject: For IoT Hub events, the subject is the device name. The subject takes the format `devices/{deviceId}`. You can filter subjects based on **Begins With** (prefix) and **Ends With** (suffix) matches. The filter uses an `AND` operator, so events with a subject that match both the prefix and suffix are delivered to the subscriber. * Data content: The data content is populated by IoT Hub using the message format. You can choose what events are delivered based on the contents of the telemetry message. For examples, see [advanced filtering](../event-grid/event-filtering.md#advanced-filtering). For filtering on the telemetry message body, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive.
+For device telemetry events, IoT Hub will create the default [message route](iot-hub-devguide-messages-d2c.md) called *RouteToEventGrid* based on the subscription. To filter messages before telemetry data is sent, update the [routing query](iot-hub-devguide-routing-query-syntax.md).
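As an illustration, here's a hedged Azure CLI sketch of an Event Grid subscription that filters device telemetry events by device name prefix; the hub name, webhook endpoint, and prefix are placeholders, not values from this article:

```bash
# Subscribe only to DeviceTelemetry events whose subject (device name) starts with "building1-"
hub_id=$(az iot hub show --name my-iot-hub --query id -o tsv)

az eventgrid event-subscription create \
  --name telemetry-building1 \
  --source-resource-id "$hub_id" \
  --endpoint https://contoso.example.com/api/updates \
  --included-event-types Microsoft.Devices.DeviceTelemetry \
  --subject-begins-with devices/building1-
```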
+ ## Limitations for device connection state events Device connected and device disconnected events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications.
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
You can't reuse the name of a key vault that has been soft-deleted until the ret
### Purge protection
-Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell). Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss.
+Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on, for example, via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell). Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss.
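For example, a minimal CLI sketch that turns on purge protection for an existing vault (the vault name is a placeholder):

```bash
# Purge protection can be enabled on an existing vault but can't be disabled once set
az keyvault update --name my-vault --enable-purge-protection true
```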
When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
Permanently deleting, purging, a key vault is possible via a POST operation on t
Exceptions are: - When the Azure subscription has been marked as *undeletable*. In this case, only the service may then perform the actual deletion, and does so as a scheduled process. -- When the `--enable-purge-protection` argument is enabled on the vault itself. In this case, Key Vault will wait for 90 days from when the original secret object was marked for deletion to permanently delete the object.
+- When the `--enable-purge-protection` argument is enabled on the vault itself. In this case, Key Vault waits 7 to 90 days, counted from when the original secret object was marked for deletion, before permanently deleting the object.
For steps, see [How to use Key Vault soft-delete with CLI: Purging a key vault](./key-vault-recovery.md?tabs=azure-cli#key-vault-cli) or [How to use Key Vault soft-delete with PowerShell: Purging a key vault](./key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).
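As a hedged sketch, assuming purge protection is *not* enabled on the vault, a soft-deleted vault can be listed and then purged from the CLI:

```bash
# List vaults currently in the soft-deleted state
az keyvault list-deleted

# Permanently delete (purge) a soft-deleted vault; this fails if purge protection is on
az keyvault purge --name my-vault
```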
key-vault Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-terraform.md
Title: 'Quickstart: Create an Azure key vault and key using Terraform' description: 'In this article, you create an Azure key vault and key using Terraform' -+ - Previously updated : 4/14/2023+ Last updated : 10/3/2023 content_well_notification: - AI-contribution
lab-services Class Type Networking Gns3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-networking-gns3.md
Last updated 04/24/2023
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-This article shows you how to set up a class for emulating, configuring, testing, and troubleshooting virtual and real networks using [GNS3](https://www.gns3.com/) software.
+This article shows you how to set up a class for emulating, configuring, testing, and troubleshooting networks using [GNS3](https://www.gns3.com/) software.
This article has two main sections. The first section covers how to create the lab. The second section covers how to create the [template machine](./classroom-labs-concepts.md#template-virtual-machine) with nested virtualization enabled and with GNS3 installed and configured.
To configure the template VM, complete the following tasks:
To prepare the template virtual machine for nested virtualization, follow the detailed steps in [enable nested virtualization](how-to-enable-nested-virtualization-template-vm.md).
+If you created a lab template VM with a non-admin account, add the non-admin account to the **Hyper-V Administrators** group. For more information about using nested virtualization with a non-admin account, see [non-admin user best practices](concept-nested-virtualization-template-vm.md#non-admin-user).
+ ### Install GNS3 1. Connect to the template VM by using remote desktop.
-1. Follow the detailed instructions on the GNS3 website, to [install GNS3 on Windows](https://docs.gns3.com/docs/getting-started/installation/windows).
+1. Follow the detailed instructions on the GNS3 website to [install GNS3 on Windows](https://docs.gns3.com/docs/getting-started/installation/windows).
1. Make sure to select **GNS3 VM** in the component dialog:
To prepare the template virtual machine for nested virtualization, follow the de
When the setup finishes, a zip file `GNS3.VM.Hyper-V.2.2.17.zip` is downloaded to the same folder as the installation file. The zip file contains the virtual disks and the PowerShell script to create the Hyper-V virtual machine.
-To create the GNS 3 VM:
+To create the GNS3 VM:
1. Connect to the template VM by using remote desktop.
-1. Extract all files in the `GNS3.VM.Hyper-V.2.2.17.zip` file.
+1. Extract all files in the `GNS3.VM.Hyper-V.2.2.17.zip` file. If the template VM has a non-admin account for lab users, extract the files in a location accessible to the non-admin account.
1. Right-select the `create-vm.ps1` PowerShell script, and then select **Run with PowerShell**.
Now that you installed GNS3, and added the GNS3 VM, configure GNS 3 to use the H
Next, you can add appliances for the class. Follow the detailed steps from the GNS3 documentation to [install appliances from the GNS3 marketplace](https://docs.gns3.com/docs/using-gns3/beginners/install-from-marketplace).
+If the template VM has a non-admin account for lab users, install the appliances to a location accessible to the non-admin account. Optionally, you can set the preferences for the admin and non-admin user to look for appliances and projects in a location accessible by both users.
+ ### Prepare to publish template Now that you set up the template virtual machine, verify the following key points before you publish the template:
lab-services Concept Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-nested-virtualization-template-vm.md
Before setting up a lab with nested virtualization, here are a few things to tak
- Hyper-V guest VMs are licensed as independent machines. For information about licensing for Microsoft operation systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check licensing agreements for any other software you use, before installing it on the template VM or guest VMs. -- Virtualization applications other than Hyper-V are [*not* supported for nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#3rd-party-virtualization-apps). This includes any software that requires hardware virtualization extensions.
+- Virtualization applications other than Hyper-V [*aren't* supported for nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#3rd-party-virtualization-apps). This includes any software that requires hardware virtualization extensions.
## Enable nested virtualization for a lab
-To avoid that lab users need to enable nested virtualization on their lab VM and install the nested VMs inside it, you can prepare a lab template. When you publish the lab, each lab user has a lab VM that already contains the nested virtual machines.
+You can enable nested virtualization and create nested Hyper-V VMs on the template VM. When you publish the lab, each lab user has a lab VM that already contains the nested virtual machines.
To enable nested virtualization for a lab:
You can connect to a lab VM from another lab VM or a nested VM without any extra
Consider the following sample lab setup: - Lab VM 1 (Windows Server 2022, IP 10.0.0.8)
- - Nested VM 1-1 (Ubuntu 20.04, IP 192.168.0.102)
- - Nested VM 1-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
+ - Nested VM 1-1 (Ubuntu 20.04, IP 192.168.0.102)
+ - Nested VM 1-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
- Lab VM 2 (Windows Server 2022, IP 10.0.0.9)
- - Nested VM 2-1 (Ubuntu 20.04, IP 192.168.0.102)
- - Nested VM 2-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
+ - Nested VM 2-1 (Ubuntu 20.04, IP 192.168.0.102)
+ - Nested VM 2-2 (Windows 11, IP 192.168.0.103, remote desktop enabled and allowed)
To connect with SSH from lab VM 2 to nested lab VM 1-1:
To connect with RDP from lab VM 2, or its nested VMs, to nested lab VM 1-2:
1. On lab VM 2, or its nested VMs, connect using RDP to `10.0.0.8:3390` - > [!IMPORTANT] > Include `~\` in front of the user name. For example, `~\Administrator` or `~\user1`. ## Recommendations
+### Non-admin user
+
+You may choose to create a non-admin user when creating your lab. There are a few things to note when using nested virtualization with a non-admin account.
+
+- To be able to start or stop VMs, the non-admin user must be added to the **Hyper-V Administrators** group.
+- The non-admin user can't mount drives.
+- The Hyper-V VM files must be saved in a location accessible to the non-admin user.
+ ### Processor compatibility The nested virtualization VM sizes may use different processors as shown in the following table:
When you create the nested virtual machines, choose the [VHDX file format](/open
### Configure the number of vCPUs for nested VMs
-By default, when you create the nested virtual machine, only one virtual CPU (*vCPU*) is assigned. Depending on the operating system, and software of the nested VM, you might have to increase the number of vCPUs.
+By default, when you create the nested virtual machine, only one virtual CPU (*vCPU*) is assigned. Depending on the operating system, and software of the nested VM, you might have to increase the number of vCPUs. For more information about managing and setting nested VM CPU resources, see [Hyper-V processor performance](/windows-server/administration/performance-tuning/role/hyper-v-server/processor-performance) or the [Set-VM](/powershell/module/hyper-v/set-vm) PowerShell cmdlet.
### Configure the assigned memory for nested VMs
-When you create the nested virtual machine, the minimum assigned memory might not be sufficient for the operating system and installed software of the nested VM. You might have to increase the minimum amount of assigned memory for the nested VM.
+When you create the nested virtual machine, the minimum assigned memory might not be sufficient for the operating system and installed software of the nested VM. You might have to increase the minimum amount of assigned memory for the nested VM. For more information about managing nested VM memory and other resources, see [Hyper-V Host CPU Resource Management](/windows-server/virtualization/hyper-v/manage/manage-hyper-v-minroot-2016) or the [Set-VM](/powershell/module/hyper-v/set-vm) PowerShell cmdlet.
### Best practices for running Linux on Hyper-V
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Use the following steps to enable access to data stored in Azure Blob and File s
For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
-1. __Grant the workspace managed identity the 'Reader' role for storage private endpoints__. If your storage service uses a __private endpoint__, grant the workspace's managed identity __Reader__ access to the private endpoint. The workspace's managed identity in Azure AD has the same name as your Azure Machine Learning workspace.
+1. __Grant the workspace managed identity the 'Reader' role for storage private endpoints__. If your storage service uses a __private endpoint__, grant the workspace's managed identity __Reader__ access to the private endpoint. The workspace's managed identity in Azure AD has the same name as your Azure Machine Learning workspace. A private endpoint is necessary for both __blob and file__ storage types.
> [!TIP] > Your storage account may have multiple private endpoints. For example, one storage account may have separate private endpoint for blob, file, and dfs (Azure Data Lake Storage Gen2). Add the managed identity to all these endpoints.
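A hedged CLI sketch of that role assignment; the identity object ID and private endpoint resource ID are placeholders you'd look up first:

```bash
# Grant the workspace's managed identity Reader on a storage private endpoint
az role assignment create \
  --assignee "<workspace-managed-identity-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/privateEndpoints/<private-endpoint-name>"
```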
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
The following table shows more limits in the platform. Reach out to the Azure Ma
| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning compute (AmlCompute) cluster | 100 nodes but configurable up to 65,000 nodes if your cluster is set up to scale as mentioned previously | | Nodes in a single Azure Machine Learning compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4,000 nodes | | Nodes in a single Azure Machine Learning compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes |
-| Nodes in a single MPI **run** on an Azure Machine Learning compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes |
+| Nodes in a single MPI **run** on an Azure Machine Learning compute (AmlCompute) cluster | 100 nodes |
| Job lifetime | 21 days<sup>1</sup> | | Job lifetime on a low-priority node | 7 days<sup>2</sup> | | Parameter servers per node | 1 |
Azure Machine Learning kubernetes online endpoints have limits described in the
| Number of deployments per endpoint | 20 | | Max request time-out at endpoint level | 300 seconds |
-The sum of kubernetes online endpoints and managed online endpoints under each subscription can't exceed 50. Similarly, the sum of kubernetes online deployments and managed online deployments under each subscription can't exceed 200.
+The sum of Kubernetes online endpoints, managed online endpoints, and managed batch endpoints under each subscription can't exceed 50. Similarly, the sum of Kubernetes online deployments, managed online deployments, and managed batch deployments under each subscription can't exceed 200.
### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
Next, use the YAML file to create and register this custom environment in your w
For more information on creating and using environments, see [Create and use software environments in Azure Machine Learning](how-to-use-environments.md).
+##### [Optional] Create a custom environment with Intel® Extension for Scikit-Learn
+
+Want to speed up your scikit-learn scripts on Intel hardware? Try adding [Intel® Extension for Scikit-Learn](https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html) to your conda YAML file and following the steps detailed above. We'll show you how to enable these optimizations later in this example:
+[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-with-sklearn.ipynb?name=make_sklearnex_conda_file)]
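For reference, a minimal sketch of such a conda file; the environment name, channels, and versions are illustrative only and aren't the exact file used by the sample notebook:

```bash
# Write an illustrative conda environment file that adds scikit-learn-intelex alongside scikit-learn
cat > conda_dependencies.yml <<'EOF'
name: sklearn-intelex-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - scikit-learn
  - scikit-learn-intelex
  - pip
EOF
```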
+ ## Configure and submit your training job In this section, we'll cover how to run a training job, using a training script that we've provided. To begin, you'll build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in Azure Machine Learning.
Next, create the script file in the source directory.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-with-sklearn.ipynb?name=create_script_file)]
+#### [Optional] Enable Intel® Extension for Scikit-Learn optimizations for more performance on Intel hardware
+
+If you have installed Intel® Extension for Scikit-Learn (as demonstrated in the previous section), you can enable the performance optimizations by adding two lines of code to the top of the script file, as shown below.
+
+To learn more about Intel® Extension for Scikit-Learn, visit the package's [documentation](https://intel.github.io/scikit-learn-intelex/).
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-with-sklearn.ipynb?name=create_sklearnex_script_file)]
+ ### Build the training job Now that you have all the assets required to run your job, it's time to build it using the Azure Machine Learning Python SDK v2. For this, we'll be creating a `command`.
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Supported via Azure Machine Learning Kubernetes, Azure Machine Learning compute
* TensorFlow * MPI
-The MPI distribution can be used for Horovod or custom multinode logic. Additionally, Apache Spark is supported via Azure Synapse Analytics Spark clusters (preview).
-
-> [!IMPORTANT]
-> Using Apache Spark via Azure Synapse Analytics Spark clusters is in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The MPI distribution can be used for Horovod or custom multinode logic. Additionally, Apache Spark is supported via [serverless Spark compute and attached Synapse Spark pool](apache-spark-azure-ml-concepts.md) that leverage Azure Synapse Analytics Spark clusters.
See [Distributed training with Azure Machine Learning](concept-distributed-training.md).
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
You might need to select **Refresh** to see the new folder and script in your **
:::image type="content" source="media/tutorial-azure-ml-in-a-day/refresh.png" alt-text="Screenshot shows the refresh icon.":::
+### [Optional] Enable Intel® Extension for Scikit-Learn optimizations for more performance on Intel hardware
+
+Want to speed up your scikit-learn scripts on Intel hardware? Try enabling [Intel® Extension for Scikit-Learn](https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html) in your training script. Intel® Extension for Scikit-Learn is already installed in the Azure Machine Learning curated environment used in this tutorial, so no additional installation is needed.
+
+To learn more about Intel® Extension for Scikit-Learn, visit the package's [documentation](https://intel.github.io/scikit-learn-intelex/).
+
+If you want to use Intel® Extension for Scikit-Learn as part of the training script described above, you can enable the performance optimizations by adding two lines of code to the top of the script file, as shown below.
++
+```python
+%%writefile {train_src_dir}/main.py
+import os
+import argparse
+
+# Import and enable Intel Extension for Scikit-learn optimizations
+# where possible
+from sklearnex import patch_sklearn
+patch_sklearn()
+
+import pandas as pd
+import mlflow
+import mlflow.sklearn
+from sklearn.ensemble import GradientBoostingClassifier
+from sklearn.metrics import classification_report
+from sklearn.model_selection import train_test_split
+
+def main():
+ """Main function of the script."""
+
+ # input and output arguments
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data", type=str, help="path to input data")
+ parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
+ parser.add_argument("--n_estimators", required=False, default=100, type=int)
+ parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
+ parser.add_argument("--registered_model_name", type=str, help="model name")
+ args = parser.parse_args()
+
+ # Start Logging
+ mlflow.start_run()
+
+ # enable autologging
+ mlflow.sklearn.autolog()
+
+ ###################
+ #<prepare the data>
+ ###################
+ print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
+
+ print("input data:", args.data)
+
+ credit_df = pd.read_csv(args.data, header=1, index_col=0)
+
+ mlflow.log_metric("num_samples", credit_df.shape[0])
+ mlflow.log_metric("num_features", credit_df.shape[1] - 1)
+
+ train_df, test_df = train_test_split(
+ credit_df,
+ test_size=args.test_train_ratio,
+ )
+ ####################
+ #</prepare the data>
+ ####################
+
+ ##################
+ #<train the model>
+ ##################
+ # Extracting the label column
+ y_train = train_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_train = train_df.values
+
+ # Extracting the label column
+ y_test = test_df.pop("default payment next month")
+
+ # convert the dataframe values to array
+ X_test = test_df.values
+
+ print(f"Training with data of shape {X_train.shape}")
+
+ clf = GradientBoostingClassifier(
+ n_estimators=args.n_estimators, learning_rate=args.learning_rate
+ )
+ clf.fit(X_train, y_train)
+
+ y_pred = clf.predict(X_test)
+
+ print(classification_report(y_test, y_pred))
+ ###################
+ #</train the model>
+ ###################
+
+ ##########################
+ #<save and register model>
+ ##########################
+ # Registering the model to the workspace
+ print("Registering the model via MLFlow")
+ mlflow.sklearn.log_model(
+ sk_model=clf,
+ registered_model_name=args.registered_model_name,
+ artifact_path=args.registered_model_name,
+ )
+
+ # Saving the model to a file
+ mlflow.sklearn.save_model(
+ sk_model=clf,
+ path=os.path.join(args.registered_model_name, "trained_model"),
+ )
+ ###########################
+ #</save and register model>
+ ###########################
+
+ # Stop Logging
+ mlflow.end_run()
+
+if __name__ == "__main__":
+ main()
+```
++ ## Create a compute cluster, a scalable way to run a training job > [!NOTE]
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| :-- | :- | | **Operating system** | All [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems that are supported by Azure. | **Windows Server 2003** | For VMs running Windows Server 2003, you need to [install Hyper-V Integration Services](prepare-windows-server-2003-migration.md) before migration. |
-**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br><br> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - Cent OS 9.x (Release and Stream), 8.x (Release and Stream), 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> - For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
+**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br><br> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> - For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
| **Required changes for Azure** | Some VMs might require changes so that they can run in Azure. Make adjustments manually before migration. The relevant articles contain instructions about how to do this. | | **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. |
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
ms. Previously updated : 09/01/2023 Last updated : 09/29/2023
The table summarizes VMware vSphere hypervisor requirements.
**VMware** | **Details** |
-**VMware vCenter Server** | Version 5.5, 6.0, 6.5, 6.7, 7.0.
+**VMware vCenter Server** | Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0.
**VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7, 7.0, 8.0. **vCenter Server permissions** | Agentless migration uses the [Migrate Appliance](migrate-appliance.md). The appliance needs these permissions in vCenter Server:<br/><br/> - **Datastore.Browse** (Datastore -> Browse datastore): Allow browsing of VM log files to troubleshoot snapshot creation and deletion.<br/><br/> - **Datastore.FileManagement** (Datastore -> Low level file operations): Allow read/write/delete/rename operations in the datastore browser, to troubleshoot snapshot creation and deletion.<br/><br/> - **VirtualMachine.Config.ChangeTracking** (Virtual machine -> Disk change tracking): Allow enable or disable change tracking of VM disks, to pull changed blocks of data between snapshots.<br/><br/> - **VirtualMachine.Config.DiskLease** (Virtual machine -> Disk lease): Allow disk lease operations for a VM, to read the disk using the VMware vSphere Virtual Disk Development Kit (VDDK).<br/><br/> - **VirtualMachine.Provisioning.DiskRandomRead** (Virtual machine -> Provisioning -> Allow read-only disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.DiskRandomAccess** (Virtual machine -> Provisioning -> Allow disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.GetVmFiles** (Virtual machine -> Provisioning -> Allow virtual machine download): Allows read operations on files associated with a VM, to download the logs and troubleshoot if failure occurs.<br/><br/> - **VirtualMachine.State.\*** (Virtual machine -> Snapshot management): Allow creation and management of VM snapshots for replication.<br/><br/> - **VirtualMachine.GuestOperations.\*** (Virtual machine -> Guest operations): Allow Discovery, Software Inventory, and Dependency Mapping on VMs.<br/><br/> -**VirtualMachine.Interact.PowerOff** (Virtual machine > Interaction > Power off): Allow the VM to be powered off during migration to Azure. **Multiple vCenter Servers** | A single appliance can connect to up to 10 vCenter Servers.
The table summarizes agentless migration requirements for VMware vSphere VMs.
**IPv6** | Not supported. **Target disk** | VMs can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure. **Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with one appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. Supported for RHEL 6, RHEL 7, CentOS 7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. Supported for RHEL 6, RHEL 7, CentOS 7.9, CentOS 7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04
> [!NOTE] > Ensure that the following special characters are not passed in any credentials as they are not supported for SSO passwords:
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 05/15/2023 Last updated : 09/29/2023
Learn more about [assessments](concepts-assessment-calculation.md).
VMware | Details |
-**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
+**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, web apps and SQL discovery, the account must have privileges for guest operations on VMware VMs. ## Server requirements
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-agentless-migration.md
Azure Migrate automatically handles these configuration changes for the followin
- Windows Server 2008 or later - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x-- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.7, 7.6, 7.5, 7.4, 6.x
+- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x
- SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 - Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022)
mysql Tutorial Power Automate With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md
Previously updated : 1/15/2023 Last updated : 9/29/2023 # Tutorial: Create a Power Automate flow app with Azure Database for MySQL - Flexible Server
After saving the flow, we need to test it and run the flow app.
:::image type="content" source="./media/tutorial-power-automate-with-mysql/run-flow-to-get-rows-from-table.png" alt-text="Screenshot that shows output of the run.":::
+## Triggers
+
+Azure Database for MySQL connector supports triggers for when an item is created or modified in MySQL. A trigger is an event that starts a cloud flow. Before using triggers, make sure your table schema has `created_at` and `updated_at` columns of type timestamp. The triggers use these columns to determine when an item was created or modified and to initiate the automated flow. For a sketch of adding these columns, see the example after the table.
+
+|Trigger|Description|
+|-|-|
+|[When an item is created](/connectors/azuremysql/#when-an-item-is-created)|Triggers a flow when an item is created in MySQL. (Available only for Power Automate.)|
+|[When an item is modified](/connectors/azuremysql/#when-an-item-is-modified)|Triggers a flow when an item is modified in MySQL. (Available only for Power Automate.)|
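A minimal sketch of adding the timestamp columns the triggers rely on; the server, database, and table names here are placeholders:

```bash
# Add created_at / updated_at timestamp columns to an existing table
mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p -D mydb -e "
  ALTER TABLE orders
    ADD COLUMN created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ADD COLUMN updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;"
```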
+ ## Next steps [Azure Database for MySQL connector](/connectors/azuremysql/) reference
nat-gateway Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat-and-azure-services.md
To use NAT gateway with Azure App services, follow these steps:
5. Assign NAT gateway to the same subnet being used for Virtual network integration with your application(s).
-To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../app-service/networking/nat-gateway-integration.md#configuring-nat-gateway-integration)
+To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../app-service/networking/nat-gateway-integration.md#configure-nat-gateway-integration)
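As a hedged sketch of assigning a NAT gateway to the integration subnet from the CLI; all resource names below are placeholders and your virtual network integration subnet may differ:

```bash
# Create a public IP and NAT gateway, then attach the NAT gateway to the App Service integration subnet
az network public-ip create --resource-group my-rg --name natgw-pip --sku Standard --allocation-method Static

az network nat gateway create --resource-group my-rg --name my-natgateway \
  --public-ip-addresses natgw-pip

az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name integration-subnet --nat-gateway my-natgateway
```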
Important notes about the NAT gateway and Azure App Services integration:
network-watcher Network Watcher Monitor With Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitor-with-azure-automation.md
Before you start this scenario, you must have the following pre-requisites:
### Create the runbook
-The first step to configuring the example is to create the runbook. This example uses a run-as account. To learn about run-as accounts, visit [Authenticate Runbooks with Azure Run As account](../automation/manage-runas-account.md)
+The first step in configuring the example is to create the runbook.
### Step 1
operator-nexus Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-storage.md
spec:
requests: storage: 107Mi storageClassName: nexus-volume
- volumeMode: Filesystem
+ volumeMode: Block
volumeName: testVolume status: accessModes:
operator-nexus Howto Disable Cgroupsv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-disable-cgroupsv2.md
+
+ Title: "Azure Operator Nexus: Disable cgroupsv2 on a Nexus Kubernetes Node"
+description: How-to guide for disabling support for cgroupsv2 on a Nexus Kubernetes Node
++++ Last updated : 09/18/2023+++
+# Disable `cgroupsv2` on Nexus Kubernetes Node
+
+[Control groups][cgroups], or "`cgroups`", allow the Linux operating system to
+allocate resources (CPU shares, memory, I/O, and so on) to a hierarchy of operating
+system processes. These resources can be isolated from other processes and in
+this way enable containerization of workloads.
+
+An enhanced version 2 of control groups ("[cgroupsv2][cgroups2]") was included
+in Linux kernel 4.5. The primary difference between the original `cgroups` v1
+and the newer `cgroups` v2 is that only a single hierarchy of `cgroups` is
+allowed in `cgroups` v2. In addition to this single-hierarchy difference,
+`cgroups` v2 makes some backwards-incompatible changes to the pseudo-filesystem
+that `cgroups` v1 used, for example removing the `tasks` pseudofile and the
+`clone_children` functionality.
+
+Some applications may rely on older `cgroups` v1 behavior, however, and this
+documentation explains how to disable `cgroups` v2 on newer Linux operating
+system images used for Operator Nexus Kubernetes worker nodes.
+
+[cgroups]: https://en.wikipedia.org/wiki/Cgroups
+[cgroups2]: https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
+
+## Nexus Kubernetes 1.27 and beyond
+
+While Kubernetes 1.25 [added support][k8s-cgroupsv2] for `cgroups` v2 within
+the kubelet, in order for `cgroups` v2 to be used it must be enabled in the
+Linux kernel.
+
+Operator Nexus Kubernetes worker nodes run special versions of Microsoft Azure
+Linux (previously called CBL Mariner OS) that correspond to the Kubernetes
+version enabled by that image. The Linux OS image for worker nodes *enables*
+`cgroups` v2 by default in Nexus Kubernetes version 1.27.
+
+`cgroups` v2 *isn't enabled* in versions of Nexus Kubernetes *before* 1.27.
+Therefore, if you're running an earlier version, you don't need to perform the
+steps in this guide to disable `cgroups` v2.
+
+[k8s-cgroupsv2]: https://kubernetes.io/blog/2022/08/31/cgroupv2-ga-1-25/
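To confirm which `cgroups` version a node is currently using, you can check the filesystem type mounted at `/sys/fs/cgroup`; this is the same check the `Daemonset` later in this guide performs:

```bash
# Prints "cgroup2fs" when cgroups v2 is in use, and "tmpfs" when cgroups v1 is in use
stat -fc %T /sys/fs/cgroup/
```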
+
+## Prerequisites
+
+Before proceeding with this how-to guide, it's recommended that you:
+
+ * Refer to the Nexus Kubernetes cluster [QuickStart guide][qs] for a
+ comprehensive overview and steps involved.
+ * Ensure that you meet the outlined prerequisites to ensure smooth
+ implementation of the guide.
+
+[qs]: ./quickstarts-kubernetes-cluster-deployment-bicep.md
+
+## Apply cgroupv2-disabling `Daemonset`
+
+> [!WARNING]
+> If you perform this step on a Kubernetes cluster that already has workloads
+> running on it, any workloads that are running on Kubernetes cluster nodes
+> will be terminated because the `Daemonset` reboots the host machine.
+> Therefore it is highly recommended that you apply this `Daemonset` on a new
+> Nexus Kubernetes cluster before workloads are scheduled on it.
+
+Copy the following `Daemonset` definition to a file on a computer where you can
+execute `kubectl` commands against the Nexus Kubernetes cluster on which you
+wish to disable `cgroups` v2.
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: revert-cgroups
+ namespace: kube-system
+spec:
+ selector:
+ matchLabels:
+ name: revert-cgroups
+ template:
+ metadata:
+ labels:
+ name: revert-cgroups
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: cgroup-version
+ operator: NotIn
+ values:
+ - v1
+ tolerations:
+ - operator: Exists
+ effect: NoSchedule
+ containers:
+ - name: revert-cgroups
+ image: mcr.microsoft.com/cbl-mariner/base/core:1.0
+ command:
+ - nsenter
+ - --target
+ - "1"
+ - --mount
+ - --uts
+ - --ipc
+ - --net
+ - --pid
+ - --
+ - bash
+ - -exc
+ - |
+ CGROUP_VERSION=`stat -fc %T /sys/fs/cgroup/`
+ if [ "$CGROUP_VERSION" == "cgroup2fs" ]; then
+ echo "Using v2, reverting..."
+ sed -i 's/systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all/systemd.unified_cgroup_hierarchy=0/' /boot/grub2/grub.cfg
+ reboot
+ fi
+
+ sleep infinity
+ securityContext:
+ privileged: true
+ hostNetwork: true
+ hostPID: true
+ hostIPC: true
+ terminationGracePeriodSeconds: 0
+```
+
+And apply the `Daemonset`:
+
+```bash
+kubectl apply -f /path/to/daemonset.yaml
+```
+
+The above `Daemonset` applies to all Kubernetes worker nodes in the cluster
+except ones where a `cgroup-version=v1` label has been applied. For those
+worker nodes with `cgroups` v2 enabled, the `Daemonset` modifies the boot
+configuration of the Linux kernel and reboots the machine.
+
+You can monitor the rollout of the `Daemonset` and its effects by executing the
+following script:
+
+```bash
+#!/bin/bash
+
+set -x
+
+# Set the DaemonSet name and label key-value pair
+DAEMONSET_NAME="revert-cgroups"
+NAMESPACE="kube-system"
+LABEL_KEY="cgroup-version"
+LABEL_VALUE="v1"
+LOG_PATTERN="sleep infinity"
+
+# Function to check if all pods are completed
+check_pods_completed() {
+ local pods_completed=0
+
+ # Get the list of DaemonSet pods
+ pod_list=$(kubectl get pods -n "${NAMESPACE}" -l name="${DAEMONSET_NAME}" -o jsonpath='{range.items[*]}{.metadata.name}{"\n"}{end}')
+
+ # Loop through each pod
+ for pod in $pod_list; do
+
+ # Get the logs from the pod
+ logs=$(kubectl logs -n "${NAMESPACE}" "${pod}")
+
+ # Check if the logs end with the specified pattern
+ if [[ $logs == *"${LOG_PATTERN}"* ]]; then
+ ((pods_completed++))
+ fi
+
+ done
+
+ # Return the number of completed pods
+ echo $pods_completed
+}
+
+# Loop until all pods are completed
+while true; do
+ pods_completed=$(check_pods_completed)
+
+ # Get the total number of pods
+ total_pods=$(kubectl get pods -n "${NAMESPACE}" -l name=${DAEMONSET_NAME} --no-headers | wc -l)
+
+ if [ "$pods_completed" -eq "$total_pods" ]; then
+ echo "All pods are completed."
+ break
+ else
+ echo "Waiting for pods to complete ($pods_completed/$total_pods)..."
+ sleep 10
+ fi
+done
+
+# Once all pods are completed, add the label to the nodes
+node_list=$(kubectl get pods -n "${NAMESPACE}" -l name=${DAEMONSET_NAME} -o jsonpath='{range.items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u)
+
+for node in $node_list; do
+ kubectl label nodes "${node}" ${LABEL_KEY}=${LABEL_VALUE}
+ echo "Added label '${LABEL_KEY}:${LABEL_VALUE}' to node '${node}'."
+done
+
+echo "Script completed."
+```
+
+The above script labels the nodes that have had `cgroups` v2 disabled. This
+labeling removes the `Daemonset` from nodes that have already been rebooted
+with the `cgroups` v1 kernel settings.
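As an optional check that isn't part of the script above, you can list the nodes together with the `cgroup-version` label to confirm which ones have been reverted:

```bash
# Show the cgroup-version label as a column for every node in the cluster
kubectl get nodes -L cgroup-version
```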
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
Before you run the commands, you need to set several variables to define the con
| NETWORK_INTERFACE_NAME | The name of the L3 network interface for the virtual machine. | | ADMIN_USERNAME | The username for the virtual machine administrator. | | SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the virtual machine. |
-| CPU_CORES | The number of CPU cores for the virtual machine (even number, max 44 vCPUs) |
-| MEMORY_SIZE | The amount of memory (in GB, max 224 GB) for the virtual machine. |
+| CPU_CORES | The number of CPU cores for the virtual machine (even number, max 46 vCPUs) |
+| MEMORY_SIZE | The amount of memory (in GB, max 224 GB) for the virtual machine. |
| VM_DISK_SIZE | The size (in GB) of the virtual machine disk. | | VM_IMAGE | The URL of the virtual machine image. | | ACR_URL | The URL of the Azure Container Registry. |
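As an illustration only, the variables might be set like this before running the create command; every value below is hypothetical and must be replaced with values that match your environment:

```bash
# Hypothetical values for the deployment variables described in the table above
CPU_CORES=4                                  # even number, up to 46 vCPUs
MEMORY_SIZE=16                               # in GiB, up to 224 GiB
VM_DISK_SIZE=64                              # in GiB
ADMIN_USERNAME="azureuser"
SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)"
NETWORK_INTERFACE_NAME="l3nic1"
```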
az networkcloud virtualmachine create \
After a few minutes, the command completes and returns information about the virtual machine. You've created the virtual machine. You're now ready to use them.
-> [!NOTE]
-> If each server has two CPU chipsets and each CPU chip has 28 cores, then with hyperthreading enabled (default), the CPU chip supports 56 vCPUs. With 8 vCPUs in each chip reserved for infrastructure (OS and agents), the remaining 48 are available for tenant workloads.
- ## Review deployed resources [!INCLUDE [quickstart-review-deployment-cli](./includes/virtual-machine/quickstart-review-deployment-cli.md)]
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
Event Hubs documentation provides guidance on how to write simple consumer apps
## Understanding telemetry points
+### Current Telemetry Schema Version: 4.0
The ground station provides telemetry using Avro as a schema. The schema is below: ```json
The ground station provides telemetry using Avro as a schema. The schema is belo
"name": "contactPlatformIdentifier", "type": [ "null", "string" ] },
+ {
+ "name": "groundStationName",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "antennaType",
+ "type": {
+ "name": "antennaTypeEnum",
+ "type": "enum",
+ "symbols": [
+ "Microsoft",
+ "KSAT"
+ ]
+ }
+ },
+ {
+ "name": "antennaId",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "spacecraftName",
+ "type": [ "null", "string" ]
+ },
{ "name": "gpsTime", "type": [ "null", "double" ]
The ground station provides telemetry using Avro as a schema. The schema is belo
"name": "contactTleLine2", "type": "string" },
- {
- "name": "antennaType",
- "type": {
- "name": "antennaTypeEnum",
- "type": "enum",
- "symbols": [
- "Microsoft",
- "KSAT"
- ]
- }
- },
{ "name": "links", "type": [
The ground station provides telemetry using Avro as a schema. The schema is belo
"name": "antennaLink", "type": "record", "fields": [
+ {
+ "name": "name",
+ "type": [ "null", "string" ]
+ },
{ "name": "direction", "type": {
The ground station provides telemetry using Avro as a schema. The schema is belo
"name": "antennaLinkChannel", "type": "record", "fields": [
+ {
+ "name": "name",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "modemName",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "digitizerName",
+ "type": [ "null", "string" ]
+ },
{ "name": "endpointName", "type": "string"
The ground station provides telemetry using Avro as a schema. The schema is belo
"name": "inputRfPowerDbm", "type": [ "null", "double" ] },
+ {
+ "name": "outputRfPowerDbm",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "packetRate",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "gapCount",
+ "type": [ "null", "double" ]
+ },
{ "name": "modemLockStatus", "type": [
The ground station provides telemetry using Avro as a schema. The schema is belo
| version | Manually set internally | | Release version of the telemetry | | contactID | Contact resource | | Identification number of the contact | | contactPlatformIdentifier | Contact resource | | |
+| groundStationName | Contact resource | | Name of the ground station |
+| antennaType | Respective 1P/3P telemetry builders set this value | MICROSOFT, KSAT, VIASAT | Antenna network used for the contact. |
+| antennaId | Contact resource | | Human-readable name of antenna ID |
+| spacecraftName | Parsed from Contact Platform Identifier | | Name of spacecraft |
| gpsTime | Conversion of utcTime | | Time in GPS time that the customer telemetry message was generated. | | utcTime | Current time | | Time in UTC time that the customer telemetry message was generated. | | azimuthDecimalDegrees | ACU: AntennaAzimuth | | Antenna's azimuth in decimal degrees. | | elevationDecimalDegrees | ACU: AntennaElevation | | Antenna's elevation in decimal degrees. |
-| contactTleLine1 | ACU: Satellite[0].Model.Value | • String: TLE <br> • "Empty TLE Line 1" if metric is null | First line of the TLE used for the contact. |
-| contactTLeLine2 | ACU: Satellite[0].Model.Value | • String: TLE <br> • "Empty TLE Line 2" if metric is null | Second line of the TLE used for the contact. |
-| antennaType | Respective 1P/3P telemetry builders set this value | MICROSOFT, KSAT, VIASAT | Antenna network used for the contact. |
+| contactTleLine1 | ACU: Satellite[0].Model.Value | String of TLE Line 1 | First line of the TLE used for the contact. |
+| contactTLeLine2 | ACU: Satellite[0].Model.Value | String of TLE Line 2 | Second line of the TLE used for the contact. |
+| name [Link-level] | Contact profile link | | Name of the link |
| direction | Contact profile link | Uplink, Downlink | Direction of the link used for the contact. | | polarization | Contact profile link | RHCP, LHCP, DualRhcpLhcp, LinearVertical, LinearHorizontal | Polarization of the link used for the contact. |
-| uplinkEnabled | ACU: SBandCurrent or UHFTotalCurrent | • NULL (Invalid CenterFrequencyMhz or Downlink direction) <br> • False (Bands other than S and UHF or Amp Current < Threshold) <br> • True (S/UHF-band, Uplink, Amp Current > Threshold) | Idicates whether uplink was enabled for the contact. |
+| uplinkEnabled | ACU: SBandCurrent or UHFTotalCurrent | • NULL (Invalid CenterFrequencyMhz or Downlink direction) <br> • False (Bands other than S and UHF or Amp Current < Threshold) <br> • True (S/UHF-band, Uplink, Amp Current > Threshold) | Indicates whether uplink was enabled for the contact. |
+| name [Channel-level] | Contact profile link channel | | Name of the channel |
+| modemName | Modem | | Name of modem device |
+| digitizerName | Digitizer | | Name of digitizer device |
| endpointName | Contact profile link channel | | Name of the endpoint used for the contact. | | inputEbN0InDb | Modem: measuredEbN0 | • NULL (Modem model other than QRadio or QRx) <br> • Double: Input EbN0 | Input energy per bit to noise power spectral density in dB. | | inputEsN0InDb | Not used in 1P telemetry | NULL (Not used in 1P telemetry) | Input energy per symbol to noise power spectral density in dB. |
-| inputRfPowerDbm | Digitizer: inputRfPower | • NULL (Uplink) <br> • 0 (Digitizer driver other than SNNB or SNWB) <br> • Double: Input Rf Power | Input RF power in dBm. |
+| inputRfPowerDbm | Digitizer: inputRfPower | • NULL (Uplink or Digitizer driver other than SNNB or SNWB) <br> • Double: Input Rf Power | Input RF power in dBm. |
+| outputRfPowerDbm | Digitizer: outputRfPower | • NULL (Downlink or Digitizer driver other than SNNB or SNWB) <br> • Double: Output Rf Power | Output RF power in dBm. |
+| outputPacketRate | Digitizer: rfOutputStream[0].measuredPacketRate | • NULL (Downlink or Digitizer driver other than SNNB or SNWB) <br> • Double: Output Packet Rate | Measured packet rate for Uplink |
+| gapCount | Digitizer: rfOutputStream[0].gapCount | • NULL (Downlink or Digitizer driver other than SNNB or SNWB) <br> • Double: Gap count | Packet gap count for Uplink |
| modemLockStatus | Modem: carrierLockState | • NULL (Modem model other than QRadio or QRx; couldn't parse lock status Enum) <br> • Empty string (if metric reading was null) <br> • String: Lock status | Confirmation that the modem was locked. |
-| commandsSent | Modem: commandsSent | • 0 (if not Uplink and QRadio) <br> • Double: # of commands sent | Confirmation that commands were sent during the contact. |
+| commandsSent | Modem: commandsSent | • NULL (if not Uplink and QRadio) <br> • Double: # of commands sent | Confirmation that commands were sent during the contact. |
## Changelog
+2023-10-03 - Introduced version 4.0. Updated schema to include uplink packet metrics and names of infrastructure in use (ground station, antenna, spacecraft, modem, digitizer, link, channel) <br>
2023-06-05 - Updated schema to show metrics under channels instead of links. ## Next steps
peering-service Onboarding Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/onboarding-model.md
- Title: Azure Peering Service onboarding model
-description: Get started to onboard Azure Peering Service.
---- Previously updated : 07/23/2023--
-# Onboarding Peering Service model
-
-Onboarding process of Peering Service is composed of two models:
---
-Action plans for the above listed models are described as follows:
-
-| **Step** | **Action**| **What you get**|
-|--|||
-| 1 | Customer to provision the connectivity from a connectivity partner (no interaction with Microsoft). | An Internet provider who is well connected to Microsoft and meets the technical requirements for performant and reliable connectivity to Microsoft. |
-| 2 (Optional) | Customer registers locations into the Azure portal. A location is defined by: ISP/IXP Name, Physical location of the customer site (state level), IP Prefix given to the location by the Service Provider or the enterprise. | Telemetry: Internet Route monitoring, traffic prioritization from Microsoft to the user's closest edge POP location. |
-
-## Onboarding Peering Service connection
-
-To onboard Peering Service connection:
--- Work with Internet Service provider (ISP) or Internet Exchange (IX) Partner to obtain Peering Service to connect your network with the Microsoft network.--- Ensure the [connectivity providers](location-partners.md) are partnered with Microsoft for Peering Service. -
-## Onboarding Peering Service connection telemetry
-
-Customers can opt for Peering Service telemetry such as BGP route analytics to monitor networking latency and performance when accessing the Microsoft network by registering the connection into the Azure portal.
-
-To onboard Peering Service connection telemetry, customer must register the Peering Service connection into the Azure portal. Refer to the [Manage a Peering Service connection using the Azure portal](azure-portal.md) to learn the procedure.
-
-Following that, you can measure telemetry by referring [here](measure-connection-telemetry.md).
-
-## Next steps
-
-To learn step by step process on how to register Peering Service connection, see [Manage a Peering Service connection using the Azure portal](azure-portal.md).
-
-To learn about measure connection telemetry, see [Connection telemetry](connection-telemetry.md).
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides steps to perform scaling operations for compute and storage. You will be able to change your compute tiers between burstable, general purpose, and memory optimized SKUs, including choosing the number of vCores that is suitable to run your application. You can also scale up your storage. Expected IOPS are shown based on the compute tier, vCores and the storage capacity. The cost estimate is also shown based on your selection.
+This article provides steps to perform scaling operations for compute and storage. You are able to change your compute tiers between burstable, general purpose, and memory optimized SKUs, including choosing the number of vCores that is suitable to run your application. You can also scale up your storage. Expected IOPS are shown based on the compute tier, vCores and the storage capacity. The cost estimate is also shown based on your selection.
> [!IMPORTANT] > You cannot scale down the storage.
-## Pre-requisites
+## Prerequisites
To complete this how-to guide, you need:
To complete this how-to guide, you need:
Follow these steps to choose the compute tier.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to scale.
2. Click **Compute+storage**. 3. A page with current settings is displayed.
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/click-compute-storage.png" alt-text="compute+storage view":::
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/click-compute-storage.png" alt-text="Screenshot that shows compute+storage view.":::
4. You can choose the compute class between burstable, general purpose, and memory optimized tiers.
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/list-compute-tiers.png" alt-text="list compute tiers":::
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/list-compute-tiers.png" alt-text="Screenshot that lists compute tiers.":::
-5. If you are good with the default vCores and memory sizes, you can skip the next step.
+5. If you're good with the default vCores and memory sizes, you can skip the next step.
6. If you want to change the number of vCores, you can click the drop-down of **Compute size** and click the desired number of vCores/Memory from the list.
Follow these steps to choose the compute tier.
:::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-burstable-dropdown.png" alt-text="burstable compute"::: - General purpose compute tier:
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-general-purpose-dropdown.png" alt-text="general-purpose compute":::
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-general-purpose-dropdown.png" alt-text="Screenshot that shows general-purpose compute.":::
- Memory optimized compute tier:
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-memory-optimized-dropdown.png" alt-text="memory optimized compute":::
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-memory-optimized-dropdown.png" alt-text="Screenshot that shows memory optimized compute.":::
7. Click **Save**.
-8. You will see a confirmation message. Click **OK** if you want to proceed.
+8. You see a confirmation message. Click **OK** if you want to proceed.
9. You receive a notification that the scaling operation is in progress.
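+
+If you prefer scripting to the portal, the same compute change can be made with the Azure CLI. This is a minimal sketch, not the only supported method; the resource group, server name, tier, and SKU name are placeholders that you replace with your own values.
+
+```powershell
+# Scale a flexible server to the General Purpose tier with a Standard_D4s_v3 compute size (4 vCores)
+az postgres flexible-server update --resource-group <resource-group> --name <server-name> --tier GeneralPurpose --sku-name Standard_D4s_v3
+```
+
+You can list the SKU names available in your region with `az postgres flexible-server list-skus --location <location>`.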
-## Scaling storage size
+## Manual storage scaling
Follow these steps to increase your storage size.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server for which you want to increase the storage size.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server for which you want to increase the storage size.
2. Click **Compute+storage**. 3. A page with current settings is displayed.
-4. The field **Storage size in GiB** with a slide-bar is shown with the current size.
+
+4. Select the **Storage size in GiB** dropdown and choose your new desired size.
+
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/storage-scaleup.png" alt-text="Screenshot that shows storage scale up.":::
+
+5. If you're good with the storage size, click **Save**.
+
+6. Most disk scaling operations are **online**: as soon as you click **Save**, the scaling process starts without any downtime. Some scaling operations are **offline**, and you see the following server restart message. Click **Continue** if you want to proceed.
+
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/offline-scaling.png" alt-text="Screenshot that shows offline scaling.":::
+
+7. You receive a notification that the scaling operation is in progress.
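+
+The storage increase can also be scripted. Below is a minimal sketch, assuming a target size of 256 GiB; the resource group and server name are placeholders, and remember that storage can only be scaled up, never down.
+
+```powershell
+# Increase the provisioned storage to 256 GiB (storage cannot be scaled back down afterwards)
+az postgres flexible-server update --resource-group <resource-group> --name <server-name> --storage-size 256
+```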
++
+## Storage autogrow
+Follow these steps to enable storage autogrow.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server for which you want to enable storage autogrow.
+2. Click **Compute+storage**.
+
+3. A page with current settings is displayed.
+
+
+4. Select the **Storage Auto-growth** option.
+
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/storage-autogrow.png" alt-text="Screenshot that shows storage autogrow.":::
+
+5. Click **Save**.
+6. You receive a notification that the storage autogrow update is in progress.
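+
+If you manage servers from scripts, storage autogrow can also be set outside the portal. The sketch below assumes a `--storage-auto-grow` parameter on `az postgres flexible-server update`; the flag name is an assumption here, so confirm it with `az postgres flexible-server update --help` for your CLI version.
+
+```powershell
+# Enable storage autogrow on an existing server (flag name assumed; verify with --help)
+az postgres flexible-server update --resource-group <resource-group> --name <server-name> --storage-auto-grow Enabled
+```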
-5. Slide the bar to your desired size. Corresponding IOPS number is shown. The IOPS is dependent on the compute tier and size. The cost information is also shown.
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/storage-scaleup.png" alt-text="storage scale up":::
-6. If you are good with the storage size, click **Save**.
-7. You will see a confirmation message. Click **OK** if you want to proceed.
-8. A notification about the scaling operation in progress.
## Next steps - Learn about [business continuity](./concepts-business-continuity.md) - Learn about [high availability](./concepts-high-availability.md)-- Learn about [backup and recovery](./concepts-backup-restore.md)
+- Learn about [Compute and Storage](./concepts-compute-storage.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Jio India West | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: | | Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: | | Korea South | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Poland Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | North Europe | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 9/20/2023
This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL
+## Release: September 2023
+* General availability of [Storage auto-grow](./concepts-compute-storage.md#storage-auto-grow-preview) for Azure Database for PostgreSQL – Flexible Server.
+ ## Release: August 2023 * Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup> * General availability of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics), [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics), [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) and [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL – Flexible Server.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table lists the different tools available for performing the migra
The next section of the document gives an overview of the Single to Flex Migration tool, its implementation, limitations, and the experience that makes it the recommended tool to perform migrations from single to flexible server. > [!NOTE]
-> The Single to Flex Migration tool currently supports only **Offline** migrations. Support for online migrations will be introduced later in the tool.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports only **Offline** migrations. Support for online migrations will be introduced later in the tool.
## Single to Flexible Migration tool - Overview
Along with data migration, the tool automatically provides the following built-i
## Limitations - You can have only one active migration to your flexible server.-- You can select a max of eight databases in one migration attempt. If you've more than eight databases, you must wait for the first migration to be complete before initiating another migration for the rest of the databases. Support for migration of more than eight databases in a single migration will be introduced later.-- The source and target server must be in the same Azure region. Cross region migrations are not supported.
+- The source and target server must be in the same Azure region. Cross-region migrations are supported only for servers in China regions.
- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details and firewall rules.-- The migration tool shows the number of tables copied from source to target server. You need to validate the data in target server post migration.
+- The migration tool shows the number of tables copied from source to target server. You need to manually validate the data in the target server after migration.
- The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance. > [!NOTE]
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
The `create` parameters that go into the json file format are as shown below:
| `adminCredentials` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers. | `sourceServerUserName` | Required | The default value is the admin user created during the creation of single server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the source server used for performing the migration. This user should have necessary privileges and ownership on the database objects involved in the migration and should be a member of **azure_pg_admin** role. | | `targetServerUserName` | Required | The default value is the admin user created during the creation of flexible server and the password provided will be used for authentication against this user. In case you are not using the default user, this parameter is the user or role on the target server used for performing the migration. This user should be a member of **azure_pg_admin**, **pg_read_all_settings**, **pg_read_all_stats**,**pg_stat_scan_tables** roles and should have the **Create role, Create DB** attributes. |
-| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
+| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. |
| `overwriteDbsInTarget` | Required | When set to true, if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. | | `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. | | `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
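+
+As a rough illustration of how the JSON file described above is consumed, the migration is started by passing the file to the migration `create` command. This is a sketch only; verify the exact command shape and flag names with `az postgres flexible-server migration create --help`, and replace all placeholder values.
+
+```powershell
+# Start an offline migration using a properties file built from the parameters above (all values are placeholders)
+az postgres flexible-server migration create --subscription <subscription-id> --resource-group <resource-group> --name <target-flexible-server> --migration-name <migration-name> --properties "<path-to-json-file>"
+```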
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The **Source** tab prompts you to give details related to the Single Server that
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-source.png" alt-text="Screenshot of source database server details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-source.png":::
-After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. Note that you can migrate databases from a Single Server to a target Flexible Server in the same region - cross region migrations aren't supported.
+After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. Note that you can migrate databases from a Single Server to a target Flexible Server in the same region - cross region migrations are supported only in China regions.
After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are populated automatically. The server admin login name is the admin username used to create the Single Server. In the **Password** box, enter the password for that admin user. The migration tool performs the migration of single server databases as the admin user.
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
Azure Database for PostgreSQL currently supports the following major versions:
## PostgreSQL version 11
-The current minor release is 11.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/release-11-17.html) to learn more about improvements and fixes in this minor release.
+The current minor release is 11.18. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/release-11-18.html) to learn more about improvements and fixes in this minor release.
## PostgreSQL version 10
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
The packet core instances in the Azure Private 5G Core service run on an Arc-ena
> Make a note of the Azure Stack Edge's resource group. The AKS cluster and custom location, created in this procedure, must belong to this resource group. - Review [Azure Stack Edge virtual machine sizing](azure-stack-edge-virtual-machine-sizing.md#azure-stack-edge-virtual-machine-sizing) to ensure your ASE has enough space available to commission the cluster.
-## Enter a minishell session
+## Configure Kubernetes for Azure Private MEC on the Azure Stack Edge device
-You need to run minishell commands on Azure Stack Edge during this procedure. You must use a Windows machine that is on a network with access to the management port of the ASE. You should be able to view the ASE local UI to verify you have access.
+These steps modify the Kubernetes cluster on the Azure Stack Edge device to optimize it for Azure Private MEC workloads.
-> [!TIP]
-> To access the local UI, see [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md).
-
-### Enable WinRM on your machine
-
-The following process uses PowerShell and needs WinRM to be enabled on your machine. Run the following command from a PowerShell window in Administrator mode:
-```powershell
-winrm quickconfig
-```
-WinRM may already be enabled on your machine, as you only need to do it once. Ensure your network connections are set to Private or Domain (not Public), and accept any changes.
-
-### Start the minishell session
-
-1. From a PowerShell window, enter the ASE management IP address (including quotation marks, for example `"10.10.5.90"`):
- ```powershell
- $ip = "*ASE IP address*"
-
- $sessopt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
-
- $minishellSession = New-PSSession -ComputerName $ip -ConfigurationName "Minishell" -Credential ~\EdgeUser -UseSSL -SessionOption $sessopt
- ```
-
-1. At the prompt, enter your Azure Stack Edge password. Ignore the following message:
-
- `WARNING: The Windows PowerShell interface of your device is intended to
- be used only for the initial network configuration. Please
- engage Microsoft Support if you need to access this interface
- to troubleshoot any potential issues you may be experiencing.
- Changes made through this interface without involving Microsoft
- Support could result in an unsupported configuration.`
-
-You now have a minishell session set up ready to enable your Azure Kubernetes Service in the next step.
-
-> [!TIP]
-> If there is a network change, the session can break. Run `Get-PSSession` to view the state of the session. If it is still connected, you should still be able to run minishell commands. If it is broken or disconnected, run `Remove-PSSession` to remove the session locally, then start a new session.
-
-## Enable Azure Kubernetes Service on the Azure Stack Edge device
-
-Run the following commands at the PowerShell prompt, specifying the object ID you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
+1. In the local UI, select **Kubernetes** in the left-hand menu.
+2. Under **Choose the option that best describes your scenario**, select **an Azure Private MEC solution in your environment**.
+3. On the **Workload confirmation** popup, select **I confirm I am running Azure Private MEC in my environment**, and click **Apply** to close the popup.
+4. Click **Apply** to save the changes.
-```powershell
-Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}
-Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f}
-```
-
-Once you've run this command, you should see an updated option in the local UI ΓÇô **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image.
+Once you've applied these changes, you should see an updated option in the local UI – **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image.
:::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview.png" alt-text="Screenshot of configuration menu, with Kubernetes (Preview) highlighted.":::
You can input all the settings on this page before selecting **Apply** at the bo
1. Select **Add virtual network** and fill in the side panel: - **Virtual switch**: select **vswitch-port3** for N2, N3 and up to four DNs, and select **vswitch-port4** for up to six DNs. - **Name**: *N2*, *N3*, or *N6-DNX* (where *X* is the DN number 1-10).
- - **VLAN**: 0
- - **Subnet mask** and **Gateway**: Use the correct subnet mask and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
- - For example, *255.255.255.0* and *10.232.44.1*
+ - **VLAN**: VLAN ID, or 0 if not using VLANs
+ - **Network** and **Gateway**: Use the correct subnet and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
+ - For example, *10.232.44.0/24* and *10.232.44.1*
- If the subnet does not have a default gateway, use another IP address in the subnet which will respond to ARP requests (such as one of the RAN IP addresses). If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway. - **DNS server** and **DNS suffix** should be left blank. 1. Select **Modify** to save the configuration for this virtual network.
You can input all the settings on this page before selecting **Apply** at the bo
- **Virtual switch**: select **vswitch-port5** for N2, N3 and up to four DNs, and select **vswitch-port6** for up to six DNs. - **Name**: *N2*, *N3*, or *N6-DNX* (where *X* is the DN number 1-10). - **VLAN**: VLAN ID, or 0 if not using VLANs
- - **Subnet mask** and **Gateway** must match the external values for the port.
- - For example, *255.255.255.0* and *10.232.44.1*
+ - **Network** and **Gateway**: Use the correct subnet and gateway for the IP address configured on the ASE port (even if the gateway is not set on the ASE port itself).
+ - For example, *10.232.44.0/24* and *10.232.44.1*
- If the subnet does not have a default gateway, use another IP address in the subnet which will respond to ARP requests (such as one of the RAN IP addresses). If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway. - **DNS server** and **DNS suffix** should be left blank. 1. Select **Modify** to save the configuration for this virtual network.
If you're running other VMs on your Azure Stack Edge, we recommend that you stop
1. For the **Node size**, select **Standard_F16s_HPN**. 1. Ensure the **Arc enabled Kubernetes** checkbox is selected.
-1. The Arc enabled Kubernetes service is automatically created in the same resource group as your **Azure Stack Edge** resource. If your Azure Stack Edge resource group is not in a region that supports Azure Private 5G Core, you must change the region using the **Change** link.
+1. Select the **Change** link and enter the Azure AD application Object ID (OID) for the custom location, which you obtained in [Retrieve the Object ID (OID)](complete-private-mobile-network-prerequisites.md#retrieve-the-object-id-oid).
+
+ :::image type="content" source="media/commission-cluster/commission-cluster-configure-kubernetes.png" alt-text="Screenshot of Configure Arc enabled Kubernetes pane, showing where to enter the custom location OID.":::
+
+1. The Arc enabled Kubernetes service is automatically created in the same resource group as your **Azure Stack Edge** resource. If your Azure Stack Edge resource group is not in a region that supports Azure Private 5G Core, you must change the region.
+1. Click **Configure** to apply the configuration.
+1. Check the **Region** and **Azure AD application Object Id (OID)** fields show the appropriate values, and then click **Create**.
1. Work through the prompts to set up the service. The creation of the Kubernetes cluster takes about 20 minutes. During creation, there may be a critical alarm displayed on the **Azure Stack Edge** resource. This alarm is expected and should disappear after a few minutes.
Once deployed, the portal should show **Kubernetes service is running** on the
You'll need *kubectl* access to verify that the cluster has deployed successfully. For read-only *kubectl* access to the cluster, you can download a *kubeconfig* file from the ASE local UI. Under **Device**, select **Download config**. The downloaded file is called *config.json*. This file has permission to describe pods and view logs, but not to access pods with *kubectl exec*.
-The Azure Private 5G Core deployment uses the *core* namespace. If you need to collect diagnostics, you can download a *kubeconfig* file with full access to the *core* namespace using the following minishell commands.
--- Create the namespace, download the *kubeconfig* file and use it to grant access to the namespace:
- ```powershell
- Invoke-Command -Session $minishellSession -ScriptBlock {New-HcsKubernetesNamespace -Namespace "core"}
- Invoke-Command -Session $minishellSession -ScriptBlock {New-HcsKubernetesUser -UserName "core"} | Out-File -FilePath .\kubeconfig-core.yaml
- Invoke-Command -Session $minishellSession -ScriptBlock {Grant-HcsKubernetesNamespaceAccess -Namespace "core" -UserName "core"}
- ```
-- If you need to retrieve the saved *kubeconfig* file later:
- ```powershell
- Invoke-Command -Session $miniShellSession -ScriptBlock { Get-HcsKubernetesUserConfig -UserName "core" }
- ```
-For more information, see [Configure cluster access via Kubernetes RBAC](../databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
- ## Set up portal access Open your **Azure Stack Edge** resource in the Azure portal. Go to the Azure Kubernetes Service pane (shown in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc)) and select the **Manage** link to open the **Arc** pane.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
To use Azure Private 5G Core, you need to register some additional resource prov
## Retrieve the Object ID (OID)
-You need to obtain the object ID (OID) of the custom location resource provider in your Azure tenant. You will need to provide this OID when you configure your ASE to use AKS-HCI. You can obtain the OID using the Azure CLI or the Azure Cloud Shell on the portal. You'll need to be an owner of your Azure subscription.
+You need to obtain the object ID (OID) of the custom location resource provider in your Azure tenant. You will need to provide this OID when you create the Kubernetes service. You can obtain the OID using the Azure CLI or the Azure Cloud Shell on the portal. You'll need to be an owner of your Azure subscription.
1. Sign in to the Azure CLI or Azure Cloud Shell. 1. Retrieve the OID:
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
Data plane packet capture works by mirroring packets to a Linux kernel interface
## Prerequisites - Identify the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.-- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## Performing packet capture
private-5g-core Enable Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md
In this how-to guide, you'll carry out the steps you need to complete after depl
- You must have deployed a site with Azure Active Directory set as the authentication type. - Identify the IP address for accessing the local monitoring tools that you set up in [Management network](complete-private-mobile-network-prerequisites.md#management-network). - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have permission to manage applications in Azure AD. [Azure AD built-in roles](../active-directory/roles/permissions-reference.md) that have the required permissions include, for example, Application administrator, Application developer, and Cloud application administrator. If you do not have this access, contact your tenant Azure AD administrator so they can confirm your user has been assigned the correct role by following [Assign user roles with Azure Active Directory](/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal).-- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## Configure domain system name (DNS) for local monitoring IP
private-5g-core Modify Local Access Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-local-access-configuration.md
In this how-to guide, you'll learn how to use the Azure portal to change the aut
- Refer to [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools) and [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to collect the required values and make sure they're in the correct format. - If you want to add or update a custom HTTPS certificate for accessing your local monitoring tools, you'll need a certificate signed by a globally known and trusted CA and stored in an Azure Key Vault. Your certificate must use a private key of type RSA or EC to ensure it's exportable (see [Exportable or non-exportable key](../key-vault/certificates/about-certificates.md) for more information).-- If you want to update your local monitoring authentication method, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you want to update your local monitoring authentication method, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. ## View the local access configuration
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
If you want to modify a packet core instance's local access configuration, follo
- If you want to make changes to the attached data networks, refer to [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to collect the new values and make sure they're in the correct format. - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools and you're making a change that requires a packet core reinstall, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools and you're making a change that requires a packet core reinstall, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## Plan a maintenance window
private-5g-core Monitor Private 5G Core Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-alerts.md
+
+ Title: Monitor Azure Private 5G Core with alerts
+description: Guide to creating alerts for packet cores
++++ Last updated : 09/14/2023++
+# Create alerts to track performance of packet cores
+
+Alerts help track important events in your network by sending a notification containing diagnostic information when certain user-defined conditions are met. Alerts can be customized to represent the severity of incidents on your network and can be viewed in the [Monitor service under Azure Services](https://portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/overview). In this how-to guide, you will create a custom alert for a packet core control plane or data plane resource.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- You must have [deployed your private mobile network](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+
+## Create an alert rule for your packet core control plane or data plane resources
+
+1. Navigate to the packet core control/data plane you want to create an alert for.
+
+ - You can do this by searching for it under **All resources** or from the **Overview** page of the site that contains the packet core you want to add alerts for.
+
+1. Select **Alerts** from the **Monitoring** tab on the resource menu.
+
+ :::image type="content" source="media/packet-core-resource-menu-alerts-highlighted.png" alt-text="Screenshot of Azure portal showing packet core control/data plane resource menu.":::
+
+1. Select **Alert Rule** from the **Create** dropdown at the top of the page.
+
+ :::image type="content" source="media/alerts-create-dropdown.png" alt-text="Screenshot of Azure portal showing alerts menu with the create dropdown menu open.":::
+
+1. Select **See all signals** just under the dropdown menu or from inside the dropdown menu.
+
+ :::image type="content" source="media/packet-core-alerts-signal-list.png" alt-text="Screenshot of Azure portal showing alert signal selection menu." lightbox="media/packet-core-alerts-signal-list.png":::
+
+1. Select the signal you want the alert to be based on and follow the rest of the creation steps. For more information on alert options and on setting up the action groups used for notification, see [the Azure Monitor alerts create and edit documentation](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric).
+1. Once you've reached the end of the creation steps, select **Review + create** to create your alert.
+1. Verify that your alert rule was created by navigating to the alerts page for your packet core (see steps 1 and 2) and finding it in the list of alert rules on the page.
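+
+If you want to automate alert creation instead of using the portal, you can create a metric alert rule with the Azure CLI. The sketch below uses placeholders throughout; substitute the packet core resource ID, a metric name from the signal list you reviewed above, a threshold, and an existing action group.
+
+```powershell
+# Create a metric alert rule scoped to a packet core control plane or data plane resource (all values are placeholders)
+az monitor metrics alert create --name "<alert-rule-name>" --resource-group "<resource-group>" --scopes "<packet-core-resource-id>" --condition "avg <metric-name> > <threshold>" --action "<action-group-resource-id>" --description "Packet core alert"
+```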
+
+## Next steps
+- [Learn more about Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-overview).
private-5g-core Region Move Private Mobile Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-move-private-mobile-network-resources.md
You might move your resources to another region for a number of reasons. For exa
- Ensure Azure Private 5G Core supports the region to which you want to move your resources. Refer to [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core). - Verify pricing and charges associated with the target region to which you want to move your resources. - Choose a name for your new resource group in the target region. This must be different to the source region's resource group name.-- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## Back up deployment information
private-5g-core Reinstall Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reinstall-packet-core.md
Reinstalling the packet core deletes the packet core instance and redeploys it w
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - If your packet core instance is still handling requests from your UEs, we recommend performing the reinstall during a maintenance window to minimize the impact on your service. You should allow up to two hours for the reinstall process to complete.-- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## View the packet core instance's installation status
private-5g-core Set Up Kubectl Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/set-up-kubectl-access.md
+
+ Title: Set up kubectl access
+
+description: This how-to guide shows how to obtain kubectl files that can be used to monitor your deployment and obtain diagnostics.
++++ Last updated : 07/07/2023+++
+# Set up kubectl access
+
+This how-to guide explains how to obtain the *kubeconfig* files needed for other procedures. The read-only file is sufficient to view cluster configuration. The core namespace file is needed for operations such as modifying local or Azure Active Directory authentication, or for gathering packet capture.
+
+## Read-only access
+
+For running read-only *kubectl* commands, such as describing pods and viewing logs, you can download a *kubeconfig* file from the ASE local UI. Under **Device**, select **Download config**.
+
+> [!TIP]
+> To access the local UI, see [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md).
++
+The downloaded file is called *config.json*. This file has permission to describe pods and view logs, but not to access pods with *kubectl exec*.
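+
+For example, once you've downloaded *config.json*, you can point *kubectl* at it for read-only queries. This is a sketch; the pod name is a placeholder.
+
+```powershell
+# List pods in all namespaces using the read-only kubeconfig
+kubectl --kubeconfig .\config.json get pods --all-namespaces
+
+# View logs for a pod in the core namespace (read-only access allows logs, but not kubectl exec)
+kubectl --kubeconfig .\config.json logs <pod-name> --namespace core
+```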
+
+## Core namespace access
+
+The Azure Private 5G Core deployment uses the *core* namespace. For operations such as modifying local or Azure Active Directory authentication, or for gathering packet capture, you need a *kubeconfig* file with full access to the *core* namespace. To download this file, set up a minishell session and run the commands described in this section.
+
+You only need to perform this procedure once. If you've done this procedure before, you can use the previously saved *kubeconfig* file.
+
+### Enter a minishell session
+
+You need to run minishell commands on Azure Stack Edge during this procedure. You must use a Windows machine that is on a network with access to the management port of the ASE. You should be able to view the ASE local UI to verify you have access.
+
+#### Enable WinRM on your machine
+
+The following process uses PowerShell and needs WinRM to be enabled on your machine. Run the following command from a PowerShell window in Administrator mode:
+```powershell
+winrm quickconfig
+```
+WinRM may already be enabled on your machine, as you only need to do it once. Ensure your network connections are set to Private or Domain (not Public), and accept any changes.
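+
+If you're not sure whether WinRM is already enabled, you can check its current state before making changes:
+
+```powershell
+# Check the WinRM service status and confirm the local WS-Management listener responds
+Get-Service WinRM
+Test-WSMan
+```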
+
+> [!TIP]
+> WinRM opens your PC to remote connections, which is required for the rest of the procedure. If you don't want to leave remote connections allowed, run `Stop-Service WinRM -PassThru` and then `Set-Service WinRM -StartupType Disabled -PassThru` from a PowerShell window in Administrator mode after you have completed the rest of the procedure to obtain core namespace access.
+
+#### Start the minishell session
+
+1. From a PowerShell window in Administrator mode, enter the ASE management IP address (including quotation marks, for example `"10.10.5.90"`):
+ ```powershell
+ $ip = "<ASE_IP_address>"
+
+ $sessopt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
+
+ $minishellSession = New-PSSession -ComputerName $ip -ConfigurationName "Minishell" -Credential ~\EdgeUser -UseSSL -SessionOption $sessopt
+ ```
+
+1. At the prompt, enter your Azure Stack Edge password. Ignore the following message:
+
+ ```powershell
+ WARNING: The Windows PowerShell interface of your device is intended to
+ be used only for the initial network configuration. Please
+ engage Microsoft Support if you need to access this interface
+ to troubleshoot any potential issues you may be experiencing.
+ Changes made through this interface without involving Microsoft
+ Support could result in an unsupported configuration.
+ ```
+
+You now have a minishell session set up and ready to obtain the *kubeconfig* file in the next step.
+
+> [!TIP]
+> If there is a network change, the session can break. Run `Get-PSSession` to view the state of the session. If it is still connected, you should still be able to run minishell commands. If it is broken or disconnected, run `Remove-PSSession` to remove the session locally, then start a new session.
+
+### Set up kubectl access
+
+- If this is the first time you're running this procedure, you need to run the following steps. These steps create the namespace, download the *kubeconfig* file and use it to grant access to the namespace.
+ ```powershell
+ Invoke-Command -Session $minishellSession -ScriptBlock {New-HcsKubernetesNamespace -Namespace "core"}
+ Invoke-Command -Session $minishellSession -ScriptBlock {New-HcsKubernetesUser -UserName "core"} | Out-File -FilePath .\kubeconfig-core.yaml
+ Invoke-Command -Session $minishellSession -ScriptBlock {Grant-HcsKubernetesNamespaceAccess -Namespace "core" -UserName "core"}
+ ```
+ If you see an error like `The Kubernetes namespace 'core' already exists`, it means you have run these steps before. In this case skip straight to the next bullet to retrieve the previously generated file.
+
+- If you have run this procedure before, you can retrieve the previously generated *kubeconfig* file immediately by running:
+ ```powershell
+ Invoke-Command -Session $miniShellSession -ScriptBlock { Get-HcsKubernetesUserConfig -UserName "core" }
+ ```
+
+For more information, see [Configure cluster access via Kubernetes RBAC](../databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
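+
+To confirm that the saved *kubeconfig-core.yaml* file grants the expected access, you can run a quick check against the *core* namespace (a sketch, run from the folder where you saved the file):
+
+```powershell
+# Verify access to the core namespace with the downloaded kubeconfig
+kubectl --kubeconfig .\kubeconfig-core.yaml get pods --namespace core
+```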
+
+## Next steps
+- Save the *kubeconfig* file so it's available to use if you need it in the future.
+- If you need the *kubeconfig* file as part of completing a different procedure (such as to set up Azure Active Directory authentication), return to that procedure and continue.
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
If your environment meets the prerequisites, you're familiar with using ARM temp
- You must have a running packet core. Use Azure monitor platform metrics or the packet core dashboards to confirm your packet core instance is operating normally. - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - Identify the name of the site that hosts the packet core instance you want to upgrade.-- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## Review the template
If you encountered issues after the upgrade, you can roll back the packet core i
In this step, you'll roll back your packet core using a REST API request. Follow [Rollback - Azure portal](upgrade-packet-core-azure-portal.md#rollback) if you want to roll back using the Azure portal instead.
-If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
+If any of the configuration options you set while your packet core instance was running a newer version aren't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Perform a [rollback POST request](/rest/api/mobilenetwork/packet-core-control-planes/rollback?tabs=HTTP).
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
If your deployment contains multiple sites, we recommend upgrading the packet co
- You must have a running packet core. Use Azure monitor platform metrics or the packet core dashboards to confirm your packet core instance is operating normally. - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
## View the current packet core version
Once the upgrade completes, check if your deployment is operating normally.
If you encountered issues after the upgrade, you can roll back the packet core instance to the version you were previously running.
-If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
+If any of the configuration options you set while your packet core instance was running a newer version aren't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Navigate to the **Packet Core Control Plane** resource that you want to roll back as described in [View the current packet core version](#view-the-current-packet-core-version).
sap Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-workload-zone.md
An [SAP application](deployment-framework.md#sap-concepts) typically has multipl
:::image type="content" source="./media/deployment-framework/workload-zone-architecture.png" alt-text="Diagram that shows SAP workflow zones and systems.":::
+The workload zone provides shared services to all of the SAP Systems in the workload zone. These shared services include:
+
+- Azure Virtual Network
+- Azure Key Vault
+- Shared Azure Storage Accounts for installation media
+- If Azure NetApp Files is used, the Azure NetApp Files account and capacity pool are hosted in the workload zone.
+
+The workload zone is typically deployed in a spoke subscription, and the deployment of all the artifacts in the workload zone is done using a unique service principal.
+ ## Workload zone deployment configuration The configuration of the SAP workload zone is done via a Terraform `tfvars` variable file. You can find examples of the variable file in the `samples/WORKSPACES/LANDSCAPE` folder.
The following sections show the different sections of the variable file.
This table contains the parameters that define the environment settings. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | -- | - | - |
-> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
-> | `location` | The Azure region in which to deploy | Required | |
-> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
+> | Variable | Description | Type | Notes |
+> | -- | -- | - | - |
+> | `environment` | Identifier for the workload zone (max five characters) | Mandatory | For example, `PROD` for a production environment and `QA` for a Quality Assurance environment. |
+> | `location` | The Azure region in which to deploy | Required | |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
## Resource group parameters
This table contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | | | - |
-> | `network_name` | The name of the network | Optional | |
-> | `network_logical_name` | The logical name of the network, for example, `SAP01` | Required | Used for resource naming |
-> | `network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
-> | `network_address_space` | The address range for the virtual network | Mandatory | For green-field deployments |
+> | `network_logical_name` | The logical name of the network, for example, `SAP01` | Required | Used for resource naming |
+> | `network_name` | The name of the network | Optional | |
+> | `network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown-field deployments |
+> | `network_address_space` | The address range for the virtual network | Mandatory | For green-field deployments |
> | | | | |
-> | `admin_subnet_name` | The name of the `admin` subnet | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For green-field deployments |
-> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet | Mandatory | For brown-field deployments |
+> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For green-field deployments |
+> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet | Mandatory | For brown-field deployments |
+> | `admin_subnet_name` | The name of the `admin` subnet | Optional | |
> | | | | | > | `admin_subnet_nsg_name` | The name of the `admin`network security group | Optional | | > | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the `admin` network security group | Mandatory | For brown-field deployments | > | | | | |
-> | `db_subnet_name` | The name of the `db` subnet | Optional | |
> | `db_subnet_address_prefix` | The address range for the `db` subnet | Mandatory | For green-field deployments | > | `db_subnet_arm_id` | The Azure resource identifier for the `db` subnet | Mandatory | For brown-field deployments |
+> | `db_subnet_name` | The name of the `db` subnet | Optional | |
> | | | | | > | `db_subnet_nsg_name` | The name of the `db` network security group | Optional | | > | `db_subnet_nsg_arm_id` | The Azure resource identifier for the `db` network security group | Mandatory | For brown-field deployments | > | | | | |
-> | `app_subnet_name` | The name of the `app` subnet | Optional | |
> | `app_subnet_address_prefix` | The address range for the `app` subnet | Mandatory | For green-field deployments | > | `app_subnet_arm_id` | The Azure resource identifier for the `app` subnet | Mandatory | For brown-field deployments |
+> | `app_subnet_name` | The name of the `app` subnet | Optional | |
> | | | | | > | `app_subnet_nsg_name` | The name of the `app` network security group | Optional | | > | `app_subnet_nsg_arm_id` | The Azure resource identifier for the `app` network security group | Mandatory | For brown-field deployments | > | | | | |
-> | `web_subnet_name` | The name of the `web` subnet | Optional | |
> | `web_subnet_address_prefix` | The address range for the `web` subnet | Mandatory | For green-field deployments | > | `web_subnet_arm_id` | The Azure resource identifier for the `web` subnet | Mandatory | For brown-field deployments |
+> | `web_subnet_name` | The name of the `web` subnet | Optional | |
> | | | | | > | `web_subnet_nsg_name` | The name of the `web` network security group | Optional | | > | `web_subnet_nsg_arm_id` | The Azure resource identifier for the `web` network security group | Mandatory | For brown-field deployments |
This table contains the networking parameters if Azure NetApp Files is used.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | -- | | - |
-> | `anf_subnet_name` | The name of the `ANF` subnet | Optional | |
-> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | When using existing subnets |
-> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | When using `ANF` for new deployments |
+> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | When using existing subnets |
+> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | When using `ANF` for deployments |
+> | `anf_subnet_name` | The name of the `ANF` subnet | Optional | |
+
+This table contains additional networking parameters.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | -- | | - |
+> | `use_private_endpoint` | Defines whether private endpoints are created for the storage accounts and key vaults. | Optional | |
+> | `use_service_endpoint` | Defines whether service endpoints are configured for the subnets. | Optional | |
+> | `peer_with_control_plane_vnet` | Defines whether the virtual network is peered with the control plane virtual network. | Optional | Required for the SAP installation |
+> | `public_network_access_enabled` | Defines whether public access is enabled on the storage accounts and key vaults. | Optional | |
#### Minimum required network definition
This table defines the parameters used for defining the key vault information.
## Private DNS > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -- | -- |
-> | `dns_label` | If specified, is the DNS name of the private DNS zone | Optional |
-> | `dns_resource_group_name` | The name of the resource group that contains the private DNS zone | Optional |
+> | Variable | Description | Type |
+> | - | | -- |
+> | `dns_label` | If specified, is the DNS name of the private DNS zone | Optional |
+> | `dns_resource_group_name` | The name of the resource group that contains the private DNS zone | Optional |
+> | `register_virtual_network_to_dns` | Controls if the SAP Virtual Network is registered with the private DNS zone | Optional |
+> | `dns_server_list` | If specified, a list of DNS Server IP addresses | Optional |
## NFS support
use_private_endpoint = true
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | --| -- | |
-> | `ANF_account_name` | Name for the Azure NetApp Files account | Optional | |
-> | `ANF_service_level` | Service level for the Azure NetApp Files capacity pool | Optional | |
-> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files capacity pool | Optional | |
-> | `ANF_qos_type` | The quality of service type of the pool (auto or manual) | Optional | |
-> | `ANF_use_existing_pool` | Use existing for the Azure NetApp Files capacity pool | Optional | |
-> | `ANF_pool_name` | The name of the Azure NetApp Files capacity pool | Optional | |
-> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files account | Optional | For brown-field deployments |
+> | `ANF_account_name` | Name for the Azure NetApp Files account | Optional | |
+> | `ANF_service_level` | Service level for the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_qos_type` | The quality of service type of the pool (auto or manual) | Optional | |
+> | `ANF_use_existing_pool`             | Defines whether an existing Azure NetApp Files capacity pool is used          | Optional    | |
+> | `ANF_pool_name` | The name of the Azure NetApp Files capacity pool | Optional | |
+> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files account | Optional | For brown-field deployments |
> | | | | |
-> | `ANF_transport_volume_use_existing` | Defines if an existing transport volume is used | Optional | |
-> | `ANF_transport_volume_name` | Defines the transport volume name | Optional | For brown-field deployments |
-> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB | Optional | |
-> | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume | Optional | |
+> | `ANF_transport_volume_use_existing` | Defines if an existing transport volume is used | Optional | |
+> | `ANF_transport_volume_name` | Defines the transport volume name | Optional | For brown-field deployments |
+> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB | Optional | |
+> | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume | Optional | |
> | | | | |
-> | `ANF_install_volume_use_existing` | Defines if an existing install volume is used | Optional | |
-> | `ANF_install_volume_name` | Defines the install volume name | Optional | For brown-field deployments |
-> | `ANF_install_volume_size` | Defines the size of the install volume in GB | Optional | |
-> | `ANF_install_volume_throughput` | Defines the throughput of the install volume | Optional | |
+> | `ANF_install_volume_use_existing` | Defines if an existing install volume is used | Optional | |
+> | `ANF_install_volume_name` | Defines the install volume name | Optional | For brown-field deployments |
+> | `ANF_install_volume_size` | Defines the size of the install volume in GB | Optional | |
+> | `ANF_install_volume_throughput` | Defines the throughput of the install volume | Optional | |
#### Minimum required ANF definition
ANF_service_level = "Ultra"
### DNS support > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | -- |
-> | `use_custom_dns_a_registration` | Use an existing private DNS zone. | Optional |
-> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the private DNS zone. | Optional |
+> | Variable | Description | Type |
+> | -- | | -- |
+> | `use_custom_dns_a_registration` | Use an existing private DNS zone. | Optional |
+> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the private DNS zone. | Optional |
> | `management_dns_resourcegroup_name` | Resource group that contains the private DNS zone. | Optional |
-> | `dns_label` | DNS name of the private DNS zone. | Optional |
+> | `dns_label` | DNS name of the private DNS zone. | Optional |
## Other parameters > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | - | -- | - |
-> | `enable_purge_control_for_keyvaults` | If purge control is enabled on the key vault. | Optional | Use only for test deployments. |
-> | `use_private_endpoint` | Are private endpoints created for storage accounts and key vaults. | Optional | |
-> | `use_service_endpoint` | Are service endpoints defined for the subnets. | Optional | |
-> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional |
-> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account. | Required | For brown-field deployments. |
-> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account. | Required | For brown-field deployments. |
+> | `place_delete_lock_on_resources` | Places delete locks on the key vaults and the virtual network | Optional | |
+> | `enable_purge_control_for_keyvaults` | If purge control is enabled on the key vault. | Optional | Use only for test deployments. |
+> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account. | Required | For brown-field deployments. |
+> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account. | Required | For brown-field deployments. |
## iSCSI parameters
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
Previously updated : 09/15/2023 Last updated : 10/03/2023
Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md
Important: The resources must start with 'health-azure'. ```bash
- sudo crm configure primitive health-azure-events \
- ocf:heartbeat:azure-events-az op monitor interval=10s
+ sudo crm configure primitive health-azure-events ocf:heartbeat:azure-events-az \
+ meta allow-unhealthy-nodes=true \
+ op monitor interval=10s
+ sudo crm configure clone health-azure-events-cln health-azure-events ```
+ > [!NOTE]
+ > When configuring the 'health-azure-events' resource, the following warning message can be ignored.
+ >
+ > WARNING: health-azure-events: unknown attribute 'allow-unhealthy-nodes'.
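
Optionally, once the cluster is taken out of maintenance mode in the next step, you can confirm that the clone resource is running on every node. This is a hedged verification sketch, not part of the documented procedure:

```bash
# Optional verification sketch (not part of the documented steps):
# show overall cluster status, including the health-azure-events clone
sudo crm status

# or limit the output to the clone resource created above
sudo crm resource status health-azure-events-cln
```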
+ 6. Take the Pacemaker cluster out of maintenance mode ```bash
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Previously updated : 09/15/2023 Last updated : 10/03/2023
In the following test descriptions, we assume `PREFER_SITE_TAKEOVER="true"` and
rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0 ```
+1. Test 10: Crash primary database indexserver
+
+ This test is relevant only when you have set up the susChkSrv hook as outlined in [Implement HANA hooks SAPHanaSR and susChkSrv](./sap-hana-high-availability.md#implement-hana-hooks-saphanasr-and-suschksrv).
+
+ The resource state before starting the test:
+
+ ```output
+ Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
+ Started: [ hn1-db-0 hn1-db-1 ]
+ Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
+ Masters: [ hn1-db-0 ]
+ Slaves: [ hn1-db-1 ]
+ Resource Group: g_ip_HN1_HDB03
+ rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-0
+ rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-0
+ ```
+
+ Run the following commands as root on the `hn1-db-0` node:
+
+ ```bash
+ hn1-db-0:~ # killall -9 hdbindexserver
+ ```
+
+ When the indexserver is terminated, the susChkSrv hook detects the event, triggers an action to fence the 'hn1-db-0' node, and initiates a takeover process.
+
+ Run the following commands to register `hn1-db-0` node as secondary and clean up the failed resource:
+
+ ```bash
+ # run as <hana sid>adm
+ hn1adm@hn1-db-0:/usr/sap/HN1/HDB03> hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=<instance number> --replicationMode=sync --name=<site 1>
+
+ # run as root
+ hn1-db-0:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-0
+ ```
+
+ The resource state after the test:
+
+ ```output
+ Clone Set: cln_SAPHanaTopology_HN1_HDB03 [rsc_SAPHanaTopology_HN1_HDB03]
+ Started: [ hn1-db-0 hn1-db-1 ]
+ Master/Slave Set: msl_SAPHana_HN1_HDB03 [rsc_SAPHana_HN1_HDB03]
+ Masters: [ hn1-db-1 ]
+ Slaves: [ hn1-db-0 ]
+ Resource Group: g_ip_HN1_HDB03
+ rsc_ip_HN1_HDB03 (ocf::heartbeat:IPaddr2): Started hn1-db-1
+ rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hn1-db-1
+ ```
+
+ You can execute a comparable test case by crashing the indexserver on the secondary node. If the indexserver on the secondary crashes, the susChkSrv hook recognizes the event and initiates an action to fence the secondary node, as sketched below.
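
As a hedged sketch of that comparable test (assuming `hn1-db-1` is currently the secondary), the steps mirror the primary-node test:

```bash
# Hypothetical sketch: crash the indexserver on the secondary node.
# Run as root on hn1-db-1 (assumed to be the current secondary):
hn1-db-1:~ # killall -9 hdbindexserver

# After the node has been fenced and rejoins the cluster, clean up the
# failed resource (run as root):
hn1-db-1:~ # crm resource cleanup msl_SAPHana_<HANA SID>_HDB<instance number> hn1-db-1
```

Because the primary isn't affected in this variant, no takeover is expected; the secondary is fenced and resumes replication after the cleanup.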
+ ## Next steps - [Azure Virtual Machines planning and implementation for SAP][planning-guide]
search Hybrid Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-ranking.md
Here's a simple explanation of the RRF process:
1. The engine ranks documents based on combined scores and sorts them. The resulting list is the fused ranking.
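
For reference, the fused score is conventionally computed with the standard Reciprocal Rank Fusion formula shown below; this is the generic definition, and the exact constant used by the service isn't asserted here.

$$
\text{RRF}(d) = \sum_{i=1}^{n} \frac{1}{k + \text{rank}_i(d)}
$$

Here, $\text{rank}_i(d)$ is the position of document $d$ in the $i$-th ranked result list, and $k$ is a smoothing constant (commonly 60).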
-Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
+Only fields marked as `searchable` in the index, or `searchFields` in the query, are used for scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
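
To make the distinction concrete, here's a hedged sketch of a query that scores only against specific `searchable` fields and returns only selected `retrievable` fields; the service name, index name, field names, and key are placeholder assumptions.

```bash
# Hypothetical sketch: scope scoring with searchFields and shape the response
# with select. Service, index, key, and field names are placeholders.
curl -s -X POST "https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <query-key>" \
  -d '{
        "search": "historic hotel",
        "searchFields": "HotelName, Description",
        "select": "HotelName, Description, Rating"
      }'
```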
### Parallel query execution
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Search scores can be repeated throughout a result set. When multiple hits have t
To break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md).
-Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
+Only fields marked as `searchable` in the index, or `searchFields` in the query, are used for scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
> [!NOTE] > A `@search.score = 1` indicates an un-scored or un-ranked result set. The score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`, sometimes paired with filters, where the filter is the primary means for returning a match).
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Previously updated : 08/07/2023 Last updated : 10/03/2023 # Index data from SharePoint document libraries
Last updated 08/07/2023
> [!IMPORTANT] > SharePoint indexer support is in public preview. It's offered "as-is", under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Preview features aren't recommended for production workloads and aren't guaranteed to become generally available. >
->To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview), and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
+>To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview). Access will be automatically approved after the form is submitted. After access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure Cognitive Search. Configuration steps are followed by a deeper exploration of behaviors and scenarios you're likely to encounter.
search Search Indexer How To Access Private Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-how-to-access-private-sql.md
Although you can call the Management REST API directly, it's easier to use the A
+ You should have a minimum of Contributor permissions on both Azure Cognitive Search and SQL Managed Instance.
-+ Azure SQL Managed Instance connection string. Managed identity is not currently supported with shared private link.
++ Azure SQL Managed Instance connection string. Managed identity is not currently supported with shared private link. Your connection string must include a user name and password. > [!NOTE] > Azure Private Link is used internally, at no charge, to set up the shared private link. ## 1 - Retrieve connection information
-Retrieve the FQDN of the managed instance, including the DNS zone. The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the DNS zone is `a1b22c333d44`.
+Retrieve the FQDN of the managed instance, including the DNS zone. The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.00000000000.database.windows.net`, the DNS zone is `00000000000`.
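
If you prefer the CLI over the portal, the following hedged sketch shows one way to retrieve the FQDN and extract the DNS zone; the resource group and instance names are placeholders.

```bash
# Hypothetical sketch: read the FQDN of the managed instance, then take the
# second label of the domain name as the DNS zone.
fqdn=$(az sql mi show --resource-group <resource-group> \
  --name <managed-instance-name> \
  --query fullyQualifiedDomainName --output tsv)

echo "FQDN: ${fqdn}"
echo "DNS zone: $(echo "${fqdn}" | cut -d '.' -f 2)"
```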
1. In Azure portal, find the SQL managed instance object.
You could use the [**Import data**](search-get-started-portal.md) wizard for thi
This article assumes Postman or equivalent tool, and uses the REST APIs to make it easier to see all of the properties. Recall that REST API calls for indexers and data sources use the [Search REST APIs](/rest/api/searchservice/), not the [Management REST APIs](/rest/api/searchmanagement/) used to create the shared private link. The syntax and API versions are different between the two REST APIs.
-1. [Create the data source definition](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) as you would normally for Azure SQL. Although the format of the connection string is different, the data source type and other properties are valid for SQL Managed Instance.
+1. [Create the data source definition](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) as you would normally for Azure SQL. The format of the connection string is slightly different for a managed instance, but other properties are the same as if you were configuring a data source connection to Azure SQL database.
Provide the connection string that you copied earlier.
This article assumes Postman or equivalent tool, and uses the REST APIs to make
} ```
- > [!NOTE]
- > If you're familiar with data source definitions in Cognitive Search, you'll notice that data source properties don't vary when using a shared private link. That's because Search will always use a shared private link on the connection if one exists.
- 1. [Create the indexer definition](search-howto-create-indexers.md), setting the indexer execution environment to "private". [Indexer execution](search-indexer-securing-resources.md#indexer-execution-environment) occurs in either a private environment that's specific to the search service, or a multi-tenant environment that's used internally to offload expensive skillset processing for multiple customers. **When connecting over a private endpoint, indexer execution must be private.**
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
For business continuity and recovery from disasters at a regional level, plan on
## High availability
-In Cognitive Search, replicas are copies of your index. A search service is installed with at least one replica, and can have up to 12 replicas. [Adding replicas](search-capacity-planning.md#adjust-capacity) allows Azure Cognitive Search to do machine reboots and maintenance against one replica, while query execution continues on other replicas.
+In Cognitive Search, replicas are copies of your index. A search service is commissioned with at least one replica, and can have up to 12 replicas. [Adding replicas](search-capacity-planning.md#adjust-capacity) allows Azure Cognitive Search to do machine reboots and maintenance against one replica, while query execution continues on other replicas.
For each individual search service, Microsoft guarantees at least 99.9% availability for configurations that meet these criteria:
For each individual search service, Microsoft guarantees at least 99.9% availabi
+ Three or more replicas for high availability of read-write workloads (queries and indexing)
+The system has internal mechanisms for monitoring replica health and partition integrity. If you provision a specific combination of replicas and partitions, the system ensures that level of capacity for your service.
+ No SLA is provided for the Free tier. For more information, see [SLA for Azure Cognitive Search](https://azure.microsoft.com/support/legal/sla/search/v1_0/). <a name="availability-zones"></a>
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
In Cognitive Search, you'll work with one index at a time, where all index-relat
### Continuously available
+An index is immediately available for queries as soon as the first document is indexed, but won't be fully operational until all documents are indexed. Internally, a search index is [distributed across partitions and executes on replicas](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards). The physical index is managed internally. The logical index is managed by you.
+ An index is continuously available, with no ability to pause or take it offline. Because it's designed for continuous operation, any updates to its content, or additions to the index itself, happen in real time. As a result, queries might temporarily return incomplete results if a request coincides with a document update. Notice that query continuity exists for document operations (refreshing or deleting) and for modifications that don't affect the existing structure and integrity of the current index (such as adding new fields). If you need to make structural updates (changing existing fields), those are typically managed using a drop-and-rebuild workflow in a development environment, or by creating a new version of the index on production service.
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
In the HNSW algorithm, a vector query search operation is executed by navigating
1. Completion: The search completes when the desired number of nearest neighbors have been identified, or when other stopping criteria are met. This desired number of nearest neighbors is governed by the query-time parameter `k`.
-Only fields marked as `searchable` in the index are used for scoring. Only fields marked as `retrievable`, or fields that are specified in `searchFields` in the query, are returned in search results, along with their search score.
+Only fields marked as `searchable` in the index, or `searchFields` in the query, are used for scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
## Similarity metrics used to measure nearness
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Data connectors are available as part of the following offerings:
- [WithSecure Elements via Connector](data-connectors/withsecure-elements-via-connector.md)
+## Wiz, Inc.
+
+- [Wiz](data-connectors/wiz.md)
+ ## ZERO NETWORKS LTD - [Zero Networks Segment Audit](data-connectors/zero-networks-segment-audit.md)
sentinel Wiz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/wiz.md
+
+ Title: "Wiz connector for Microsoft Sentinel"
+description: "Learn how to install the connector Wiz to connect your data source to Microsoft Sentinel."
++ Last updated : 09/26/2023++++
+# Wiz connector for Microsoft Sentinel
+
+The Wiz connector allows you to easily send Wiz Issues, Vulnerability Findings, and Audit logs to Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | WizIssues_CL<br/> WizVulnerabilities_CL<br/> WizAuditLogs_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Wiz](https://support.wiz.io/) |
+
+## Query samples
+
+**Summary by Issue severity**
+
+```kusto
+WizIssues_CL
+| summarize Count=count() by severity_s
+```
+++
+## Prerequisites
+
+To integrate with Wiz, make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Wiz Service Account credentials**: Ensure you have your Wiz service account client ID and client secret, API endpoint URL, and auth URL. Instructions can be found on [Wiz documentation](https://docs.wiz.io/wiz-docs/docs/azure-sentinel-native-integration#collect-authentication-info-from-wiz).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Wiz API to pull Wiz Issues, Vulnerability Findings, and Audit Logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+ > It also creates an Azure Key Vault with all the required parameters stored as secrets.
+
+STEP 1 - Get your Wiz credentials
++
+Follow the instructions in the [Wiz documentation](https://docs.wiz.io/wiz-docs/docs/azure-sentinel-native-integration#collect-authentication-info-from-wiz) to get the required credentials.
+
+STEP 2 - Deploy the connector and the associated Azure Function
++
+>**IMPORTANT:** Before deploying the Wiz connector, have the Workspace ID and Workspace Primary Key (can be copied from the following) readily available, as well as the Wiz credentials from the previous step.
+++
+Option 1: Deploy using the Azure Resource Manager (ARM) Template
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-wiz-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following parameters:
+> - Choose **KeyVaultName** and **FunctionName** for the new resources.
+> - Enter the following Wiz credentials from step 1: **WizAuthUrl**, **WizEndpointUrl**, **WizClientId**, and **WizClientSecret**.
+> - Enter the Workspace credentials **AzureLogsAnalyticsWorkspaceId** and **AzureLogAnalyticsWorkspaceSharedKey**.
+> - Choose the Wiz data types you want to send to Microsoft Sentinel; choose at least one from **Wiz Issues**, **Vulnerability Findings**, and **Audit Logs**.
+
+> - (Optional) Follow the [Wiz documentation](https://docs.wiz.io/wiz-docs/docs/azure-sentinel-native-integration#optional-create-a-filter-for-wiz-queries) to add **IssuesQueryFilter**, **VulnerbailitiesQueryFilter**, and **AuditLogsQueryFilter**.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
++
+Option 2: Manual Deployment of the Azure Function
+
+>Follow [Wiz documentation](https://docs.wiz.io/wiz-docs/docs/azure-sentinel-native-integration#manual-deployment) to deploy the connector manually.
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wizinc1627338511749.wizinc1627338511749_wiz_mss-sentinel?tab=Overview) in the Azure Marketplace.
service-fabric Service Fabric Reliable Services Reliable Collections Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md
The guidelines are organized as simple recommendations prefixed with the terms *
Only one user thread operation is supported within a transaction. Otherwise, it will cause memory leak and lock issues. * Consider dispose transaction as soon as possible after commit completes (especially if using ConcurrentQueue). * Do not perform any blocking code inside a transaction.
+* When [string](/dotnet/api/system.string) is used as the key for a reliable dictionary, the sorting order uses [default string comparer CurrentCulture](/dotnet/api/system.string.compare#system-string-compare(system-string-system-string)). Note that the CurrentCulture sorting order is different from [Ordinal string comparer](/dotnet/api/system.stringcomparer.ordinal).
Here are some things to keep in mind:
service-health Resource Health Vm Annotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-vm-annotation.md
Title: Resource Health virtual machine Health Annotations description: Messages, meanings and troubleshooting for virtual machines resource health statuses. Previously updated : 9/29/2022 Last updated : 10/02/2023 # Resource Health virtual machine Health Annotations
-When the health of your virtual machine is impacted by availability impacting disruptions (see [Resource types and health checks](resource-health-checks-resource-types.md)), the platform emits context as to why the disruption has occurred to assist you in responding appropriately.
+Virtual Machine (VM) health annotations inform you of any ongoing activity that influences the availability of your VMs (see [Resource types and health checks](resource-health-checks-resource-types.md)). Annotations carry metadata that helps you understand the exact impact to availability.
+
+Here are more details on important attributes that were recently added to help you understand the annotations you may observe in [Resource Health](resource-health-overview.md), [Azure Resource Graph](/azure/governance/resource-graph/overview), and [Event Grid System](/azure/event-grid/event-schema-health-resources?tabs=event-grid-event-schema) topics (a sample query sketch follows this list):
+
+- **Context**: Informs whether VM availability was influenced by Azure or user orchestrated activity. This can assume values of _Platform Initiated | Customer Initiated | VM Initiated | Unknown_.
+- **Category**: Informs whether VM availability was influenced by planned or unplanned activity. This is only applicable to 'Platform-Initiated' events. This can assume values of _Planned | Unplanned | Not Applicable | Unknown_.
+- **ImpactType**: Informs the type of impact to VM availability. This can assume values of:
+
+  - *Downtime Reboot or Downtime Freeze*: Informs when the VM is unavailable due to Azure orchestrated activity (for example, VirtualMachineStorageOffline or LiveMigrationSucceeded). The reboot or freeze distinction can help you discern the type of downtime impact faced.
+
+  - *Degraded*: Informs when Azure predicts a hardware failure on the host server or detects potential degradation in performance (for example, VirtualMachinePossiblyDegradedDueToHardwareFailure).
+  - *Informational*: Informs when an authorized user or process triggers a control plane operation (for example, VirtualMachineDeallocationInitiated or VirtualMachineRestarted). This category also captures cases of platform actions due to customer defined thresholds or conditions (for example, VirtualMachinePreempted).
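
Here's the sample query sketch mentioned above, using the Azure CLI resource-graph extension; the `HealthResources` table and the exact property names for these attributes are assumptions based on the description above, not verified schema.

```bash
# Hypothetical sketch: list recent VM availability annotations with their
# Context and Category attributes (property names are assumptions).
az graph query -q "
HealthResources
| where type =~ 'microsoft.resourcehealth/resourceannotations'
| project id,
          annotationName = tostring(properties.annotationName),
          context  = tostring(properties.context),
          category = tostring(properties.category)
" --output table
```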
The below table summarizes all the annotations that the platform emits today:
-| Annotation | Description |
-||-|
-| VirtualMachineRestarted | The Virtual Machine is undergoing a reboot as requested by a restart action triggered by an authorized user or process from within the Virtual Machine. No other action is required at this time. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). |
-| VirtualMachineCrashed | The Virtual Machine is undergoing a reboot due to a guest OS crash. The local data remains unaffected during this process. No other action is required at this time. For more information, see [understanding Virtual Machine crashes in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot#vm-crashes). |
-| VirtualMachineStorageOffline | The Virtual Machine is either currently undergoing a reboot or experiencing an application freeze due to a temporary loss of access to disk. No other action is required at this time, while the platform is working on reestablishing disk connectivity. |
-| VirtualMachineFailedToSecureBoot | Applicable to Azure Confidential Compute Virtual Machines when guest activity such as unsigned booting components leads to a guest OS issue preventing the Virtual Machine from booting securely. You can attempt to retry deployment after ensuring OS boot components are signed by trusted publishers. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). |
-| LiveMigrationSucceeded | The Virtual Machine was briefly paused as a Live Migration operation was successfully performed on your Virtual Machine. This operation was carried out either as a repair action, for allocation optimization or as part of routine maintenance workflows. No other action is required at this time. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). |
-| LiveMigrationFailure | A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, could not be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs are not applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). |
-| VirtualMachineAllocated | The Virtual Machine is in the process of being set up as requested by an authorized user or process. No other action is required at this time. |
-| VirtualMachineDeallocationInitiated | The Virtual Machine is in the process of being stopped and deallocated as requested by an authorized user or process. No other action is required at this time. |
-| VirtualMachineHostCrashed | The Virtual Machine has unexpectedly crashed due to the underlying host server experiencing a software failure or due to a failed hardware component. While the Virtual Machine is rebooting, the local data remains unaffected. You may attempt to redeploy the Virtual Machine to a different host server if you continue to experience issues. |
-| VirtualMachineMigrationInitiatedForPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Planned Maintenance](../virtual-machines/maintenance-and-updates.md). |
-| VirtualMachineRebootInitiatedForPlannedMaintenance | The Virtual Machine is undergoing a reboot as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). |
-| VirtualMachineHostRebootedForRepair | The Virtual Machine is undergoing a reboot due to the underlying host server experiencing unexpected failures. While the Virtual Machine is rebooting, the local data remains unaffected. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). |
-| VirtualMachineMigrationInitiatedForRepair | The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/). |
-| VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). |
-| VirtualMachineMigrationScheduledForDegradedHardware | The Virtual Machine is experiencing degraded availability as it is running on a host server with a degraded hardware component which is predicted to fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the degradation of the underlying hardware. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
-| VirtualMachinePossiblyDegradedDueToHardwareFailure | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server with a degraded hardware component that will fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
-| VirtualMachineScheduledForServiceHealing | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server that is experiencing fatal errors. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the failure signature encountered by the host server. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). |
-| VirtualMachinePreempted | If you are running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md). |
-| VirtualMachineRebootInitiatedByControlPlane | The Virtual Machine is undergoing a reboot as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. |
-| VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. |
-| VirtualMachineSizeChanged | The Virtual Machine is being resized as requested by an authorized user or process. No other action is required at this time. |
-|VirtualMachineConfigurationUpdated | The Virtual Machine configuration is being updated as requested by an authorized user or process. No other action is required at this time. |
-| VirtualMachineStartInitiatedByControlPlane |The Virtual Machine is starting as requested by an authorized user or process. No other action is required at this time. |
-| VirtualMachineStopInitiatedByControlPlane | The Virtual Machine is stopping as requested by an authorized user or process. No other action is required at this time. |
-| VirtualMachineStoppedInternally | The Virtual Machine is stopping as requested by an authorized user or process, or due to a guest activity from within the Virtual Machine. No other action is required at this time. |
-| VirtualMachineProvisioningTimedOut | The Virtual Machine provisioning has failed due to Guest OS issues or incorrect user run scripts. You can attempt to either re-create this Virtual Machine or if this Virtual Machine is part of a virtual machine scale set, you can try reimaging it. |
-| AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature is not functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue. |
+
+| Annotation | Description | Attributes |
+||-|-|
+| VirtualMachineRestarted | The Virtual Machine is undergoing a reboot as requested by a restart action triggered by an authorized user or process from within the Virtual Machine. No other action is required at this time. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). | <ul><li>**Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineCrashed | The Virtual Machine is undergoing a reboot due to a guest OS crash. The local data remains unaffected during this process. No other action is required at this time. For more information, see [understanding Virtual Machine crashes in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot#vm-crashes). | <ul><li>**Context**: VM Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineStorageOffline | The Virtual Machine is either currently undergoing a reboot or experiencing an application freeze due to a temporary loss of access to disk. No other action is required at this time, while the platform is working on reestablishing disk connectivity. | <ul><li>**Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineFailedToSecureBoot | Applicable to Azure Confidential Compute Virtual Machines when guest activity such as unsigned booting components leads to a guest OS issue preventing the Virtual Machine from booting securely. You can attempt to retry deployment after ensuring OS boot components are signed by trusted publishers. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| LiveMigrationSucceeded | The Virtual Machine was briefly paused as a Live Migration operation was successfully performed on your Virtual Machine. This operation was carried out either as a repair action, for allocation optimization or as part of routine maintenance workflows. No other action is required at this time. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Freeze |
+| LiveMigrationFailure | A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, could not be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs are not applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Freeze |
+| VirtualMachineAllocated | The Virtual Machine is in the process of being set up as requested by an authorized user or process. No other action is required at this time. | <ul><li>**Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineDeallocationInitiated | The Virtual Machine is in the process of being stopped and deallocated as requested by an authorized user or process. No other action is required at this time. | <ul><li>**Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineHostCrashed | The Virtual Machine has unexpectedly crashed due to the underlying host server experiencing a software failure or due to a failed hardware component. While the Virtual Machine is rebooting, the local data remains unaffected. You may attempt to redeploy the Virtual Machine to a different host server if you continue to experience issues. | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineMigrationInitiatedForPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Planned Maintenance](../virtual-machines/maintenance-and-updates.md). | <ul><li>**Context**: Platform Initiated<li>**Category**: Planned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineRebootInitiatedForPlannedMaintenance | The Virtual Machine is undergoing a reboot as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). | <ul><li> **Context**: Platform Initiated<li>**Category**: Planned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineHostRebootedForRepair | The Virtual Machine is undergoing a reboot due to the underlying host server experiencing unexpected failures. While the Virtual Machine is rebooting, the local data remains unaffected. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachineMigrationInitiatedForRepair | The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/). | <ul><li>**Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachinePlannedFreezeStarted | This virtual machine is undergoing freeze impact due to a routine update. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Informational |
+| VirtualMachinePlannedFreezeSucceeded | This virtual machine has successfully undergone a routine update that resulted in freeze impact. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li>**Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
+| VirtualMachinePlannedFreezeFailed | This virtual machine underwent a routine update that may have resulted in freeze impact. However this update failed to successfully complete. The platform will automatically coordinate recovery actions, as necessary. This update was to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
+| VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li> **ImpactType**: Informational |
+| VirtualMachineMigrationScheduledForDegradedHardware | The Virtual Machine is experiencing degraded availability as it is running on a host server with a degraded hardware component which is predicted to fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the degradation of the underlying hardware. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned <li>**ImpactType**: Degraded |
+| VirtualMachinePossiblyDegradedDueToHardwareFailure | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server with a degraded hardware component that will fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Degraded |
+| VirtualMachineScheduledForServiceHealing | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server that is experiencing fatal errors. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the failure signature encountered by the host server. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li>**Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Degraded |
+| VirtualMachinePreempted | If you are running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Informational |
+| VirtualMachineRebootInitiatedByControlPlane | The Virtual Machine is undergoing a reboot as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li>**ImpactType**: Informational |
+| VirtualMachineSizeChanged | The Virtual Machine is being resized as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+|VirtualMachineConfigurationUpdated | The Virtual Machine configuration is being updated as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineStartInitiatedByControlPlane |The Virtual Machine is starting as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineStopInitiatedByControlPlane | The Virtual Machine is stopping as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineStoppedInternally | The Virtual Machine is stopping as requested by an authorized user or process, or due to a guest activity from within the Virtual Machine. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational |
+| VirtualMachineProvisioningTimedOut | The Virtual Machine provisioning has failed due to Guest OS issues or incorrect user run scripts. You can attempt to either re-create this Virtual Machine or if this Virtual Machine is part of a virtual machine scale set, you can try reimaging it. | <ul><li> **Context**: Platform Initiated <li> **Category**: Unplanned <li> **ImpactType**: Informational |
+| AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature is not functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue. | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned <li> **ImpactType**: Degraded |
+
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
To deploy sample scripts to your Automation account, select the **Deploy to Azur
## Next steps - Learn about:
- - [Azure Automation Run As account](../automation/manage-runas-account.md).
- [Running failovers](site-recovery-failover.md) - Review: - [Azure Automation sample scripts](https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=User&f%5B0%5D.Value=SC%20Automation%20Product%20Team&f%5B0%5D.Text=SC%20Automation%20Product%20Team).
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
The replication appliance is an on-premises machine that runs Site Recovery comp
CPU cores | 8 RAM | 16 GB Number of disks | 2 disks<br/><br/> Disks include the OS disk and data disk.
-Operating system | Windows Server 2012 R2, Windows Server 2016 or Windows Server 2019 with Desktop experience
+Operating system | Windows Server 2019 with Desktop experience
Operating system locale | English (en-us) Windows Server roles | Don't enable Active Directory Domain Services; Internet Information Services (IIS) or Hyper-V. Group policies| Don't enable these group policies: <br/> - Prevent access to the command prompt. <br/> - Prevent access to registry editing tools. <br/> - Trust logic for file attachments. <br/> - Turn on Script Execution. <br/> - [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))|
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md
This article organizes proven practices for performance into a checklist you can
| &nbsp; |Copying blobs |[Are you using the Azure Data Box family for importing large volumes of data?](#use-azure-data-box) | | &nbsp; |Content distribution |[Are you using a CDN for content distribution?](#content-distribution) | | &nbsp; |Use metadata |[Are you storing frequently used metadata about blobs in their metadata?](#use-metadata) |
+| &nbsp; |Service metadata | [Allow time for account and container metadata propagation](#account-and-container-metadata-updates) |
| &nbsp; |Performance tuning |[Are you proactively tuning client library options to optimize data transfer performance?](#performance-tuning-for-data-transfers) | | &nbsp; |Uploading quickly |[When trying to upload one blob quickly, are you uploading blocks in parallel?](#upload-one-large-blob-quickly) | | &nbsp; |Uploading quickly |[When trying to upload many blobs quickly, are you uploading blobs in parallel?](#upload-many-blobs-quickly) |
For more information about Azure Front Door, see [Azure Front Door](../../frontd
The Blob service supports HEAD requests, which can include blob properties or metadata. For example, if your application needs the Exif (exchangeable image format) data from a photo, it can retrieve the photo and extract it. To save bandwidth and improve performance, your application can store the Exif data in the blob's metadata when the application uploads the photo. You can then retrieve the Exif data in metadata using only a HEAD request. Retrieving only metadata and not the full contents of the blob saves significant bandwidth and reduces the processing time required to extract the Exif data. Keep in mind that 8 KiB of metadata can be stored per blob.
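
For example, with the Azure CLI you might store and read such metadata as in this hedged sketch; the account, container, blob, and metadata names are placeholder assumptions.

```bash
# Hypothetical sketch: attach Exif-style values as blob metadata at upload time,
# then read them back without downloading the blob body.
az storage blob upload --account-name <storage-account> \
  --container-name photos --name beach.jpg --file ./beach.jpg \
  --metadata cameraModel=ContosoCam exposureTime=0.004

az storage blob metadata show --account-name <storage-account> \
  --container-name photos --name beach.jpg
```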
+## Account and container metadata updates
+
+Account and container metadata is propagated across the storage service in the region where the account resides. Full propagation of this metadata can take up to 60 seconds depending on the operation. For example:
+
+- If you are rapidly creating, deleting, and recreating accounts with the same account name in the same region, ensure that you wait 60 seconds for the account state to fully propagate, or your requests may fail (see the sketch after this list).
+- When you establish a stored access policy on a container, the policy might take up to 30 seconds to take effect.
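
The following hedged sketch illustrates the first case with the Azure CLI; the account name, resource group, and location are placeholder assumptions.

```bash
# Hypothetical sketch: allow up to 60 seconds for account metadata to propagate
# before recreating an account with the same name in the same region.
az storage account delete --name <storage-account> --resource-group <resource-group> --yes
sleep 60
az storage account create --name <storage-account> --resource-group <resource-group> \
  --location eastus --sku Standard_LRS
```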
+ ## Performance tuning for data transfers When an application transfers data using the Azure Storage client library, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in. To learn more, see [Performance tuning for uploads and downloads](storage-blobs-tune-upload-download.md).
Page blobs are appropriate if the application needs to perform random writes on
- [Scalability and performance targets for Blob storage](scalability-targets.md) - [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/blobs/toc.json) - [Status and error codes](/rest/api/storageservices/Status-and-Error-Codes2)+
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
Disks that have server endpoints can run out of space due to various reasons, ev
- Slow or delayed sync causing files not to be tiered - Excessive recalls of tiered files
-When the disk space runs out, Azure File Sync might not function correctly and can even become unusable. While it's not possible for Azure File Sync to completely prevent these occurrences, the low disk space mode (available in Azure File Sync agent versions starting from 15.1) is designed to prevent a server endpoint from reaching this situation.
+When the disk space runs out, Azure File Sync might not function correctly and can even become unusable. While it's not possible for Azure File Sync to completely prevent these occurrences, the low disk space mode (available in Azure File Sync agent versions starting from 15.1) is designed to prevent a server endpoint from reaching this situation and also helps the server recover from it faster.
-For server endpoints with cloud tiering enabled and volume free space policy set, if the free space on the volume drops below the calculated threshold, then the volume is in low disk space mode.
+For server endpoints with cloud tiering enabled, if the free space on the volume drops below the calculated threshold, then the volume is in low disk space mode.
In low disk space mode, the Azure File Sync agent does two things differently: -- **Proactive Tiering**: In this mode, the File Sync agent tiers files proactively to the cloud. The sync agent checks for files to be tiered every minute instead of the normal frequency of every hour. Volume free space policy tiering typically doesn't happen during initial upload sync until the full upload is complete; however, in low disk space mode, tiering is enabled during the initial upload sync, and files will be considered for tiering once the individual file has been uploaded to the Azure file share.
+- **Proactive Tiering**: In this mode, the File Sync agent tiers files more proactively to the cloud. The sync agent checks for files to be tiered every minute instead of the normal frequency of every hour. Volume free space policy tiering typically doesn't happen during initial upload sync until the full upload is complete; however, in low disk space mode, tiering is enabled during the initial upload sync, and files will be considered for tiering once the individual file has been uploaded to the Azure file share.
- **Non-Persistent Recalls**: When a user opens a tiered file, files recalled from the Azure file share directly won't be persisted to the disk. Recalls initiated by the `Invoke-StorageSyncFileRecall` cmdlet are an exception to this rule and will be persisted to disk.
If a volume has two server endpoints, one with tiering enabled and one without t
### How is the threshold for low disk space mode calculated? Calculate the threshold by taking the minimum of the following three numbers:-- 10% of volume free space in GiB
+- 10% of volume size in GiB
- Volume Free Space Policy in GiB - 20 GiB
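The same calculation can be expressed as a short PowerShell sketch; the volume size and policy values below are illustrative.

```powershell
# Minimal sketch: compute the low disk space mode threshold for a volume.
$volumeSizeGiB         = 300
$volumeFreeSpacePolicy = 0.08   # 8% volume free space policy

$tenPercentOfVolume = $volumeSizeGiB * 0.10                     # 30 GiB
$policyGiB          = $volumeSizeGiB * $volumeFreeSpacePolicy   # 24 GiB
$thresholdGiB       = [Math]::Min([Math]::Min($tenPercentOfVolume, $policyGiB), 20)

"Low disk space mode threshold: $thresholdGiB GiB"              # 20 GiB in this example
```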
The following table includes some examples of how the threshold is calculated an
| Volume Size | 10% of Volume Size | Volume Free Space Policy | Threshold = Min(10% of Volume Size, Volume Free Space Policy, 20GB) | Current Volume Free Space | Is Low Disk Space Mode? | Reason | |-|--|--|-||-||
-| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 9% (9 GiB) | No | Current Volume Free Space > Threshold |
-| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 5% (5 GiB) | Yes | Current Volume Free Space < Threshold |
-| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 7% (21 GiB) | No | Current Volume Free Space > Threshold |
-| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 6% (18 GiB) | Yes | Current Volume Free Space < Threshold |
+| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 9% (9 GiB) | No | Current Volume Free Space (9 GiB) > Threshold (7 GiB) |
+| 100 GiB | 10 GiB | 7% (7 GiB) | 7 GiB = Min (10 GiB, 7 GiB, 20 GiB) | 5% (5 GiB) | Yes | Current Volume Free Space (5 GiB) < Threshold (7 GiB) |
+| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 7% (21 GiB) | No | Current Volume Free Space (21 GiB) > Threshold (20 GiB) |
+| 300 GiB | 30 GiB | 8% (24 GiB) | 20 GiB = Min (30 GiB, 24 GiB, 20 GiB) | 6% (18 GiB) | Yes | Current Volume Free Space (18 GiB) < Threshold (20 GiB) |
### How does low disk space mode work with volume free space policy?
Here are two ways to exit low disk mode on the server endpoint:
2. You can manually speed up the process by increasing the volume size or freeing up space outside the server endpoint. ### How to check if a server is in Low Disk Space mode?
-Event ID 19000 is logged to the Telemetry event log every minute for each server endpoint. Use this event to determine if the server endpoint is in low disk mode (IsLowDiskMode = true). The Telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
+- If a server endpoint is in low disk space mode, this status is displayed in the Azure portal in the **cloud tiering health** section of the **Errors + troubleshooting** tab for the server endpoint.
+- Event ID 19000 is logged to the Telemetry event log every minute for each server endpoint. Use this event to determine if the server endpoint is in low disk mode (IsLowDiskMode = true). The Telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
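If you prefer to check from PowerShell, the following sketch queries the most recent event 19000; the log name is an assumption based on the Event Viewer path above.

```powershell
# Minimal sketch: read the latest Telemetry event 19000 and inspect IsLowDiskMode.
# The log name is an assumption based on the Event Viewer path
# Applications and Services\Microsoft\FileSync\Agent\Telemetry.
$event = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-FileSync-Agent/Telemetry'
    Id      = 19000
} -MaxEvents 1

$event.Message
```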
## Next steps
storage File Sync Server Endpoint Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-delete.md
Initiating change detection in the cloud ensures that your latest changes have b
You can initiate change detection with the Invoke-AzStorageSyncChangeDetection cmdlet: ```powershell
-Invoke-AzStorageSyncChangeDetection -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncServiceName" -SyncGroupName "mySyncGroupName" -Path "Data","Reporting\Templates"
+Invoke-AzStorageSyncChangeDetection -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "myStorageSyncServiceName" -SyncGroupName "mySyncGroupName" -CloudEndpointName "myCloudEndpointGUID"
``` This step may take a while to complete.
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-FAQs.md
Previously updated : 08/14/2023 Last updated : 10/02/2023
Geographic routing type can be used in any scenario where an Azure customer need
### How do I decide if I should use Performance routing method or Geographic routing method?
-The key difference between these two popular routing methods is that in Performance routing method your primary goal is to send traffic to the endpoint that can provide the lowest latency to the caller, whereas, in Geographic routing the primary goal is to enforce a geo fence for your callers so that you can deliberately route them to a specific endpoint. The overlap happens since there's a correlation between geographical closeness and lower latency, although this isn't always true. There might be an endpoint in a different geography that can provide a better latency experience for the caller and in that case Performance routing will send the user to that endpoint but Geographic routing will always send them to the endpoint you've mapped for their geographic region. To further make it clear, consider the following example - with Geographic routing you can make uncommon mappings such as send all traffic from Asia to endpoints in the US and all US traffic to endpoints in Asia. In that case, Geographic routing will deliberately do exactly what you have configured it to do and performance optimization isn't a consideration.
+The key difference between these two popular routing methods is that in the Performance routing method your primary goal is to send traffic to the endpoint that can provide the lowest latency to the caller, whereas in Geographic routing the primary goal is to enforce a geo fence for your callers so that you can deliberately route them to a specific endpoint. The overlap happens since there's a correlation between geographical closeness and lower latency, although this isn't always true. There might be an endpoint in a different geography that can provide a better latency experience for the caller, and in that case Performance routing sends the user to that endpoint but Geographic routing always sends them to the endpoint you've mapped for their geographic region. To make this clearer, consider the following example - with Geographic routing you can make uncommon mappings such as sending all traffic from Asia to endpoints in the US and all US traffic to endpoints in Asia. In that case, Geographic routing deliberately does exactly what you have configured it to do, and performance optimization isn't a consideration.
>[!NOTE] >There may be scenarios where you need both performance and geographic routing capabilities; for these scenarios, nested profiles can be a great choice. For example, you can set up a parent profile with geographic routing that sends all traffic from North America to a nested profile that has endpoints in the US and uses performance routing to send that traffic to the best endpoint within that set.
Traffic Manager looks at the source IP of the query (this most likely is a local
### Is it guaranteed that Traffic Manager can correctly determine the exact geographic location of the user in every case?
-No, Traffic Manager can't guarantee that the geographic region we infer from the source IP address of a DNS query will always correspond to the user's location due to the following reasons:
+No, Traffic Manager can't guarantee that the geographic region we infer from the source IP address of a DNS query always corresponds to the user's location due to the following reasons:
- First, as described in the previous FAQ, the source IP we see is that of a DNS resolver doing the lookup on behalf of the user. While the geographic location of the DNS resolver is a good proxy for the geographic location of the user, it can also be different depending upon the footprint of the DNS resolver service and the specific DNS resolver service a customer has chosen to use. As an example, a customer located in Malaysia could specify in their device's settings a DNS resolver service whose DNS server in Singapore might get picked to handle the query resolutions for that user/device. In that case, Traffic Manager can only see the resolver's IP that corresponds to the Singapore location. Also, see the earlier FAQ regarding client subnet address support on this page.
The IP addresses to associate with an endpoint can be specified in two ways. Fir
### How can I specify a fallback endpoint when using Subnet routing?
-In a profile with Subnet routing, if you have an endpoint with no subnets mapped to it, any request that doesn't match with other endpoints will be directed to here. It's highly recommended that you have such a fallback endpoint in your profile since Traffic Manager will return an NXDOMAIN response if a request comes in and it isn't mapped to any endpoints or if it's mapped to an endpoint but that endpoint is unhealthy.
+In a profile with Subnet routing, if you have an endpoint with no subnets mapped to it, any request that doesn't match other endpoints is directed there. It's highly recommended that you have such a fallback endpoint in your profile, since Traffic Manager returns an NXDOMAIN response if a request comes in and it isn't mapped to any endpoints, or if it's mapped to an endpoint but that endpoint is unhealthy.
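As an illustration, the following hedged sketch creates an endpoint with no subnet mappings to act as the fallback, assuming the Az.TrafficManager module; the profile, resource group, and target names are placeholders.

```powershell
# Minimal sketch, assuming the Az.TrafficManager module; names are illustrative.
# An endpoint created without subnet mappings serves as the fallback for queries
# that don't match any mapped subnet.
New-AzTrafficManagerEndpoint -Name "fallback-endpoint" `
    -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" `
    -Type ExternalEndpoints `
    -Target "fallback.contoso.com" `
    -EndpointStatus Enabled
```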
### What happens if an endpoint is disabled in a Subnet routing type profile?
-In a profile with Subnet routing, if you have an endpoint with that is disabled, Traffic Manager will behave as if that endpoint and the subnet mappings it has doesn't exist. If a query that would have matched with its IP address mapping is received and the endpoint is disabled, Traffic Manager will return a fallback endpoint (one with no mappings) or if such an endpoint isn't present, will return an NXDOMAIN response.
+In a profile with Subnet routing, if you have an endpoint that is disabled, Traffic Manager behaves as if that endpoint and its subnet mappings don't exist. If a query that would have matched its IP address mapping is received and the endpoint is disabled, Traffic Manager returns a fallback endpoint (one with no mappings) or, if no such endpoint is present, returns an NXDOMAIN response.
## Traffic Manager MultiValue traffic routing method
Another use for MultiValue routing method is if an endpoint is "dual-homed" to b
### How many endpoints are returned when MultiValue routing is used?
-You can specify the maximum number of endpoints to be returned and MultiValue will return no more than that many healthy endpoints when a query is received. The maximum possible value for this configuration is 10.
+You can specify the maximum number of endpoints to be returned and MultiValue returns no more than that many healthy endpoints when a query is received. The maximum possible value for this configuration is 10.
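For example, the following sketch creates a MultiValue profile that returns at most three healthy endpoints per query, assuming the Az.TrafficManager module; all names and monitor settings are illustrative.

```powershell
# Minimal sketch, assuming the Az.TrafficManager module; names are illustrative.
New-AzTrafficManagerProfile -Name "myMultiValueProfile" `
    -ResourceGroupName "myResourceGroup" `
    -TrafficRoutingMethod MultiValue `
    -MaxReturn 3 `
    -RelativeDnsName "mymultivalueapp" `
    -Ttl 30 `
    -MonitorProtocol HTTPS `
    -MonitorPort 443 `
    -MonitorPath "/"
```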
### Will I get the same set of endpoints when MultiValue routing is used?
-We can't guarantee that the same set of endpoints will be returned in each query. This is also affected by the fact that some of the endpoints might go unhealthy at which point they won't be included in the response
+We can't guarantee that the same set of endpoints is returned in each query. This is also affected by the fact that some of the endpoints might go unhealthy, at which point they won't be included in the response.
## Real User Measurements
You can also turn off Real User Measurements by deleting your key. Once you dele
### Can I use Real User Measurements with client applications other than web pages?
-Yes, Real User Measurements is designed to ingest data collected through different type of end-user clients. This FAQ will be updated as new types of client applications get supported.
+Yes, Real User Measurements is designed to ingest data collected through different types of end-user clients. This FAQ is updated as new types of client applications get supported.
### How many measurements are made each time my Real User Measurements enabled web page is rendered?
While you are in control of what is embedded on your web page, we strongly disco
### Will it be possible for others to see the key I use with Real User Measurements?
-When you embed the measurement script to a web page, it will be possible for others to see the script and your Real User Measurements (RUM) key. But it's important to know that this key is different from your subscription ID and is generated by Traffic Manager to be used only for this purpose. Knowing your RUM key won't compromise your Azure account safety.
+When you embed the measurement script in a web page, it is possible for others to see the script and your Real User Measurements (RUM) key. But it's important to know that this key is different from your subscription ID and is generated by Traffic Manager to be used only for this purpose. Knowing your RUM key won't compromise your Azure account safety.
### Can others abuse my RUM key?
While it's possible for others to use your key to send wrong information to Azur
### Do I need to put the measurement JavaScript in all my web pages?
-Real User Measurements delivers more value as the number of measurements increase. Having said that, it is your decision as to whether you need to put it in all your web pages or a select few. Our recommendation is to start by putting it in your most visited page where a user is expected to stay on that page five seconds or more.
+Real User Measurements delivers more value as the number of measurements increases. Having said that, it's your decision as to whether you need to put it in all your web pages or a select few. Our recommendation is to start by putting it in your most visited page, where a user is expected to stay for five seconds or more.
### Can information about my end users be identified by Traffic Manager if I use Real User Measurements?
-When the provided measurement JavaScript is used, Traffic Manager will have visibility into the client IP address of the end user and the source IP address of the local DNS resolver they use. Traffic Manager uses the client IP address only after having it truncated to not be able to identify the specific end user who sent the measurements.
+When the provided measurement JavaScript is used, Traffic Manager has visibility into the client IP address of the end user and the source IP address of the local DNS resolver they use. Traffic Manager uses the client IP address only after truncating it so that the specific end user who sent the measurements can't be identified.
### Does the webpage measuring Real User Measurements need to be using Traffic Manager for routing?
As mentioned in the previous answer, the server-side components of Real User Mea
Traffic View is a feature of Traffic Manager that helps you understand more about your users and how their experience is. It uses the queries received by Traffic Manager and the network latency intelligence tables that the service maintains to provide you with the following: -- The regions from where your users are connecting to your endpoints in Azure.
+- The regions where the users who connect to your endpoints in Azure reside.
- The volume of users connecting from these regions.-- The Azure regions to which they're getting routed to.-- Their latency experience to these Azure regions.
+- The Azure regions to which they're being routed.
+- The latency users experience when connecting to these Azure regions.
This information is available for you to consume through geographical map overlay and tabular views in the portal in addition to being available as raw data for you to download.
The DNS queries served by Azure Traffic Manager do consider ECS information to i
### How many days of data does Traffic View use?
-Traffic View creates its output by processing the data from the seven days preceding the day before when it's viewed by you. This is a moving window and the latest data will be used each time you visit.
+Traffic View creates its output by processing data from the seven days preceding the day before you view it. This is a moving window, and the latest data is used each time you visit.
### How does Traffic View handle external endpoints?
-When you use external endpoints hosted outside Azure regions in a Traffic Manager profile, you can choose to have it mapped to an Azure region, which is a proxy for its latency characteristics (this is in fact needed if you use performance routing method). If it has this Azure region mapping, that Azure region's latency metrics will be used when creating the Traffic View output. If no Azure region is specified, the latency information will be empty in the data for those external endpoints.
+When you use external endpoints hosted outside Azure regions in a Traffic Manager profile, you can choose to map them to an Azure region as a proxy for their latency characteristics (this mapping is required if you use the performance routing method). If an external endpoint has this Azure region mapping, that Azure region's latency metrics are used when creating the Traffic View output. If no Azure region is specified, the latency information is empty in the data for those external endpoints.
### Do I need to enable Traffic View for each profile in my subscription?
-During the preview period, Traffic View was enabled at a subscription level. As part of the improvements we made before the general availability, you can now enable Traffic View at a profile level, allowing you to have more granular enabling of this feature. By default, Traffic View will be disabled for a profile.
+During the preview period, Traffic View was enabled at a subscription level. As part of the improvements we made before general availability, you can now enable Traffic View at a profile level, giving you more granular control over enabling this feature. By default, Traffic View is disabled for a profile.
>[!NOTE] >If you enabled Traffic View at a subscription level during the preview period, you now need to re-enable it for each of the profiles under that subscription.
Typically, Traffic Manager is used to direct traffic to applications deployed in
Azure endpoints that are associated with a Traffic Manager profile are tracked using their resource IDs. When an Azure resource that is being used as an endpoint (for example, Public IP, Classic Cloud Service, WebApp, or another Traffic Manager profile used in a nested manner) is moved to a different resource group or subscription, its resource ID changes. In this scenario, currently, you must update the Traffic Manager profile by first deleting and then adding back the endpoints to the profile.
+For more information, see [To move an endpoint](traffic-manager-manage-endpoints.md#to-move-an-endpoint).
+ ## Traffic Manager endpoint monitoring ### Is Traffic Manager resilient to Azure region failures?
Traffic manager can't provide any certificate validation, including:
### Do I use an IP address or a DNS name when adding an endpoint?
-Traffic Manager supports adding endpoints using three ways to refer them ΓÇô as a DNS name, as an IPv4 address and as an IPv6 address. If the endpoint is added as an IPv4 or IPv6 address the query response will be of record type A or AAAA, respectively. If the endpoint was added as a DNS name, then the query response will be of record type CNAME. Adding endpoints as IPv4 or IPv6 address is permitted only if the endpoint is of type **External**.
+Traffic Manager supports adding endpoints using three ways to refer to them: as a DNS name, as an IPv4 address, and as an IPv6 address. If the endpoint is added as an IPv4 or IPv6 address, the query response is of record type A or AAAA, respectively. If the endpoint was added as a DNS name, then the query response is of record type CNAME. Adding endpoints as IPv4 or IPv6 addresses is permitted only if the endpoint is of type **External**.
All routing methods and monitoring settings are supported by the three endpoint addressing types. ### What types of IP addresses can I use when adding an endpoint?
Yes. You can specify TCP as the monitoring protocol and Traffic Manager can init
When TCP monitoring is used, Traffic Manager starts a three-way TCP handshake by sending a SYN request to the endpoint at the specified port. It then waits for an SYN-ACK response from the endpoint for a period of time (specified in the timeout settings). - If an SYN-ACK response is received within the timeout period specified in the monitoring settings, then that endpoint is considered healthy. A FIN or FIN-ACK is the expected response from the Traffic Manager when it regularly terminates a socket.-- If an SYN-ACK response is received after the specified timeout, the Traffic Manager will respond with an RST to reset the connection.
+- If an SYN-ACK response is received after the specified timeout, Traffic Manager responds with an RST to reset the connection.
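To sanity-check from a test machine that an endpoint completes a TCP handshake on the monitored port, you can use a quick sketch like the following; the host name and port are illustrative.

```powershell
# Minimal sketch: confirm the endpoint accepts TCP connections on the monitored port.
Test-NetConnection -ComputerName "myendpoint.contoso.com" -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```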
### How fast does Traffic Manager move my users away from an unhealthy endpoint?
Traffic Manager monitoring settings are at a per profile level. If you need to u
### How can I assign HTTP headers to the Traffic Manager health checks to my endpoints? Traffic Manager allows you to specify custom headers in the HTTP(S) health checks it initiates to your endpoints. If you want to specify a custom header, you can do that at the profile level (applicable to all endpoints) or specify it at the endpoint level. If a header is defined at both levels, then the one specified at the endpoint level overrides the profile level 1.
-One common use case for this is specifying host headers so that Traffic Manager requests may get routed correctly to an endpoint hosted in a multi-tenant environment. Another use case of this is to identify Traffic Manager requests from an endpoint's HTTP(S) request logs
+One common use case for this is specifying host headers so that Traffic Manager requests may get routed correctly to an endpoint hosted in a multitenant environment. Another use case is to identify Traffic Manager requests from an endpoint's HTTP(S) request logs.
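A hedged sketch of adding a profile-level host header with the Az.TrafficManager module might look like the following; the profile name and header value are illustrative.

```powershell
# Minimal sketch, assuming the Az.TrafficManager module; names and values are illustrative.
$tmProfile = Get-AzTrafficManagerProfile -Name "myTrafficManagerProfile" -ResourceGroupName "myResourceGroup"

# Add a custom host header that applies to health checks for all endpoints in the profile.
Add-AzTrafficManagerCustomHeaderToProfile -TrafficManagerProfile $tmProfile -Name "host" -Value "www.contoso.com"

# Commit the change.
Set-AzTrafficManagerProfile -TrafficManagerProfile $tmProfile
```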
### What host header do endpoint health checks use?
traffic-manager Traffic Manager Manage Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-endpoints.md
Title: Manage endpoints in Azure Traffic Manager
-description: This article will help you add, remove, enable and disable endpoints from Azure Traffic Manager.
+description: This article helps you add, remove, enable, disable, and move Azure Traffic Manager endpoints.
Previously updated : 04/24/2023 Last updated : 10/02/2023
-# Add, disable, enable, or delete endpoints
+# Add, disable, enable, delete, or move endpoints
The Web Apps feature in Azure App Service already provides failover and round-robin traffic routing functionality for websites within a datacenter, regardless of the website mode. Azure Traffic Manager allows you to specify failover and round-robin traffic routing for websites and cloud services in different datacenters. The first step necessary to provide that functionality is to add the cloud service or website endpoint to Traffic Manager.
You can also disable individual endpoints that are part of a Traffic Manager pro
## To add a cloud service or an App service endpoint to a Traffic Manager profile
-1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the portalΓÇÖs search bar, search for the **Traffic Manager profile** name that you want to modify, and then click the Traffic Manager profile in the results that the displayed.
-3. In the **Traffic Manager profile** blade, in the **Settings** section, click **Endpoints**.
-4. In the **Endpoints** blade that is displayed, click **Add**.
+1. Using a web browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the portal's search bar, search for the **Traffic Manager profile** name that you want to modify, and then select the Traffic Manager profile in the results that are displayed.
+3. In the **Traffic Manager profile** blade, in the **Settings** section, select **Endpoints**.
+4. In the **Endpoints** blade that is displayed, select **Add**.
5. In the **Add endpoint** blade, complete as follows:
- 1. For **Type**, click **Azure endpoint**.
+ 1. For **Type**, select **Azure endpoint**.
2. Provide a **Name** by which you want to recognize this endpoint. 3. For **Target resource type**, from the drop-down, choose the appropriate resource type.
- 4. For **Target resource**, click the **Choose...** selector to list resources under the same subscription in the **Resources blade**. In the **Resource** blade that is displayed, pick the service that you want to add as the first endpoint.
- 5. For **Priority**, select as **1**. This results in all traffic going to this endpoint if it is healthy.
+ 4. For **Target resource**, select the **Choose...** selector to list resources under the same subscription in the **Resources blade**. In the **Resource** blade that is displayed, pick the service that you want to add as the first endpoint.
+ 5. For **Priority**, select **1**. This results in all traffic going to this endpoint if it's healthy.
6. Keep **Add as disabled** unchecked.
- 7. Click **OK**
+ 7. Select **OK**
6. Repeat steps 4 and 5 to add the next Azure endpoint. Make sure to add it with its **Priority** value set at **2**.
-7. When the addition of both endpoints is complete, they are displayed in the **Traffic Manager profile** blade along with their monitoring status as **Online**.
+7. When the addition of both endpoints is complete, they're displayed in the **Traffic Manager profile** blade along with their monitoring status as **Online**.
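If you prefer to script the same configuration instead of using the portal, a minimal sketch with the Az.TrafficManager module might look like the following; the resource names and the target resource are illustrative.

```powershell
# Minimal sketch, assuming the Az.TrafficManager module and an existing App Service web app.
# Names are illustrative.
$webApp = Get-AzWebApp -ResourceGroupName "myResourceGroup" -Name "myWebApp"

New-AzTrafficManagerEndpoint -Name "primary-endpoint" `
    -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" `
    -Type AzureEndpoints `
    -TargetResourceId $webApp.Id `
    -Priority 1 `
    -EndpointStatus Enabled
```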
> [!NOTE] > After you add or remove an endpoint from a profile using the *Failover* traffic routing method, the failover priority list may not be ordered the way you want. You can adjust the order of the Failover Priority List on the Configuration page. For more information, see [Configure Failover traffic routing](./traffic-manager-configure-priority-routing-method.md). ## To disable an endpoint
-1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the portalΓÇÖs search bar, search for the **Traffic Manager profile** name that you want to modify, and then click the Traffic Manager profile in the results that are displayed.
-3. In the **Traffic Manager profile** blade, in the **Settings** section, click **Endpoints**.
-4. Click the endpoint that you want to disable.
-5. In the **Endpoint** blade, change the endpoint status to **Disabled**, and then click **Save**.
+1. Using a web browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the portal's search bar, search for the **Traffic Manager profile** name that you want to modify, and then select the Traffic Manager profile in the results that are displayed.
+3. In the **Traffic Manager profile** blade, in the **Settings** section, select **Endpoints**.
+4. Select the endpoint that you want to disable.
+5. In the **Endpoint** blade, change the endpoint status to **Disabled**, and then select **Save**.
6. Clients continue to send traffic to the endpoint for the duration of Time-to-Live (TTL). You can change the TTL on the Configuration page of the Traffic Manager profile. ## To enable an endpoint
-1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the portalΓÇÖs search bar, search for the **Traffic Manager profile** name that you want to modify, and then click the Traffic Manager profile in the results that are displayed.
-3. In the **Traffic Manager profile** blade, in the **Settings** section, click **Endpoints**.
-4. Click the endpoint that you want to enable.
-5. In the **Endpoint** blade, change the endpoint status to **Enabled**, and then click **Save**.
+1. Using a web browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the portal's search bar, search for the **Traffic Manager profile** name that you want to modify, and then select the Traffic Manager profile in the results that are displayed.
+3. In the **Traffic Manager profile** blade, in the **Settings** section, select **Endpoints**.
+4. Select the endpoint that you want to enable.
+5. In the **Endpoint** blade, change the endpoint status to **Enabled**, and then select **Save**.
6. Clients continue to send traffic to the endpoint for the duration of Time-to-Live (TTL). You can change the TTL on the Configuration page of the Traffic Manager profile. ## To delete an endpoint
-1. From a browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the portalΓÇÖs search bar, search for the **Traffic Manager profile** name that you want to modify, and then click the Traffic Manager profile in the results that are displayed.
-3. In the **Traffic Manager profile** blade, in the **Settings** section, click **Endpoints**.
-4. Click the endpoint that you want to delete.
-5. In the **Endpoint** blade, click **Delete**
+1. Using a web browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the portal's search bar, search for the **Traffic Manager profile** name that you want to modify, and then select the Traffic Manager profile in the results that are displayed.
+3. In the **Traffic Manager profile** blade, in the **Settings** section, select **Endpoints**.
+4. Select the endpoint that you want to delete.
+5. In the **Endpoint** blade, select **Delete**
+
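The disable, enable, and delete operations described above can also be scripted. A minimal sketch with the Az.TrafficManager module, using illustrative names:

```powershell
# Minimal sketch, assuming the Az.TrafficManager module; names are illustrative.
# Disable an endpoint without removing it from the profile.
Disable-AzTrafficManagerEndpoint -Name "primary-endpoint" -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" -Type AzureEndpoints -Force

# Re-enable it later.
Enable-AzTrafficManagerEndpoint -Name "primary-endpoint" -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" -Type AzureEndpoints

# Delete it entirely.
Remove-AzTrafficManagerEndpoint -Name "primary-endpoint" -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" -Type AzureEndpoints -Force
```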
+## To move an endpoint
+
+1. Using a web browser, sign in to the [Azure portal](https://portal.azure.com).
+2. In the portal's search bar, search for the name of the Azure resource that you want to move (the resource that's currently used as a Traffic Manager endpoint), and then select the resource in the results that are displayed.
+3. Inside the resource's blade, select the **Move** option. Follow the instructions to move the resource to the desired subscription or resource group.
+4. When the resource has been successfully moved, return to the Azure Traffic Manager Profile that had the resource as an endpoint.
+5. Locate and select the old endpoint that was previously linked to the resource you moved. Select **Delete** to remove this old endpoint from the Traffic Manager profile.
+6. Select **Add** to create and configure a new endpoint that targets the recently moved Azure resource.
+
+For more information, see: [How do I move my Traffic Manager profile's Azure endpoints to a different resource group or subscription?](traffic-manager-FAQs.md#how-do-i-move-my-traffic-manager-profiles-azure-endpoints-to-a-different-resource-group-or-subscription)
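A scripted version of the same sequence might look like the following sketch, assuming the Az.Resources and Az.TrafficManager modules; the resource IDs, names, and groups are illustrative, and a cross-subscription move also needs the destination subscription ID.

```powershell
# Minimal sketch, assuming the Az.Resources and Az.TrafficManager modules.
# Resource IDs and names are illustrative.

# 1. Move the Azure resource that backs the endpoint to another resource group.
#    (Add -DestinationSubscriptionId for a cross-subscription move.)
Move-AzResource -DestinationResourceGroupName "newResourceGroup" `
    -ResourceId "/subscriptions/<subscription-id>/resourceGroups/oldResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP"

# 2. Remove the old endpoint, whose resource ID no longer matches the moved resource.
Remove-AzTrafficManagerEndpoint -Name "old-endpoint" -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" -Type AzureEndpoints -Force

# 3. Add a new endpoint that targets the moved resource.
New-AzTrafficManagerEndpoint -Name "moved-endpoint" -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" -Type AzureEndpoints `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/newResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP" `
    -EndpointStatus Enabled
```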
## Next steps
virtual-desktop Agent Updates Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-updates-diagnostics.md
This article describes how to use diagnostic logs in a Log Analytics workspace t
To enable sending diagnostic logs to your Log Analytics workspace:
-1. Create a Log Analytics workspace, if you haven't already. Next, get the workspace ID and primary key by following the instructions in [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md#before-you-get-started).
+1. Create a Log Analytics workspace, if you haven't already. Next, get the workspace ID and primary key by following the instructions in [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md#prerequisites).
2. Send diagnostics to the Log Analytics workspace you created by following the instructions in [Push diagnostics data to your workspace](diagnostics-log-analytics.md#push-diagnostics-data-to-your-workspace).
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
The following table shows which input methods are available for each Remote Desk
| Mouse | X | X | X | X | X | X | | Touch | X | X | X | X | | X | | Multi-touch | X | X | X | X | | |
-| Pen | X | | X (as touch) | X\* | | |
+| Pen | X | | X | X\* | | |
\* Pen input redirection is not supported when connecting to Windows Server 2012 or Windows Server 2012 R2.
virtual-desktop Connection Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-quality-monitoring.md
>[!IMPORTANT] >The Connection Graphics Data Logs are currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-[Connection quality](connection-latency.md) is essential for good user experiences, so it's important to be able to monitor connections for potential issues and troubleshoot problems as they arise. Azure Virtual Desktop offers tools like [Log Analytics](diagnostics-log-analytics.md) that can help you monitor your deployment's connection health. This article will show you how to configure your diagnostic settings to let you collect connection quality data and query data for specific parameters.
+[Connection quality](connection-latency.md) is essential for good user experiences, so it's important to be able to monitor connections for potential issues and troubleshoot problems as they arise. Azure Virtual Desktop integrates with tools like [Log Analytics](diagnostics-log-analytics.md) that can help you monitor your deployment's connection health. This article will show you how to configure your diagnostic settings to let you collect connection quality data and query data for specific parameters.
## Prerequisites
-To start collecting connection quality data, youΓÇÖll need to [set up a Log Analytics workspace](diagnostics-log-analytics.md).
+To start collecting connection quality data, you need to [set up a Log Analytics workspace for use with Azure Virtual Desktop](diagnostics-log-analytics.md).
>[!NOTE] >Normal storage charges for Log Analytics will apply. Learn more at [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
Azure Monitor lets you analyze Azure Virtual Desktop data and review virtual mac
>[!NOTE] >To learn how to monitor your VMs in Azure, see [Monitoring Azure virtual machines with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md). Also, make sure to review the [Azure Virtual Desktop Insights glossary](./insights-glossary.md) for a better understanding of your user experience on the session host.
-## Before you get started
+## Prerequisites
-Before you can use Log Analytics, you'll need to create a workspace. To do that, follow the instructions in one of the following two articles:
+Before you can use Azure Virtual Desktop with Log Analytics, you need:
-- If you prefer using Azure portal, see [Create a Log Analytics workspace in Azure portal](../azure-monitor/logs/quick-create-workspace.md).-- If you prefer PowerShell, see [Create a Log Analytics workspace with PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
+- A Log Analytics workspace. For more information, see [Create a Log Analytics workspace in Azure portal](../azure-monitor/logs/quick-create-workspace.md) or [Create a Log Analytics workspace with PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md). After you've created your workspace, follow the instructions in [Connect Windows computers to Azure Monitor](../azure-monitor/agents/agent-windows.md#workspace-id-and-key) to get the following information:
+ - The workspace ID
+ - The primary key of your workspace
-After you've created your workspace, follow the instructions in [Connect Windows computers to Azure Monitor](../azure-monitor/agents/agent-windows.md#workspace-id-and-key) to get the following information:
+ You'll need this information later in the setup process.
-- The workspace ID-- The primary key of your workspace
+- Access to specific URLs from your session hosts for diagnostics to work. For more information, see [Required URLs for Azure Virtual Desktop](safe-url-list.md) where you'll see entries for **Diagnostic output**.
-You'll need this information later in the setup process.
-
-Make sure to review permission management for Azure Monitor to enable data access for those who monitor and maintain your Azure Virtual Desktop environment. For more information, see [Get started with roles, permissions, and security with Azure Monitor](../azure-monitor/roles-permissions-security.md).
+- Make sure to review permission management for Azure Monitor to enable data access for those who monitor and maintain your Azure Virtual Desktop environment. For more information, see [Get started with roles, permissions, and security with Azure Monitor](../azure-monitor/roles-permissions-security.md).
## Push diagnostics data to your workspace
virtual-desktop Enable Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enable-gpu-acceleration.md
Title: Configure GPU for Azure Virtual Desktop - Azure
-description: How to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop.
+description: Learn how to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop.
Last updated 05/06/2019
-# Configure graphics processing unit (GPU) acceleration for Azure Virtual Desktop
+# Configure GPU acceleration for Azure Virtual Desktop
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/configure-vm-gpu-2019.md).
+> [!IMPORTANT]
+> This content applies to Azure Virtual Desktop with Azure Resource Manager objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/configure-vm-gpu-2019.md).
-Azure Virtual Desktop supports GPU-accelerated rendering and encoding for improved app performance and scalability. GPU acceleration is particularly crucial for graphics-intensive apps and is supported in the following operating systems:
+Azure Virtual Desktop supports graphics processing unit (GPU) acceleration in rendering and encoding for improved app performance and scalability. GPU acceleration is crucial for graphics-intensive apps and can be used with all [supported operating systems](prerequisites.md#operating-systems-and-licenses) for Azure Virtual Desktop.
-* Windows 10 version 1511 or newer
-* Windows Server 2016 or newer
+The list doesn't specifically include multi-session versions of Windows. However, each GPU in NV-series Azure virtual machines (VMs) comes with a GRID license that supports 25 concurrent users. For more information, see [NV-series](../virtual-machines/nv-series.md).
->[!NOTE]
-> Multi-session versions of Windows are not specifically listed, however each GPU in NV-series Azure virtual machine comes with a GRID license that supports 25 concurrent users. For more information, see [NV-series](../virtual-machines/nv-series.md).
+This article shows you how to create a GPU-optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding.
-Follow the instructions in this article to create a GPU optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding. This article assumes you have already [created a host pool](./create-host-pools-azure-marketplace.md) and an [application group](./manage-app-groups.md).
+## Prerequisites
-## Select an appropriate GPU-optimized Azure virtual machine size
+This article assumes that you already created a host pool and an application group.
-Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md), [NVadsA10 v5-series](../virtual-machines/nva10v5-series.md), or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Note that NV-series VMs are planned to be retired. For more information, see [NV retirement](../virtual-machines/nv-series-retirement.md).
+## Select an appropriate GPU-optimized Azure VM size
->[!NOTE]
->Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They do not support GPU acceleration for most apps or the Windows user interface.
+Select one of the Azure [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md), [NVadsA10 v5-series](../virtual-machines/nva10v5-series.md), or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These sizes are tailored for app and desktop virtualization. They enable most apps and the Windows user interface to be GPU accelerated.
+
+The right choice for your host pool depends on many factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density. Smaller and fractional GPU sizes allow more fine-grained control over cost and quality.
+
+> [!NOTE]
+> NV-series VMs are planned to be retired. For more information, see [NV retirement](../virtual-machines/nv-series-retirement.md).
+
+Azure NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They don't support GPU acceleration for most apps or the Windows user interface.
## Install supported graphics drivers in your virtual machine
-To take advantage of the GPU capabilities of Azure N-series VMs in Azure Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers. Only drivers distributed by Azure are supported.
+To take advantage of the GPU capabilities of Azure N-series VMs in Azure Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers. Only Azure-distributed drivers are supported.
+
+Keep this size-specific information in mind:
+
+* For Azure NV-series, NVv3-series, or NCasT4_v3-series VMs, only NVIDIA GRID drivers support GPU acceleration for most apps and the Windows user interface. NVIDIA CUDA drivers don't support GPU acceleration for these VM sizes.
-* For Azure NV-series, NVv3-series or NCasT4_v3-series VMs, only NVIDIA GRID drivers, and not NVIDIA CUDA drivers, support GPU acceleration for most apps and the Windows user interface. If you choose to install drivers manually, be sure to install GRID drivers. If you choose to install drivers using the Azure VM extension, GRID drivers will automatically be installed for these VM sizes.
-* For Azure NVv4-series VMs, install the AMD drivers provided by Azure. You may install them automatically using the Azure VM extension, or you may install them manually.
+ If you choose to install drivers manually, be sure to install GRID drivers. If you choose to install drivers by using the Azure VM extension, GRID drivers will automatically be installed for these VM sizes.
+* For Azure NVv4-series VMs, install the AMD drivers that Azure provides. You can install them automatically by using the Azure VM extension, or you can install them manually.
-After driver installation, a VM restart is required. Use the verification steps in the above instructions to confirm that graphics drivers were successfully installed.
+After driver installation, a VM restart is required. Use the verification steps in the preceding instructions to confirm that graphics drivers were successfully installed.
## Configure GPU-accelerated app rendering
-By default, apps and desktops running on Windows Server are rendered with the CPU and do not leverage available GPUs for rendering. Configure Group Policy for the session host to enable GPU-accelerated rendering:
+By default, apps and desktops running on Windows Server are rendered with the CPU and don't use available GPUs for rendering. Configure Group Policy for the session host to enable GPU-accelerated rendering:
-1. Connect to the desktop of the VM using an account with local administrator privileges.
-2. Open the Start menu and type "gpedit.msc" to open the Group Policy Editor.
-3. Navigate the tree to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-4. Select policy **Use hardware graphics adapters for all Remote Desktop Services sessions** and set this policy to **Enabled** to enable GPU rendering in the remote session.
+1. Connect to the desktop of the VM by using an account that has local administrator privileges.
+2. Open the **Start** menu and enter **gpedit.msc** to open Group Policy Editor.
+3. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
+4. Select the policy **Use hardware graphics adapters for all Remote Desktop Services sessions**. Set this policy to **Enabled** to enable GPU rendering in the remote session.
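If you manage many session hosts without domain Group Policy, you can apply the equivalent setting from a script. The following is a hedged sketch; the registry value name is an assumption based on the registry-based policy equivalent, so verify it against your Group Policy reference before rolling it out.

```powershell
# Minimal sketch: apply the equivalent policy through the registry from an elevated
# PowerShell session. The value name bEnumerateHWBeforeSW is an assumption based on
# the registry-based policy setting; verify it before using it at scale.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'bEnumerateHWBeforeSW' -Value 1 -Type DWord
gpupdate.exe /force
```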
## Configure GPU-accelerated frame encoding
-Remote Desktop encodes all graphics rendered by apps and desktops (whether rendered with GPU or with CPU) for transmission to Remote Desktop clients. When part of the screen is frequently updated, this part of the screen is encoded with a video codec (H.264/AVC). By default, Remote Desktop does not leverage available GPUs for this encoding. Configure Group Policy for the session host to enable GPU-accelerated frame encoding. Continuing the steps above:
+Remote Desktop encodes all graphics that apps and desktops render for transmission to Remote Desktop clients. When part of the screen is frequently updated, this part of the screen is encoded with a video codec (H.264/AVC). By default, Remote Desktop doesn't use available GPUs for this encoding.
+
+Configure Group Policy for the session host to enable GPU-accelerated frame encoding. The following procedure continues the previous steps.
+
+> [!NOTE]
+> GPU-accelerated frame encoding is not available in NVv4-series VMs.
+
+1. Select the policy **Configure H.264/AVC hardware encoding for Remote Desktop connections**. Set this policy to **Enabled** to enable hardware encoding for AVC/H.264 in the remote session.
+
+ If you're using Windows Server 2016, set **Prefer AVC Hardware Encoding** to **Always attempt**.
->[!NOTE]
->GPU-accelerated frame encoding is not available in NVv4-series VMs.
+2. Now that you've edited the policies, force a Group Policy update. Open the command prompt as an administrator and run the following command:
-1. Select policy **Configure H.264/AVC hardware encoding for Remote Desktop connections** and set this policy to **Enabled** to enable hardware encoding for AVC/H.264 in the remote session.
+ ```cmd
+ gpupdate.exe /force
+ ```
- >[!NOTE]
- >In Windows Server 2016, set option **Prefer AVC Hardware Encoding** to **Always attempt**.
+3. Sign out of the Remote Desktop session.
-2. Now that the group policies have been edited, force a group policy update. Open the Command Prompt and type:
+## Configure full-screen video encoding
- ```cmd
- gpupdate.exe /force
- ```
+> [!NOTE]
+> You can enable full-screen video encoding even without a GPU present.
-3. Sign out from the Remote Desktop session.
+If you often use applications that produce high-frame-rate content, you might choose to enable full-screen video encoding for a remote session. Such applications might include 3D modeling, CAD/CAM, or video applications.
-## Configure fullscreen video encoding
+A full-screen video profile provides a higher frame rate and better user experience for these applications, at the expense of network bandwidth and both session host and client resources. We recommend that you use GPU-accelerated frame encoding for a full-screen video encoding.
->[!NOTE]
->Fullscreen video encoding can be enabled even without a GPU present.
+Configure Group Policy for the session host to enable full-screen video encoding. Continuing the previous steps:
-If you often use applications that produce a high-frame rate content, such as 3D modeling, CAD/CAM and video applications, you may choose to enable a fullscreen video encoding for a remote session. Fullscreen video profile provides a higher frame rate and better user experience for such applications at expense of network bandwidth and both session host and client resources. It is recommended to use GPU-accelerated frame encoding for a full-screen video encoding. Configure Group Policy for the session host to enable fullscreen video encoding. Continuing the steps above:
+1. Select the policy **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections**. Set this policy to **Enabled** to force the H.264/AVC 444 codec in the remote session.
+2. Now that you've edited the policies, force a Group Policy update. Open the command prompt as an administrator and run the following command:
-1. Select policy **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** and set this policy to **Enabled** to force H.264/AVC 444 codec in the remote session.
-2. Now that the group policies have been edited, force a group policy update. Open the Command Prompt and type:
+ ```cmd
+ gpupdate.exe /force
+ ```
- ```cmd
- gpupdate.exe /force
- ```
+3. Sign out of the Remote Desktop session.
-3. Sign out from the Remote Desktop session.
## Verify GPU-accelerated app rendering
-To verify that apps are using the GPU for rendering, try any of the following:
+To verify that apps are using the GPU for rendering, try either of the following methods:
-* For Azure VMs with a NVIDIA GPU, use the `nvidia-smi` utility as described in [Verify driver installation](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation) to check for GPU utilization when running your apps.
-* On supported operating system versions, you can use the Task Manager to check for GPU utilization. Select the GPU in the "Performance" tab to see whether apps are utilizing the GPU.
+* For Azure VMs with an NVIDIA GPU, use the `nvidia-smi` utility to check for GPU utilization when running your apps. For more information, see [Verify driver installation](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
+* On supported operating system versions, you can use Task Manager to check for GPU utilization. Select the GPU on the **Performance** tab to see whether apps are utilizing the GPU.
## Verify GPU-accelerated frame encoding To verify that Remote Desktop is using GPU-accelerated encoding:
-1. Connect to the desktop of the VM using Azure Virtual Desktop client.
-2. Launch the Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**
-3. To determine if GPU-accelerated encoding is used, look for event ID 170. If you see "AVC hardware encoder enabled: 1" then GPU encoding is used.
+1. Connect to the desktop of the VM by using the Azure Virtual Desktop client.
+2. Open Event Viewer and go to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
+3. Look for event ID 170. If you see **AVC hardware encoder enabled: 1**, Remote Desktop is using GPU-accelerated encoding.
> [!TIP]
-> If you're connecting to your session host outside of Azure Virtual Desktop for testing GPU acceleration, the logs will instead be stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+> If you're connecting to your session host outside Azure Virtual Desktop for testing GPU acceleration, the logs are instead stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
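If you want to check the same event from PowerShell rather than Event Viewer, the following sketch queries the operational log for event ID 170; the log name is assumed from the Event Viewer path above, and you can swap in the RdpCoreTs log or event ID 162 (full-screen video encoding, covered in the next section) as needed.

```powershell
# Minimal sketch: look for the hardware encoder event (ID 170) in the operational log.
# The log name is an assumption based on the Event Viewer path above.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-RemoteDesktopServices-RdpCoreCDV/Operational'
    Id      = 170
} -MaxEvents 5 | Format-List TimeCreated, Id, Message
```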
-## Verify fullscreen video encoding
+## Verify full-screen video encoding
-To verify that Remote Desktop is using fullscreen video encoding:
+To verify that Remote Desktop is using full-screen video encoding:
-1. Connect to the desktop of the VM using Azure Virtual Desktop client.
-2. Launch the Event Viewer and navigate to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**
-3. To determine if fullscreen video encoding is used, look for event ID 162. If you see "AVC Available: 1 Initial Profile: 2048" then AVC 444 is used.
+1. Connect to the desktop of the VM by using the Azure Virtual Desktop client.
+2. Open Event Viewer and go to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
+3. Look for event ID 162. If you see **AVC Available: 1 Initial Profile: 2048**, Remote Desktop is using full-screen video encoding (AVC 444).
> [!TIP]
-> If you're connecting to your session host outside of Azure Virtual Desktop for testing GPU acceleration, the logs will instead be stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+> If you're connecting to your session host outside Azure Virtual Desktop for testing GPU acceleration, the logs are instead stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
## Next steps
-These instructions should have you up and running with GPU acceleration on one session host (one VM). Some additional considerations for enabling GPU acceleration across a larger host pool:
+These instructions should have you operating with GPU acceleration on one session host (one VM). Here are additional considerations for enabling GPU acceleration across a larger host pool:
-* Consider using a [VM extension](../virtual-machines/extensions/overview.md) to simplify driver installation and updates across a number of VMs. Use the [NVIDIA GPU Driver Extension](../virtual-machines/extensions/hpccompute-gpu-windows.md) for VMs with NVIDIA GPUs, and use the [AMD GPU Driver Extension](../virtual-machines/extensions/hpccompute-amd-gpu-windows.md) for VMs with AMD GPUs.
-* Consider using Active Directory Group Policy to simplify group policy configuration across a number of VMs. For information about deploying Group Policy in the Active Directory domain, see [Working with Group Policy Objects](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731212(v=ws.11)).
+* Consider using a [VM extension](../virtual-machines/extensions/overview.md) to simplify driver installation and updates across VMs. Use the [NVIDIA GPU Driver Extension](../virtual-machines/extensions/hpccompute-gpu-windows.md) for VMs with NVIDIA GPUs. Use the [AMD GPU Driver Extension](../virtual-machines/extensions/hpccompute-amd-gpu-windows.md) for VMs with AMD GPUs.
+* Consider using Active Directory to simplify Group Policy configuration across VMs. For information about deploying Group Policy in the Active Directory domain, see [Working with Group Policy Objects](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731212(v=ws.11)).
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
The following table is the list of URLs your session host VMs need to access for
||||| | `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services | | `*.wvd.microsoft.com` | 443 | Service traffic | WindowsVirtualDesktop |
-| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic | AzureMonitor |
+| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic<br /><br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
| `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend | | `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud | | `kms.core.windows.net` | 1688 | Windows activation | Internet |
The following table lists optional URLs that your session host virtual machines
|--|--|--|--| | `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services | | `*.wvd.azure.us` | 443 | Service traffic | WindowsVirtualDesktop |
-| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic | AzureMonitor |
+| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic<br /><br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
| `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | | `kms.core.usgovcloudapi.net` | 1688 | Windows activation | Internet | | `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud |
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
This article lists the features of Microsoft Teams that Azure Virtual Desktop cu
## Supported features
-The following table lists whether the Windows Desktop client, Azure Virtual Desktop Store app or macOS client supports specific features for Teams on Azure Virtual Desktop. Other clients are not supported.
+The following table lists whether the Windows Desktop client, Azure Virtual Desktop Store app, or macOS client supports specific features for Teams on Azure Virtual Desktop. Other clients aren't supported.
| Feature | Windows Desktop client and Azure Virtual Desktop app | macOS client | |--|--|--|
The following table lists the minimum required versions for each Teams feature.
| Manage breakout rooms | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Mirror my video | 1.2.3770 and later | Not supported | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Multiwindow | 1.2.1755 and later | 10.7.7 and later | 1.1.2110.16001 and later | Updates within 90 days of the current version |
-| Noise suppression | 1.2.3316 and later | 10.8.1 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |
+| Noise suppression* | 1.2.3316 and later | 10.8.1 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |
| Screen share and video together | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Screen share | 1.2.1755 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Secondary ringer | 1.2.3004 and later | 10.7.7 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Shared system audio | 1.2.4058 and later | Not supported | 1.0.2006.11001 and later | Updates within 90 days of the current version | | Simulcast | 1.2.3667 and later | 10.8.1 and later | 1.0.2006.11001 and later | Updates within 90 days of the current version |
+\* When using [Teams media optimizations](teams-on-avd.md#verify-media-optimizations-loaded), noise suppression is on by default, but confirmation isn't shown in the Teams client. This is by design.
+ ## Next steps Learn more about how to set up Teams for Azure Virtual Desktop at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
There are several keyboard shortcuts you can use to help use some of the feature
#### Input Method Editor
-The web client supports Input Method Editor (IME) in the remote session. Before you can use the IME, you must install the language pack for the keyboard you want to use in the remote session must be installed on your session host by your admin. To learn more about setting up language packs in the remote session, see [Add language packs to a Windows 10 multi-session image](../language-packs.md).
+The web client supports Input Method Editor (IME) in the remote session. Before you can use the IME in a remote session, the language pack for the keyboard you want to use must be installed on your session host by your admin. To learn more about setting up language packs in the remote session, see [Add language packs to a Windows 10 multi-session image](../language-packs.md).
To enable IME input using the web client:
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/set-up-scaling-script.md
First, you'll need an Azure Automation account to run the PowerShell runbook. Th
Now that you have an Azure Automation account, you'll also need to create an Azure Automation Run As account if you don't have one already. This account will let the tool access your Azure resources.
-An [Azure Automation Run As account](../../automation/manage-runas-account.md) provides authentication for managing resources in Azure with Azure cmdlets. When you create a Run As account, it creates a new service principal user in Azure Active Directory and assigns the Contributor role to the service principal user at the subscription level. An Azure Run As account is a great way to authenticate securely with certificates and a service principal name without needing to store a username and password in a credential object. To learn more about Run As account authentication, see [Limit Run As account permissions](../../automation/manage-runas-account.md#limit-run-as-account-permissions).
+An Azure Automation Run As account provides authentication for managing resources in Azure with Azure cmdlets. When you create a Run As account, it creates a new service principal user in Azure Active Directory and assigns the Contributor role to the service principal user at the subscription level. An Azure Run As account is a great way to authenticate securely with certificates and a service principal name without needing to store a username and password in a credential object.
Any user who's a member of the Subscription Admins role and coadministrator of the subscription can create a Run As account.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 09/19/2023 Last updated : 10/03/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-| | Public | 1.2.4582 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.4582 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.4675 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+
+## Updates for version 1.2.4675 (Insider)
+
+*Date published: October 3, 2023*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+- Enhanced launch capabilities from the web to the Microsoft Remote Desktop Client by adding multiple-monitor configuration parameters to support internal and external customers.
+- Added support for the following languages: Czech (Czechia), Hungarian (Hungary), Indonesian (Indonesia), Korean (Korea), Portuguese (Portugal), Turkish (Turkey).
+- Fixed a bug that caused a crash when using Teams Media Optimization.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
## Updates for version 1.2.4582 *Date published: September 19, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1byOF), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1bwjL), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1byOV)
In this release, we've made the following changes:
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md
The following examples are scenarios that may benefit the use of schedule-based
- If a department uses an application heavily at certain parts of the month or fiscal cycle, automatically scale the number of VM instances to accommodate their additional demands. - When there is a marketing event, promotion, or holiday sale, you can automatically scale the number of VM instances ahead of anticipated customer demand.
+## Limitations
+- You can have up to 20 Autoscale rules for a given scale set.
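+
+To illustrate what counts toward this limit, here's a minimal Azure CLI sketch that creates an autoscale setting with a single CPU-based scale-out rule. The resource group, scale set, and setting names are placeholders; adjust the instance counts and condition to your own capacity plan.
+
+```azurecli
+# Create an autoscale setting for an existing scale set (placeholder names).
+az monitor autoscale create \
+  --resource-group myResourceGroup \
+  --resource myScaleSet \
+  --resource-type Microsoft.Compute/virtualMachineScaleSets \
+  --name autoscale-demo \
+  --min-count 2 --max-count 10 --count 2
+
+# Each rule added this way counts toward the 20-rule limit for the scale set.
+az monitor autoscale rule create \
+  --resource-group myResourceGroup \
+  --autoscale-name autoscale-demo \
+  --condition "Percentage CPU > 70 avg 5m" \
+  --scale out 1
+```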
+ ## Next steps You can create autoscale rules that use host-based metrics with one of the following tools:
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
Previously updated : 04/11/2023 Last updated : 10/03/2023
This article shares some guidance on configuring and optimizing the InfiniBand-e
On InfiniBand (IB) enabled VMs, the appropriate drivers are required to enable RDMA.
+- The [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers and GPU drivers.
+- The [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers and GPU drivers.
- The [CentOS-HPC VM images](#centos-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers. - The CentOS-HPC version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers.-- The [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers and GPU drivers. These VM images are based on the base CentOS and Ubuntu marketplace VM images. Scripts used in the creation of these VM images from their base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos). On GPU enabled [N-series](sizes-gpu.md) VMs, the appropriate GPU drivers are additionally required. This can be available by the following methods: -- Use the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and [CentOS-HPC VM image](#centos-hpc-vm-images) version 7.9 that come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL).
+- Use the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), or [CentOS-HPC VM image](#centos-hpc-vm-images) version 7.9 that come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL).
- Add the GPU drivers through the [VM extensions](./extensions/hpccompute-gpu-linux.md). - Install the GPU drivers [manually](./linux/n-series-driver-setup.md). - Some other VM images on the Marketplace also come preinstalled with the NVIDIA GPU drivers, including some VM images from NVIDIA.
-Depending on the workloads' Linux distro and version needs, both the [CentOS-HPC VM images](#centos-hpc-vm-images) and [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) in the Marketplace are the easiest way to get started with HPC and AI workloads on Azure.
+Depending on the workloads' Linux distro and version needs, [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), and [CentOS-HPC VM images](#centos-hpc-vm-images) on the Marketplace are the easiest way to get started with HPC and AI workloads on Azure.
It's also recommended to create [custom VM images](./linux/tutorial-custom-images.md) with workload specific customization and configuration for reuse. ### VM sizes supported by the HPC VM images
The latest Azure HPC marketplace images come with Mellanox OFED 5.1 and above, w
#### GPU driver support
-Currently only the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and [CentOS-HPC VM images](#centos-hpc-vm-images) version 7.9 come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL).
+Currently only the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), and [CentOS-HPC VM images](#centos-hpc-vm-images) version 7.9 come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL).
The VM size support matrix for the GPU drivers in supported HPC VM images is as follows:
All of the VM sizes in the N-series support [Gen 2 VMs](generation-2.md), though
### SR-IOV enabled VMs
+#### Ubuntu-HPC VM images
+
+For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), Ubuntu-HPC VM images versions 18.04, 20.04, and 22.04 are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA, NVIDIA GPU drivers, GPU compute software stack (CUDA, NCCL), and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).
+
+- The available or latest versions of the VM images can be listed with the following publisher and offer information by using the [CLI](/cli/azure/vm/image#az-vm-image-list) or the [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview); see the example after this list.
+
+ ```output
+ "publisher": "Microsoft-DSVM",
+ "offer": "Ubuntu-HPC",
+ ```
+
+- Scripts used in the creation of the Ubuntu-HPC VM images from a base Ubuntu Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/ubuntu).
+- Additionally, details on what's included in the Ubuntu-HPC VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
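+
+As a quick illustration of the CLI option noted above, the following sketch lists the available Ubuntu-HPC image versions; the same pattern works for the other HPC images by swapping the publisher and offer values.
+
+```azurecli
+# List all Ubuntu-HPC image versions published to the Marketplace.
+az vm image list \
+  --publisher Microsoft-DSVM \
+  --offer Ubuntu-HPC \
+  --all \
+  --output table
+```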
+
+#### AlmaLinux-HPC VM images
+
+For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), AlmaLinux-HPC VM images versions 8.5, 8.6, and 8.7 are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA, NVIDIA GPU drivers, GPU compute software stack (CUDA, NCCL), and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).
+
+- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/almalinux.almalinux-hpc?tab=overview).
+
+ ```output
+ "publisher": "AlmaLinux",
+ "offer": "AlmaLinux-HPC",
+ ```
+
+- Scripts used in the creation of the AlmaLinux-HPC VM images from a base AlmaLinux Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/alma).
+- Additionally, details on what's included in the AlmaLinux-HPC VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
+ #### CentOS-HPC VM images
-For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC VM images version 7.6 and later are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).
+For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), and CentOS-HPC VM images version 7.6 and later are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).
- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview).
For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubu
"offer": "CentOS-HPC", ``` -- Scripts used in the creation of the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos).-- Additionally, details on what's included in the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
+- Scripts used in the creation of the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), and CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images).
+- Additionally, details on what's included in the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images), [AlmaLinux-HPC VM images](#almalinux-hpc-vm-images), and CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
> [!NOTE]
-> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). CentOS 7 is currently the only supported CentOS version, which will continue to receive community security patches and bug fix updates until June 2024. Therefore, we are not releasing any new CentOS HPC images to Azure marketplace. You can still use our CentOS HPC version 7.9 images, but it is suggested to consider moving to our AlmaLinux HPC images alternatives in Azure marketplace, which have the same set of drivers installed as Ubuntu/CentOS.
+> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). CentOS 7 is currently the only supported CentOS version and will continue to receive community security patches and bug fix updates until June 2024. Therefore, we aren't releasing any new CentOS-HPC images to Azure Marketplace. You can still use the CentOS-HPC version 7.9 images, but consider moving to the AlmaLinux-HPC image alternatives in Azure Marketplace, which have the same set of drivers installed as Ubuntu-HPC and CentOS-HPC.
> [!NOTE] > SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3 and older) will be able to use the following CentOS-HPC VM image or older versions from the Marketplace:
For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubu
>- OpenLogic:CentOS-HPC:8_1:8.1.2020062400 >- OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401
-#### Ubuntu-HPC VM images
-
-For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), Ubuntu-HPC VM images versions 18.04 and 20.04 are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA, NVIDIA GPU drivers, GPU compute software stack (CUDA, NCCL), and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).
--- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview).-
- ```output
- "publisher": "Microsoft-DSVM",
- "offer": "Ubuntu-HPC",
- ```
--- Scripts used in the creation of the Ubuntu-HPC VM images from a base Ubuntu Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/ubuntu).-- Additionally, details on what's included in the Ubuntu-HPC VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).- ### RHEL/CentOS VM images The base RHEL or CentOS-based non-HPC VM images on the Marketplace can be configured for use on the SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances). Learn more about [enabling InfiniBand](./extensions/enable-infiniband.md) and [setting up MPI](setup-mpi.md) on the VMs.
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
+
+ Title: Monitoring Virtual WAN - Best practices
+
+description: Start here to learn monitoring best practices for Virtual WAN.
+++++ Last updated : 10/03/2023++
+# Monitoring Azure Virtual WAN - Best practices
+
+This article provides configuration best practices for monitoring Virtual WAN and the different components that can be deployed with it. The recommendations presented in this article are mostly based on existing Azure Monitor metrics and logs generated by Azure Virtual WAN. For a list of metrics and logs collected for Virtual WAN, see the [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md).
+
+Most of the recommendations in this article suggest creating Azure Monitor alerts. Azure Monitor alerts are meant to proactively notify you when there's an important event in the monitoring data, to help you address the root cause more quickly and ultimately reduce downtime. To learn how to create a metric alert, see [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md). To learn how to create a log query alert, see [Tutorial: Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md).
+
+## Virtual WAN gateways
+
+### Site-to-site VPN gateway
+
+**Design checklist ΓÇô metric alerts**
+
+* Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count.
+* Create alert rule to monitor BGP peer status.
+* Create alert rule to monitor number of BGP routes advertised and learned.
+* Create alert rule for VPN gateway overutilization.
+* Create alert rule for tunnel overutilization.
+
+|Recommendation | Description|
+|||
+|Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count.| An increase in tunnel egress and/or ingress packet drop count may indicate an issue with the Azure VPN gateway, or with the remote VPN device. Select the **Tunnel Egress/Ingress Packet drop count** metric when creating the alert rule(s). Define a **static Threshold value** greater than **0** and the **Total** aggregation type when configuring the alert logic.<br><br>You can choose to monitor the **Connection** as a whole, or split the alert rule by **Instance** and **Remote IP** to be alerted for issues involving individual tunnels. To learn the difference between the concept of **VPN connection**, **link**, and **tunnel** in Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).|
+|Create alert rule to monitor BGP peer status.|When using BGP in your site-to-site connections, it's important to monitor the health of the BGP peerings between the gateway instances and the remote devices, as recurrent failures can disrupt connectivity.<br><br>Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**.<br><br>It's recommended to split the alert by **Instance** and **BGP Peer Address** to detect issues with individual peerings. Avoid selecting the gateway instance IPs as **BGP Peer Address** because this metric monitors the BGP status for every possible combination, including with the instance itself (which is always 0).|
+|Create alert rule to monitor number of BGP routes advertised and learned.|**BGP Routes Advertised** and **BGP Routes Learned** monitor the number of routes advertised to and learned from peers by the VPN gateway, respectively. If these metrics drop to zero unexpectedly, it could be because there's an issue with the gateway or with on-premises.<br><br>It's recommended to configure an alert for both these metrics to be triggered whenever their value is **zero**. Choose the **Total** aggregation type. Split by **Instance** to monitor individual gateway instances.|
+|Create alert rule for tunnel overutilization.|The maximum throughput allowed per tunnel is determined by the scale units of the gateway instance where it terminates.<br><br>You may want to be alerted if a tunnel is at risk of nearing its maximum throughput, which can lead to performance and connectivity issues, and act proactively on it by investigating the root cause of the increased tunnel utilization or by increasing the gateway's scale units.<br><br>Select **Tunnel Bandwidth** when creating the alert rule. Split by **Instance** and **Remote IP** to monitor all individual tunnels or choose specific tunnel(s) instead. Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum throughput allowed per tunnel.<br><br>To learn more about how a tunnel's maximum throughput is impacted by the gateway's scale units, see the [Virtual WAN FAQ](virtual-wan-faq.md).|
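+
+As a minimal sketch of one of the rules above, the following Azure CLI command creates the tunnel packet drop alert. The resource group, gateway, and action group resource IDs are placeholders, and the metric name should be verified against the [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) before use.
+
+```azurecli
+# Alert whenever any tunnel egress packets are dropped (Total aggregation, static threshold of 0).
+az monitor metrics alert create \
+  --name s2s-tunnel-egress-packet-drop \
+  --resource-group myResourceGroup \
+  --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/vpnGateways/myVpnGateway \
+  --condition "total TunnelEgressPacketDropCount > 0" \
+  --window-size 5m \
+  --evaluation-frequency 5m \
+  --action /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup \
+  --description "Tunnel egress packet drops detected on the site-to-site VPN gateway"
+```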
+
+**Design checklist - log query alerts**
+
+To configure log-based alerts, you must first create a diagnostic setting for your site-to-site/point-to-site VPN gateway. A diagnostic setting is where you define what logs and/or metrics you want to collect and how you want to store that data to be analyzed later. Unlike gateway metrics, gateway logs won't be available if there's no diagnostic setting configured. To learn how to create a diagnostic setting, see [Create diagnostic setting to view logs](monitor-virtual-wan.md#create-diagnostic).
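+
+A minimal Azure CLI sketch of such a diagnostic setting is shown below. The gateway and Log Analytics workspace resource IDs are placeholders, and the log categories should be checked against those listed for your gateway in the data reference.
+
+```azurecli
+# Send site-to-site VPN gateway logs to a Log Analytics workspace (placeholder resource IDs).
+az monitor diagnostic-settings create \
+  --name s2s-vpn-gateway-diagnostics \
+  --resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/vpnGateways/myVpnGateway \
+  --workspace /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace \
+  --logs '[{"category":"TunnelDiagnosticLog","enabled":true},{"category":"RouteDiagnosticLog","enabled":true},{"category":"IKEDiagnosticLog","enabled":true}]'
+```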
+
+* Create tunnel disconnect alert rule.
+* Create BGP disconnect alert rule.
+
+|Recommendation | Description|
+|||
+|Create tunnel disconnect alert rule.|Use **Tunnel Diagnostic Logs** to track disconnect events in your site-to-site connections. A disconnect event can be due to a failure to negotiate SAs or unresponsiveness of the remote VPN device, among other causes. Tunnel Diagnostic Logs also provide the disconnect reason. See the **Create tunnel disconnect alert rule - log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows returned by the query is **greater than 0**. For this alert to be effective, set the **Aggregation Granularity** to between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval has passed, the number of rows is 0 again for a new interval.<br><br>For troubleshooting tips when analyzing Tunnel Diagnostic Logs, see [Troubleshoot Azure VPN gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#TunnelDiagnosticLog). Additionally, use **IKE Diagnostic Logs** to complement your troubleshooting, as these logs contain detailed IKE-specific diagnostics.|
+|Create BGP disconnect alert rule. |Use **Route Diagnostic Logs** to track route updates and issues with BGP sessions. Repeated BGP disconnect events can impact connectivity and cause downtime. See the **Create BGP disconnect alert rule - log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows returned by the query is **greater than 0**. For this alert to be effective, set the **Aggregation Granularity** to between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval has passed, the number of rows is 0 again for a new interval if the BGP sessions have been restored.<br><br>For more information about the data collected by Route Diagnostic Logs, see [Troubleshooting Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#RouteDiagnosticLog). |
+
+**Log queries**
+
+* **Create tunnel disconnect alert rule - log query**: The following log query can be used to select tunnel disconnect events when creating the alert rule:
+
+ ```text
+ AzureDiagnostics
+ | where Category == "TunnelDiagnosticLog"
+ | where OperationName == "TunnelDisconnected"
+ ```
+
+* **Create BGP disconnect alert rule - log query**: The following log query can be used to select BGP disconnect events when creating the alert rule:
+
+ ```text
+ AzureDiagnostics
+ | where Category == "RouteDiagnosticLog"
+ | where OperationName == "BgpDisconnectedEvent"
+ ```
+
+### Point-to-site VPN gateway
+
+The following section details the configuration of metric-based alerts only. However, Virtual WAN point-to-site gateways also support diagnostic logs. To learn more about the available diagnostic logs for point-to-site gateways, see [Virtual WAN point-to-site VPN gateway diagnostics](monitor-virtual-wan-reference.md#p2s-diagnostic).
+
+**Design checklist - metric alerts**
+
+* Create alert rule for gateway overutilization.
+* Create alert rule for P2S connection count nearing limit.
+* Create alert rule for User VPN route count nearing limit.
+
+|Recommendation | Description|
+|||
+|Create alert rule for gateway overutilization.|The bandwidth of a point-to-site gateway is determined by the number of scale units configured. To learn more about point-to-site gateway scale units, see Point-to-site (User VPN).<br><br>Use the **Gateway P2S Bandwidth** metric to monitor the gateway's utilization and configure an alert rule that is triggered whenever the gateway's bandwidth is **greater than** a value near its aggregate throughput. For example, if the gateway was configured with 2 scale units, it has an aggregate throughput of 1 Gbps. In this case, you could define a threshold value of 950 Mbps.<br><br>Use this alert to proactively investigate the root cause of the increased utilization, and ultimately increase the number of scale units, if needed. Select the **Average** aggregation type when configuring the alert rule.|
+|Create alert for P2S connection count nearing limit |The maximum number of point-to-site connections allowed is also determined by the number of scale units configured on the gateway. To learn more about point-to-site gateway scale units, see the FAQ for [Point-to-site (User VPN)](virtual-wan-faq.md#p2s-concurrent).<br><br>Use the **P2S Connection Count** metric to monitor the number of connections. Select this metric to configure an alert rule that is triggered whenever the number of connections is nearing the maximum allowed. For example, a 1-scale unit gateway supports up to 500 concurrent connections. In this case, you could configure the alert to be triggered whenever the number of connections is **greater than** 450.<br><br>Use this alert to determine whether an increase in the number of scale units is required or not. Choose the **Total** aggregation type when configuring the alert rule.|
+|Create alert rule for User VPN route count nearing limit.|The maximum number of User VPN routes is determined by the protocol used. IKEv2 has a protocol-level limit of 255 routes, whereas OpenVPN has a limit of 1000 routes. To learn more about this, see [VPN server configuration concepts](point-to-site-concepts.md#vpn-server-configuration-concepts).<br><br>You may want to be alerted if you're close to hitting the maximum number of User VPN routes and act proactively to avoid any downtime. Use the **User VPN Route Count** metric to monitor this and configure an alert rule that is triggered whenever the number of routes surpasses a value close to the limit. For example, if the limit is 255 routes, an appropriate **Threshold** value could be 230. Choose the **Total** aggregation type when configuring the alert rule.|
+
+### ExpressRoute gateway
+
+This section of the article focuses on metric-based alerts. There are no diagnostic logs currently available for Virtual WAN ExpressRoute gateways. In addition to the alerts described below, which focus on the gateway component, it's recommended to use the available metrics, logs, and tools to monitor the ExpressRoute circuit. To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md).
+
+**Design checklist - metric alerts**
+
+* Create alert rule for Bits Received Per Second.
+* Create alert rule for CPU overutilization.
+* Create alert rule for Packets per Second.
+* Create alert rule for number of routes advertised to peer nearing limit.
+* Create alert rule for number of routes learned from peer nearing limit.
+* Create alert rule for high frequency in route changes.
+
+|Recommendation | Description|
+|||
+|Create alert rule for Bits Received Per Second.|**Bits Received per Second** monitors the total amount of traffic received by the gateway from the MSEEs.<br><br>You may want to be alerted if the amount of traffic received by the gateway is at risk of hitting its maximum throughput, as this can lead to performance and connectivity issues. This allows you to act proactively by investigating the root cause of the increased gateway utilization or increasing the gateway's maximum allowed throughput.<br><br>Choose the **Average** aggregation type and a **Threshold** value close to the maximum throughput provisioned for the gateway when configuring the alert rule.<br><br>Additionally, it's recommended to set an alert when the number of **Bits Received per Second** is near zero, as it may indicate an issue with the gateway or the MSEEs.<br><br>The maximum throughput of an ExpressRoute gateway is determined by the number of scale units provisioned. To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).|
+|Create alert rule for CPU overutilization.|When using ExpressRoute gateways, it's important to monitor the CPU utilization. Prolonged high utilization can impact performance and connectivity.<br><br>Use the **CPU utilization** metric to monitor this and create an alert for whenever the CPU utilization is **greater than** 80%, so you can investigate the root cause and ultimately increase the number of scale units, if needed. Choose the **Average** aggregation type when configuring the alert rule.<br><br>To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).|
+|Create alert rule for packets received per second.|**Packets per second** monitors the number of inbound packets traversing the Virtual WAN ExpressRoute gateway.<br><br>You may want to be alerted if the number of **packets per second** is nearing the limit allowed for the number of scale units configured on the gateway.<br><br>Choose the Average aggregation type when configuring the alert rule. Choose a **Threshold** value close to the maximum number of **packets per second** allowed based on the number of scale units of the gateway. To learn more about ExpressRoute performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).<br><br>Additionally, it's recommended to set an alert when the number of **Packets per second** is near zero, as it may indicate an issue with the gateway or MSEEs.|
+|Create alert rule for high frequency in route changes.|**Frequency of Routes changes** shows the change frequency of routes being learned and advertised from and to peers, including other types of branches such as site-to-site and point-to-site VPN. This metric provides visibility when a new branch or more circuits are being connected/disconnected.<br><br>This metric is a useful tool when identifying issues with BGP advertisements, such as route flapping. It's recommended to set an alert **if** the environment is **static** and BGP changes aren't expected. Select a **threshold value** that is **greater than 1** and an **Aggregation Granularity** of 15 minutes to monitor BGP behavior consistently.<br><br>If the environment is dynamic and BGP changes are frequently expected, you may choose not to set an alert, in order to avoid false positives. However, you can still consider this metric for observability of your network.|
+
+## Virtual hub
+
+We're working to support alerts based on virtual hub metrics soon. Currently, you can view these metrics, but alerting on them isn't supported. There are no diagnostic logs available for virtual hubs at this time.
+
+## Azure Firewall
+
+This section of the article focuses on metric-based alerts. Azure Firewall offers a comprehensive list of [metrics and logs](../firewall/firewall-diagnostics.md) for monitoring purposes. In addition to configuring the alerts described in the following section, explore how [Azure Firewall Workbook](../firewall/firewall-workbook.md) can help monitor your Azure Firewall, or the benefits of connecting Azure Firewall logs to Microsoft Sentinel using [Azure Firewall connector for Microsoft Sentinel](../sentinel/data-connectors/azure-firewall.md).
+
+**Design checklist - metric alerts**
+
+* Create alert rule for risk of SNAT port exhaustion.
+* Create alert rule for firewall overutilization.
+
+|Recommendation | Description|
+|||
+|Create alert rule for risk of SNAT port exhaustion.|Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale instance. It's important to estimate in advance the number of SNAT ports that will fulfill your organizational requirements for outbound traffic to the Internet. Not doing so increases the risk of exhausting the number of available SNAT ports on the Azure Firewall, potentially causing outbound connectivity failures.<br><br>Use the **SNAT port utilization** metric to monitor the percentage of outbound SNAT ports currently in use. Create an alert rule for this metric to be triggered whenever this percentage surpasses **95%** (due to an unforeseen traffic increase, for example) so you can act accordingly by configuring an additional public IP address on the Azure Firewall, or by using an [Azure NAT Gateway](../nat-gateway/nat-overview.md) instead. Use the **Maximum** aggregation type when configuring the alert rule.<br><br>To learn more about how to interpret the **SNAT port utilization** metric, see [Overview of Azure Firewall logs and metrics](../firewall/logs-and-metrics.md#metrics). To learn more about how to scale SNAT ports in Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md).|
+|Create alert rule for firewall overutilization.|Azure Firewall maximum throughput differs depending on the SKU and features enabled. To learn more about Azure Firewall performance, see [Azure Firewall performance](../firewall/firewall-performance.md).<br><br>You may want to be alerted if your firewall is nearing its maximum throughput and troubleshoot the underlying cause, as this can have an impact on the firewall's performance.<br><br>Create an alert rule to be triggered whenever the **Throughput** metric surpasses a value nearing the firewall's maximum throughput. For example, if the maximum throughput is 30 Gbps, configure 25 Gbps as the **Threshold** value. The **Throughput** metric unit is **bits/sec**. Choose the **Average** aggregation type when creating the alert rule.|
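+
+As a hedged example of the SNAT port utilization recommendation above, the following Azure CLI sketch uses placeholder resource IDs; verify the metric name against the Azure Firewall metrics reference before relying on it.
+
+```azurecli
+# Alert when SNAT port utilization exceeds 95% (Maximum aggregation).
+az monitor metrics alert create \
+  --name azfw-snat-port-utilization \
+  --resource-group myResourceGroup \
+  --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/azureFirewalls/myFirewall \
+  --condition "max SNATPortUtilization > 95" \
+  --window-size 5m \
+  --evaluation-frequency 5m \
+  --action /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup
+```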
+
+## Next steps
+
+* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for more details about **Azure Monitor Metrics**.
+* See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics.
+* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting when creating diagnostic settings via the Azure portal, CLI, PowerShell, etc.
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
The ability to completely block routes isn't supported by the Azure VPN Client.
> - If you encounter the error "_Destination cannot be empty or have more than one entry inside route tag_", check the profile XML file and ensure that the includeroutes/excluderoutes section has only one destination address inside a route tag. >
+## Version Information
+
+Version 3.2.0.0
+
+New in this Release:
+ - AAD Authentication is now available from the settings page.
+ - Server High Availability (HA), releasing on a rolling basis until October 20.
+ - Accessibility improvements
+ - Connection logs in UTC
+ - Minor bug fixes
+
## Next steps For more information about P2S VPN, see the following articles:
vpn-gateway P2s Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/p2s-session-management.md
Azure virtual network gateways provide an easy way to view and disconnect current Point-to-site VPN sessions. This article helps you view and disconnect current sessions. The session status is updated every 5 minutes. It is not updated immediately.
+Because this feature allows the disconnection of VPN clients, Reader permissions on the VPN gateway resource aren't sufficient. The Contributor role is needed to view Point-to-site VPN sessions correctly.
## Portal