Updates from: 10/05/2023 01:14:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md
An application instance has two properties: the ApplicationID (or ClientID) and the ObjectID.
> [!NOTE]
> The terms **application** and **service principal** are used interchangeably when referring to an application in authentication tasks. However, they are two representations of applications in Microsoft Entra ID.
-
+ The ApplicationID represents the global application and is the same for all application instances across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps to identify an application instance in Microsoft Entra ID. To learn more, see [Application and service principal relationship in Microsoft Entra ID](../develop/app-objects-and-service-principals.md).
You can create an application and its service principal object (ObjectID) in a tenant using the following tools (a scripted sketch follows this list):
* Azure PowerShell
+* Microsoft Graph PowerShell
* Azure command-line interface (Azure CLI)
-* Microsoft Graph
+* Microsoft Graph API
* The Azure portal
* Other tools
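To make the ApplicationID/ObjectID distinction concrete, here's a minimal sketch using Microsoft Graph PowerShell, one of the options listed above. The display name is a hypothetical placeholder, not a value from this article:

```powershell
# Minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Create the application object; its AppId (ApplicationID/ClientID) is the
# same for every instance of the app, across tenants.
$app = New-MgApplication -DisplayName "Example daemon app"   # placeholder name

# Create the service principal, the application's instance in this tenant;
# its Id (ObjectID) is unique to this tenant.
$sp = New-MgServicePrincipal -AppId $app.AppId

"ApplicationID (global):  $($sp.AppId)"
"ObjectID (tenant-local): $($sp.Id)"
```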
When using service principals, use the following table to match challenges and mitigations.
To find accounts, run the following commands using service principals with Azure CLI or PowerShell.
* Azure CLI - `az ad sp list`
-* PowerShell - `Get-AzureADServicePrincipal -All:$true`
+* PowerShell - `Get-MgServicePrincipal -All:$true`
-For more information, see [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal)
+For more information, see [Get-MgServicePrincipal](/powershell/module/microsoft.graph.applications/get-mgserviceprincipal)
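As a hedged sketch of how you might turn that listing into a reviewable inventory (the output path is an arbitrary example):

```powershell
# Export all service principals to CSV for review; assumes the Microsoft
# Graph PowerShell SDK and an account with Application.Read.All.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgServicePrincipal -All |
    Select-Object DisplayName, AppId, Id, ServicePrincipalType |
    Export-Csv -Path .\service-principals.csv -NoTypeInformation
```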
## Assess service principal security
To assess the security, evaluate privileges and credential storage. Use the following table to match challenges and mitigations.
|Challenge | Mitigation|
| - | - |
-| Detect the user who consented to a multi-tenant app, and detect illicit consent grants to a multi-tenant app | - Run the following PowerShell to find multi-tenant apps <br>`Get-AzureADServicePrincipal -All:$true ? {$_.Tags -eq WindowsAzureActiveDirectoryIntegratedApp"}`</br> - Disable user consent </br> - Allow user consent from verified publishers, for selected permissions (recommended) </br> - Configure them in the user context </br> - Use their tokens to trigger the service principal|
+| Detect the user who consented to a multi-tenant app, and detect illicit consent grants to a multi-tenant app | - Run the following PowerShell to find multi-tenant apps <br>`Get-MgServicePrincipal -All:$true | ? {$_.Tags -eq "WindowsAzureActiveDirectoryIntegratedApp"}`</br> - Disable user consent </br> - Allow user consent from verified publishers, for selected permissions (recommended) </br> - Configure them in the user context </br> - Use their tokens to trigger the service principal|
|Use of a hard-coded shared secret in a script using a service principal|Use a certificate|
|Tracking who uses the certificate or the secret| Monitor the service principal sign-ins using the Microsoft Entra sign-in logs|
|Can't manage service principal sign-in with Conditional Access| Monitor the sign-ins using the Microsoft Entra sign-in logs|
Conditional Access:
Use Conditional Access to block service principals from untrusted locations. See [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy).
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
To enable a registration campaign in the Microsoft Entra admin center, complete the following steps:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) or [Global Administrator](../roles/permissions-reference.md#global-administrator).
1. Browse to **Protection** > **Authentication methods** > **Registration campaign** and click **Edit**.
-1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either Enabled or Disabled. From Sept. 25 to Oct. 20, 2023, the Microsoft managed value for the registration campaing will change to **Enabled** for voice call and text message users across all tenants. For more information, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md).
+1. For **State**, click **Microsoft managed** or **Enabled**. In the following screenshot, the registration campaign is **Microsoft managed**. That setting allows Microsoft to set the default value to be either Enabled or Disabled. From Sept. 25 to Oct. 20, 2023, the Microsoft managed value for the registration campaign will change to **Enabled** for voice call and text message users across all tenants. For more information, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md).
:::image type="content" border="true" source="media/how-to-mfa-registration-campaign/admin-experience.png" alt-text="Screenshot of enabling a registration campaign.":::
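The same setting can also be driven programmatically. The following is a hedged sketch, not an official procedure, assuming the Microsoft Graph `authenticationMethodsPolicy` endpoint and its `registrationEnforcement` property; verify the payload against current Graph documentation before use:

```powershell
# Hedged sketch: enable the registration campaign via Microsoft Graph.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    registrationEnforcement = @{
        authenticationMethodsRegistrationCampaign = @{
            state                = "enabled"
            snoozeDurationInDays = 14   # example value: how long users may postpone
        }
    }
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy" `
    -Body $body
```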
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
- Title: Trial User Guide - Microsoft Entra Permissions Management - OBSOLETE
-description: How to get started with your Microsoft Entra Permissions Management free trial
------- Previously updated : 06/16/2023---
-# Trial user guide: Microsoft Entra Permissions Management
-
-Welcome to the Microsoft Entra Permissions Management trial user guide!
-
-This user guide is a simple guide to help you make the most of your free trial, including the Permissions Management Cloud Infrastructure Assessment to help you identify and remediate the most critical permission risks across your multicloud infrastructure. Using the suggested steps in this user guide from the Microsoft Identity team, you'll learn how Permissions Management can assist you to protect all your users and data.
-
-## What is Permissions Management?
-
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities including both workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
-
-Permissions Management helps your organization tackle cloud permissions by enabling the capabilities to continuously discover, remediate and monitor the activity of every unique user and workload identity operating in the cloud, alerting security and infrastructure teams to areas of unexpected or excessive risk.
--
-- Get granular cross-cloud visibility - Get a comprehensive view of every action performed by any identity on any resource.
-- Uncover permission risk - Assess permission risk by evaluating the gap between permissions granted and permissions used.
-- Enforce least privilege - Right-size permissions based on usage and activity and enforce permissions on-demand at cloud scale.
-- Monitor and detect anomalies - Detect anomalous permission usage and generate detailed forensic reports.
-
-![Diagram, schematic Description automatically generated](media/permissions-management-trial-user-guide/microsoft-entra-permissions-management-diagram.png)
--
-## Step 1: Set-up Permissions Management
-
-Before you enable Permissions Management in your organization:
-- You must have a Microsoft Entra tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
-- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
-
-If the above points are met, continue with the following steps:
-
-1. [Enabling Permissions Management on your Microsoft Entra tenant](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#how-to-enable-permissions-management-on-your-azure-ad-tenant)
-2. Use the **Data Collectors** dashboard in Permissions Management to configure data collection settings for your authorization system. [Configure data collection settings](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#configure-data-collection-settings).
-
- Note that for each cloud platform, you will have 3 options for onboarding:
-
- **Option 1 (Recommended): Automatically manage** - this option allows subscriptions to be automatically detected and monitored without additional configuration.
-
- **Option 2**: **Enter authorization systems** - you have the ability to specify only certain subscriptions to manage and monitor with MEPM (up to 100 per collector).
-
- **Option 3**: **Select authorization systems** - this option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
-
- For information on how to onboard an AWS account, Azure subscription, or GCP project into Permissions Management, select one of the following articles and follow the instructions:
- - [Onboard an AWS account](../cloud-infrastructure-entitlement-management/onboard-aws.md)
- - [Onboard a Microsoft Azure subscription](../cloud-infrastructure-entitlement-management/onboard-azure.md)
- - [Onboard a GCP project](../cloud-infrastructure-entitlement-management/onboard-gcp.md)
-3. [Enable or disable the controller after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md)
-4. [Add an account/subscription/project after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md)
-
- **Actions to try:**
-
- - [View roles/policies and requests for permission](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
- - [View information about roles/ policies](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
- - [View information about active and completed tasks](../cloud-infrastructure-entitlement-management/ui-tasks.md)
- - [Create a role/policy](../cloud-infrastructure-entitlement-management/how-to-create-role-policy.md)
- - [Clone a role/policy](../cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md)
- - [Modify a role/policy](../cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md)
- - [Delete a role/policy](../cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md)
- - [Attach and detach policies for Amazon Web Services (AWS) identities](../cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md)
- - [Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md)
- - [Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md)
- - [Create or approve a request for permissions](../cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md) Request permissions on-demand for one-time use or on a schedule. These permissions will automatically be revoked at the end of the requested period.
-
-## Step 2: Discover & assess
-
-Improve your security posture by getting comprehensive and granular visibility to enforce the principle of least privilege access across your entire multicloud environment. The Permissions Management dashboard gives you an overview of your permission profile and locates where the riskiest identities and resources are across your digital estate.
-
-The dashboard leverages the Permission Creep Index, which is a single and unified metric, ranging from 0 to 100, that calculates the gap between permissions granted and permissions used over a specific period. The higher the gap, the higher the index and the larger the potential attack surface. The Permission Creep Index only considers high-risk actions, meaning any action that can cause data leakage, service disruption or degradation, or security posture change. Permissions Management creates unique activity profiles for each identity and resource which are used as a baseline to detect anomalous behaviors.
-
-1. [View risk metrics in your authorization system](../cloud-infrastructure-entitlement-management/ui-dashboard.md#view-metrics-related-to-avoidable-risk) in the Permissions Management Dashboard. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
- 1. View metrics related to avoidable risk - these metrics allow the Permission Management administrator to identify areas where they can reduce risks related to the principle of least permissions. Information includes [the Permissions Creep Index (PCI)](../cloud-infrastructure-entitlement-management/ui-dashboard.md#the-pci-heat-map) and [Analytics Dashboard](../cloud-infrastructure-entitlement-management/usage-analytics-home.md).
-
-
- 1. Understand the [components of the Permissions Management Dashboard.](../cloud-infrastructure-entitlement-management/ui-dashboard.md#components-of-the-permissions-management-dashboard)
-
-2. View data about the activity in your authorization system
-
- 1. [View user data on the PCI heat map](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-user-data-on-the-pci-heat-map).
- > [!NOTE]
- > The higher the PCI, the higher the risk.
-
- 2. [View information about users, roles, resources, and PCI trends](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-information-about-users-roles-resources-and-pci-trends)
- 3. [View identity findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-identity-findings)
- 4. [View resource findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-resource-findings)
-3. [Configure your settings for data collection](../cloud-infrastructure-entitlement-management/product-data-sources.md) - use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems.
-4. [View organizational and personal information](../cloud-infrastructure-entitlement-management/product-account-settings.md) - the **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
-5. [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
-6. [View information about identities, resources and tasks](../cloud-infrastructure-entitlement-management/usage-analytics-home.md) - the **Analytics** dashboard displays detailed information about:
- 1. **Users**: Tracks assigned permissions and usage by users. For more information, see View analytic information about users.
- 2. **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see View analytic information about groups
- 3. **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see View analytic information about active resources
- 4. **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see View analytic information about active tasks
- 5. **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see View analytic information about access keys
- 6. **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see View analytic information about serverless functions
-
- System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
-
-## Step 3: Remediate & manage
-
-Right-size excessive and/or unused permissions in only a few clicks. Avoid any errors caused by manual processes and implement automatic remediation on all unused permissions for a predetermined set of identities and on a regular basis. You can also grant new permissions on-demand for just-in-time access to specific cloud resources.
-
-There are two facets to removing unused permissions: least privilege policy creation (remediation) and permissions-on-demand. With remediation, an administrator can create policies that remove unused permissions (also known as right-sizing permissions) to achieve least privilege across their multicloud environment.
--
-- [Manage roles/policies and permissions requests using the Remediation dashboard](../cloud-infrastructure-entitlement-management/ui-remediation.md).
-
- The dashboard includes six subtabs:
-
- - **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
- **Role/Policy Name** - Displays the name of the role or the AWS policy
- Note: An exclamation point (!) circled in red means the role or AWS policy has not been used.
- Role Type - Displays the type of role or AWS policy
- - **Permissions**: Use this subtab to perform Read Update Delete (RUD) on granted permissions.
- **Role/Policy Template**: Use this subtab to create a template for roles/policies.
- - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
- **My Requests**: Use this tab to manage the lifecycle of POD requests that you created or that need your approval.
- - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
-
-**Best Practices for Remediation:**
--
-- **Creating activity-based roles/policies:** High-risk identities will be monitored and right-sized based on their historical activity. Leaving unused high-risk permissions assigned to identities creates unnecessary risk.
-- **Removing direct role assignments:** EPM will generate reports based on role assignments. In cases where high-risk roles are directly assigned, the Remediation permissions tab can query those identities and remove direct role assignments.
-- **Assigning read-only permissions:** Identities that are inactive or have high-risk permissions to production environments can be assigned read-only status. Access to production environments can be governed via Permissions On-demand.
-
-**Best Practices for Permissions On-demand:**
--
-- **Requesting Delete Permissions:** No user will have delete permissions unless they request them and are approved.
-- **Requesting Privileged Access:** High-privileged access is only granted through just-enough permissions and just-in-time access.
-- **Requesting Periodic Access:** Schedule recurring daily, weekly, or monthly permissions that are time-bound and revoked at the end of the period.
-- Manage users, roles and their access levels with the User management dashboard.
-
- **Actions to try:**
-
- - [Manage users](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-users)
- - [Manage groups](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-groups)
- - [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
-
-## Step 4: Monitor & alert
-
-Prevent data breaches caused by misuse and malicious exploitation of permissions with anomaly and outlier detection that alerts on any suspicious activity. Permissions Management continuously updates your Permission Creep Index and flags any incident, then immediately informs you with alerts via email. To further support rapid investigation and remediation, you can generate context-rich forensic reports around identities, actions, and resources.
--
-- Use queries to view information about user access with the **Audit** dashboard in Permissions Management. You can get an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts. The following options display at the top of the **Audit** dashboard:
-- A tab for each existing query. Select the tab to see details about the query.
-- **New Query**: Select the tab to create a new query.
-- **New tab (+)**: Select the tab to add a **New Query** tab.
-- **Saved Queries**: Select to view a list of saved queries.
-
- **Actions to try:**
-
- - [Use a query to view information](../cloud-infrastructure-entitlement-management/ui-audit-trail.md)
- - [Create a custom query](../cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md)
- - [Generate an on-demand report from a query](../cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md)
- - [Filter and query user activity](../cloud-infrastructure-entitlement-management/product-audit-trail.md)
-
-Use the **Activity triggers** dashboard to view information and set alerts and triggers.
--- Set activity alerts and triggers-
- Our customizable machine learning-powered anomaly and outlier detection alerts will notify you of any suspicious activity such as deviations in usage profiles or abnormal access times. Alerts can be used to alert on permissions usage, access to resources, indicators of compromise, insider threats, or to track previous incidents.
-
- **Actions to try**
-
- - [View information about alerts and alert triggers](../cloud-infrastructure-entitlement-management/ui-triggers.md)
- - [Create and view activity alerts and alert triggers](../cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md)
- - [Create and view rule-based anomaly alerts and anomaly triggers](../cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md)
- - [Create and view statistical anomalies and anomaly triggers](../cloud-infrastructure-entitlement-management/product-statistical-anomalies.md)
- - [Create and view permission analytics triggers](../cloud-infrastructure-entitlement-management/product-permission-analytics.md)
-
-**Best Practices for Custom Alerts:**
--- Permission assignments done outside of approved administrators
- - Examples:
-
- Example: Any activity done by root:
-
- ![Diagram, Any activity done by root user in AWS.](media/permissions-management-trial-user-guide/custom-alerts-1.png)
-
- Alert for monitoring any direct Azure role assignment
-
- ![Diagram, Alert for monitoring any direct Azure role assignment done by anyone other than Admin user.](media/permissions-management-trial-user-guide/custom-alerts-2.png)
--- Access to critical sensitive resources-
- Example: Alert for monitoring any action on Azure resources
-
- ![Diagram, Alert for monitoring any action on Azure resources.](media/permissions-management-trial-user-guide/custom-alerts-3.png)
--- Use of break glass accounts like root in AWS, Global Administrator in Microsoft Entra ID accessing subscriptions, etc.-
- Example: BreakGlass users should be used for emergency access only.
-
- ![Diagram, Example of break glass account users used for emergency access only.](media/permissions-management-trial-user-guide/custom-alerts-4.png)
--- Create and view reports-
To support rapid remediation, you can set up security reports to be delivered at custom intervals. Permissions Management has various system report types available that capture specific sets of data by cloud infrastructure (AWS, Azure, GCP), by account/subscription/project, and more. Reports are fully customizable and can be delivered via email at pre-configured intervals.
-
- These reports enable you to:
-
- - Make timely decisions.
- - Analyze trends and system/user performance.
- - Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
- - Automate data analytics in an actionable way.
- Ensure compliance with audit requirements for periodic reviews of **who has access to what.**
- - Look at views into **Separation of Duties** for security hygiene to determine who has admin permissions.
- See data for **identity governance** to ensure inactive users are decommissioned because they left the company, or to remove vendor accounts that have been left behind, old consultant accounts, or users who, as part of the Joiner/Mover/Leaver process, have moved onto another role and are no longer using their access. Consider this a fail-safe to ensure dormant accounts are removed.
- - Identify over-permissioned access to later use the Remediation to pursue **Zero Trust and least privileges.**
-
- **Example of Permissions Management Analytics Report**
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="media/permissions-management-trial-user-guide/permissions-management-report-example.png" alt-text="Example of Permissions Management Analytics Report." lightbox="media/permissions-management-trial-user-guide/permissions-management-report-example.png":::
-
- **Actions to try**
- - [View system reports in the Reports dashboard](../cloud-infrastructure-entitlement-management/product-reports.md)
- - [View a list and description of system reports](../cloud-infrastructure-entitlement-management/all-reports.md)
- - [Generate and view a system report](../cloud-infrastructure-entitlement-management/report-view-system-report.md)
- - [Create, view, and share a custom report](../cloud-infrastructure-entitlement-management/report-create-custom-report.md)
- - [Generate and download the Permissions analytics report](../cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md)
-
-**Key Reports to Monitor:**
--
-- **Permissions Analytics Report:** lists the key permission risks including Super identities, Inactive identities, Over-provisioned active identities, and more
-- **Group entitlements and Usage reports:** Provides guidance on cleaning up directly assigned permissions
-- **Access Key Entitlements and Usage reports**: Identifies high risk service principals with old secrets that haven't been rotated every 90 days (best practice) or decommissioned due to lack of use (as recommended by the Cloud Security Alliance).
-
-## Next steps
-
-For more information about Permissions Management, see:
-
-**Microsoft Learn**: [Permissions management](../cloud-infrastructure-entitlement-management/index.yml).
-
-**Datasheet:** <https://aka.ms/PermissionsManagementDataSheet>
-
-**Solution Brief:** <https://aka.ms/PermissionsManagementSolutionBrief>
-
-**White Paper:** <https://aka.ms/CIEMWhitePaper>
-
-**Infographic:** <https://aka.ms/PermissionRisksInfographic>
-
-**Security paper:** [2021 State of Cloud Permissions Risks](https://scistorageprod.azureedge.net/assets/2021%20State%20of%20Cloud%20Permission%20Risks.pdf?sv=2019-07-07&sr=b&sig=Sb17HibpUtJm2hYlp6GYlNngGiSY5GcIs8IfpKbRlWk%3D&se=2022-05-27T20%3A37%3A22Z&sp=r)
-
-**Permissions Management Glossary:** <https://aka.ms/PermissionsManagementGlossary>
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
If you get an error message telling you that you used an invalid scope, you prob
### Did you forget to provide admin consent? Daemon apps need it!
-If you get an **Insufficient privileges to complete the operation** error when you call the API, the tenant administrator needs to grant permissions to the application. See step 6 of Register the client app above.
-You'll typically see an error that looks like this error:
+If you get an **Insufficient privileges to complete the operation** error when you call the API, the tenant administrator needs to grant permissions to the application. For guidance on how to grant admin consent for your application, see step 4 in [Quickstart: Acquire a token and call Microsoft Graph in a .NET Core console app](quickstart-console-app-netcore-acquire-token.md#step-4-admin-consent).
+
+If you don't grant admin consent to your application, you'll run into the following error:
```json
Failed to call the web API: Forbidden
```
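One common fix is to send a tenant administrator the admin consent URL for the app. A minimal sketch for building that URL, with placeholder tenant and client IDs:

```powershell
# Hedged sketch: construct the tenant admin consent URL. Both GUIDs below
# are placeholders, not values from this article.
$tenantId = "00000000-0000-0000-0000-000000000000"
$clientId = "11111111-1111-1111-1111-111111111111"

"https://login.microsoftonline.com/$tenantId/adminconsent?client_id=$clientId"
```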
active-directory Tutorial Single Page App React Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-sign-in-users.md
In this tutorial:
<br /> <h5> <center>
- Welcome to the Microsoft Authentication Library For Javascript -
+ Welcome to the Microsoft Authentication Library For JavaScript -
React SPA Tutorial </center> </h5>
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 09/04/2023 Last updated : 10/04/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## September 2023
+
+### New articles
+
+- [Tutorial: Call an API from a React single-page app](tutorial-single-page-app-react-call-api.md) - Get user data from web API
+
+### Updated articles
+
+- [Access tokens in the Microsoft identity platform](access-tokens.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Add app roles to your application and receive them in the token](howto-add-app-roles-in-apps.md) - Add clarity to distinguish between app and user roles
+- [How and why applications are added to Microsoft Entra ID](how-applications-are-added.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Making your application multi-tenant](howto-convert-app-to-be-multi-tenant.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Microsoft Entra app manifest](reference-app-manifest.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Microsoft Entra authentication and authorization error codes](reference-error-codes.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular](quickstart-single-page-app-angular-sign-in.md) - Update SPA quickstarts to use new code sample
+- [Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript](quickstart-single-page-app-javascript-sign-in.md) - Update SPA quickstarts to use new code sample
+- [Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React](quickstart-single-page-app-react-sign-in.md) - Update SPA quickstarts to use new code sample
+- [Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app](quickstart-web-app-aspnet-core-sign-in.md) - Update ASP.NET quickstart to use new code sample
+- [Quickstart: Configure an application to expose a web API](quickstart-configure-app-expose-web-apis.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Single sign-on SAML protocol](single-sign-on-saml-protocol.md) - Rebranding of Azure Active Directory to Microsoft Entra
+- [Tutorial: Prepare a Single-page application for authentication](tutorial-single-page-app-react-prepare-spa.md) - Add clarity to the content
+
## August 2023

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md) - Improving clarity in the content
- [Single sign-on with MSAL.js](msal-js-sso.md) - Add guidance on using the loginHint claim for SSO
- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md) - Simplified and leverage the Microsoft Identity App Sync .NET tool
-
-## June 2023
-
-### New articles
--
-- [Configure app multi-instancing](configure-app-multi-instancing.md) - Configuration of multiple instances of the same application within a tenant
-- [Migrate away from using email claims for user identification or authorization](migrate-off-email-claim-authorization.md) - Migration guidance for insecure authorization pattern
-- [Optional claims reference](optional-claims-reference.md) - v1.0 and v2.0 optional claims reference
-
-### Updated articles
--
-- [A web app that calls web APIs: Code configuration](scenario-web-app-call-api-app-configuration.md) - Editorial review of Node.js code snippet
-- [Claims mapping policy type](reference-claims-mapping-policy-type.md) - Editorial review of claims mapping policy type
-- [Configure token lifetime policies (preview)](configure-token-lifetimes.md) - Adding service principal policy commands
-- [Customize SAML token claims](saml-claims-customization.md) - Review of claims mapping policy type
-- [Microsoft identity platform code samples](sample-v2-code.md) - Reworking code samples file to add extra tab
-- [Refresh tokens in the Microsoft identity platform](refresh-tokens.md) - Editorial review of refresh tokens
-- [Tokens and claims overview](security-tokens.md) - Editorial review of security tokens
-- [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md) - Editorial review
-- [What's new for authentication?](reference-breaking-changes.md) - Identity breaking change: omission of unverified emails by default
active-directory Troubleshoot Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md
To get the PRT error code, run the `dsregcmd` command, and then locate the `SSO State` section.
<a name='method-2-use-event-viewer-to-examine-azure-ad-analytic-and-operational-logs'></a>
-#### Method 2: Use Event Viewer to examine Microsoft Entra analytic and operational logs
+#### Method 2: Use Event Viewer to examine AAD analytic and operational logs
1. Select **Start**, and then search for and select **Event Viewer**.
1. If the console tree doesn't appear in the **Event Viewer** window, select the **Show/Hide Console Tree** icon to make the console tree visible.
1. In the console tree, select **Event Viewer (Local)**. If child nodes don't appear underneath this item, double-click your selection to show them.
1. Select the **View** menu. If a check mark isn't displayed next to **Show Analytic and Debug Logs**, select that menu item to enable that feature.
-1. In the console tree, expand **Applications and Services Logs** > **Microsoft** > **Windows** > **Microsoft Entra ID**. The **Operational** and **Analytic** child nodes appear.
+1. In the console tree, expand **Applications and Services Logs** > **Microsoft** > **Windows** > **AAD**. The **Operational** and **Analytic** child nodes appear.
> [!NOTE]
> In the Microsoft Entra Cloud Authentication Provider (CloudAP) plug-in, **Error** events are written to the **Operational** event logs, and information events are written to the **Analytic** event logs. You have to examine both the **Operational** and **Analytic** event logs to troubleshoot PRT issues.
-1. In the console tree, select the **Analytic** node to view Microsoft Entra ID-related analytic events.
-1. In the list of analytic events, search for Event IDs 1006 and 1007. Event ID 1006 denotes the beginning of the PRT acquisition flow, and Event ID 1007 denotes the end of the PRT acquisition flow. All events in the **Microsoft Entra ID** logs (both **Analytic** and **Operational**) that occurred between Event ID 1006 and Event ID 1007 are logged as part of the PRT acquisition flow. The following table shows an example event listing.
+1. In the console tree, select the **Analytic** node to view AAD-related analytic events.
+1. In the list of analytic events, search for Event IDs 1006 and 1007. Event ID 1006 denotes the beginning of the PRT acquisition flow, and Event ID 1007 denotes the end of the PRT acquisition flow. All events in the **AAD** logs (both **Analytic** and **Operational**) that occurred between Event ID 1006 and Event ID 1007 are logged as part of the PRT acquisition flow. The following table shows an example event listing.
| Level | Date and Time | Source | Event ID | Task Category |
|--|--|--|--|--|
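If you prefer to collect these events from PowerShell rather than scrolling through Event Viewer, the following is a hedged sketch; it reads the Operational log (the Analytic log must first be enabled as described above):

```powershell
# Hedged sketch: list recent events from the AAD Operational log so you can
# locate the 1006/1007 window of a PRT acquisition flow.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-AAD/Operational' } -MaxEvents 200 |
    Sort-Object TimeCreated |
    Format-Table TimeCreated, Id, LevelDisplayName, TaskDisplayName -AutoSize
```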
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
cmdlet | Usage
Confirm-MgDomain -DomainId "contoso.com" ```
+>[!NOTE]
+> The Confirm-MgDomain cmdlet is being updated. You can monitor the [Confirm-MgDomain cmdlet](/powershell/module/microsoft.graph.identity.directorymanagement/confirm-mgdomain?view=graph-powershell-1.0&preserve-view=true) article for updates.
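For context, a hedged sketch of the verification flow around `Confirm-MgDomain`, assuming the Microsoft.Graph.Identity.DirectoryManagement module (and subject to the cmdlet update mentioned in the note):

```powershell
# Hedged sketch: fetch the DNS records that prove ownership of the domain,
# then confirm once the record is published with your registrar.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

Get-MgDomainVerificationDnsRecord -DomainId "contoso.com" |
    Format-Table RecordType, Ttl, AdditionalProperties

# Publish the TXT record shown above with your DNS registrar, then:
Confirm-MgDomain -DomainId "contoso.com"
```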
+
A successful challenge returns you to the prompt without an error.

## Next steps
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
You can bulk download the members of a group in your organization to a comma-separated values (CSV) file.
![The Download Members command is on the profile page for the group](./media/groups-bulk-download-members/download-panel.png)
+
## Check download status
You can see the status of all of your pending bulk requests in the **Bulk operation results** page.
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
The rows in a downloaded CSV template are as follows:
If there are errors, you can download and view the results file on the **Bulk operation results** page. The file contains the reason for each error. The file submission must match the provided template and include the exact column names.
+
## Check status
You can see the status of all of your pending bulk requests in the **Bulk operation results** page.
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
The rows in a downloaded CSV template are as follows:
If there are errors, you can download and view the results file on the **Bulk operation results** page. The file contains the reason for each error.
+
## Check status
You can see the status of all of your pending bulk requests in the **Bulk operation results** page.
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
You can see the status of your pending bulk requests in the **Bulk operation results** page.
Each bulk activity to export a list of users can run for up to one hour. This pace enables export and download of a list of up to 500,000 users.
+
## Next steps
- [Bulk add users](users-bulk-add.md)
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
The rows in a downloaded CSV template are as follows:
If there are errors, you can download and view the results file on the **Bulk operation results** page. The file contains the reason for each error.
+
## Check status
You can see the status of all of your pending bulk requests in the **Bulk operation results** page.
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
Previously updated : 09/22/2023 Last updated : 09/29/2023 -+ #Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a user via PowerShell.
active-directory Hybrid On Premises To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md
Previously updated : 11/17/2022 Last updated : 10/04/2023
Before Microsoft Entra ID, organizations with on-premises identity systems have traditionally managed partner accounts in their on-premises directory. In such an organization, when you start to move apps to Microsoft Entra ID, you want to make sure your partners can access the resources they need. It shouldn't matter whether the resources are on-premises or in the cloud. Also, you want your partner users to be able to use the same sign-in credentials for both on-premises and Microsoft Entra resources.
-If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "msullivan" for an external user named Maria Sullivan in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use [Microsoft Entra Connect](../hybrid/connect/whatis-azure-ad-connect.md) to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need.
+If you create accounts for your external partners in your on-premises directory (for example, you create an account with a sign-in name of "msullivan" for an external user named Maria Sullivan in your partners.contoso.com domain), you can now sync these accounts to the cloud. Specifically, you can use [Microsoft Entra Connect](../hybrid/connect/whatis-azure-ad-connect.md) to sync the partner accounts to the cloud, which creates a user account with UserType = Guest. This enables your partner users to access cloud resources using the same credentials as their local accounts, without giving them more access than they need. For more information about converting local guest accounts, see [Convert local guest accounts to Microsoft Entra B2B guest accounts](/azure/active-directory/architecture/10-secure-local-guest).
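To spot-check the result after synchronization, a hedged Microsoft Graph PowerShell sketch that lists guest users synced from on-premises (filtering on `userType` requires the advanced-query parameters shown):

```powershell
# Hedged sketch: list synced guest accounts.
Connect-MgGraph -Scopes "User.Read.All"

Get-MgUser -All -Filter "userType eq 'Guest'" `
    -ConsistencyLevel eventual -CountVariable guestCount `
    -Property DisplayName, UserPrincipalName, OnPremisesSyncEnabled |
    Where-Object { $_.OnPremisesSyncEnabled } |
    Select-Object DisplayName, UserPrincipalName
```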
> [!NOTE]
-> See also how to [invite internal users to B2B collaboration](invite-internal-users.md). With this feature, you can invite internal guest users to use B2B collaboration, regardless of whether you've synced their accounts from your on-premises directory to the cloud. Once the user accepts the invitation to use B2B collaboration, they'll be able to use their own identities and credentials to sign in to the resources you want them to access. You wonΓÇÖt need to maintain passwords or manage account lifecycles.
+> See also how to [invite internal users to B2B collaboration](invite-internal-users.md). With this feature, you can invite internal guest users to use B2B collaboration, regardless of whether you've synced their accounts from your on-premises directory to the cloud. Once the user accepts the invitation to use B2B collaboration, they'll be able to use their own identities and credentials to sign in to the resources you want them to access. You won't need to maintain passwords or manage account lifecycles.
## Identify unique attributes for UserType
For implementation instructions, see [Enable synchronization of UserType](../hyb
- [Microsoft Entra B2B collaboration for hybrid organizations](hybrid-organizations.md)
- [Grant B2B users in Microsoft Entra ID access to your on-premises applications](hybrid-cloud-to-on-premises.md)
-- For an overview of Microsoft Entra Connect, see [Integrate your on-premises directories with Microsoft Entra ID](../hybrid/whatis-hybrid-identity.md).
+
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
This page updates monthly, so revisit it regularly. If you're looking for items
## September 2023
-### Public Preview - Managing and Changing Passwords in My Security Info
-
-**Type:** New feature
-**Service category:** My Profile/Account
-**Product capability:** End User Experiences
-
-The My Security Info management portal ([My Sign-Ins | Security Info | Microsoft.com](https://mysignins.microsoft.com/security-info)) will now support an improved end user experience of managing passwords. Users are able to change their password, and users capable of multifactor authentication (MFA) are able to update their passwords without providing their current password.
--
### Public Preview - Device-bound passkeys as an authentication method
**Type:** Changed feature
We'll expand the existing FIDO2 authentication methods policy and end user regis
-### General Availability - Authenticator on Android is FIPS 140 compliant
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** User Authentication
-
-Authenticator version and higher on Android version will be FIPS 140 compliant for all Azure AD authentications using push multi-factor authentications (MFA), Passwordless Phone Sign-In (PSI), and time-based one-time passcodes (TOTP). No changes in configuration are required in the Authenticator app or Azure portal to enable this capability. For more information, see: [Authentication methods in Microsoft Entra ID - Microsoft Authenticator app](../authentication/concept-authentication-authenticator-app.md).
--
### General Availability - Recovery of deleted application and service principals is now available
**Type:** New feature
active-directory Create Access Review Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md
This article describes how to create one or more access reviews for PIM for Groups.
## Prerequisites

-- Microsoft Entra ID P2 or Microsoft Entra ID Governance.
+- Microsoft Entra ID Governance License.
- Only Global administrators and Privileged Role administrators can create reviews on PIM for Groups. For more information, see [Use Microsoft Entra groups to manage role assignments](../roles/groups-concept.md). For more information, see [License requirements](access-reviews-overview.md#license-requirements).
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-sync.md
The following documentation is specific to monitoring Microsoft Entra Connect (Sync).
## Alerts for Microsoft Entra Connect Health for sync

The Microsoft Entra Connect Health Alerts for sync section provides you the list of active alerts. Each alert includes relevant information, resolution steps, and links to related documentation. By selecting an active or resolved alert you will see a new blade with additional information, as well as steps you can take to resolve the alert, and links to additional documentation. You can also view historical data on alerts that were resolved in the past.
-By selecting an alert you will be provided with additional information as well as steps you can take to resolve the alert and links to additional documentation.
![Microsoft Entra Connect Sync error](./media/how-to-connect-health-sync/alert.png)

### Limited Evaluation of Alerts
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
Previously updated : 09/08/2023 Last updated : 10/04/2023
Audit logs in Microsoft Entra ID provide access to system activity records, often needed for compliance.
- Has a service principal for an application changed?
- Have the names of applications been changed?
+> [!NOTE]
+> Entries in the audit logs are system generated and can't be changed or deleted.
+
## What do the logs show?

Audit logs have a default list view that shows:
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 09/08/2023 Last updated : 10/04/2023
You can use the provisioning logs to find answers to questions like:
- What users from Workday were successfully created in Active Directory?
+> [!NOTE]
+> Entries in the provisioning logs are system generated and can't be changed or deleted.
+
## What do the logs show?

When you select an item in the provisioning list view, you get more details about this item, such as the steps taken to provision the user and tips for troubleshooting issues. The details are grouped into four tabs.
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 10/02/2023 Last updated : 10/04/2023
There are four types of logs in the sign-in logs preview:
The classic sign-in logs only include interactive user sign-ins.
+> [!NOTE]
+> Entries in the sign-in logs are system generated and can't be changed or deleted.
+
### Interactive user sign-ins

Interactive sign-ins are performed *by* a user. They provide an authentication factor to Microsoft Entra ID. That authentication factor could also interact with a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Microsoft Entra ID or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Microsoft Entra ID.
active-directory Howto Use Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md
Title: Azure Monitor workbooks for Microsoft Entra ID
-description: Learn how to use Azure Monitor workbooks for analyzing identity logs in Microsoft Entra ID reports.
+ Title: How to use Microsoft Entra workbooks
+description: Learn how to use Azure Monitor workbooks for Microsoft Entra ID, for analyzing identity related activity, trends, and gaps.
Previously updated : 08/24/2023 Last updated : 10/04/2023 +
+# Customer intent: As an IT admin, I want to visualize different types of identity data so I can view trends in activity, identity security gaps, and improve the health of my tenant.
# How to use Microsoft Entra Workbooks
Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permission
For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
-For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor)
+For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor).
<a name='azure-ad-roles'></a>
For more information on Microsoft Entra built-in roles, see [Microsoft Entra bui
<a name='how-to-access-azure-workbooks-for-azure-ad'></a>
-## How to access Azure Workbooks for Microsoft Entra ID
+## Access Microsoft Entra workbooks
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
For more information on Microsoft Entra built-in roles, see [Microsoft Entra bui
- Search for a template by name.
- Select the **Browse across galleries** to view templates that aren't specific to Microsoft Entra ID.
- ![Find the Azure Monitor workbooks in Microsoft Entra ID](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
+ ![Screenshot of the Microsoft Entra workbooks with navigation steps highlighted.](./media/howto-use-workbooks/workbooks-gallery.png)
## Create a new workbook
Workbooks can be created from scratch or from a template. When creating a new wo
For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md).
- ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png)
+ ![Screenshot of the options available in the workbook editing area.](./media/howto-use-workbooks/add-new-workbooks-elements.png)
**To create a new workbook from a template**:

1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
Workbooks can be created from scratch or from a template. When creating a new wo
1. Select **Edit** from the top of the page.
    - Each element of the workbook has its own **Edit** button.
    - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md)
-
+
+ ![Screenshot of a workbook template with the edit button highlighted.](./media/howto-use-workbooks/workbooks-edit-button.png)
+
1. Select the **Edit** button for any element. Make your changes and select **Done editing**.
- ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png)
-1. When you're done editing the workbook, select the **Save As** to save your workbook with a new name.
-1. In the **Save As** window:
- - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
+
+ ![Screenshot of a workbook in edit mode, with the edit element and done editing buttons highlighted.](./media/howto-use-workbooks/workbooks-edit-elements.png)
+
+1. When you're done editing the workbook, select the **Save** button. The **Save as** window opens.
+1. Provide a **Title**, **Subscription**, **Resource Group***, and **Location**.
+ - *You must have the ability to save a workbook for the selected Resource Group.
    - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
1. Select the **Apply** button.
active-directory Overview Monitoring Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md
Monitoring Microsoft Entra activity logs requires routing the log data to a monitoring tool.
For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md). -
-## Next steps
--
-- [Learn about the sign-ins logs](concept-all-sign-ins.md)
-- [Learn about the audit logs](concept-audit-logs.md)
-- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md)
-- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md)
active-directory Overview Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-workbooks.md
Title: What are Microsoft Entra workbooks?
-description: Learn about Microsoft Entra workbooks.
+description: Learn how to create and work with Microsoft Entra workbooks, for identity monitoring, alerts, and data visualization.
Previously updated : 11/01/2022 Last updated : 10/03/2023 - # Customer intent: As a Microsoft Entra administrator, I want a visualization tool that I can customize for my tenant.
With Azure Workbooks for Microsoft Entra ID, you can:
- Visualize data for reporting and analysis
- Combine multiple elements into a single interactive experience
-Workbooks are found in Microsoft Entra ID and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks. Workbooks for Microsoft Entra ID, however, cover only those identity management scenarios that are associated with Microsoft Entra ID. Sign-ins, Conditional Access, multifactor authentication, and Identity Protection are scenarios included in Azure Workbook for Microsoft Entra ID.
+Workbooks are found in Microsoft Entra ID and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks. Workbooks for Microsoft Entra ID, however, cover only those identity management scenarios that are associated with Microsoft Entra ID. Sign-ins, Conditional Access, multifactor authentication, and Identity Protection are scenarios included in the Workbooks for Microsoft Entra ID.
+
+![Screenshot of the Microsoft Entra workbooks gallery.](./media/overview-workbooks/workbooks-gallery.png)
For more information on workbooks for other Azure services, see [Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
Public workbook templates are built, updated, and deprecated to reflect the need
- [Authentication prompts analysis](workbook-authentication-prompts-analysis.md)
- [Conditional Access gap analyzer](workbook-conditional-access-gap-analyzer.md)
- [Cross-tenant access activity](workbook-cross-tenant-access-activity.md)
+- [Multifactor authentication gaps](workbook-mfa-gaps.md)
- [Risk analysis](workbook-risk-analysis.md) - [Sensitive Operations Report](workbook-sensitive-operations-report.md)
+- [Sign-ins using legacy authentication](workbook-legacy-authentication.md)
## Next steps
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment description: Describes how to plan and execute implementation of reporting and monitoring. --++ Previously updated : 01/20/2023 Last updated : 10/04/2023 # Customer intent: For a Microsoft Entra administrator to monitor logs and report on access - # Microsoft Entra monitoring and health deployment dependencies
To better prioritize the use cases and solutions, organize the options by "requi
With Microsoft Entra monitoring, you can route Microsoft Entra activity logs and retain them for long-term reporting and analysis to gain environment insights, and integrate it with SIEM tools. Use the following decision flow chart to help select an architecture.
- ![Decision matrix for business-need architecture.](media/reporting-deployment-plan/deploy-reporting-flow-diagram.png)
+ ![Decision matrix for business-need architecture.](media/plan-monitoring-and-reporting/deploy-reporting-flow-diagram.png)
#### Archive logs in a storage account
active-directory Reference Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-sla-performance.md
Last updated 09/27/2023 - # Microsoft Entra SLA performance
The SLA attainment is truncated at three places after the decimal. Numbers aren'
| June | 99.999% | 99.999% | 99.999% | | July | 99.999% | 99.999% | 99.999% | | August | 99.999% | 99.999% | 99.999% |
-| September | 99.999% | 99.998% | |
+| September | 99.999% | 99.998% | 99.999% |
| October | 99.999% | 99.999% | | | November | 99.998% | 99.999% | | | December | 99.978% | 99.999% | |
To access your tenant-level SLA performance:
* [Microsoft Entra monitoring and health overview](overview-monitoring-health.md) * [Programmatic access to Microsoft Entra reports](./howto-configure-prerequisites-for-reporting-api.md) * [Microsoft Entra ID risk detections](../identity-protection/overview-identity-protection.md)+
active-directory Tutorial Configure Log Analytics Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md
To configure a Log Analytics workspace, you need to **create the workspace** and
3. Select **Create**.
- ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png)
+ ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-configure-log-analytics-workspace/add.png)
4. On the **Create Log Analytics workspace** page, perform the following steps:
To configure a Log Analytics workspace, you need to **create the workspace** and
4. Select your region.
- ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png)
+ ![Create log analytics workspace](./media/tutorial-configure-log-analytics-workspace/create-log-analytics-workspace.png)
5. Select **Review + Create**.
- ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png)
+ ![Review and create](./media/tutorial-configure-log-analytics-workspace/review-create.png)
6. Select **Create** and wait for the deployment. You may need to refresh the page to see the new workspace.
- ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png)
+ ![Create](./media/tutorial-configure-log-analytics-workspace/create-workspace.png)
### Configure Diagnostic settings
To configure Diagnostic settings, you need to switch to the Microsoft Entra admin c
1. Select **Add diagnostic setting**.
- ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png)
+ ![Add diagnostic setting](./media/tutorial-configure-log-analytics-workspace/add-diagnostic-setting.png)
1. On the **Diagnostic setting** page, perform the following steps:
To configure Diagnostic settings, you need to switch to the Microsoft Entra admin c
3. Select **Save**.
- ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png)
+ ![Select diagnostics settings](./media/tutorial-configure-log-analytics-workspace/select-diagnostics-settings.png)
Your logs can now be queried using the Kusto Query Language (KQL) in Log Analytics. You may need to wait around 15 minutes for the logs to populate.
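Once the logs populate, any KQL query against the routed tables works here. As a minimal sketch (assuming the `SigninLogs` table was among the log categories you routed in the diagnostic setting), this counts sign-ins per day:

```kusto
// Minimal sketch: daily sign-in volume over the past week.
// Assumes SigninLogs was selected in the diagnostic setting.
SigninLogs
| where TimeGenerated > ago(7d)
| summarize signIns = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```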
This procedure shows how to create a new workbook using the quickstart template.
1. In the **Quickstart** section, select **Empty**.
- ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png)
+ ![Quick start](./media/tutorial-configure-log-analytics-workspace/quick-start.png)
1. From the **Add** menu, select **Add text**.
- ![Add text](./media/tutorial-log-analytics-wizard/add-text.png)
+ ![Add text](./media/tutorial-configure-log-analytics-workspace/add-text.png)
1. In the textbox, enter `# Client apps used in the past week` and select **Done Editing**.
- ![Screenshot shows the text and the Done Editing button.](./media/tutorial-log-analytics-wizard/workbook-text.png)
+ ![Screenshot shows the text and the Done Editing button.](./media/tutorial-configure-log-analytics-workspace/workbook-text.png)
1. Below the text window, open the **Add** menu and select **Add query**.
- ![Add query](./media/tutorial-log-analytics-wizard/add-query.png)
+ ![Add query](./media/tutorial-configure-log-analytics-workspace/add-query.png)
1. In the query textbox, enter: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed` 1. Select **Run Query**.
- ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png)
+ ![Screenshot shows the Run Query button.](./media/tutorial-configure-log-analytics-workspace/run-workbook-query.png)
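For readability, here is the same query from the step above split across lines; the workbook query editor accepts either form:

```kusto
// Client apps used in the past week, counted per app.
SigninLogs
| where TimeGenerated > ago(7d)
| project TimeGenerated, UserDisplayName, ClientAppUsed
| summarize count() by ClientAppUsed
```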
1. In the toolbar, from the **Visualization** menu select **Pie chart**.
- ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png)
+ ![Pie chart](./media/tutorial-configure-log-analytics-workspace/pie-chart.png)
1. Select **Done Editing** at the top of the page.
This procedure shows how to add a query to an existing workbook template. The ex
1. In the **Conditional Access** section, select **Conditional Access Insights and Reporting**.
- ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png)
+ ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-configure-log-analytics-workspace/conditional-access-template.png)
1. In the toolbar, select **Edit**.
- ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png)
+ ![Screenshot shows the Edit button.](./media/tutorial-configure-log-analytics-workspace/edit-workbook-template.png)
1. In the toolbar, select the three dots next to the Edit button, then **Add**, and then **Add query**.
- ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png)
+ ![Add workbook query](./media/tutorial-configure-log-analytics-workspace/add-custom-workbook-query.png)
1. In the query textbox, enter: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus` 1. Select **Run Query**.
- ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png)
+ ![Screenshot shows the Run Query button to run this query.](./media/tutorial-configure-log-analytics-workspace/run-workbook-insights-query.png)
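The Conditional Access query from the step above, split across lines for readability:

```kusto
// Distinct users per day, grouped by Conditional Access status,
// limited to sign-ins where at least one policy was evaluated.
SigninLogs
| where TimeGenerated > ago(20d)
| where ConditionalAccessPolicies != "[]"
| summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus
```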
1. From the **Time Range** menu, select **Set in query**.
This procedure shows how to add a query to an existing workbook template. The ex
1. In the **Chart title** field, enter `Conditional Access status over the last 20 days` and select **Done Editing**.
- ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png)
+ ![Set chart title](./media/tutorial-configure-log-analytics-workspace/set-chart-title.png)
Your Conditional Access success and failure chart displays a color-coded snapshot of your tenant.
active-directory Workbook Authentication Prompts Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-authentication-prompts-analysis.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 -- # Authentication prompts analysis workbook
-As an IT Pro, you want the right information about authentication prompts in your environment so that you can detect unexpected prompts and investigate further. Providing you with this type of information is the goal of the **authentication prompts analysis workbook**.
-
-This article provides you with an overview of this workbook.
+As an IT Pro, you want the right information about authentication prompts in your environment so that you can detect unexpected prompts and investigate further. Providing you with this type of information is the goal of the **Authentication Prompts Analysis** workbook.
+This article provides you with an overview of the **Authentication Prompts Analysis** workbook.
## Description ![Workbook category](./media/workbook-authentication-prompts-analysis/workbook-category.png) - Have you recently heard complaints from your users about getting too many authentication prompts?
-Overprompting users can affect your user's productivity and often leads users getting phished for MFA. To be clear, MFA is essential! We are not talking about if you should require MFA but how frequently you should prompt your users.
+Overprompting users can affect your users' productivity and often leads to users getting phished for MFA. To be clear, MFA is essential! We aren't talking about whether you should require MFA, but how frequently you should prompt your users.
Typically, this scenario is caused by:
You can use this workbook in the following scenarios:
- To view authentication prompt counts of high-profile users. - To track legacy TLS and other authentication process details.
-
-
+## How to access the workbook
+
+3. Select the **Authentication Prompts Analysis** workbook from the **Usage** section.
## Sections
This workbook breaks down authentication prompts by:
- Process detail - Policy - ![Authentication prompts by authentication method](./media/workbook-authentication-prompts-analysis/authentication-prompts-by-authentication-method.png) --
-In many environments, the most used apps are business productivity apps. Anything that isn't expected should be investigated. The charts below show authentication prompts by application.
--
+In many environments, the most used apps are business productivity apps. Anything that isn't expected should be investigated. The following charts show authentication prompts by application.
![Authentication prompts by application](./media/workbook-authentication-prompts-analysis/authentication-prompts-by-application.png)
-The prompts by application list view shows additional information such as timestamps, and request IDs that help with investigations.
+The **prompts by application list view** shows additional information, such as timestamps and request IDs, that helps with investigations.
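Outside the workbook, a rough proxy for the same view can be queried directly. This is a hedged sketch: it counts sign-in events per application, which approximates but doesn't exactly reproduce the workbook's prompt logic:

```kusto
// Approximate authentication prompts per application.
// Sign-in events are used as a proxy for prompts; the workbook's
// exact filtering is not reproduced here.
SigninLogs
| where TimeGenerated > ago(7d)
| summarize prompts = count() by AppDisplayName
| top 10 by prompts desc
```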
Additionally, you get a summary of the average and median prompts count for your tenant. - ![Prompts by application](./media/workbook-authentication-prompts-analysis/prompts-by-authentication-method.png) - This workbook also helps track impactful ways to improve your users' experience and reduce prompts, along with the relative percentage for each. - ![Recommendations for reducing prompts](./media/workbook-authentication-prompts-analysis/recommendations-for-reducing-prompts.png) -- ## Filters - Take advantage of the filters for more granular views of the data: - ![Filter](./media/workbook-authentication-prompts-analysis/filters.png) Filtering for a specific user who has many authentication requests, or only showing applications with sign-in failures, can also lead to interesting findings to remediate. ## Best practices
+- If data isn't showing up or seems to be showing up incorrectly, confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources.
-If data isn't showing up or seems to be showing up incorrectly, confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources.
-
+ ![Set workspace and subscriptions](./media/workbook-authentication-prompts-analysis/workspace-and-subscriptions.png)
-![Set workspace and subscriptions](./media/workbook-authentication-prompts-analysis/workspace-and-subscriptions.png)
+- If the visuals are taking too much time to load, try reducing the Time filter to 24 hours or less.
-If the visuals are taking too much time to load, try reducing the Time filter to 24 hours or less.
-
-![Set filter](./media/workbook-authentication-prompts-analysis/set-filter.png)
----
-## Next steps
+ ![Set filter](./media/workbook-authentication-prompts-analysis/set-filter.png)
- To understand more about the different policies that affect MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Microsoft Entra multifactor authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). -- To learn more about the different vulnerabilities of different MFA methods, see [All your creds belong to us!](https://aka.ms/allyourcreds).- - To learn how to move users from telecom-based methods to the Authenticator app, see [How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator app](../authentication/how-to-mfa-registration-campaign.md).
active-directory Workbook Conditional Access Gap Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-conditional-access-gap-analyzer.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 - # Conditional Access gap analyzer workbook
In Microsoft Entra ID, you can protect access to your resources by configuring Conditional Access policies. As an IT administrator, you want to ensure that your Conditional Access policies work as expected to ensure that your resources are properly protected. With the Conditional Access gap analyzer workbook, you can detect gaps in your Conditional Access implementation.
-This article provides you with an overview of this workbook.
+This article provides you with an overview of the **Conditional Access gap analyzer** workbook.
## Description
The Conditional Access gap analyzer workbook helps you to verify that your Condi
- Highlights user sign-ins that have no Conditional Access policies applied to them. - Allows you to ensure that there are no users, applications, or locations that have been unintentionally excluded from Conditional Access policies.
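The core gap check can be approximated directly in Log Analytics. A hedged sketch, assuming sign-ins with no applied policy carry an empty `ConditionalAccessPolicies` value:

```kusto
// Sign-ins where no Conditional Access policy was applied.
// The "[]" comparison mirrors the tutorial query's convention and may
// need adjusting if your workspace stores the column as a dynamic array.
SigninLogs
| where TimeGenerated > ago(7d)
| where ConditionalAccessPolicies == "[]"
| summarize unprotectedSignIns = count() by UserPrincipalName, AppDisplayName
| order by unprotectedSignIns desc
```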
-
+## How to access the workbook
-## Sections
+3. Select the **Conditional Access Gap Analyzer** workbook from the **Conditional Access** section.
+## Sections
-The workbook has four sections:
+The workbook has four sections:
- Users signing in using legacy authentication
This workbook supports setting a time range filter.
![Time range filter](./media/workbook-conditional-access-gap-analyzer/time-range.png) -- ## Best practices Use this workbook to ensure that your tenant is configured to the following Conditional Access best practices:
Use this workbook to ensure that your tenant is configured to the following Cond
- Block all high risk sign-ins -- Block sign-ins from untrusted locations -
-
-----
-## Next steps
--- [How to use Microsoft Entra workbooks](howto-use-azure-monitor-workbooks.md)
+- Block sign-ins from untrusted locations
active-directory Workbook Cross Tenant Access Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 -- # Cross-tenant access activity workbook As an IT administrator, you want insights into how your users are collaborating with other organizations. The cross-tenant access activity workbook helps you understand which external users are accessing resources in your organization, and which organizations' resources your users are accessing. This workbook combines all your organization's inbound and outbound collaboration into a single view.
-This article provides you with an overview of this workbook.
+This article provides you with an overview of the **Cross-tenant access activity** workbook.
## Description ![Image showing this workbook is found under the Usage category](./media/workbook-cross-tenant-access-activity/workbook-category.png)
-Tenant administrators who are making changes to policies governing cross-tenant access can use this workbook to visualize and review existing access activity patterns before making policy changes. For example, you can identify the apps your users are accessing in external organizations so that you don't inadvertently block critical business processes. Understanding how external users access resources in your tenant (inbound access) and how users in your tenant access resources in external tenants (outbound access) will help ensure you have the right cross-tenant policies in place.
+Tenant administrators who are making changes to policies governing cross-tenant access can use this workbook to visualize and review existing access activity patterns before making policy changes. For example, you can identify the apps your users are accessing in external organizations so that you don't inadvertently block critical business processes. Understanding how external users access resources in your tenant (inbound access) and how users in your tenant access resources in external tenants (outbound access) helps ensure you have the right cross-tenant policies in place.
For more information, see the [Microsoft Entra External ID documentation](../external-identities/index.yml).
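Before changing policy, the same inbound/outbound split can be sanity-checked with a query. This is a hedged sketch, and it assumes the `HomeTenantId` and `ResourceTenantId` columns are populated in your `SigninLogs` workspace, which varies by tenant:

```kusto
// Outbound cross-tenant sign-ins: your users accessing other tenants.
// HomeTenantId/ResourceTenantId availability is an assumption; verify first.
SigninLogs
| where TimeGenerated > ago(30d)
| where isnotempty(HomeTenantId) and HomeTenantId != ResourceTenantId
| summarize signIns = count() by ResourceTenantId, AppDisplayName
| order by signIns desc
```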
+## How to access the workbook
+
+3. Select the **Cross-tenant access activity** workbook from the **Usage** section.
+ ## Sections
-This workbook has four sections:
+This workbook has four sections:
- All inbound and outbound activity by tenant ID
This workbook has four sections:
The total number of external tenants that have had cross-tenant access activity with your tenant is shown at the top of the workbook.
-Under **Step 1**, the external tenant list shows all the tenants that have had inbound or outbound activity with your tenant. When you select an external tenant in the table, the remaining sections update with information about outbound and inbound activity for that tenant.
-
-[ ![Screenshot showing list of external tenants with sign-in data.](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-1.png) ](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-1.png#lightbox)
+![Screenshot of the first section of the workbook.](./media/workbook-cross-tenant-access-activity/cross-tenant-activity-top.png)
-The table under **Step 2** summarizes all outbound and inbound sign-in activity for the selected tenant, including the number of successful sign-ins and the reasons for failed sign-ins. You can select **Outbound activity** or **Inbound activity** to update the remaining sections of the workbook with the type of activity you want to view.
+The **External Tenant** list shows all the tenants that have had inbound or outbound activity with your tenant. When you select an external tenant in the table, the sections after the table display information about outbound and inbound activity for that tenant.
-![Screenshot showing activity for the selected tenant.](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-2.png)
+![Screenshot of the external tenant list.](./media/workbook-cross-tenant-access-activity/cross-tenant-activity-external-tenant-list.png)
-Under **Step 3**, the table lists the applications that are being accessed across tenants. If you selected **Outbound activity** in the previous section, the table shows the applications in external tenants that are being accessed by your users. If you selected **Inbound activity**, you'll see the list of your applications that are being accessed by external users. You can select a row to find out which users are accessing that application.
+When you select an external tenant with outbound activity from the list, the associated details appear in the **Outbound activity** table. The same applies to inbound activity: select the **Inbound activity** tab to view those details for an external tenant.
-![Screenshot showing application activity for the selected tenant.](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-3.png)
+![Screenshot of the outbound and inbound activity, with the outbound and inbound options highlighted.](./media/workbook-cross-tenant-access-activity/cross-tenant-activity-outbound-inbound-activity.png)
-The table in **Step 4** displays the list of users who are accessing the application you selected.
-
-![Screenshot showing users accessing an app.](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-4.png)
+When you're viewing external tenants with outbound activity, the subsequent two tables display details for the application and user activity. When you're viewing external tenants with inbound activity, the same tables show inbound application and user activity. These tables are dynamic and based on what was previously selected, so make sure you're viewing the correct tenant and activity.
## Filters
Use this workbook to:
- Identify all inbound sign-ins from external Microsoft Entra organizations - Identify all outbound sign-ins by your users to external Microsoft Entra organizations-
-## Next steps
--- [How to use Microsoft Entra workbooks](howto-use-azure-monitor-workbooks.md)
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy-authentication.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 - # Sign-ins using legacy authentication workbook Have you ever wondered how you can determine whether it's safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question.
-This article gives you an overview of this workbook.
-
+This article gives you an overview of the **Sign-ins using legacy authentication** workbook.
## Description
Examples of applications that commonly or only use legacy authentication are:
- Apps using legacy auth with mail protocols like POP, IMAP, and SMTP AUTH. - Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are bad as they're easy to guess and humans are bad at choosing good passwords. - Unfortunately, legacy authentication: -- Doesn't support multi-factor authentication (MFA) or other strong authentication methods.
+- Doesn't support multifactor authentication (MFA) or other strong authentication methods.
- Makes it impossible for your organization to move to passwordless authentication. To improve the security of your Microsoft Entra tenant and experience of your users, you should disable legacy authentication. However, important user experiences in your tenant might depend on legacy authentication. Before shutting off legacy authentication, you may want to find those cases so you can migrate them to more secure authentication.
-The sign-ins using legacy authentication workbook lets you see all legacy authentication sign-ins in your environment so you can find and migrate critical workflows to more secure authentication methods before you shut off legacy authentication.
+The **Sign-ins using legacy authentication** workbook lets you see all legacy authentication sign-ins in your environment. This workbook helps you find and migrate critical workflows to more secure authentication methods before you shut off legacy authentication.
+
+## How to access the workbook
-
-
+3. Select the **Sign-ins using legacy authentication** workbook from the **Usage** section.
## Sections
The data collection consists of three steps:
3. View all legacy authentication sign-ins for the user to understand how legacy authentication is being used. --
-
-- ## Filters - This workbook supports multiple filters: - - Time range (up to 90 days) - User principal name
This workbook supports multiple filters:
- Status of the sign-in (success or failure) - ![Filter options](./media/workbook-legacy-authentication/filter-options.png) - ## Best practices - - For guidance on blocking legacy authentication in your environment, see [Block legacy authentication to Microsoft Entra ID with Conditional Access](../conditional-access/block-legacy-authentication.md). - Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). -- Some clients can use both legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Microsoft Entra logs, it's using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it's using legacy authentication to connect to Microsoft Entra ID. The client types in Conditional Access, and the Microsoft Entra reporting page in the Microsoft Entra admin center demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook. --
-## Next steps
+- Some clients can use either legacy or modern authentication, depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Microsoft Entra logs, it's using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync," it's using legacy authentication to connect to Microsoft Entra ID. The client types in Conditional Access and the Microsoft Entra reporting page in the Microsoft Entra admin center demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
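Based on that client-type distinction, a hedged sketch for surfacing legacy sign-ins; the exact `ClientAppUsed` strings can vary, so treat the exclusion list as an assumption to verify against your own logs:

```kusto
// Legacy authentication sign-ins, approximated by excluding the two
// modern client types. Verify the exact strings in your own logs.
SigninLogs
| where TimeGenerated > ago(30d)
| where ClientAppUsed !in ("Browser", "Mobile Apps and Desktop clients")
| summarize legacySignIns = count() by UserPrincipalName, ClientAppUsed
| order by legacySignIns desc
```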
- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md).
active-directory Workbook Mfa Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-mfa-gaps.md
Previously updated : 12/15/2022 Last updated : 10/03/2023 - # Multifactor Authentication Gaps workbook
-The Multifactor Authentication Gaps workbook helps with identifying user sign-ins and applications that are not protected by multi-factor authentication requirements. This workbook:
-* Identifies user sign-ins not protected by multi-factor authentication requirements.
+The Multifactor Authentication Gaps workbook helps with identifying user sign-ins and applications that aren't protected by multifactor authentication (MFA) requirements. This workbook:
+* Identifies user sign-ins not protected by MFA requirements.
* Provides further drill down options using various pivots such as applications, operating systems, and location. * Provides several filters such as trusted locations and device states to narrow down the users/applications. * Provides filters to scope the workbook for a subset of users and applications.
-## Summary
-The summary widget provides a detailed look at sign-ins related to multifactor authentication.
-
-### Sign-ins not protected by MFA requirement by applications
+This article gives you an overview of the **Multifactor Authentication Gaps** workbook.
-* **Number of users signing-in not protected by multi-factor authentication requirement by application:** This widget provides a time based bar-graph representation of the number of user sign-ins not protected by MFA requirement by applications.
-* **Percent of users signing-in not protected by multi-factor authentication requirement by application:** This widget provides a time based bar-graph representation of the percentage of user sign-ins not protected by MFA requirement by applications.
-* **Select an application and user to learn more:** This widget groups the top users signed in without MFA requirement by application. By selecting the application, it will list the user names and the count of sign-ins without MFA.
-
-### Sign-ins not protected by MFA requirement by users
-* **Sign-ins not protected by multi-factor auth requirement by user:** This widget shows top user and the count of sign-ins not protected by MFA requirement.
-* **Top users with high percentage of authentications not protected by multi-factor authentication requirements:** This widget shows users with top percentage of authentications that are not protected by MFA requirements.
+## How to import the workbook
-### Sign-ins not protected by MFA requirement by Operating Systems
-* **Number of sign-ins not protected by multi-factor authentication requirement by operating system:** This widget provides time based bar graph of sign-in counts that are not protected by MFA by operating system of the devices.
-* **Percent of sign-ins not protected by multi-factor authentication requirement by operating system:** This widget provides time based bar graph of sign-in percentages that are not protected by MFA by operating system of the devices.
+The **MFA gaps** workbook is currently not available as a template, but you can import it from the Microsoft Entra workbooks GitHub repository.
-### Sign-ins not protected by MFA requirement by locations
-* **Number of sign-ins not protected by multi-factor authentication requirement by location:** This widget shows the sign-ins counts that are not protected by MFA requirement in map bubble chart on the world map.
-
-## How to import the workbook
-1. Navigate to **Identity** > **Monitoring & health** > **Workbooks**.
1. Select **+ New**. 1. Select the **Advanced Editor** button from the top of the page. A JSON editor opens.+ ![Screenshot of the Advanced Editor button on the new workbook page.](./media/workbook-mfa-gaps/advanced-editor-button.png) 1. Navigate to the Microsoft Entra workbooks GitHub repository
The summary widget provides a detailed look at sign-ins related to multifactor a
 ![Screenshot of the GitHub repository with the breadcrumbs and copy file button highlighted.](./media/workbook-mfa-gaps/github-repository.png) 1. Copy the entire JSON file from the GitHub repository. 1. Return to the Advanced Editor window in the Azure portal and paste the JSON file over the existing text.
-1. Select the **Apply** button. The workbook will take a few moments to populate.
+1. Select the **Apply** button. The workbook may take a few moments to populate.
1. Select the **Save As** button and provide the required information. - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**. - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md). 1. Select the **Apply** button.+
+## Summary
+The summary widget provides a detailed look at sign-ins related to multifactor authentication.
+
+### Sign-ins not protected by MFA requirement by applications
+
+* **Number of users signing-in not protected by multi-factor authentication requirement by application:** This widget provides a time-based bar graph of the number of user sign-ins not protected by an MFA requirement, by application.
+* **Percent of users signing-in not protected by multi-factor authentication requirement by application:** This widget provides a time-based bar graph of the percentage of user sign-ins not protected by an MFA requirement, by application.
+* **Select an application and user to learn more:** This widget groups the top users signed in without an MFA requirement, by application. Select the application to see a list of the user names and the count of sign-ins without MFA.
+
+### Sign-ins not protected by MFA requirement by users
+* **Sign-ins not protected by multi-factor auth requirement by user:** This widget shows the top users and their counts of sign-ins not protected by an MFA requirement.
+* **Top users with high percentage of authentications not protected by multi-factor authentication requirements:** This widget shows the users with the highest percentage of authentications that aren't protected by MFA requirements.
+
+### Sign-ins not protected by MFA requirement by operating system
+* **Number of sign-ins not protected by multi-factor authentication requirement by operating system:** This widget provides a time-based bar graph of sign-in counts that aren't protected by MFA, by device operating system.
+* **Percent of sign-ins not protected by multi-factor authentication requirement by operating system:** This widget provides a time-based bar graph of sign-in percentages that aren't protected by MFA, by device operating system.
+
+### Sign-ins not protected by MFA requirement by location
+* **Number of sign-ins not protected by multi-factor authentication requirement by location:** This widget shows the sign-in counts that aren't protected by an MFA requirement in a bubble chart on the world map.
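To approximate what these widgets surface, a hedged sketch; it assumes the `AuthenticationRequirement` column distinguishes single-factor from multifactor sign-ins:

```kusto
// Single-factor (no MFA requirement) sign-ins by application.
// The AuthenticationRequirement value is an assumption to verify in your logs.
SigninLogs
| where TimeGenerated > ago(7d)
| where AuthenticationRequirement == "singleFactorAuthentication"
| summarize signInsWithoutMfa = count() by AppDisplayName
| order by signInsWithoutMfa desc
```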
active-directory Workbook Risk Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 - # Identity protection risk analysis workbook Microsoft Entra ID Protection detects, remediates, and prevents compromised identities. As an IT administrator, you want to understand risk trends in your organization and opportunities for better policy configuration. With the Identity Protection Risk Analysis workbook, you can answer common questions about your Identity Protection implementation.
-This article provides you with an overview of this workbook.
+This article provides you with an overview of the **Identity Protection Risk Analysis** workbook.
## Description
As an IT administrator, you need to understand trends in identity risks and gaps
- Allows you to understand the trends in real-time vs. offline risk detections. - Provides insight into how effective you are at responding to risky users.
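The same trends can be eyeballed outside the workbook. A hedged sketch using the `RiskLevelDuringSignIn` column, which is populated only when Identity Protection risk data flows into your workspace:

```kusto
// Daily volume of medium- and high-risk sign-ins.
// Requires Identity Protection risk data in SigninLogs.
SigninLogs
| where TimeGenerated > ago(30d)
| where RiskLevelDuringSignIn in ("medium", "high")
| summarize riskySignIns = count() by bin(TimeGenerated, 1d), RiskLevelDuringSignIn
```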
+## How to access the workbook
+
+3. Select the **Identity Protection Risk Analysis** workbook from the **Usage** section.
+ ## Sections This workbook has five sections:
Risky Users:
## Best practices -- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md#sign-in-risk-based-conditional-access-policy)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
+- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md#sign-in-risk-based-conditional-access-policy)** - To prompt for multifactor authentication (MFA) on medium risk or higher. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
-- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access)** - To enable users to securely remediate their accounts when they're high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.-
-## Next steps
+- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access)** - To enable users to securely remediate their accounts when they're considered high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.
- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md). + - For more information about Microsoft Entra workbooks, see [How to use Microsoft Entra workbooks](howto-use-azure-monitor-workbooks.md).
active-directory Workbook Sensitive Operations Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-sensitive-operations-report.md
Previously updated : 11/01/2022 Last updated : 10/03/2023 - # Sensitive operations report workbook
As an IT administrator, you need to be able to identify compromises in your envi
The sensitive operations report workbook is intended to help identify suspicious application and service principal activity that may indicate compromises in your environment. -
-This article provides you with an overview of this workbook.
-
+This article provides you with an overview of the **Sensitive Operations Report** workbook.
## Description
This article provides you with an overview of this workbook.
This workbook identifies recent sensitive operations that have been performed in your tenant and that may indicate service principal compromise.
-If your organization is new to Azure monitor workbooks, you need to integrate your Microsoft Entra sign-in and audit logs with Azure Monitor before accessing the workbook. This integration allows you to store, and query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored, so the workbook won't contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Microsoft Entra ID. If you've previously integrated your Microsoft Entra sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information.
-
+If your organization is new to Azure Monitor workbooks, you need to integrate your Microsoft Entra sign-in and audit logs with Azure Monitor before accessing the workbook. This integration allows you to store, query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration are stored, so the workbook won't contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Microsoft Entra ID. If you've previously integrated your Microsoft Entra sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information.
+## How to access the workbook
+
+3. Select the **Sensitive Operations Report** workbook from the **Troubleshoot** section.
## Sections
This workbook is split into four sections:
![Workbook sections](./media/workbook-sensitive-operations-report/workbook-sections.png) - - **Modified application and service principal credentials/authentication methods** - This report flags actors who have recently changed many service principal credentials, and how many of each type of service principal credentials have been changed. - **New permissions granted to service principals** - This workbook also highlights recently granted OAuth 2.0 permissions to service principals. - **Directory role and group membership updates for service principals** -- - **Modified federation settings** - This report highlights when a user or application modifies federation settings on a domain. For example, it reports when a new Active Directory Federated Service (ADFS) TrustedRealm object, such as a signing certificate, is added to the domain. Modification to domain federation settings should be rare. --- ### Modified application and service principal credentials/authentication methods One of the most common ways for attackers to gain persistence in the environment is by adding new credentials to existing applications and service principals. The credentials allow the attacker to authenticate as the target application or service principal, granting them access to all resources to which it has permissions.
This section includes the following data to help you detect:
- A timeline for all credential changes -- ### New permissions granted to service principals In cases where the attacker can't find a service principal or an application with a high privilege set of permissions through which to gain access, they'll often attempt to add the permissions to another service principal or app.
-This section includes a breakdown of the AppOnly permissions grants to existing service principals. Admins should investigate any instances of excessive high permissions being granted, including, but not limited to, Exchange Online, Microsoft Graph and Azure AD Graph.
-
+This section includes a breakdown of the AppOnly permission grants to existing service principals. Admins should investigate any instances of excessively high permissions being granted, including, but not limited to, Exchange Online and Microsoft Graph.
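A hedged sketch for spotting such grants in the audit logs; the `OperationName` string is an assumption to verify against your own `AuditLogs` data:

```kusto
// Recent app-role (AppOnly) permission grants to service principals.
// Confirm the exact OperationName value in your tenant before relying on it.
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Add app role assignment to service principal"
| project TimeGenerated, OperationName, InitiatedBy, TargetResources
```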
### Directory role and group membership updates for service principals
Following the logic of the attacker adding new permissions to existing service p
This section includes an overview of all changes made to service principal memberships and should be reviewed for any additions to high privilege roles and groups. -- ### Modified federation settings Another common approach to gain a long-term foothold in the environment is to:
This section includes the following data:
- Addition of new domains and trusts -
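A companion hedged sketch for federation changes; the substring match is deliberately loose because the exact `OperationName` strings vary, so confirm them in your logs and tighten the filter afterwards:

```kusto
// Audit events that touch domain federation settings.
// The broad "federation" match is an assumption; tighten after inspecting results.
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName has "federation"
| project TimeGenerated, OperationName, InitiatedBy, TargetResources
```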
-
-- ## Filters This paragraph lists the supported filters for each section. - ### Modified Application and Service Principal Credentials/Authentication Methods - Time range
This paragraph lists the supported filters for each section.
- Actor - Exclude actor - ### New permissions granted to service principals - Time range
This paragraph lists the supported filters for each section.
- Operation - Initiating user or app --- ## Best practices
+- **Use modified application and service principal credentials** to look out for credentials being added to service principals that aren't frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified.
-**Use:**
-
-- **Modified application and service principal credentials** to look out for credentials being added to service principals that aren't frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified.---- **New permissions granted to service principals** to look out for broad or excessive permissions being added to service principals by actors that may be compromised. --- **Modified federation settings** section to confirm that the added or modified target domain/URL is a legitimate admin behavior. Actions that modify or add domain federation trusts are rare and should be treated as high fidelity to be investigated as soon as possible.----
+- **Use new permissions granted to service principals** to look out for broad or excessive permissions being added to service principals by actors that may be compromised.
-## Next steps
+- **Use modified federation settings** to confirm that the added or modified target domain/URL reflects legitimate admin behavior. Actions that modify or add domain federation trusts are rare and should be treated as high-fidelity signals to be investigated as soon as possible.
-- [How to use Microsoft Entra workbooks](howto-use-azure-monitor-workbooks.md)
active-directory Amazon Business Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-business-provisioning-tutorial.md
Title: 'Tutorial: Configure Amazon Business for automatic user provisioning with Microsoft Entra ID'
-description: Learn how to automatically provision and de-provision user accounts from Microsoft Entra ID to Amazon Business.
+description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to Amazon Business.
writer: twimmers
# Tutorial: Configure Amazon Business for automatic user provisioning
-This tutorial describes the steps you need to perform in both Amazon Business and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and de-provisions users and groups to [Amazon Business](https://www.amazon.com/b2b/info/amazon-business?layout=landing) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Amazon Business and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users and groups to [Amazon Business](https://www.amazon.com/b2b/info/amazon-business?layout=landing) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
## Supported capabilities > [!div class="checklist"] > * Create users in Amazon Business. > * Remove users in Amazon Business when they do not require access anymore.
+> * Assign Amazon Business roles to user.
> * Keep user attributes synchronized between Microsoft Entra ID and Amazon Business. > * Provision groups and group memberships in Amazon Business. > * [Single sign-on](amazon-business-tutorial.md) to Amazon Business (recommended).
The scenario outlined in this tutorial assumes that you already have the followi
* [A Microsoft Entra tenant](../develop/quickstart-create-new-tenant.md). * A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* An Amazon Business tenant.
-* A user account in Amazon Business with Admin permissions.
+* An Amazon Business account.
+* A user account in Amazon Business with Admin permissions (Admin on all Legal Entity groups in your Amazon Business account).
## Step 1: Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
<a name='step-2-configure-amazon-business-to-support-provisioning-with-azure-ad'></a> ## Step 2: Configure Amazon Business to support provisioning with Microsoft Entra ID
-Contact Amazon Business support to configure Amazon Business to support provisioning with Microsoft Entra ID.
+
+Before configuring and enabling the provisioning service, you need to identify a default group for both users and groups. We recommend that you:
+
+* Follow the principle of least privilege by having REQUISITIONER only permissions for the default users group.
+* Follow the group naming convention referenced below for ease of referencing the groups throughout this document.
+ * Default SCIM Parent Group
+ * This is the root of your SCIM directory in Amazon Business. All SCIM groups are placed directly under this default group. You may select an existing group as the default SCIM parent group.
+ * Default SCIM Users Group
+ * Users who are assigned to your Amazon Business app will be placed into this group by default with a Requisitioner role. It is recommended to have this group at the same level as the Default SCIM Parent Group.
+ * If a user is provisioned without a group assignment, they will be placed into this group by default with a Requisitioner role.
+
+Once you identify or create the Default SCIM Groups, send a URL link for both of these groups to your Account Manager. An Amazon Business Integrations Specialist initializes both groups for your SCIM integration. You must complete this step before proceeding to the next one.
<a name='step-3-add-amazon-business-from-the-azure-ad-application-gallery'></a> ## Step 3: Add Amazon Business from the Microsoft Entra application gallery
-Add Amazon Business from the Microsoft Entra application gallery to start managing provisioning to Amazon Business. If you have previously setup Amazon Business for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Amazon Business from the Microsoft Entra application gallery to start managing provisioning to Amazon Business. If you have previously set up Amazon Business for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4: Define who will be in scope for provisioning The Microsoft Entra provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Amazon Business, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* When assigning users and groups to Amazon Business, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* You can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add Amazon Business roles. The user can have one of the following roles:
+ * **Requisitioner** (to place orders or submits order requests for approval).
+ * **Administrator** (to manage people, groups, roles, and approvals; view orders; run order reports).
+ * **Finance** (to access invoices, credit notes, analytics, and order history).
+ * **Tech** (to set up system integrations with the programs used at work).
+![Screenshot of the application roles list.](media/amazon-business-provisioning-tutorial/roles.png)
## Step 5: Configure automatic user provisioning to Amazon Business
This section guides you through the steps to configure the Microsoft Entra provi
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Amazon Business Tenant URL, Authorization Endpoint and Token Endpoint. Click **Test Connection** to ensure Microsoft Entra ID can connect to Amazon Business. If the connection fails, ensure your Amazon Business account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Amazon Business Tenant URL and Authorization Endpoint. Click **Test Connection** to ensure Microsoft Entra ID can connect to Amazon Business. If the connection fails, ensure your Amazon Business account has Admin permissions and try again.
![Screenshot of Token.](media/amazon-business-provisioning-tutorial/test-connection.png)
This section guides you through the steps to configure the Microsoft Entra provi
|displayName|String|&check;|&check; |members|Reference|| - 1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. To enable the Microsoft Entra provisioning service for Amazon Business, change the **Provisioning Status** to **On** in the **Settings** section.
This section guides you through the steps to configure the Microsoft Entra provi
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Microsoft Entra provisioning service is running. ## Step 6: Monitor your deployment+ Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion * If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Feature limitations
+
+* A flat structure is created on the Amazon Business account; that is, all pushed groups are at the same level under the Default SCIM Group. Nested structures/hierarchies aren't supported.
+* Group names will be the same in Azure and in the Amazon Business account.
+* As new groups are created on the Amazon Business account, admins need to reconfigure the business settings (for example, turning on purchasing, updating shared settings, and adding guided buying policies) for the new groups as needed.
+* Deleting old groups or removing users from old groups in Amazon Business results in losing visibility into orders placed with that old group, so it's recommended to:
+ * Not delete the old groups/assignments, and
+ * Turn off purchasing for the old groups.
+* Email / Username Update - Updating email and / or username via SCIM is not supported at this time.
+* Password Sync - Password sync is not supported.
+
+## Troubleshooting tips
+
+* If Amazon Business administrators have only logged in using SSO or don't know their passwords, they can use the forgot password flow to reset their password and then sign in to Amazon Business.
+* If Admin and Requisitioner roles have already been applied to a customer in a group, assigning Finance or Tech roles won't result in updates on the Amazon Business side.
+* Customers with MASE accounts (Multiple Account Same Email) who delete one of their accounts can see "account doesn't exist" errors when provisioning new users for a short amount of time (24-48 hours).
+* Customers can't be removed immediately via Provision on Demand. Provisioning must be turned on, and the removal happens about 40 minutes after the action is taken.
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Diffchecker Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/diffchecker-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Diffchecker
+description: Learn how to configure single sign-on between Microsoft Entra ID and Diffchecker.
+Last updated: 09/22/2023
+# Microsoft Entra SSO integration with Diffchecker
+
+In this tutorial, you learn how to integrate Diffchecker with Microsoft Entra ID. When you integrate Diffchecker with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Diffchecker.
+* Enable your users to be automatically signed-in to Diffchecker with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Diffchecker, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Diffchecker single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Diffchecker supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Adding Diffchecker from the gallery
+
+To configure the integration of Diffchecker into Microsoft Entra ID, you need to add Diffchecker from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Diffchecker** in the search box.
+1. Select **Diffchecker** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
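+
+If you prefer scripting this step, the following is a hedged Microsoft Graph PowerShell sketch that instantiates the gallery app; the display-name lookup for the Diffchecker template is an assumption.
+
+```powershell
+# A sketch: create the gallery application and its service principal in one call.
+Connect-MgGraph -Scopes "Application.ReadWrite.All"
+
+# Look up the Diffchecker gallery template (display-name filter is an assumption).
+$template = Get-MgApplicationTemplate -Filter "displayName eq 'Diffchecker'"
+
+# Instantiate the template, which creates both the application object and
+# the service principal in your tenant.
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/v1.0/applicationTemplates/$($template.Id)/instantiate" `
+    -Body @{ displayName = "Diffchecker" }
+```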
+
+## Configure and test Microsoft Entra SSO for Diffchecker
+
+Configure and test Microsoft Entra SSO with Diffchecker using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Diffchecker.
+
+To configure and test Microsoft Entra SSO with Diffchecker, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Diffchecker SSO](#configure-diffchecker-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Diffchecker test user](#create-diffchecker-test-user)** - to have a counterpart of B.Simon in Diffchecker that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Diffchecker** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `http://www.diffchecker.com/saml/metadata`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.<ENVIRONMENT>.diffchecker.com/auth/saml/acs/orgs/<ID>`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://api.<ENVIRONMENT>.diffchecker.com/auth/saml/<ID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [Diffchecker support team](mailto:azure@diffchecker.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Diffchecker** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata")
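+
+The Basic SAML Configuration values above can also be set programmatically. The following is a minimal Microsoft Graph PowerShell sketch, assuming the app is named **Diffchecker** in your tenant; keep the `<ENVIRONMENT>` and `<ID>` placeholders until Diffchecker support provides the real values.
+
+```powershell
+# A sketch of the Basic SAML Configuration step, scripted.
+Connect-MgGraph -Scopes "Application.ReadWrite.All"
+
+$app = Get-MgApplication -Filter "displayName eq 'Diffchecker'"
+$sp  = Get-MgServicePrincipal -Filter "displayName eq 'Diffchecker'"
+
+# Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL).
+Update-MgApplication -ApplicationId $app.Id `
+    -IdentifierUris @("http://www.diffchecker.com/saml/metadata") `
+    -Web @{ RedirectUris = @("https://api.<ENVIRONMENT>.diffchecker.com/auth/saml/acs/orgs/<ID>") }
+
+# Ensure the service principal uses SAML-based single sign-on.
+Update-MgServicePrincipal -ServicePrincipalId $sp.Id -PreferredSingleSignOnMode "saml"
+```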
+
+### Create a Microsoft Entra ID test user
+
+In this section, you create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
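+
+As an alternative to the portal steps above, here is a small Microsoft Graph PowerShell sketch that creates the same test user; the domain and password are placeholders you must replace.
+
+```powershell
+# A sketch: create the B.Simon test user with Microsoft Graph PowerShell.
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+$passwordProfile = @{
+    Password                      = "<STRONG-PASSWORD>"
+    ForceChangePasswordNextSignIn = $true
+}
+
+New-MgUser -DisplayName "B.Simon" `
+    -UserPrincipalName "B.Simon@contoso.com" `
+    -MailNickname "B.Simon" `
+    -AccountEnabled `
+    -PasswordProfile $passwordProfile
+```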
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Diffchecker.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Diffchecker**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
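+
+The same assignment can be scripted. The following is a hedged Microsoft Graph PowerShell sketch; it assumes the user created above and uses the all-zero GUID that represents the "Default Access" role when an app defines no roles.
+
+```powershell
+# A sketch: assign B.Simon to the Diffchecker enterprise application.
+Connect-MgGraph -Scopes "Application.ReadWrite.All", "User.Read.All"
+
+$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Diffchecker'"
+$user = Get-MgUser -Filter "userPrincipalName eq 'B.Simon@contoso.com'"
+
+New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
+    -PrincipalId $user.Id `
+    -ResourceId $sp.Id `
+    -AppRoleId ([Guid]::Empty)  # "Default Access" when the app defines no roles
+```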
+
+## Configure Diffchecker SSO
+
+To configure single sign-on on **Diffchecker** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [Diffchecker support team](mailto:azure@diffchecker.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Diffchecker test user
+
+In this section, you create a user called B.Simon in Diffchecker. Work with [Diffchecker support team](mailto:azure@diffchecker.com) to add the users in the Diffchecker platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to Diffchecker Sign on URL where you can initiate the login flow.
+
+* Go to Diffchecker Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the Diffchecker for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Diffchecker tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Diffchecker for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next Steps
+
+Once you configure Diffchecker you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Directory Services Protector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directory-services-protector-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Directory Services Protector
+description: Learn how to configure single sign-on between Microsoft Entra ID and Directory Services Protector.
+Last updated: 10/03/2023
+# Microsoft Entra SSO integration with Directory Services Protector
+
+In this tutorial, you'll learn how to integrate Directory Services Protector with Microsoft Entra ID. When you integrate Directory Services Protector with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Directory Services Protector.
+* Enable your users to be automatically signed-in to Directory Services Protector with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Directory Services Protector, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Directory Services Protector single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Directory Services Protector supports both **SP and IDP** initiated SSO.
+* Directory Services Protector supports **Just In Time** user provisioning.
+
+## Adding Directory Services Protector from the gallery
+
+To configure the integration of Directory Services Protector into Microsoft Entra ID, you need to add Directory Services Protector from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Directory Services Protector** in the search box.
+1. Select **Directory Services Protector** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Directory Services Protector
+
+Configure and test Microsoft Entra SSO with Directory Services Protector using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Directory Services Protector.
+
+To configure and test Microsoft Entra SSO with Directory Services Protector, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Directory Services Protector SSO](#configure-directory-services-protector-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Directory Services Protector test user](#create-directory-services-protector-test-user)** - to have a counterpart of B.Simon in Directory Services Protector that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Directory Services Protector** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows how to upload metadata file.](common/upload-metadata.png "File")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows how to choose metadata file.](common/browse-upload-metadata.png "Folder")
+
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
+
+ ![Screenshot shows the image of metadata file.](common/idp-intiated.png "Section")
+
+ > [!Note]
+ > If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<HOSTNAME>.<DOMAIN>.<EXTENSION>/DSP/Login/SsoLogin`
+
+ > [!NOTE]
+ > The Sign on URL value is not real. Update this value with the actual Sign on URL. Contact [Directory Services Protector support team](mailto:support@semperis.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. The Directory Services Protector application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Directory Services Protector application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | role | user.assignedroles |
+
+ > [!NOTE]
+    > See [how to configure app roles in Microsoft Entra ID](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui). A scripted sketch of creating such a role follows this procedure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Directory Services Protector** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
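+
+Because the **role** claim above is sourced from `user.assignedroles`, the application registration needs app roles to assign. The following is a hedged Microsoft Graph PowerShell sketch of adding one; the role name `DSP Admin` is purely an illustrative assumption.
+
+```powershell
+# A sketch: append an app role whose value feeds the "role" SAML claim.
+Connect-MgGraph -Scopes "Application.ReadWrite.All"
+
+$app = Get-MgApplication -Filter "displayName eq 'Directory Services Protector'"
+
+$newRole = @{
+    Id                 = [Guid]::NewGuid()
+    AllowedMemberTypes = @("User")
+    DisplayName        = "DSP Admin"   # illustrative role name
+    Description        = "Example role passed to Directory Services Protector"
+    Value              = "DSP Admin"   # value emitted in the role claim
+    IsEnabled          = $true
+}
+
+# Keep any roles the app already defines and add the new one.
+Update-MgApplication -ApplicationId $app.Id -AppRoles ($app.AppRoles + $newRole)
+```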
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Directory Services Protector.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Directory Services Protector**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Directory Services Protector SSO
+
+1. Log in to Directory Services Protector company site as an administrator.
+
+1. Go to **Settings (gear icon)** > **Data Connections** > **SAML Authentication** and toggle on the **Enabled** switch.
+
+    ![Screenshot shows the settings of the configuration.](./media/directory-services-protector-tutorial/settings.png "Account")
+
+1. In Step **1 - Identity provider**, select **Azure AD** from the drop-down menu and click **SAVE**.
+
+ ![Screenshot shows the admin configuration.](./media/directory-services-protector-tutorial/provider.png "Azure")
+
+1. In Step **2 - Data required by the SAML identity provider**, select the **CONFIRM** button, then select **DOWNLOAD METADATA XML** to get the file you upload through **Upload metadata file** in the **Basic SAML Configuration** section in the Microsoft Entra admin center, and click **SAVE**.
+
+ ![Screenshot shows settings of the identity provider.](./media/directory-services-protector-tutorial/identity.png "Center")
+
+1. Step **3 - User Attributes & Claims** doesn't require any information at this point, so skip to Step 4.
+
+1. In Step **4 - Data received from the SAML identity provider**, Directory Services Protector (DSP) supports both importing from a metadata URL and importing a metadata XML file provided by Microsoft Entra ID.
+
+    1. Select the **App federation metadata URL** radio button, paste the metadata URL from Microsoft Entra ID into the field, and then select **IMPORT**.
+
+ ![Screenshot shows settings of the app metadata URL.](./media/directory-services-protector-tutorial/field.png "Application")
+
+    1. Alternatively, select the **Import federation metadata XML** radio button and click **IMPORT XML** to upload the **Federation Metadata XML** file downloaded from the Microsoft Entra admin center.
+
+ ![Screenshot shows how to import federation file.](./media/directory-services-protector-tutorial/admin.png "Import")
+
+ 1. Click **SAVE**.
+
+1. At the top of the **SAML Authentication** blade in DSP, the **Status** should now show as **Configured**.
+
+ ![Screenshot shows the status of configuration.](./media/directory-services-protector-tutorial/status.png "Page")
+
+### Create Directory Services Protector test user
+
+In this section, a user called Britta Simon is created in Directory Services Protector. Directory Services Protector supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Directory Services Protector, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to Directory Services Protector Sign on URL where you can initiate the login flow.
+
+* Go to Directory Services Protector Sign on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the Directory Services Protector for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Directory Services Protector tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Directory Services Protector for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next Steps
+
+Once you configure Directory Services Protector you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory M Files Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/m-files-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure M-Files for automatic user provisioning with Microsoft Entra ID'
+description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to M-Files.
++
+writer: twimmers
+
+ms.assetid: 52b0484b-2a13-403b-9d2e-e99d2da5880f
+Last updated: 09/27/2023
+# Tutorial: Configure M-Files for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both M-Files and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users and groups to [M-Files](https://www.m-files.com/) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
+
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in M-Files.
+> * Remove users in M-Files when they do not require access anymore.
+> * Keep user attributes synchronized between Microsoft Entra ID and M-Files.
+> * Provision groups and group memberships in M-Files.
+> * [Single sign-on](m-files-tutorial.md) to M-Files (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [A Microsoft Entra tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in M-Files with Admin permissions.
+
+## Step 1: Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Microsoft Entra ID and M-Files](../app-provisioning/customize-application-attributes.md).
+
+<a name='step-2-configure-M-Files-to-support-provisioning-with-azure-ad'></a>
+
+## Step 2: Configure M-Files to support provisioning with Microsoft Entra ID
+Contact M-Files support to configure M-Files to support provisioning with Microsoft Entra ID.
+
+<a name='step-3-add-M-Files-from-the-azure-ad-application-gallery'></a>
+
+## Step 3: Add M-Files from the Microsoft Entra application gallery
+
+Add M-Files from the Microsoft Entra application gallery to start managing provisioning to M-Files. If you have previously set up M-Files for SSO you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4: Define who will be in scope for provisioning
+
+The Microsoft Entra provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5: Configure automatic user provisioning to M-Files
+
+This section guides you through the steps to configure the Microsoft Entra provisioning service to create, update, and disable users and/or groups in M-Files based on user and/or group assignments in Microsoft Entra ID.
+
+<a name='to-configure-automatic-user-provisioning-for-M-Files-in-azure-ad'></a>
+
+### To configure automatic user provisioning for M-Files in Microsoft Entra ID:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **M-Files**.
+
+ ![Screenshot of the M-Files link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your M-Files Tenant URL and Secret Token. Click **Test Connection** to ensure Microsoft Entra ID can connect to M-Files. If the connection fails, ensure your M-Files account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra users to M-Files**.
+
+1. Review the user attributes that are synchronized from Microsoft Entra ID to M-Files in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in M-Files for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the M-Files API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by M-Files|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |emails[type eq "work"].value|String|
+ |name.givenName|String||
+ |name.familyName|String||
+ |name.formatted|String||
+ |externalId|String||
+ |urn:ietf:params:scim:schemas:extension:info:2.0:User:info1|String||
+ |urn:ietf:params:scim:schemas:extension:info:2.0:User:info2|String||
+
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra groups to M-Files**.
+
+1. Review the group attributes that are synchronized from Microsoft Entra ID to M-Files in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in M-Files for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by M-Files|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Microsoft Entra provisioning service for M-Files, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to M-Files by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Microsoft Entra provisioning service is running.
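+
+For context on the custom `urn:ietf:params:scim:schemas:extension:info:2.0:User` attributes in the mapping table above, the sketch below illustrates the general shape of a SCIM user payload carrying them. It is purely illustrative: the `/scim/v2/Users` path, token, and attribute values are assumptions, and the provisioning service sends these requests for you once the status is **On**.
+
+```powershell
+# Illustration only: the wire format of a SCIM user with the "info" extension.
+$body = @'
+{
+  "schemas": [
+    "urn:ietf:params:scim:schemas:core:2.0:User",
+    "urn:ietf:params:scim:schemas:extension:info:2.0:User"
+  ],
+  "userName": "B.Simon@contoso.com",
+  "active": true,
+  "name": { "givenName": "B.", "familyName": "Simon" },
+  "urn:ietf:params:scim:schemas:extension:info:2.0:User": {
+    "info1": "example value 1",
+    "info2": "example value 2"
+  }
+}
+'@
+
+Invoke-RestMethod -Method Post -Uri "https://<M-FILES-TENANT-URL>/scim/v2/Users" `
+    -Headers @{ Authorization = "Bearer <SECRET-TOKEN>" } `
+    -ContentType "application/scim+json" -Body $body
+```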
+
+## Step 6: Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+* [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md).
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Mosaic Project Operations Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mosaic-project-operations-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Mosaic Project Operations
+description: Learn how to configure single sign-on between Microsoft Entra ID and Mosaic Project Operations.
+Last updated: 10/03/2023
+# Microsoft Entra SSO integration with Mosaic Project Operations
+
+In this tutorial, you'll learn how to integrate Mosaic Project Operations with Microsoft Entra ID. When you integrate Mosaic Project Operations with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Mosaic Project Operations.
+* Enable your users to be automatically signed-in to Mosaic Project Operations with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Mosaic Project Operations, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Mosaic Project Operations single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Mosaic Project Operations supports **SP** initiated SSO.
+
+## Adding Mosaic Project Operations from the gallery
+
+To configure the integration of Mosaic Project Operations into Microsoft Entra ID, you need to add Mosaic Project Operations from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Mosaic Project Operations** in the search box.
+1. Select **Mosaic Project Operations** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Mosaic Project Operations
+
+Configure and test Microsoft Entra SSO with Mosaic Project Operations using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Mosaic Project Operations.
+
+To configure and test Microsoft Entra SSO with Mosaic Project Operations, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Mosaic Project Operations SSO](#configure-mosaic-project-operations-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Mosaic Project Operations test user](#create-mosaic-project-operations-test-user)** - to have a counterpart of B.Simon in Mosaic Project Operations that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Mosaic Project Operations** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `https://auth.us-east-1.party.mosaicapp.com/<UUID>` |
+ | `https://auth.us-east-1.<ENVIRONMENT>.mosaicapp.com/<UUID>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://auth.us-east-1.party.mosaicapp.com/auth/saml/callback/<UUID>` |
+ | `https://auth.us-east-1.<ENVIRONMENT>.mosaicapp.com/auth/saml/callback/<UUID>` |
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://login.mosaicapp.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Mosaic Project Operations support team](mailto:support@mosaicapp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Mosaic Project Operations** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Mosaic Project Operations.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Mosaic Project Operations**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Mosaic Project Operations SSO
+
+To configure single sign-on on **Mosaic Project Operations** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [Mosaic Project Operations support team](mailto:support@mosaicapp.com). They configure this setting so that the SAML SSO connection is set properly on both sides. For more information, see [this documentation](https://readme.mosaicapp.com/docs/microsoft-azure-saml).
+
+### Create Mosaic Project Operations test user
+
+In this section, you create a user called B.Simon in Mosaic Project Operations. Work with [Mosaic Project Operations support team](mailto:support@mosaicapp.com) to add the users in the Mosaic Project Operations platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to Mosaic Project Operations Sign-on URL where you can initiate the login flow.
+
+* Go to Mosaic Project Operations Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Mosaic Project Operations tile in the My Apps, this will redirect to Mosaic Project Operations Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next Steps
+
+Once you configure Mosaic Project Operations you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Quarem Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/quarem-tutorial.md
+
+ Title: Microsoft Entra SSO integration with Quarem
+description: Learn how to configure single sign-on between Microsoft Entra ID and Quarem.
+Last updated: 09/22/2023
+# Microsoft Entra SSO integration with Quarem
+
+In this tutorial, you learn how to integrate Quarem with Microsoft Entra ID. When you integrate Quarem with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to Quarem.
+* Enable your users to be automatically signed-in to Quarem with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with Quarem, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Quarem single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* Quarem supports **IDP** initiated SSO.
+
+## Add Quarem from the gallery
+
+To configure the integration of Quarem into Microsoft Entra ID, you need to add Quarem from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **Quarem** in the search box.
+1. Select **Quarem** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for Quarem
+
+Configure and test Microsoft Entra SSO with Quarem using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Quarem.
+
+To configure and test Microsoft Entra SSO with Quarem, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure Quarem SSO](#configure-quarem-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Quarem test user](#create-quarem-test-user)** - to have a counterpart of B.Simon in Quarem that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Quarem** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Microsoft Entra.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
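+
+Before sending the metadata URL to Quarem, you can optionally verify what it exposes. The following is a small PowerShell sketch; the XML element path assumes the standard shape of the Microsoft Entra federation metadata document.
+
+```powershell
+# A sketch: download the federation metadata and read the token-signing certificate.
+$metadataUrl = "<APP-FEDERATION-METADATA-URL>"  # the value copied above
+
+[xml]$metadata = (Invoke-WebRequest -Uri $metadataUrl).Content
+
+# Element path is an assumption about the metadata shape (SAML IdP descriptor).
+$certBase64 = @($metadata.EntityDescriptor.IDPSSODescriptor.KeyDescriptor)[0].KeyInfo.X509Data.X509Certificate
+
+$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new(
+    [Convert]::FromBase64String($certBase64))
+$cert.NotAfter  # expiry of the certificate Quarem will trust
+```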
+
+### Create a Microsoft Entra ID test user
+
+In this section, you create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you enable B.Simon to use Microsoft Entra single sign-on by granting access to Quarem.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Quarem**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Quarem SSO
+
+To configure single sign-on on **Quarem** side, you need to send the **App Federation Metadata Url** to [Quarem support team](mailto:clientservices@quarem.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Quarem test user
+
+In this section, you create a user called B.Simon in Quarem. Work with [Quarem support team](mailto:clientservices@quarem.com) to add the users in the Quarem platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to the Quarem for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Quarem tile in the My Apps, you should be automatically signed in to the Quarem for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Quarem you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Screensteps Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/screensteps-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure ScreenSteps for automatic user provisioning with Microsoft Entra ID'
+description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to ScreenSteps.
++
+writer: twimmers
+
+ms.assetid: e6e21ae5-f1d4-479c-b5d9-1377a85ecb71
+Last updated: 10/04/2023
+# Tutorial: Configure ScreenSteps for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both ScreenSteps and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users and groups to [ScreenSteps](http://www.screensteps.com/) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
+
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in ScreenSteps.
+> * Remove users in ScreenSteps when they do not require access anymore.
+> * Keep user attributes synchronized between Microsoft Entra ID and ScreenSteps.
+> * Provision groups and group memberships in ScreenSteps.
+> * [Single sign-on](screensteps-tutorial.md) to ScreenSteps (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [A Microsoft Entra tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in ScreenSteps with Admin permissions.
+
+## Step 1: Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Microsoft Entra ID and ScreenSteps](../app-provisioning/customize-application-attributes.md).
+
+<a name='step-2-configure-ScreenSteps-to-support-provisioning-with-azure-ad'></a>
+
+## Step 2: Configure ScreenSteps to support provisioning with Microsoft Entra ID
+Contact ScreenSteps support to configure ScreenSteps to support provisioning with Microsoft Entra ID.
+
+<a name='step-3-add-ScreenSteps-from-the-azure-ad-application-gallery'></a>
+
+## Step 3: Add ScreenSteps from the Microsoft Entra application gallery
+
+Add ScreenSteps from the Microsoft Entra application gallery to start managing provisioning to ScreenSteps. If you have previously set up ScreenSteps for SSO you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4: Define who will be in scope for provisioning
+
+The Microsoft Entra provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When the scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5: Configure automatic user provisioning to ScreenSteps
+
+This section guides you through the steps to configure the Microsoft Entra provisioning service to create, update, and disable users and/or groups in ScreenSteps based on user and/or group assignments in Microsoft Entra ID.
+
+<a name='to-configure-automatic-user-provisioning-for-ScreenSteps-in-azure-ad'></a>
+
+### To configure automatic user provisioning for ScreenSteps in Microsoft Entra ID:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **ScreenSteps**.
+
+ ![Screenshot of the ScreenSteps link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your ScreenSteps Tenant URL and Secret Token. Click **Test Connection** to ensure Microsoft Entra ID can connect to ScreenSteps. If the connection fails, ensure your ScreenSteps account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra users to ScreenSteps**.
+
+1. Review the user attributes that are synchronized from Microsoft Entra ID to ScreenSteps in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in ScreenSteps for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the ScreenSteps API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by ScreenSteps|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |externalId|String||
+
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra groups to ScreenSteps**.
+
+1. Review the group attributes that are synchronized from Microsoft Entra ID to ScreenSteps in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in ScreenSteps for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by ScreenSteps|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Microsoft Entra provisioning service for ScreenSteps, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to ScreenSteps by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Microsoft Entra provisioning service is running.
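
If you prefer to watch the cycle from the command line, here's a minimal Microsoft Graph PowerShell sketch for checking the provisioning job's state. It assumes `$sp` holds the ScreenSteps service principal, as in the earlier sketch:

```powershell
# List the synchronization (provisioning) jobs on the service principal
# and show the current state of each.
Get-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id |
    Select-Object Id, TemplateId, @{ n = 'State'; e = { $_.Status.Code } }
```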
+
+## Step 6: Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully; a query sketch follows this list.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
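
As referenced in the first item, here's a minimal sketch for pulling recent provisioning log entries with Microsoft Graph PowerShell; the filter expression is an illustrative assumption:

```powershell
# Requires the Microsoft.Graph.Reports module and the AuditLog.Read.All scope.
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Fetch the 20 most recent provisioning events for the app and summarize them.
Get-MgAuditLogProvisioning -Filter "servicePrincipal/displayName eq 'ScreenSteps'" -Top 20 |
    Select-Object ActivityDateTime,
                  @{ n = 'Action'; e = { $_.ProvisioningAction } },
                  @{ n = 'Status'; e = { $_.ProvisioningStatusInfo.Status } }
```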
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+* [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md).
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory Xm Fax And Xm Send Secure Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xm-fax-and-xm-send-secure-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure XM Fax and XM SendSecure for automatic user provisioning with Microsoft Entra ID'
+description: Learn how to automatically provision and deprovision user accounts from Microsoft Entra ID to XM Fax and XM SendSecure.
++
+writer: twimmers
+
+ms.assetid: a2f34d7d-17f9-4620-973f-89887005f337
++++ Last updated : 10/04/2023+++
+# Tutorial: Configure XM Fax and XM SendSecure for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both XM Fax and XM SendSecure and Microsoft Entra ID to configure automatic user provisioning. When configured, Microsoft Entra ID automatically provisions and deprovisions users to [XM Fax and XM SendSecure](https://www.opentext.com/pro) using the Microsoft Entra provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in XM Fax and XM SendSecure.
+> * Remove users in XM Fax and XM SendSecure when they no longer require access.
+> * Keep user attributes synchronized between Microsoft Entra ID and XM Fax and XM SendSecure.
+> * [Single sign-on](xm-fax-and-xm-send-secure-tutorial.md) to XM Fax and XM SendSecure (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [A Microsoft Entra tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Microsoft Entra ID with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A user account in XM Fax and XM SendSecure with Admin permissions.
+
+## Step 1: Plan your provisioning deployment
+* Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+* Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Determine what data to [map between Microsoft Entra ID and XM Fax and XM SendSecure](../app-provisioning/customize-application-attributes.md).
+
+## Step 2: Configure XM Fax and XM SendSecure to support provisioning with Microsoft Entra ID
+Contact XM Fax and XM SendSecure support to configure XM Fax and XM SendSecure to support provisioning with Microsoft Entra ID.
+
+## Step 3: Add XM Fax and XM SendSecure from the Microsoft Entra application gallery
+
+Add XM Fax and XM SendSecure from the Microsoft Entra application gallery to start managing provisioning to XM Fax and XM SendSecure. If you have previously set up XM Fax and XM SendSecure for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4: Define who will be in scope for provisioning
+
+The Microsoft Entra provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). A Microsoft Graph PowerShell sketch of assigning a user follows the list below.
+
+* Start small. Test with a small set of users before rolling out to everyone. When the scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When the scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
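
A minimal Microsoft Graph PowerShell sketch of the assignment step follows; the app display name and user principal name are illustrative assumptions:

```powershell
# Requires the Microsoft.Graph PowerShell module.
Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All", "User.Read.All"

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'XM Fax and XM SendSecure'"
$user = Get-MgUser -UserId "bob@contoso.com"   # hypothetical user

# Add the user to the app's default role to place them in provisioning scope.
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id -BodyParameter @{
    principalId = $user.Id
    resourceId  = $sp.Id
    appRoleId   = "00000000-0000-0000-0000-000000000000"   # default access role
}
```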
+
+## Step 5: Configure automatic user provisioning to XM Fax and XM SendSecure
+
+This section guides you through the steps to configure the Microsoft Entra provisioning service to create, update, and disable users in XM Fax and XM SendSecure based on user assignments in Microsoft Entra ID.
+
+<a name='to-configure-automatic-user-provisioning-for-xm-fax-and-xm-sendsecure-in-azure-ad'></a>
+
+### To configure automatic user provisioning for XM Fax and XM SendSecure in Microsoft Entra ID:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications**
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **XM Fax and XM SendSecure**.
+
+ ![Screenshot of the XM Fax and XM SendSecure link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your XM Fax and XM SendSecure Tenant URL and Secret Token. Click **Test Connection** to ensure Microsoft Entra ID can connect to XM Fax and XM SendSecure. If the connection fails, ensure your XM Fax and XM SendSecure account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Microsoft Entra users to XM Fax and XM SendSecure**.
+
+1. Review the user attributes that are synchronized from Microsoft Entra ID to XM Fax and XM SendSecure in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in XM Fax and XM SendSecure for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the XM Fax and XM SendSecure API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by XM Fax and XM SendSecure|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String|&check;|&check;
+ |active|Boolean||
+ |title|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |externalId|String||&check;
+ |roles[primary eq "True"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Microsoft Entra provisioning service for XM Fax and XM SendSecure, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to XM Fax and XM SendSecure by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Microsoft Entra provisioning service is running.
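
The provisioning job can also be started from the command line. Here's a minimal Microsoft Graph PowerShell sketch; it assumes `$sp` holds the app's service principal, as in the earlier sketch:

```powershell
# Find the provisioning job on the service principal and start it.
$job = Get-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id
Start-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id -SynchronizationJobId $job.Id
```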
+
+## Step 6: Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md); a sketch for inspecting quarantine details follows this list.
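
As referenced in the last item, here's a minimal Microsoft Graph PowerShell sketch for inspecting quarantine details on the provisioning job, again assuming `$sp` from the earlier sketch:

```powershell
# Read the job status; the Quarantine property is populated when the
# application is in a quarantine state.
$job = Get-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id
$job.Status.Quarantine | Format-List Reason, CurrentBegan, SeriesBegan, SeriesCount
```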
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md
Previously updated : 9/15/2023 Last updated : 10/03/2023
pricing](https://www.microsoft.com/security/business/identity-access/microsoft-e
|**Lifecycle Management**| | | |
|Access reviews for service provider-assigned privileged roles | Closely monitor workload identities with impactful permissions | | Yes |
| Application authentication methods API | Allows IT admins to enforce best practices for how apps in their organizations use application authentication methods. | | Yes |
+| App Health Recommendations | Identify unused or inactive workload identities and their risk levels. Get remediation guidelines. | | Yes |
|**Identity Protection** | | | |
|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes |
You can purchase the plan through Enterprise Agreement (EA)/Enterprise Subscript
## Where can I find more feature details to determine if I need a license(s)?
-Microsoft Entra Workload ID has three premium features that require a license.
+Microsoft Entra Workload ID has four premium features that require a license.
- [Conditional Access](../conditional-access/workload-identity.md): Supports location or risk-based policies for workload identities.
suspicious changes to accounts.
Enables delegation of reviews to the right people, focused on the most important privileged roles.
+- [App health recommendations](/azure/active-directory/reports-monitoring/howto-use-recommendations): Provides you with personalized insights with actionable guidance so you can implement best practices, improve the state of your Microsoft Entra tenant, and optimize the configurations for your scenarios.
+
## What do the numbers in each category on the [Workload identities - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) mean?

Category definitions:

-- **Enterprise apps/Service Principals**: This category includes multi-tenant apps, gallery apps, non-gallery apps and service principals.
+- **Enterprise apps/Service Principals**: This category includes multitenant apps, gallery apps, non-gallery apps and service principals.
- **Microsoft apps**: Apps such as Outlook and Microsoft Teams.
applications for connecting resources that support Microsoft Entra authenticatio
All workload identities - service principals, apps, and managed identities - configured in your directory for a Microsoft Entra Workload ID Premium feature require a license. Customers don't need to license all the workload identities. You can find the right number of Workload ID licenses with the following guidance:
-1. Customer will need to license enterprise applications or service principals ONLY if they set up Conditional Access policies or use Identity Protection for them.
-2. Customers don't need to license applications at all, even if they are using Conditional Access policies.
-3. Customers will need to license managed identities, only when they set up access reviews for managed identities.
+1. Customers need to license enterprise applications or service principals ONLY if they set up Conditional Access policies or use Identity Protection for them.
+2. Customers don't need to license applications at all, even if they're using Conditional Access policies.
+3. Customers need to license managed identities, only when they set up access reviews for managed identities. A sketch for counting each workload identity type follows this list.
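
As a starting point for this counting exercise, here's a minimal Microsoft Graph PowerShell sketch that tallies the workload identities in a tenant by type; treat it as a rough input for license counting, not an official calculator:

```powershell
# Requires the Microsoft.Graph PowerShell module and the Application.Read.All scope.
Connect-MgGraph -Scopes "Application.Read.All"

# Group all service principals by type (Application, ManagedIdentity, Legacy, ...).
Get-MgServicePrincipal -All | Group-Object -Property ServicePrincipalType |
    Select-Object Name, Count
```
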
You can find the number of each workload identity type (enterprise apps/service principals, apps, managed identities) on the product landing page at the [Microsoft Entra admin center](https://entra.microsoft.com).

## Do these licenses require individual workload identities assignment?
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-considerations.md
Previously updated : 08/11/2023 Last updated : 10/04/2023 -+
The creation of federated identity credentials is available on user-assigned man
- Brazil Southeast
- Malaysia South
- Poland Central
-- UK North
-- UK South2
-
Support for creating federated identity credentials in these regions will be rolled out gradually, except East Asia, where support won't be provided.
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Title: Reliability recommendations
description: Full list of available reliability recommendations in Advisor. - Last updated 09/27/2023
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgr
### Enable virtual machine replication to protect your applications from regional outage
-Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduce any adverse business impact during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the below list so that in an event of an outage, you can quickly bring up your machines in remote Azure region.
+Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication for all the business-critical virtual machines in the following list so that, in the event of an outage, you can quickly bring up your machines in a remote Azure region.
Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms). ### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost
Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServi
### Update your firewall configurations to allow new RHUI 4 IPs
-Your Virtual Machine Scale Sets will start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
+Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
Learn more about [Virtual machine scale set - Rhui3ToRhui4MigrationVMSS (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list). ### Update your firewall configurations to allow new RHUI 4 IPs
-Your Virtual Machine Scale Sets will start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
+Your Virtual Machine Scale Sets start receiving package content from RHUI4 servers on October 12, 2023. If you're allowing RHUI 3 IPs [https://aka.ms/rhui-server-list] via firewall and proxy, allow the new RHUI 4 IPs [https://aka.ms/rhui-server-list] to continue receiving RHEL package updates.
Learn more about [Virtual machine - Rhui3ToRhui4MigrationV2 (Update your firewall configurations to allow new RHUI 4 IPs)](https://aka.ms/rhui-server-list).
Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to imp
### Check Point Virtual Machine may lose Network Connectivity
-We have identified that your Virtual Machine might be running a version of Check Point image that has been known to lose network connectivity in the event of a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
+We have identified that your Virtual Machine may be running a version of the Check Point image that might lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image. Contact Check Point for further instructions on how to upgrade your image.
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Acces
### Clusters having node pools using non-recommended B-Series
-Cluster has one or more node pools using a non-recommended burstable VM SKU. With burstable VMs, full vCPU capability 100% is unguaranteed. Please make sure B-series VM's are not used in Production environment.
+Your cluster has one or more node pools using a non-recommended burstable VM SKU. With burstable VMs, the full 100% vCPU capability isn't guaranteed. Make sure B-series VMs aren't used in a production environment.
Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having node pools using non-recommended B-Series)](/azure/virtual-machines/sizes-b-series-burstable).
Learn more about [Kubernetes service - ClustersUsingBSeriesVMs (Clusters having
### Replication - Add a primary key to the table that currently does not have one
-Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica server can effectively synchronize with the primary and keep up with changes, we highly recommend adding primary keys to the tables in the primary server and subsequently recreating the replica server.
+Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server and then recreate the replica server.
Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently does not have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table). ### High Availability - Add primary key to the table that currently does not have one
-Our internal monitoring system has identified significant replication lag on the High Availability standby server. This lag is primarily caused by the standby server replaying relay logs on a table that lacks a primary key. To address this issue and adhere to best practices, it is recommended to add primary keys to all tables. Once this is done, proceed to disable and then re-enable High Availability to mitigate the problem.
+Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, disable and then re-enable High Availability to mitigate the problem.
Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently does not have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServer
### Improve PostgreSQL availability by removing inactive logical replication slots
-Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding). ### Improve PostgreSQL availability by removing inactive logical replication slots
-Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to
### IoT Hub Potential Device Storm Detected
-This is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected.
+A device storm is when two or more devices are trying to connect to the IoT Hub using the same device ID credentials. When the second device (B) connects, it causes the first one (A) to become disconnected. Then (A) attempts to reconnect again, which causes (B) to get disconnected.
Learn more about [IoT hub - IoTHubDeviceStorm (IoT Hub Potential Device Storm Detected)](https://aka.ms/IotHubDeviceStorm). ### Upgrade Device Update for IoT Hub SDK to a supported version
-Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+Your Device Update for IoT Hub Instance is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities.
Learn more about [IoT hub - DU_SDK_Advisor_Recommendation (Upgrade Device Update for IoT Hub SDK to a supported version)](/azure/iot-hub-device-update/understand-device-update). ### IoT Hub Quota Exceeded Detected
-We have detected that your IoT Hub has exceeded its daily message quota. Consider adding units or increasing the SKU level to prevent this in the future.
+We have detected that your IoT Hub has exceeded its daily message quota. To prevent this in the future, add units or increase the SKU level.
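
If you manage the hub with Azure PowerShell, here's a minimal sketch of raising the daily quota by changing the SKU and unit count (Az.IotHub module; resource names are illustrative assumptions):

```powershell
# Move the hub to the S2 tier with two units to raise the daily message quota.
Set-AzIotHub -ResourceGroupName "myResourceGroup" -Name "myIotHub" -SkuName S2 -Units 2
```
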
Learn more about [IoT hub - IoTHubQuotaExceededAdvisor (IoT Hub Quota Exceeded Detected)](/azure/iot-hub/troubleshoot-error-codes#403002-iothubquotaexceeded).
Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Cons
### Upgrade your old Azure Cosmos DB SDK to the latest version
-Your Azure Cosmos DB account is using an old version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+Your Azure Cosmos DB account is using an old version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities.
Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml). ### Upgrade your outdated Azure Cosmos DB SDK to the latest version
-Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities.
Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml). ### Configure your Azure Cosmos DB containers with a partition key
-Your Azure Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
+Your Azure Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so the service can automatically scale them out.
Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey). ### Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features
-Your Azure Cosmos DB for MongoDB account is eligible to upgrade to version 4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0.
+Your Azure Cosmos DB for MongoDB account is eligible to upgrade to version 4.0. Reduce your storage costs by up to 55% and your query costs by up to 45% by upgrading to v4.0's new storage format. Numerous other features, such as multi-document transactions, are also included in v4.0.
Learn more about [Azure Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade). ### Add a second region to your production workloads on Azure Cosmos DB
-Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
+Based on their names and configuration, we have detected the Azure Cosmos DB accounts listed as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
> [!NOTE] > Additional regions incur extra costs.
Learn more about [Azure Cosmos DB account - CosmosDBMongoServerSideRetries (Enab
### Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features
-Migrate your database account to a new database account to take advantage of Azure Cosmos DB for MongoDB v4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
+Migrate your database account to a new database account to take advantage of Azure Cosmos DB for MongoDB v4.0. Reduce your storage costs by up to 55% and your query costs by up to 45% by upgrading to v4.0's new storage format. Numerous other features, such as multi-document transactions, are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
Learn more about [Azure Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40).
Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cos
### Avoid being rate limited from metadata operations
-We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. Avoid being rate limited from metadata operations by using static Azure Cosmos DB client instances in your code and caching the names of databases and collections.
+We found a high number of metadata operations on your account. Your data in Azure Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. A high number of metadata operations can cause rate limiting. Avoid this by using static Azure Cosmos DB client instances in your code, and caching the names of databases and collections.
Learn more about [Azure Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use
### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
-There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparent to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: This is a critical hotfix for the Async Java SDK v2, however we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
+There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2, causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: There is a critical hotfix for the Async Java SDK v2; however, we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md). ### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue
-There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparent to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container.
+There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container.
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md).
Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgra
### Upgrade your Azure Fluid Relay client library
-You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article.
+You have recently invoked the Azure Fluid Relay service with an old client library. Your Azure Fluid Relay client library should now be upgraded to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality and enhancements in performance and stability. For more information on the latest version to use and how to upgrade, see the following article.
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework). ## HDInsight
+### Your cluster running Ubuntu 16.04 is out of support
+
+We detected that your HDInsight cluster still uses Ubuntu 16.04 LTS. End of support for Azure HDInsight clusters on Ubuntu 16.04 LTS began on November 30, 2022. Existing clusters run as is without support from Microsoft. Consider rebuilding your cluster with the latest images.
+
+Learn more about [HDInsight cluster - ubuntu1604HdiClusters (Your cluster running Ubuntu 16.04 is out of support)](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions).
+
+### Upgrade your HDInsight Cluster
+
+We detected that your cluster isn't using the latest image. We recommend that you use the latest versions of HDInsight images, as they bring the best of open-source updates, Azure updates, and security fixes. HDInsight releases happen every 30 to 60 days. Consider moving to the latest release.
+
+Learn more about [HDInsight cluster - upgradeHDInsightCluster (Upgrade your HDInsight Cluster)](/azure/hdinsight/hdinsight-release-notes).
+
+### Your cluster was created one year ago
+
+We detected that your cluster was created one year ago. As a best practice, we recommend that you use the latest HDInsight images, as they bring the best of open-source updates, Azure updates, and security fixes. The recommended maximum duration for cluster upgrades is less than six months.
+
+Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was created one year ago)](/azure/hdinsight/hdinsight-overview-before-you-start#keep-your-clusters-up-to-date).
+
+### Your Kafka Cluster Disks are almost full
+
+The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails with a disk full error. To mitigate, find the retention time for every topic, back up the older files, and restart the brokers.
+
+Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk).
+
+### Creation of clusters under custom VNet requires more permission
+
+Your clusters with custom VNet were created without VNet joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023.
+
+Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet).
+ ### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster Starting July 1, 2020, you can't create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Old
### Enable critical updates to be applied to your HDInsight clusters
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Take actions to allow HDInsight service to create or modify network resources such as Load balancer, Network interface and Public IP address, associated with your clusters before January 13, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 13, 2021 05:00 PM UTC and January 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and re
### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before January 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources such as Load balancer, Network interface and Public IP address, associated with your clusters before January 21, 2021 05:00 PM UTC. The HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
### Increase Media Services quotas or limits to ensure continuity of service
-Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create additional Azure Media accounts in an attempt to obtain higher limits.
+Your media account is about to hit its quota limits. Review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Don't create extra Azure Media accounts in an attempt to obtain higher limits.
Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
+## Azure NetApp Files
+
+### Implement disaster recovery strategies for your Azure NetApp Files Resources
+
+To avoid data or functionality loss in the event of a regional or zonal disaster, implement common disaster recovery techniques such as cross-region replication or cross-zone replication for your Azure NetApp Files volumes.
+
+Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr).
+
+### Azure NetApp Files Enable Continuous Availability for SMB Volumes
+
+We recommend that you enable Continuous Availability for your SMB volumes.
+
+Learn more about [Volume - anfcaenablement (Azure NetApp Files Enable Continuous Availability for SMB Volumes)](https://aka.ms/anfdoc-continuous-availability).
+ ## Networking ### Upgrade your SKU or add more instances to ensure fault tolerance
Learn more about [Traffic Manager profile - ProximityProfile (Add or move one en
### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
-We have detected that your ExpressRoute gateway only has 1 ExpressRoute circuit associated to it. Connect one or more additional circuits to your gateway to ensure peering location redundancy and resiliency
+We have detected that your ExpressRoute gateway only has one ExpressRoute circuit associated with it. Connect one or more extra circuits to your gateway to ensure peering location redundancy and resiliency.
Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](../expressroute/designing-for-high-availability-with-expressroute.md). ### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit
-We have detected that your ExpressRoute circuit isn't currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute monitor provides end-to-end monitoring capabilities including: Loss, latency, and performance from on-premises to Azure and Azure to on-premises
+We have detected that ExpressRoute Monitor on Network Performance Monitor isn't currently monitoring your ExpressRoute circuit. ExpressRoute Monitor provides end-to-end monitoring capabilities, including loss, latency, and performance from on-premises to Azure and Azure to on-premises.
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](../expressroute/how-to-npm.md). ### Avoid hostname override to ensure site integrity
-Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
+Try to avoid overriding the hostname when configuring Application Gateway. Having a domain on the frontend of Application Gateway that's different from the one used to access the backend can potentially lead to cookies or redirect URLs being broken. This might not be the case in all situations, and certain categories of backends, like REST APIs, are less sensitive in general. Make sure the backend is able to deal with this, or update the Application Gateway configuration so the hostname doesn't need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the `*.azurewebsites.net` host name towards the backend.
Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute G
### Azure WAF RuleSet CRS 3.1/3.2 has been updated with Log4j 2 vulnerability rule
-In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
+In response to Log4j 2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide extra protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable them.
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WA
To mitigate the impact of Log4j 2 vulnerability, we recommend these steps:
-1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
-2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU
+1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided.
+2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU.
Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT g
Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet).
+### Update VNet permission of Application Gateway users
+
+To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must have at least the Microsoft.Network/virtualNetworks/subnets/join/action permission.
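
Here's a minimal Az PowerShell sketch of granting a built-in role that includes this permission on the virtual network; the sign-in name and scope are illustrative assumptions, and a custom role containing just the join action also works:

```powershell
# Network Contributor includes Microsoft.Network/virtualNetworks/subnets/join/action.
New-AzRoleAssignment -SignInName "alice@contoso.com" `
    -RoleDefinitionName "Network Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```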
+
+Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin).
+
+### Use version-less Key Vault secret identifier to reference the certificates
+
+We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion).
+ ### Enable Active-Active gateways for redundancy

In active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Acti
### Enable soft delete for your Recovery Services vaults
-Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it's permanently deleted.
+The soft delete option helps you retain your backup data in the Recovery Services vault for an extra duration after deletion. This gives you an opportunity to retrieve the data before it's permanently deleted.
Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md).
Learn more about [Service limits in Azure Cognitive Search](/azure/search/search
### You are close to exceeding your available storage quota. Add additional partitions if you need more storage
-you're close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
+You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+## Azure SQL
+
+### Enable Azure backup for SQL on your virtual machines
+
+Enable backups for SQL databases on your virtual machines using Azure Backup and realize the benefits of zero-infrastructure backup, point-in-time restore, and central management with SQL AG integration.
+
+Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backup for SQL on your virtual machines)](/azure/backup/backup-azure-sql-database).
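A rough sketch of enabling protection for a discovered SQL database with the Azure CLI, assuming the VM is already registered to a Recovery Services vault; all names are hypothetical, and you can discover the exact item name with `az backup protectable-item list`:

```bash
# Hypothetical vault, policy, VM, and database names.
# The protectable item name format can be discovered with:
#   az backup protectable-item list --resource-group rg-backup --vault-name rsv-contoso --workload-type MSSQL
az backup protection enable-for-azurewl \
  --resource-group rg-backup \
  --vault-name rsv-contoso \
  --policy-name HourlyLogBackup \
  --protectable-item-type SQLDataBase \
  --protectable-item-name "sqldatabase;mssqlserver;testdb" \
  --server-name sqlvm01 \
  --workload-type MSSQL
```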
+ ## Storage

### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2
-As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend migrating your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities specifically designed for big data analytics and is built on top of Azure Blob Storage.
+As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities specifically designed for big data analytics, and is built on top of Azure Blob Storage.
Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).

### Enable Soft Delete to protect your blob data
-After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
+After enabling the soft delete option, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
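Soft delete for blobs can be enabled on the storage account's blob service; for example, with a 7-day retention window (the account name is hypothetical):

```bash
# Enable blob soft delete with a 7-day retention window.
az storage account blob-service-properties update \
  --resource-group rg-storage \
  --account-name stcontoso \
  --enable-delete-retention true \
  --delete-retention-days 7
```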
We have identified that you're using Premium SSD Unmanaged Disks in Storage acco
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
+### Configure blob backup
+
+Configure blob backup to protect your blob data against accidental or malicious deletion.
+
+Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview).
+
+## Subscriptions
+
+### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data
+
+Keep your information and applications safe with robust, one-click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads, including VMs, SQL databases, applications, and file shares.
+
+Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/).
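For example, protecting a VM with an existing Recovery Services vault and the default policy might look like this (names hypothetical):

```bash
# Enable backup for a VM using the vault's default policy.
az backup protection enable-for-vm \
  --resource-group rg-backup \
  --vault-name rsv-contoso \
  --vm vm-contoso \
  --policy-name DefaultPolicy
```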
+ ## Web

### Consider scaling out your App Service Plan to avoid CPU exhaustion
Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider u
### Application code should be fixed as worker process crashed due to Unhandled Exception
-We identified the below thread resulted in an unhandled exception for your App and application code should be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process.
+We identified that the following thread resulted in an unhandled exception for your app, and the application code should be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process.
Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).

### Consider changing your App Service configuration to 64-bit
-We identified your application is running in 32-bit and the memory is reaching the 2GB limit.
-Consider switching to 64-bit processes so you can take advantage of the additional memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
+We identified that your application is running in 32-bit mode and the memory is reaching the 2-GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
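Switching the worker process to 64-bit can be done per app, for instance (app and group names hypothetical; remember this restarts the app):

```bash
# Move the app off the 32-bit worker process; triggers a restart.
az webapp config set \
  --resource-group rg-web \
  --name app-contoso \
  --use-32bit-worker-process false
```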
+## SAP solutions on Azure
+
+### Review SAP configuration for timeout values used with Azure NetApp Files
+
+High availability of SAP when used with Azure NetApp Files relies on setting proper timeout values to prevent disruption to your application. Review the documentation to ensure your configuration meets the recommended timeout values.
+
+Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout values used with Azure NetApp Files)](/azure/sap/workloads/get-started).
+
+### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads
+
+Enable HA ports in the load balancing rules for the HA setup of the ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Central Server Instance - ASCSHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads
+
+Enable floating IP in the load balancing rules of the Azure Load Balancer for the HA setup of the ASCS instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Central Server Instance - ASCSHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads
+
+To prevent load balancer timeout, make sure that all Azure load balancing rules have 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
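A hedged Azure CLI sketch of a rule that satisfies the three load balancer recommendations above (HA ports, floating IP, and the 30-minute idle timeout); the same rule shape applies to the HANA DB entries that follow. All resource names are hypothetical:

```bash
# HA ports = protocol All with frontend/backend port 0; floating IP and
# a 30-minute idle timeout complete the recommended ASCS/HANA rule shape.
az network lb rule create \
  --resource-group rg-sap \
  --lb-name lb-ascs \
  --name haRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name frontend-ascs \
  --backend-pool-name backend-ascs \
  --floating-ip true \
  --idle-timeout 30
```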
+
+### Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads
+
+Enable HA ports in the load balancing rules for the HA setup of the HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the Azure Load Balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads
+
+To prevent load balancer timeout, make sure that all Azure load balancing rules have 'Idle timeout (minutes)' set to the maximum value of 30 minutes. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Database Instance - DBHASetIdleTimeOutLB (Set the Idle timeout in Azure Load Balancer to 30 minutes for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads
+
+Enable floating IP in the load balancing rules of the Azure Load Balancer for the HA setup of the HANA DB instance in SAP workloads. Open the load balancer, select 'load balancing rules', and add/edit the rule to enable the recommended settings.
+
+Learn more about [Database Instance - DBHAEnableFloatingIpLB (Enable Floating IP in the Azure Load balancer for HANA DB HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads
+
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack, causing the load balancer to mark the endpoint as down.
+
+Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](/azure/sap/workloads/sap-hana-high-availability).
+
+### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads
+
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack, causing the load balancer to mark the endpoint as down.
+
+Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview).
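On the VMs themselves, disabling TCP timestamps is a one-line sysctl change plus persistence; a sketch that applies to either the ASCS or HANA DB nodes:

```bash
# Disable TCP timestamps immediately and persist the setting across reboots.
sudo sysctl -w net.ipv4.tcp_timestamps=0
echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf
```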
+
+### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with Redhat OS
+
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node In The Head) resource. Ensure that `stonith-enabled` is set to `true` in the HA cluster configuration of your SAP workload.
+
+Learn more about [Database Instance - StonithEnabledHARH (Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
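On RHEL-based Pacemaker clusters this is a single property change; a sketch, with the SLES crmsh equivalent shown as a comment:

```bash
# RHEL (pcs): enable STONITH cluster-wide.
sudo pcs property set stonith-enabled=true

# SLES (crmsh) equivalent:
# sudo crm configure property stonith-enabled=true
```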
+
+### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS
+
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+
+Learn more about [Database Instance - CorosyncTokenHARH (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with Redhat OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads
+
+In case of a two-node HA cluster, set the quorum votes to 2 as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - ExpectedVotesParamtersHARH (Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set the stonith timeout to 144 for the cluster configuration in HA enabled SAP workloads
+
+Set the stonith timeout to 144 for the HA cluster as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - StonithTimeoutHASLE (Set the stonith timeout to 144 for the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with SUSE OS
+
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node In The Head) resource. Ensure that `stonith-enabled` is set to `true` in the HA cluster configuration.
+
+Learn more about [Database Instance - StonithEnabledHASLE (Enable stonith in the cluster configuration in HA enabled SAP workloads for VMs with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS
+
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+
+Learn more about [Database Instance - CorosyncTokenHASLE (Set the corosync token in Pacemaker cluster to 30000 for HA enabled HANA DB for VM with SUSE OS)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads
+
+The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for the ASCS HA setup.
+
+Learn more about [Central Server Instance - ConcurrentFencingHAASCSRH (Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
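As a sketch, the property can be set cluster-wide with pcs on RHEL:

```bash
# Allow fencing operations to run in parallel.
sudo pcs property set concurrent-fencing=true
```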
+
+### Ensure that stonith is enabled for the Pacemaker configuration in ASCS HA setup in SAP workloads
+
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node In The Head) resource. Ensure that `stonith-enabled` is set to `true` in the HA cluster configuration of your SAP workload.
+
+Learn more about [Central Server Instance - StonithEnabledHAASCSRH (Ensure that stonith is enabled for the Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
+
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+
+Learn more about [Central Server Instance - CorosyncTokenHAASCSRH (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup
+
+The parameter PREFER_SITE_TAKEOVER in the SAP HANA topology defines whether the HANA SR resource agent should prefer to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup.
+
+Learn more about [Database Instance - PreferSiteTakeOverHARH (Set parameter PREFER_SITE_TAKEOVER to 'true' in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set the expected votes parameter to 2 in Pacemaker configuration in ASCS HA setup in SAP workloads
+
+In case of a two-node HA cluster, set the quorum votes to 2 as per the recommendation for SAP on Azure.
+
+Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expected votes parameter to 2 in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup
+
+The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for the HANA DB HA setup.
+
+Learn more about [Database Instance - ConcurrentFencingHARH (Enable the 'concurrent-fencing' parameter in the Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel).
+
+### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads
+
+The corosync token_retransmits_before_loss_const determines how many token retransmits the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup.
+
+Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads
+
+The corosync token setting determines the timeout that is used directly or as a base for real token timeout calculation in HA clusters. Set the corosync token to 30000 as per recommendation for SAP on Azure to allow memory-preserving maintenance.
+
+Learn more about [Central Server Instance - CorosyncTokenHAASCSSLE (Set the corosync token in Pacemaker cluster to 30000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads
+
+The corosync max_messages constant specifies the maximum number of messages that may be sent by one processor on receipt of the token. We recommend that you set it to 20, together with the corosync token parameter, in the Pacemaker cluster configuration.
+
+Learn more about [Central Server Instance - CorosyncMaxMessagesHAASCSSLE (Set the 'corosync max_messages' in Pacemaker cluster to 20 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads
+
+The corosync parameter 'consensus' specifies, in milliseconds, how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set it to 1.2 times the corosync token (36000) in the Pacemaker cluster configuration for the ASCS HA setup.
+
+Learn more about [Central Server Instance - CorosyncConsensusHAASCSSLE (Set the 'corosync consensus' in Pacemaker cluster to 36000 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the expected votes parameter to 2 in the cluster configuration in ASCS HA setup in SAP workloads
+
+In case of a two-node HA cluster, set the quorum parameter expected_votes to 2 as per the recommendation for SAP on Azure.
+
+Learn more about [Central Server Instance - ExpectedVotesHAASCSSLE (Set the expected votes parameter to 2 in the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the stonith timeout to 144 for the cluster configuration in ASCS HA setup in SAP workloads
+
+Set the stonith timeout to 144 for the HA cluster as per the recommendation for SAP on Azure.
+
+Learn more about [Central Server Instance - StonithTimeOutHAASCS (Set the stonith timeout to 144 for the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads
+
+The parameter PREFER_SITE_TAKEOVER in the SAP HANA topology defines whether the HANA SR resource agent should prefer to take over to the secondary instance instead of restarting the failed primary locally. Set it to 'true' for reliable functioning of the HANA DB HA setup.
+
+Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFER_SITE_TAKEOVER to 'true' in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the two_node parameter to 1 in the cluster configuration in ASCS HA setup in SAP workloads
+
+In case of a two-node HA cluster, set the quorum parameter 'two_node' to 1 as per the recommendation for SAP on Azure.
+
+Learn more about [Central Server Instance - TwoNodesParametersHAASCSSLE (Set the two_node parameter to 1 in the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads
+
+The corosync join timeout specifies, in milliseconds, how long to wait for join messages in the membership protocol. We recommend that you set it to 60 in the Pacemaker cluster configuration for the ASCS HA setup.
+
+Learn more about [Central Server Instance - CorosyncJoinHAASCSSLE (Set the 'corosync join' in Pacemaker cluster to 60 for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
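Taken together, the corosync recommendations in this section (token, retransmits, join, consensus, max_messages, expected_votes, two_node) describe stanzas roughly like the following illustrative excerpt; merge these values into the existing totem and quorum sections of `/etc/corosync/corosync.conf` on each node rather than appending new sections:

```bash
# Print the recommended values; merge them into the existing
# totem/quorum stanzas of /etc/corosync/corosync.conf on each node.
cat <<'EOF'
totem {
  token: 30000
  token_retransmits_before_loss_const: 10
  join: 60
  consensus: 36000
  max_messages: 20
}
quorum {
  expected_votes: 2
  two_node: 1
}
EOF
```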
+
+### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads
+
+The corosync token_retransmits_before_loss_const determines how many token retransmits should be attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup.
+
+Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads
+
+In case of a two-node HA cluster, set the quorum parameter expected_votes to 2 as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - ExpectedVotesSuseHDB (Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads
+
+In case of a two-node HA cluster, set the quorum parameter 'two_node' to 1 as per the recommendation for SAP on Azure.
+
+Learn more about [Database Instance - TwoNodeParameterSuseHDB (Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads
+
+The corosync join timeout specifies, in milliseconds, how long to wait for join messages in the membership protocol. We recommend that you set it to 60 in the Pacemaker cluster configuration for the HANA DB HA setup.
+
+Learn more about [Database Instance - CorosyncHDB (Set the 'corosync join' in Pacemaker cluster to 60 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Ensure that stonith is enabled for the cluster configuration in ASCS HA setup in SAP workloads
+
+In a Pacemaker cluster, node level fencing is implemented using the STONITH (Shoot The Other Node In The Head) resource. Ensure that `stonith-enabled` is set to `true` in the HA cluster configuration.
+
+Learn more about [Central Server Instance - StonithEnabledHAASCS (Ensure that stonith is enabled for the cluster configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable the 'concurrent-fencing' parameter in the cluster configuration in HA enabled SAP workloads
+
+The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for the HANA DB HA setup.
+
+Learn more about [Database Instance - ConcurrentFencingSuseHDB (Enable the 'concurrent-fencing' parameter in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads
+
+The corosync max_messages constant specifies the maximum number of messages that may be sent by one processor on receipt of the token. We recommend that you set it to 20, together with the corosync token parameter, in the Pacemaker cluster configuration.
+
+Learn more about [Database Instance - CorosyncMaxMessageHDB (Set the 'corosync max_messages' in Pacemaker cluster to 20 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads
+
+The corosync parameter 'consensus' specifies, in milliseconds, how long to wait for consensus to be achieved before starting a new round of membership in the cluster configuration. We recommend that you set it to 1.2 times the corosync token (36000) in the Pacemaker cluster configuration for the HANA DB HA setup.
+
+Learn more about [Database Instance - CorosyncConsensusHDB (Set the 'corosync consensus' in Pacemaker cluster to 36000 for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads
+
+The concurrent-fencing parameter, when set to true, enables fencing operations to be performed in parallel. Set this parameter to 'true' in the Pacemaker cluster configuration for the ASCS HA setup.
+
+Learn more about [Central Server Instance - ConcurrentFencingHAASCSSLE (Enable the 'concurrent-fencing' parameter in Pacemaker configuration in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup
+
+Set `stonith-timeout` to 900 for reliable functioning of Pacemaker in the ASCS HA setup. This is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal.
+
+Learn more about [Central Server Instance - StonithTimeOutHAASCSSLE (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
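With crmsh on SLES, the property change is a one-liner (a sketch):

```bash
# Give the Azure fence agent enough time to complete fencing operations.
sudo crm configure property stonith-timeout=900
```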
+
+### Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads
+
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for the ASCS HA setup.
+
+Learn more about [Central Server Instance - SoftdogConfigHAASCSSLE (Create the softdog config file in Pacemaker configuration for ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Ensure the softdog module is loaded for Pacemaker in ASCS HA setup in SAP workloads
+
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for the ASCS HA setup.
+
+Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
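The two softdog recommendations amount to creating the config file and loading the module on each cluster node, for example:

```bash
# Create the softdog config so the module loads at boot...
echo softdog | sudo tee /etc/modules-load.d/softdog.conf
# ...and load it now without rebooting.
sudo modprobe -v softdog
```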
+
+### Create the softdog config file in Pacemaker configuration for HA enabled HANA DB in SAP workloads
+
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. Ensure that the softdog configuration file is created in the Pacemaker cluster for the HANA DB HA setup.
+
+Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup
+
+Set `stonith-timeout` to 900 for reliable functioning of Pacemaker in the HANA DB HA setup. This is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal.
+
+Learn more about [Database Instance - StonithTimeOutSuseHDB (Set stonith-timeout to 900 in Pacemaker configuration with Azure fence agent for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### There should be one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup
+
+fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the Pacemaker configuration for the ASCS HA setup. This is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal.
+
+Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (There should be one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup
+
+fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the Pacemaker configuration for the HANA DB HA setup. This is applicable if you're using the Azure fence agent for fencing with either managed identity or service principal.
+
+Learn more about [Database Instance - FenceAzureArmSuseHDB (There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+
+### Ensure the softdog module is loaded for Pacemaker in HA enabled HANA DB in SAP workloads
+
+The softdog timer is loaded as a kernel module in the Linux OS. This timer triggers a system reset if it detects that the system has hung. First ensure that you created the softdog configuration file, then load the softdog module in the Pacemaker configuration for the HANA DB HA setup.
+
+Learn more about [Database Instance - SoftdogModuleSuseHDB (Ensure the softdog module is loaded for Pacemaker in HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
++ ## Next steps

Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The default quota for models varies by model and region. Default quota limits ar
<tr> <td rowspan="2">gpt-4-32k</td> <td>East US, South Central US, West Europe, France Central</td>
- <td>40 K</td>
+ <td>60 K</td>
</tr> <tr> <td>North Central US, Australia East, East US 2, Canada East, Japan East, UK South, Sweden Central, Switzerland North</td>
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 09/15/2023 Last updated : 10/3/2023 zone_pivot_groups: speech-cli-rest
To create a transcription, use the [Transcriptions_Create](https://eastus.dev.co
- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
- Optionally to use a model other than the base model, set the `model` property to the model ID. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).
-- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`.
-- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For Whisper models set the `displayFormWordLevelTimestampsEnabled` property instead. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
+- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
For more information, see [request configuration options](#request-configuration-options).
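As a sketch, a minimal Transcriptions_Create request with word-level timestamps enabled might look like the following; the region, key variable, and audio URL are placeholders:

```bash
# Create a batch transcription via the Speech to text v3.1 REST API.
curl -X POST "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "My transcription",
        "locale": "en-US",
        "contentUrls": ["https://crbn.us/hello.wav"],
        "properties": { "wordLevelTimestampsEnabled": true }
      }'
```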
Here are some property options that you can use to configure a transcription whe
|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
|`model`|You can set the `model` property to use a specific base model or [Custom Speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
-|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.|
+|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.<br/><br/>This property isn't applicable for Whisper models.|
|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly after you retrieve the transcription results.|
|`wordLevelTimestampsEnabled`|Specifies if word level timestamps should be included in the output. The default value is `false`.<br/><br/>This property isn't applicable for Whisper models. Whisper is a display-only model, so the lexical field isn't populated in the transcription.|
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
The following YAML creates a pod that uses the persistent volume claim *my-azure
```yaml kind: Pod
- apiVersion: v1
- metadata:
- name: mypod
- spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: /mnt/azure
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
+apiVersion: v1
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: /mnt/azure
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
You can also configure more granular details of the cluster autoscaler by changi
| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
-| skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | false |
+| skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |
| skip-nodes-with-system-pods | If true, cluster autoscaler doesn't delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes |
| new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. | 0 seconds |
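Individual profile values can be overridden per cluster, for example (cluster names hypothetical):

```bash
# Override one cluster autoscaler profile setting on an existing cluster.
az aks update \
  --resource-group rg-aks \
  --name aks-contoso \
  --cluster-autoscaler-profile skip-nodes-with-local-storage=false
```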
To further help improve cluster resource utilization and free up CPU and memory
[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
-[kubernetes-cluster-autoscaler]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
+[kubernetes-cluster-autoscaler]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
The following limitations apply when you create AKS clusters that support multip
* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. The feature isn't supported with Basic SKU load balancers.
* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter.
- * For Linux node pools, the length must be between 1-12 characters.
+ * For Linux node pools, the length must be between 1-11 characters.
  * For Windows node pools, the length must be between 1-6 characters.
* All node pools must reside in the same virtual network.
* When you create multiple node pools at cluster creation time, the Kubernetes versions for the node pools must match the version set for the control plane.
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
The settings below can be used to tune the operation of the virtual memory (VM)
| Setting | Allowed values/interval | Default | Description |
| - | -- | - | -- |
| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
-| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
+| `vm.vfs_cache_pressure` | 1 - 100 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies the size in MB of a swap file that will be created on the agent nodes from this node pool. |
| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
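These settings are supplied at node pool creation through a JSON file; a hedged sketch using a few of the sysctls above, where key names follow the camelCase convention of the linux-os-config schema and the cluster names are hypothetical:

```bash
# Write a custom OS configuration and apply it to a new node pool.
cat > linuxosconfig.json <<'EOF'
{
  "sysctls": {
    "vmMaxMapCount": 262144,
    "vmVfsCachePressure": 100,
    "vmSwappiness": 60
  },
  "transparentHugePageEnabled": "madvise"
}
EOF

az aks nodepool add \
  --resource-group rg-aks \
  --cluster-name aks-contoso \
  --name tunedpool \
  --linux-os-config ./linuxosconfig.json
```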
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article demonstrates how to:
* Run your Java, Java EE, Jakarta EE, or MicroProfile application on the Open Liberty or WebSphere Liberty runtime.
* Build the application Docker image using Open Liberty or WebSphere Liberty container images.
-* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
+* Deploy the containerized application to an AKS cluster using the Open Liberty Operator or WebSphere Liberty Operator.
-The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With the Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
+The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With the Open Liberty or WebSphere Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
Now that you've gathered the necessary properties, you can build the application
cd <path-to-your-repo>/java-app # The following variables will be used for deployment file generation into target.
-export LOGIN_SERVER=<Azure_Container_Registery_Login_Server_URL>
-export REGISTRY_NAME=<Azure_Container_Registery_Name>
-export USER_NAME=<Azure_Container_Registery_Username>
-export PASSWORD=<Azure_Container_Registery_Password>
+export LOGIN_SERVER=<Azure_Container_Registry_Login_Server_URL>
+export REGISTRY_NAME=<Azure_Container_Registry_Name>
+export USER_NAME=<Azure_Container_Registry_Username>
+export PASSWORD=<Azure_Container_Registry_Password>
export DB_SERVER_NAME=<Server name>.database.windows.net export DB_NAME=<Database name> export DB_USER=<Server admin login>@<Server name>
You can now use the following steps to test the Docker image locally before depl
### Upload image to ACR
-Now, we upload the built image to the ACR created in the offer.
+Upload the built image to the ACR created in the offer.
```bash docker tag javaee-cafe:v1 ${LOGIN_SERVER}/javaee-cafe:v1
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
The following network and FQDN/application rules are required for an AKS cluster
* AKS uses an admission controller to inject the FQDN as an environment variable to all deployments under kube-system and gatekeeper-system. This ensures all system communication between nodes and API server uses the API server FQDN and not the API server IP.
* If you have an app or solution that needs to talk to the API server, you must add an **additional** network rule to allow **TCP communication to port 443 of your API server's IP**.
* On rare occasions, if there's a maintenance operation, your API server IP might change. Planned maintenance operations that can change the API server IP are always communicated in advance.
+* Under certain circumstances, traffic towards `md-*.blob.storage.azure.net` might be required. This dependency is due to some internal mechanisms of Azure Managed Disks. You might also want to use the Storage [service tag](../virtual-network/service-tags-overview.md).
+
### Azure Global required network rules
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
If you use Azure Active Directory or third-party authentication, enable the [coo
## Setting Variables
-Throughout this guide, you will need to define several variables.
+Throughout this guide, you will need to define several variables. Naming is based on the [Cloud Adoption Framework abbreviation](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-abbreviations) guidance.
```powershell
# These variables must be changed.
$subscriptionId = "00000000-0000-0000-0000-000000000000" # GUID of your Azure subscription
$domain = "contoso.net" # The custom domain for your certificate
-$apimServiceName = "ContosoApi" # API Management service instance name, must be globally unique
+$apimServiceName = "apim-contoso" # API Management service instance name, must be globally unique
$apimDomainNameLabel = $apimServiceName # Domain name label for API Management's public IP address, must be globally unique
$apimAdminEmail = "admin@contoso.net" # Administrator's email address - use your email address
$managementCertPfxPassword = "certificatePassword123" # Password for man
# These variables may be changed.
-$resGroupName = "apim-appGw-RG" # Resource group name that will hold all assets
+$resGroupName = "rg-apim-agw" # Resource group name that will hold all assets
$location = "West US" # Azure region that will hold all assets
$apimOrganization = "Contoso" # Organization name
-$appgwName = "apim-app-gw" # The name of the Application Gateway
+$appgwName = "agw-contoso" # The name of the Application Gateway
```

## Create a resource group for Resource Manager
The following example shows how to create a virtual network by using Resource Ma
    Internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 443

$appGwNsg = New-AzNetworkSecurityGroup -ResourceGroupName $resGroupName -Location $location -Name `
- "NSG-APPGW" -SecurityRules $appGwRule1, $appGwRule2
+ "nsg-agw" -SecurityRules $appGwRule1, $appGwRule2
```

1. Create a network security group (NSG) and NSG rules for the API Management subnet. [API Management stv2 requires several specific NSG rules](api-management-using-with-internal-vnet.md#enable-vnet-connection).
The following example shows how to create a virtual network by using Resource Ma
    -SourcePortRange * -DestinationAddressPrefix AzureKeyVault -DestinationPortRange 443

$apimNsg = New-AzNetworkSecurityGroup -ResourceGroupName $resGroupName -Location $location -Name `
- "NSG-APIM" -SecurityRules $apimRule1, $apimRule2, $apimRule3, $apimRule4
+ "nsg-apim" -SecurityRules $apimRule1, $apimRule2, $apimRule3, $apimRule4
```

1. Assign the address range 10.0.0.0/24 to the subnet variable to be used for Application Gateway while you create a virtual network.
The following example shows how to create a virtual network by using Resource Ma
    $apimSubnet = New-AzVirtualNetworkSubnetConfig -Name "apimSubnet" -NetworkSecurityGroup $apimNsg -AddressPrefix "10.0.1.0/24"
    ```
-1. Create a virtual network named **appgwvnet** in resource group **apim-appGw-RG** for the West US region. Use the prefix 10.0.0.0/16 with subnets 10.0.0.0/24 and 10.0.1.0/24.
+1. Create a virtual network named **vnet-contoso**. Use the prefix 10.0.0.0/16 with subnets 10.0.0.0/24 and 10.0.1.0/24.
```powershell
- $vnet = New-AzVirtualNetwork -Name "appgwvnet" -ResourceGroupName $resGroupName `
+ $vnet = New-AzVirtualNetwork -Name "vnet-contoso" -ResourceGroupName $resGroupName `
       -Location $location -AddressPrefix "10.0.0.0/16" -Subnet $appGatewaySubnet,$apimSubnet
    ```
app-service App Service Undelete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-undelete.md
Title: Restore deleted apps
description: Learn how to restore a deleted app in Azure App Service. Avoid the headache of an accidentally deleted app. Previously updated : 4/3/2023 Last updated : 10/4/2023
The detailed information includes:
>- `Restore-AzDeletedWebApp` isn't supported for function apps hosted on the Consumption plan or Elastic Premium plan.
>- The Restore-AzDeletedWebApp cmdlet restores a deleted web app. The web app specified by TargetResourceGroupName, TargetName, and TargetSlot will be overwritten with the contents and settings of the deleted web app. If the target parameters are not specified, they will automatically be filled with the deleted web app's resource group, name, and slot. If the target web app does not exist, it will automatically be created in the app service plan specified by TargetAppServicePlanName.
>- By default `Restore-AzDeletedWebApp` will restore both your app configuration as well as any content. If you want to only restore content, use the **`-RestoreContentOnly`** flag with this cmdlet.
+>- Custom domains, bindings, or certs that you import to your app won't be restored. You'll need to re-add them after your app is restored.
After identifying the app you want to restore, you can restore it using `Restore-AzDeletedWebApp`, as shown in the following examples.
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/pre-post-scripts.md
Pre-scripts and post-scripts are runbooks to run in your Azure Automation accoun
For a runbook to be used as a pre-script or post-script, you must import it into your Automation account and [publish the runbook](../manage-runbooks.md#publish-a-runbook).
-Currently, only PowerShell and Python 2 runbooks are supported as Pre/Post scripts. Other runbook types like Python 3, Graphical, PowerShell Workflow, Graphical PowerShell Workflow are currently not supported as Pre/Post scripts.
+Currently, only PowerShell 5.1 and Python 2 runbooks are supported as Pre/Post scripts. Other runbook types like Python 3, Graphical, PowerShell Workflow, Graphical PowerShell Workflow are currently not supported as Pre/Post scripts.
## Pre-script and post-script parameters
azure-app-configuration Concept Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md
Last updated 05/16/2023
A snapshot is a named, immutable subset of an App Configuration store's key-values. The key-values that make up a snapshot are chosen during creation time through the usage of key and label filters. Once a snapshot is created, the key-values within are guaranteed to remain unchanged.
+A brief overview is available in this [video](https://aka.ms/appconfig/snapshotVideo), highlighting three reasons that snapshots can be helpful to you.
+ ## Deploy safely with snapshots

Snapshots are designed to safely deploy configuration changes. Deploying faulty configuration changes into a running environment can cause issues such as service disruption and data loss. In order to avoid such issues, it's important to be able to vet configuration changes before moving into production environments. If such an issue does occur, it's important to be able to roll back any faulty configuration changes in order to restore service. Snapshots are created for managing these scenarios.
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes"
+ Title: "Application deployments with GitOps (Flux v2)"
description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 05/08/2023 Last updated : 10/04/2023
-# GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes
+# Application deployments with GitOps (Flux v2) for AKS and Azure Arc-enabled Kubernetes
-Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters.
+Azure provides an automated application deployment capability using GitOps that works with Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. The key benefits of adopting GitOps for deploying applications to Kubernetes clusters include:
+
+* Continual visibility into the status of applications running on clusters.
+* Separation of concerns between application development teams and infrastructure teams. Application teams don't need to have experience with Kubernetes deployments. Platform engineering teams typically create a self-serve model for application teams, empowering them to run deployments with higher confidence.
+* Ability to recreate clusters with the same desired state in case of a crash or to scale out.
With GitOps, you declare the desired state of your Kubernetes clusters in files in Git repositories. The Git repositories may contain the following files:
With GitOps, you declare the desired state of your Kubernetes clusters in files
Because these files are stored in a Git repository, they're versioned, and changes between versions are easily tracked. Kubernetes controllers run in the clusters and continually reconcile the cluster state with the desired state declared in the Git repository. These operators pull the files from the Git repositories and apply the desired state to the clusters. The operators also continuously assure that the cluster remains in the desired state.
-GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports [multi-tenancy](#multi-tenancy) and deployment dependency management, among [other features](https://fluxcd.io/docs/).
+GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports [multi-tenancy](#multi-tenancy) and deployment dependency management, among [other features](https://fluxcd.io/docs/). Flux is deployed directly on the cluster, and each cluster's control plane is logically separated, so it scales well to hundreds or thousands of clusters. Flux enables purely pull-based GitOps application deployments: neither the source repository nor any other cluster needs access to the clusters.
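To make the desired-state files concrete, here's a minimal sketch of a Flux source and kustomization pair. The repository URL, names, and paths are illustrative, and the API versions reflect Flux v2 and may differ in your cluster.

```yaml
# GitRepository: tells Flux where to pull the desired state from (illustrative values).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/contoso/app-manifests   # assumed repository
  ref:
    branch: main
---
# Kustomization: tells Flux which path in the repository to reconcile into the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy
  prune: true   # remove cluster objects whose manifests are deleted from Git
```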
## Flux cluster extension -- GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension is installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (`az k8s-extension create --extensionType=microsoft.flux`), ARM template, or REST API.
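For example, a manual installation with the Azure CLI might look like this sketch, with placeholder resource names; for AKS clusters, use `--cluster-type managedClusters`.

```azurecli
# Install the microsoft.flux cluster extension on an Azure Arc-enabled cluster.
az k8s-extension create \
    --resource-group my-resource-group \
    --cluster-name my-cluster \
    --cluster-type connectedClusters \
    --name flux \
    --extension-type microsoft.flux
```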
-### Version support
-
-The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported.
-
-> [!NOTE]
-> If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
->
-> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
- ### Controllers By default, the `microsoft.flux` extension installs the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed. Optionally, you can also install the Flux image-automation and image-reflector controllers, which provide functionality for updating and retrieving Docker images.
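If you also want the optional image controllers, the extension accepts configuration settings at install time; the following sketch uses the setting names documented for the `microsoft.flux` extension, with placeholder resource names.

```azurecli
# Install the extension with the optional image-automation and image-reflector controllers enabled.
az k8s-extension create \
    --resource-group my-resource-group \
    --cluster-name my-cluster \
    --cluster-type connectedClusters \
    --name flux \
    --extension-type microsoft.flux \
    --config image-automation-controller.enabled=true image-reflector-controller.enabled=true
```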
Each `fluxConfigurations` resource in Azure is associated with one Flux `GitRepo
> > Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, make sure that your clusters connect with Azure within 48 hours.
+### Version support
+
+The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported.
+
+> [!NOTE]
+> If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+>
+> Support for Flux v1-based cluster configuration resources created prior to January 1, 2024 will end on [May 24, 2025](https://azure.microsoft.com/updates/migrate-your-gitops-configurations-from-flux-v1-to-flux-v2-by-24-may-2025/). Starting on January 1, 2024, you won't be able to create new Flux v1-based cluster configuration resources.
+ ## GitOps with Private Link If you've added support for [private link to an Azure Arc-enabled Kubernetes cluster](private-link.md), then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you must provision these endpoints behind your firewall, or list them on your firewall, so that the Flux Source controller can successfully reach them.
For on-premises repositories, Flux uses `libgit2`.
### Kustomization
+Kustomization is a setting created for Flux configurations that lets you choose a specific path in the source repo to be reconciled into the cluster. You don't need to create a `kustomization.yaml` file on this specified path. By default, all of the manifests in this path are reconciled. However, if you want a Kustomize overlay for applications available on this repo path, you should create [Kustomize files](https://kustomize.io/) in Git for the Flux configuration to use.
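For instance, a minimal [Kustomize](https://kustomize.io/) file placed at the configured repo path might look like the following sketch; the referenced manifest file names are illustrative.

```yaml
# kustomization.yaml at the path that the Flux configuration reconciles.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # illustrative manifest names
  - service.yaml
```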
+ By using [`az k8s-configuration flux kustomization create`](/cli/azure/k8s-configuration/flux/kustomization#az-k8s-configuration-flux-kustomization-create), you can create one or more kustomizations during the configuration. | Parameter | Format | Notes |
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
This tutorial describes how to use GitOps in a Kubernetes cluster. GitOps with Flux v2 is enabled as a [cluster extension](conceptual-extensions.md) in Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
-In this tutorial, we use an example GitOps configuration with two kustomizations, so that you can see how one kustomization can have a dependency on another. You can add more kustomizations and dependencies as needed, depending on your scenario.
+In this tutorial, we use an example GitOps configuration with two [kustomizations](conceptual-gitops-flux2.md#kustomization), so that you can see how one kustomization can have a dependency on another. You can add more kustomizations and dependencies as needed, depending on your scenario.
Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
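If you prefer the CLI to the portal steps that follow, a hedged sketch of an equivalent configuration using this tutorial's sample repository might look like the following; resource names are placeholders, and the parameter spelling assumes `az k8s-configuration flux create`.

```azurecli
# Create a Flux configuration with two kustomizations, where "apps" depends on "infra".
az k8s-configuration flux create \
    --resource-group my-resource-group \
    --cluster-name my-cluster \
    --cluster-type connectedClusters \
    --name cluster-config \
    --namespace cluster-config \
    --scope cluster \
    --url https://github.com/Azure/gitops-flux2-kustomize-helm-mt \
    --branch main \
    --kustomization name=infra path=./infrastructure prune=true \
    --kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\]
```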
Follow these steps to apply a sample Flux configuration to a cluster. As part of
1. In the **Source** section:
- 1. In **Source type**, select **Git Repository.**
+ 1. In **Source type**, select **Git Repository**.
1. Enter the URL for the repository where the Kubernetes manifests are located: `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`. 1. For reference type, select **Branch**. Leave **Branch** set to **main**. 1. For **Repository type**, select **Public**.
Follow these steps to apply a sample Flux configuration to a cluster. As part of
:::image type="content" source="media/tutorial-use-gitops-flux2/portal-configuration-source.png" alt-text="Screenshow showing the Source options for a GitOps configuration in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-configuration-source.png":::
-1. In the **Kustomizations** section, create two kustomizations: `infrastructure` and `staging`. These kustomizations are Flux resources, each associated with a path in the repository, that represent the set of manifests that Flux should reconcile to the cluster.
+1. In the **Kustomizations** section, create two [kustomizations](conceptual-gitops-flux2.md#kustomization): `infrastructure` and `staging`. These kustomizations are Flux resources, each associated with a path in the repository, that represent the set of manifests that Flux should reconcile to the cluster.
1. Select **Create**. 1. In the **Create a Kustomization** screen:
To view all of the configurations for a cluster, navigate to the cluster and sel
:::image type="content" source="media/tutorial-use-gitops-flux2/portal-view-configurations.png" alt-text="Screenshot showing all configurations for a cluster in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-view-configurations.png":::
-Select the name of a configuration to view more details such as the configuration's status, properties, and source. You can then select **Configuration objects** to view all of the objects that were created to enable the GitOps configuration. This lets you quickly see the compliance state and other details about each object.
+Select the name of a configuration to view more details such as the configuration's status, properties, and source. You can then select **Configuration objects** to view all of the objects that were created to enable the GitOps configuration. This lets you quickly see the compliance state and other details about each object.
:::image type="content" source="media/tutorial-use-gitops-flux2/portal-configuration-objects.png" alt-text="Screenshots showing configuration objects and their state in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-configuration-objects.png":::
azure-maps Add Bubble Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-bubble-layer-map-ios.md
Title: Add a bubble layer to iOS maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps iOS SDK to add and customize bubble layers for this purpose.--++ Last updated 11/23/2021
azure-maps Add Controls Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-controls-map-ios.md
Title: Add controls to an iOS map description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps iOS SDK.--++ Last updated 11/19/2021
azure-maps Add Heat Map Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-heat-map-layer-ios.md
Title: Add a heat map layer to iOS maps description: Learn how to create a heat map. See how to use the Azure Maps iOS SDK to add a heat map layer to a map. Find out how to customize heat map layers.--++ Last updated 11/23/2021
azure-maps Add Image Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-image-layer-map-ios.md
Title: Add an Image layer to an iOS map description: Learn how to add images to a map. See how to use the Azure Maps iOS SDK to customize image layers and overlay images on fixed sets of coordinates.--++ Last updated 11/23/2021
azure-maps Add Line Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-line-layer-map-ios.md
Title: Add a line layer to iOS maps description: Learn how to add lines to maps. See examples that use the Azure Maps iOS SDK to add line layers to maps and to customize lines with symbols and color gradients.--++ Last updated 11/23/2021
azure-maps Add Polygon Extrusion Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-extrusion-layer-map-ios.md
Title: Add a polygon extrusion layer to an iOS map description: How to add a polygon extrusion layer to the Microsoft Azure Maps iOS SDK.--++ Last updated 11/23/2021
azure-maps Add Polygon Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-layer-map-ios.md
Title: Add a polygon layer to iOS maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps iOS SDK to customize geometric shapes and make them easy to update and maintain.--++ Last updated 11/23/2021
azure-maps Add Symbol Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-symbol-layer-ios.md
Title: Add a symbol layer to iOS maps description: Learn how to add a marker to a map. See an example that uses the Azure Maps iOS SDK to add a symbol layer that contains point-based data from a data source.--++ Last updated 11/19/2021
azure-maps Add Tile Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md
Title: Add a tile layer to iOS maps description: Learn how to add a tile layer to a map. See an example that uses the Azure Maps iOS SDK to add a weather radar overlay to a map.--++ Last updated 11/23/2021
azure-maps Android Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-add-line-layer.md
Title: Add a line layer to Android maps | Microsoft Azure Maps description: Learn how to add lines to maps. See examples that use the Azure Maps Android SDK to add line layers to maps and to customize lines with symbols and color gradients.--++ Last updated 2/26/2021
azure-maps Android Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-events.md
Title: Handle map events in Android maps | Microsoft Azure Maps description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps Android SDK to handle events.--++ Last updated 2/26/2021
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
Title: Clustering point data in the Android SDK | Microsoft Azure Maps description: Learn how to cluster point data on maps. See how to use the Azure Maps Android SDK to cluster data, react to cluster mouse events, and display cluster aggregates.--++ Last updated 03/23/2021
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md
Title: Clustering point data in the iOS SDK description: Learn how to cluster point data on maps. See how to use the Azure Maps iOS SDK to cluster data, react to cluster mouse events, and display cluster aggregates.--++ Last updated 11/18/2021
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
Title: Clustering point data in the Web SDK | Microsoft Azure Maps description: Learn how to cluster point data on maps. See how to use the Azure Maps Web SDK to cluster data, react to cluster mouse events, and display cluster aggregates.--++ Last updated 07/29/2019
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
Title: Create a data source for Android maps | Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps Android SDK uses: GeoJSON sources and vector tiles."--++ Last updated 2/26/2021
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
Title: Create a data source for iOS maps | Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps iOS SDK uses: GeoJSON sources and vector tiles."--++ Last updated 10/22/2021
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
Title: Create a data source for a map in Microsoft Azure Maps description: "Find out how to create a data source for a map. Learn about the data sources that the Azure Maps Web SDK uses: GeoJSON sources and vector tiles."--++ Last updated 12/07/2020
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
Title: Data-driven style Expressions in Android maps | Microsoft Azure Maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps Android SDK to adjust styles in maps.--++ Last updated 2/26/2021
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
Title: Data-driven style expressions in iOS maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps iOS SDK to adjust styles in maps.--++ Last updated 11/18/2021
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
Title: Data-driven style Expressions in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn about data-driven style expressions. See how to use these expressions in the Azure Maps Web SDK to adjust styles in maps.--++ Last updated 4/4/2019
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md
Title: Display feature information in Android maps | Microsoft Azure Maps description: Learn how to display information when users interact with map features. Use the Azure Maps Android SDK to display toast messages and other types of messages.--++ Last updated 2/26/2021
azure-maps Display Feature Information Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-ios-sdk.md
Title: Display feature information in iOS maps | Microsoft Azure Maps description: Learn how to display information when users interact with map features. Use the Azure Maps iOS SDK to display toast messages and other types of messages.--++ Last updated 11/23/2021
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Title: Drawing tools events | Microsoft Azure Maps description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK--++ Last updated 05/23/2023
azure-maps How To Add Shapes To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md
Title: Add a polygon layer to Android maps | Microsoft Azure Maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps Android SDK to customize geometric shapes and make them easy to update and maintain.--++ Last updated 2/26/2021
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md
Title: Add a symbol layer to Android maps | Microsoft Azure Maps description: Learn how to add a marker to a map. See an example that uses the Azure Maps Android SDK to add a symbol layer that contains point-based data from a data source.--++ Last updated 2/26/2021
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
Title: Add a tile layer to Android maps | Microsoft Azure Maps description: Learn how to add a tile layer to a map. See an example that uses the Azure Maps Android SDK to add a weather radar overlay to a map.--++ Last updated 3/25/2021
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
Title: How to create Azure Maps applications using the C# REST SDK description: How to develop applications that incorporate Azure Maps using the C# SDK Developers Guide.--++ Last updated 11/11/2021
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Title: How to create Azure Maps applications using the Java REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Java REST SDK Developers Guide.--++ Last updated 01/25/2023
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
Title: How to create Azure Maps applications using the JavaScript REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide.--++ Last updated 11/15/2021
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
Title: How to create Azure Maps applications using the Python REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Python SDK Developers Guide.--++ Last updated 01/15/2021
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-traffic-android.md
Title: Show traffic data on Android maps | Microsoft Azure Maps description: This article demonstrates how to display traffic data on a map using the Microsoft Azure Maps Android SDK.--++ Last updated 2/26/2021
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-android-map-control-library.md
Title: Get started with Android map control | Microsoft Azure Maps description: Become familiar with the Azure Maps Android SDK. See how to create a project in Android Studio, install the SDK, and create an interactive map.--++ Last updated 2/26/2021
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
Title: Image templates in the Azure Maps Web SDK | Microsoft Azure Maps description: Learn how to add image icons and pattern-filled polygons to maps by using the Azure Maps Web SDK. View available image and fill pattern templates.--++ Last updated 8/6/2019
azure-maps How To Use Indoor Module Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md
Title: Use the Azure Maps indoor maps module to develop iOS applications with Microsoft Creator services description: Learn how to use the Microsoft Azure Maps indoor maps module for the iOS SDK to render maps by embedding the module's JavaScript libraries.--++ Last updated 12/10/2021
azure-maps How To Use Ios Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ios-map-control-library.md
Title: Get started with iOS map control description: Become familiar with the Azure Maps iOS SDK. See how to install the SDK and create an interactive map.--++ Last updated 11/23/2021
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
Title: How to use the Azure Maps web map control description: Learn how to add and localize maps to web and mobile applications by using the Map Control client-side JavaScript library in Azure Maps. --++ Last updated 06/29/2023
azure-maps How To Use Npm Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-npm-package.md
Title: How to use the Azure Maps map control npm package description: Learn how to add maps to node.js applications by using the map control npm package in Azure Maps. --++ Last updated 07/04/2023
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
Title: Use the Azure Maps Services module description: Learn about the Azure Maps services module. See how to load and use this helper library to access Azure Maps REST services in web or Node.js applications.--++ Last updated 06/26/2023
azure-maps How To Use Ts Rest Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ts-rest-sdk.md
Title: Use Azure Maps TypeScript REST SDK description: Learn about the Azure Maps TypeScript REST SDK. See how to load and use this client library to access Azure Maps REST services in web or Node.js applications.--++ Last updated 07/01/2023
azure-maps Interact Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/interact-map-ios-sdk.md
Title: Handle map events in iOS maps description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps iOS SDK to handle events.--++ Last updated 11/18/2021
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Title: Create an accessible map application with Azure Maps | Microsoft Azure Maps description: Learn about accessibility considerations in Azure Maps. See what features are available for making map applications accessible, and view accessibility tips.--++ Last updated 05/15/2023
azure-maps Map Add Bubble Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer-android.md
Title: Add a Bubble layer to Android maps | Microsoft Azure Maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps Android SDK to add and customize bubble layers for this purpose.--++ Last updated 2/26/2021
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
Title: Add a Bubble layer to a map | Microsoft Azure Maps description: Learn how to render points on maps as circles with fixed sizes. See how to use the Azure Maps Web SDK to add and customize bubble layers for this purpose.--++ Last updated 05/15/2023
azure-maps Map Add Controls Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls-android.md
Title: Add controls to an Android map | Microsoft Azure Maps description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps Android SDK.--++ Last updated 02/26/2021
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
Title: Add controls to a map | Microsoft Azure Maps description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps.--++ Last updated 05/15/2023
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
Title: Add an HTML Marker to map | Microsoft Azure Maps description: Learn how to add HTML markers to maps. See how to use the Azure Maps Web SDK to customize markers and add popups and mouse events to a marker.--++ Last updated 05/17/2023
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
Title: Add drawing tools toolbar to map | Microsoft Azure Maps description: How to add a drawing toolbar to a map using Azure Maps Web SDK--++ Last updated 06/05/2023
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md
Title: Add a heat map layer to Android maps | Microsoft Azure Maps description: Learn how to create a heat map. See how to use the Azure Maps Android SDK to add a heat map layer to a map. Find out how to customize heat map layers.--++ Last updated 02/26/2021
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
Title: Add a heat map layer to a map | Microsoft Azure Maps description: Learn how to create a heat map and customize heat map layers using the Azure Maps Web SDK.--++ Last updated 06/06/2023
azure-maps Map Add Image Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer-android.md
Title: Add an Image layer to an Android map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Android SDK to customize image layers and overlay images on fixed sets of coordinates.--++ Last updated 02/26/2021
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
Title: Add an Image layer to a map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Web SDK to customize image layers and overlay images on fixed sets of coordinates.--++ Last updated 06/06/2023
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
Title: Add a line layer to a map | Microsoft Azure Maps description: Learn how to add lines to maps. See examples that use the Azure Maps Web SDK to add line layers to maps and to customize lines with symbols and color gradients.--++ Last updated 06/06/2023
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
Title: Add a Symbol layer to a map | Microsoft Azure Maps description: Learn how to add customized symbols, such as text or icons, to maps. See how to use data sources and symbol layers in the Azure Maps Web SDK for this purpose.--++ Last updated 06/14/2023
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
Title: Add a popup to a point on a map |Microsoft Azure Maps description: Learn about popups, popup templates, and popup events in Azure Maps. See how to add a popup to a point on a map and how to reuse and customize popups.--++ Last updated 06/14/2023
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
Title: Add a polygon layer to a map description: Learn how to add polygons or circles to maps. See how to use the Azure Maps Web SDK to customize geometric shapes and make them easy to update and maintain.--++ Last updated 06/07/2023
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
Title: Add snap grid to the map | Microsoft Azure Maps description: How to add a snap grid to a map using Azure Maps Web SDK--++ Last updated 06/08/2023
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
Title: Add a tile layer to a map description: Learn how to superimpose images on maps. See an example that uses the Azure Maps Web SDK to add a tile layer containing a weather radar overlay to a map.--++ Last updated 06/08/2023
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
Title: Create a map with Azure Maps description: Find out how to add maps to web pages by using the Azure Maps Web SDK. Learn about options for animation, style, the camera, services, and user interactions.--++ Last updated 06/13/2023
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
Title: Handle map events description: Learn which events are fired when users interact with maps. View a list of all supported map events. See how to use the Azure Maps Web SDK to handle events.--++ Last updated 06/12/2023
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
Title: Add a polygon extrusion layer to an Android map | Microsoft Azure Maps description: How to add a polygon extrusion layer to the Microsoft Azure Maps Android SDK.--++ Last updated 02/26/2021
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
Title: Add a polygon extrusion layer to a map description: How to add a polygon extrusion layer to the Microsoft Azure Maps Web SDK.--++ Last updated 06/15/2023
azure-maps Map Get Information From Coordinate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md
Title: Show information about a coordinate on a map description: Learn how to display information about an address on the map when a user selects a coordinate.--++ Last updated 07/01/2023
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
Title: Get data from shapes on a map description: In this article, learn how to get shape data drawn on a map using the Microsoft Azure Maps Web SDK.--++ Last updated 07/13/2023
azure-maps Map Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md
Title: Show route directions on a map description: This article demonstrates how to display directions between two locations on a map using the Microsoft Azure Maps Web SDK.--++ Last updated 07/01/2023
azure-maps Map Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md
Title: Show search results on a map description: This article demonstrates how to perform a search request using Microsoft Azure Maps Web SDK and display the results on the map.--++ Last updated 07/01/2023
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
Title: Show traffic on a map description: Find out how to add traffic data to maps. Learn about flow data, and see how to use the Azure Maps Web SDK to add incident data and flow data to maps.--++ Last updated 06/15/2023
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
Title: Tutorial - Migrate an Android app description: 'Tutorial on how to migrate an Android app from Google Maps to Microsoft Azure Maps'--++ Last updated 12/1/2021
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Title: 'Quickstart: Create an Android app with Azure Maps' description: 'Quickstart: Learn how to create an Android app using the Azure Maps Android SDK.'--++ Last updated 09/22/2022
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
Title: Create an iOS app description: Steps to create an Azure Maps account and the first iOS App.--++ Last updated 11/23/2021
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
Title: Release notes - Map Control description: Release notes for the Azure Maps Web SDK. -+ Last updated 3/15/2023
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.0.1] (October 6, 2023)
+
+#### Bug fixes (3.0.1)
+
+- Various accessibility improvements.
+
+- Resolved the issue with dynamic attribution when progressive loading is enabled.
+
+- Fixed missing event names in `HtmlMarkerEvents`.
+
+#### Other changes (3.0.1)
+
+- Modified member methods to be protected for the zoom, pitch, and compass controls.
+
+- Telemetry is disabled by default in the Azure Government cloud.
+ ### [3.0.0] (August 18, 2023) #### Bug fixes (3.0.0)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2
+### [2.3.3] (October 6, 2023)
+
+#### Bug fixes (2.3.3)
+
+- Resolved the issue with dynamic attribution when progressive loading is enabled.
+ ### [2.3.2] (August 11, 2023) #### Bug fixes (2.3.2)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.1
[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0 [3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.3]: https://www.npmjs.com/package/azure-maps-control/v/2.3.3
[2.3.2]: https://www.npmjs.com/package/azure-maps-control/v/2.3.2 [2.3.1]: https://www.npmjs.com/package/azure-maps-control/v/2.3.1 [2.3.0]: https://www.npmjs.com/package/azure-maps-control/v/2.3.0
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Title: REST SDK Developer Guide description: How to develop applications that incorporate Azure Maps using the various SDK Developer how-to articles.--++ Last updated 10/31/2021
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md
Title: Set a map style in Android maps description: Learn two ways of setting the style of a map. See how to use the Azure Maps Android SDK in either the layout file or the activity class to adjust the style.--++ Last updated 02/26/2021
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Title: Drawing tools module description: This article describes how to set drawing options data using the Microsoft Azure Maps Web SDK--++ Last updated 06/15/2023
azure-maps Set Map Style Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-map-style-ios-sdk.md
Title: Set a map style in iOS maps | Microsoft Azure Maps description: Learn how to set the style of a map. See how to use the Azure Maps iOS SDK to adjust the style.--++ Last updated 07/22/2023
azure-maps Show Traffic Data Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/show-traffic-data-map-ios-sdk.md
Title: Show traffic data on iOS maps description: This article describes how to display traffic data on a map using the Microsoft Azure Maps iOS SDK.--++ Last updated 07/21/2023
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md
Title: Add an Open Geospatial Consortium (OGC) map layer description: Learn how to overlay an OGC map layer on the map, and how to use the different options in the OgcMapLayer class.--++ Last updated 06/16/2023
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
Title: Add a simple data layer description: Learn how to add a simple data layer using the Spatial IO module, provided by Azure Maps Web SDK.--++ Last updated 06/19/2023
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
Title: Connect to a Web Feature Service (WFS) service | Microsoft Azure Maps description: Learn how to connect to a WFS service, then query the WFS service using the Azure Maps web SDK and the Spatial IO module.--++ Last updated 06/20/2023
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
Title: Read and write spatial data description: Learn how to read and write data using the Spatial IO module, provided by Azure Maps Web SDK.--++ Last updated 06/21/2023
azure-maps Tutorial Load Geojson File Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-load-geojson-file-android.md
Title: 'Tutorial: Load GeoJSON data into Azure Maps Android SDK' description: Tutorial on how to load GeoJSON data file into the Azure Maps Android map SDK.--++ Last updated 12/10/2020
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
Title: Azure Maps Web SDK best practices description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK. --++ Last updated 06/23/2023
azure-maps Web Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md
Title: The Azure Maps Web SDK v1 migration guide description: Find out how to migrate your Azure Maps Web SDK v1 applications to the most recent version of the Web SDK.--++ Last updated 08/18/2023
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
It's important to make sure the incoming and outgoing configurations are exactly
### Enable W3C distributed tracing support for web apps
-This feature is enabled by default for Javascript and the headers are automatically included when the hosting page domain is the same as the domain the requests are sent to (for example, the hosting page is `example.com` and the Ajax requests are sent to `example.com`). To change the distributed tracing mode, use the [`distributedTracingMode` configuration field](./javascript-sdk-configuration.md#sdk-configuration). AI_AND_W3C is provided by default for backward compatibility with any legacy services instrumented by Application Insights.
+This feature is enabled by default for JavaScript and the headers are automatically included when the hosting page domain is the same as the domain the requests are sent to (for example, the hosting page is `example.com` and the Ajax requests are sent to `example.com`). To change the distributed tracing mode, use the [`distributedTracingMode` configuration field](./javascript-sdk-configuration.md#sdk-configuration). AI_AND_W3C is provided by default for backward compatibility with any legacy services instrumented by Application Insights.
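As a sketch of changing the mode in an npm-based setup, assuming the `DistributedTracingModes` enum exported by `@microsoft/applicationinsights-web` (the connection string is a placeholder):

```javascript
import { ApplicationInsights, DistributedTracingModes } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your-connection-string>', // placeholder
    // AI_AND_W3C is the default; W3C-only suits environments with no legacy services.
    distributedTracingMode: DistributedTracingModes.W3C
  }
});
appInsights.loadAppInsights();
```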
- **[npm-based setup](./javascript-sdk.md?tabs=npmpackage#get-started)**
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK configuration description: Microsoft Azure Monitor Application Insights JavaScript SDK configuration. Previously updated : 07/10/2023 Last updated : 10/03/2023 ms.devlang: javascript
To configure or change the storage account or blob container that's linked to yo
> [!div class="mx-imgBorder"] > ![Screenshot that shows reconfiguring your selected Azure blob container on the Properties pane.](./media/javascript-sdk-configuration/reconfigure.png)
-#### Troubleshooting
-
-This section offers troubleshooting tips for common issues related to the uploading of source maps to your Azure Storage account blob container.
-
-##### Required Azure role-based access control settings on your blob container
-
-Any user on the portal who uses this feature must be assigned at least as a [Storage Blob Data Reader][storage blob data reader] to your blob container. Assign this role to anyone who might use the source maps through this feature.
-
-> [!NOTE]
-> Depending on how the container was created, this role might not have been automatically assigned to you or your team.
-
-##### Source map not found
-
-1. Verify that the corresponding source map is uploaded to the correct blob container.
-1. Verify that the source map file is named after the JavaScript file it maps to and uses the suffix `.map`.
-
- For example, `/static/js/main.4e2ca5fa.chunk.js` searches for the blob named `main.4e2ca5fa.chunk.js.map`.
-1. Check your browser's console to see if any errors were logged. Include this information in any support ticket.
- ### View the unminified callstack To view the unminified callstack, select an Exception Telemetry item in the Azure portal, find the source maps that match the call stack, and drag and drop the source maps onto the call stack in the Azure portal. The source map must have the same name as the source file of a stack frame, but with a `map` extension.
azure-monitor Best Practices Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-containers.md
+
+ Title: Best practices for monitoring Kubernetes
+description: Provides best practices based on the Well-Architected Framework (WAF) for monitoring Kubernetes with Azure Monitor.
+++ Last updated : 03/29/2023+++
+# Best practices for monitoring Kubernetes with Azure Monitor
+This article provides best practices for monitoring the health and performance of your [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) and [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) clusters. The guidance is based on the five pillars of architecture excellence described in [Azure Well-Architected Framework](/azure/architecture/framework/).
+++
+## Reliability
+In the cloud, we acknowledge that failures happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. Use the following information to get the most out of Azure Monitor and ensure the reliability of your Kubernetes clusters and monitoring environment.
+++
+## Security
+Security is one of the most important aspects of any architecture. Azure Monitor provides features to employ both the principle of least privilege and defense-in-depth. Use the following information to monitor your Kubernetes clusters and ensure that only authorized users access collected data.
+++
+## Cost optimization
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+
+> [!NOTE]
+> See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
+++
+## Operational excellence
+Operational excellence refers to the operations processes required to keep a service running reliably in production. Use the following information to minimize the operational requirements for monitoring your Kubernetes clusters.
+++
+## Performance efficiency
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. Use the following information to monitor the performance of your Kubernetes clusters and ensure they're configured for maximum performance.
++
+## Next step
+
+- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
This article describes [Cost optimization](/azure/architecture/framework/cost/)
|:|:| | Collect only critical resource log data from Azure resources. | When you create [diagnostic settings](essentials/diagnostic-settings.md) to send [resource logs](essentials/resource-logs.md) for your Azure resources to a Log Analytics database, only specify those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a [workspace transformation](essentials/data-collection-transformations.md?#workspace-transformation-dcr) to further filter unneeded data for those resources that use a [supported table](logs/tables-feature-support.md). See [Diagnostic settings in Azure Monitor](essentials/diagnostic-settings.md#controlling-costs) for details on how to configure diagnostic settings and using transformations to filter their data. |
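To illustrate the filtering idea, a workspace transformation is a KQL statement applied to incoming records, which are exposed as the virtual table `source`; in this minimal sketch, the column name and value are assumptions for illustration.

```kusto
source
| where Category != "ChattyCategory" // drop a known high-volume category before ingestion
```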
-## Virtual machines
+## Alerts
-## Container insights
-### Design checklist
+## Virtual machines
-> [!div class="checklist"]
-> - Configure agent collection to remove unneeded data.
-> - Modify settings for collection of metric data.
-> - Limit Prometheus metrics collected.
-> - Configure Basic Logs.
-### Configuration recommendations
+
+## Containers
-| Recommendation | Benefit |
-|:|:|
-| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
-| Modify settings for collection of metric data. | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
-| Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
-| Configure Basic Logs. | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
For Python:
Customers must [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) for multi-line logging to work. ### How to enable
-Multi-line logging is a preview feature and can be enabled by setting **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml)
+The multi-line logging feature can be enabled by setting the **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml)
```yaml [log_collection_settings.enable_multiline_logs]
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Last updated 08/25/2022
-
+ # Analyze usage in a Log Analytics workspace Azure Monitor costs can vary significantly based on the volume of data being collected in your Log Analytics workspace. This volume is affected by the set of solutions using the workspace and the amount of data that each solution collects. This article provides guidance on analyzing your collected data to assist in controlling your data ingestion costs. It helps you determine the cause of higher-than-expected usage. It also helps you to predict your costs as you monitor more resources and configure different Azure Monitor features.
Select **Additional Queries** for prebuilt queries that help you further underst
### Usage and estimated costs The **Data ingestion per solution** chart on the [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
-## Log queries
-You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data:
--- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there's an ingestion charge. Use this column to filter out non-billable data.-- [_BilledSize](log-standard-columns.md#_billedsize) provides the size in bytes of the record.
+## Querying data volumes from the Usage table
-## Data volume by solution
Analyze the amount of billable data collected by a particular service or solution. These queries use the [Usage](/azure/azure-monitor/reference/tables/usage) table that collects usage data for each table in the workspace. > [!NOTE] > The clause with `TimeGenerated` is only to ensure that the query experience in the Azure portal looks back beyond the default 24 hours. When you use the **Usage** data type, `StartTime` and `EndTime` represent the time buckets for which results are presented.
-**Billable data volume by solution over the past month**
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
-| render columnchart
-```
- **Billable data volume by type over the past month** ```kusto
Usage
| sort by Solution asc, DataType asc ```
+## Querying data volumes from events directly
+
+You can use [log queries](log-query-overview.md) in [Log Analytics](log-analytics-overview.md) if you need deeper analysis into your collected data. Each table in a Log Analytics workspace has the following standard columns that can assist you in analyzing billable data:
+
+- [_IsBillable](log-standard-columns.md#_isbillable) identifies records for which there's an ingestion charge. Use this column to filter out non-billable data.
+- [_BilledSize](log-standard-columns.md#_billedsize) provides the size in bytes of the record.
+ **Billable data volume for specific events** If you find that a particular data type is collecting excessive data, you might want to analyze the data in that table to determine particular records that are increasing. This example filters specific event IDs in the `Event` table and then provides a count for each ID. You can modify this query by using the columns from other tables.
Event
| summarize count(), Bytes=sum(_BilledSize) by EventID, bin(TimeGenerated, 1d) ```
-## Data volume by computer
-You can analyze the amount of billable data collected from a virtual machine or a set of virtual machines. The **Usage** table doesn't have the granularity to show data volumes for specific virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because this query is only for analytics of data trends.
-
-> [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the preceding queries.
-
-**Billable data volume by computer for the last full day**
-
-```kusto
-find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, Computer, Type
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize BillableDataBytes = sum(_BilledSize) by computerName
-| sort by BillableDataBytes desc nulls last
-```
-
-**Count of billable events by computer for the last full day**
-
-```kusto
-find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _IsBillable, Computer, Type
-| where _IsBillable == true and Type != "Usage"
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| summarize eventCount = count() by computerName
-| sort by eventCount desc nulls last
-```
-
-## Data volume by Azure resource, resource group, or subscription
+### Data volume by Azure resource, resource group, or subscription
You can analyze the amount of billable data collected from a particular resource or set of resources. These queries use the [_ResourceId](./log-standard-columns.md#_resourceid) and [_SubscriptionId](./log-standard-columns.md#_subscriptionid) columns for data from resources hosted in Azure. > [!WARNING]
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project
> [!TIP] > For workspaces with large data volumes, doing queries such as the ones shown in this section, which query large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
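For example, a minimal sketch of a billable-data-by-resource query built on these standard columns (the projection and sort order here are illustrative):

```kusto
// Sketch: billable data volume by resource ID for the last full day.
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| sort by BillableDataBytes desc nulls last
```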
-## Querying for data volumes excluding known free data types
+### Data volume by computer
+You can analyze the amount of billable data collected from a virtual machine or a set of virtual machines. The **Usage** table doesn't have the granularity to show data volumes for specific virtual machines, so these queries use the [find operator](/azure/data-explorer/kusto/query/findoperator) to search all tables that include a computer name. The **Usage** type is omitted because this query is only for analytics of data trends.
+
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the preceding queries.
+
+**Billable data volume by computer for the last full day**
+
+```kusto
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize BillableDataBytes = sum(_BilledSize) by computerName
+| sort by BillableDataBytes desc nulls last
+```
+
+**Count of billable events by computer for the last full day**
+
+```kusto
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _IsBillable, Computer, Type
+| where _IsBillable == true and Type != "Usage"
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| summarize eventCount = count() by computerName
+| sort by eventCount desc nulls last
+```
+
+### Querying for data volumes excluding known free data types
The following query returns the monthly data volume in GB, excluding all data types that are free from data ingestion charges: ```kusto
W3CIISLog
- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. - See [Data collection transformations in Azure Monitor (preview)](../essentials/data-collection-transformations.md) for information on using transformations to reduce the amount of data you collected in a Log Analytics workspace by filtering unwanted records and columns.+
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
* Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account. >[!NOTE]
- >It's _not_ recommended nor required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the Azure NetApp Files AD admin account.
+ >When you modify the setting to enable AES on the AD connection admin account, it's a best practice to use a user account that has write permission to the AD object but isn't the Azure NetApp Files AD admin account. You can do so with another domain admin account or by delegating control to an account. For more information, see [Delegating Administration by Using OU Objects](/windows-server/identity/ad-ds/plan/delegating-administration-by-using-ou-objects).
If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
To pass inline parameters, provide the values in `parameters`. For example, to p
az deployment group create \ --resource-group testgroup \ --template-file <path-to-bicep> \
- --parameters exampleString='inline string' exampleArray='("value1", "value2")'
+ --parameters exampleString='inline string' exampleArray='["value1", "value2"]'
``` If you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, pass the array in the format: `exampleArray="['value1','value2']"`.
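For example, a sketch of the equivalent deployment from PowerShell, where the template file name `main.bicep` is a placeholder:

```azurecli
az deployment group create `
  --resource-group testgroup `
  --template-file main.bicep `
  --parameters exampleString='inline string' exampleArray="['value1','value2']"
```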
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.DesktopVirtualization | [Azure Virtual Desktop](../../virtual-desktop/index.yml) | | Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) | | Microsoft.DeviceUpdate | [Device Update for IoT Hub](../../iot-hub-device-update/index.yml)
-| Microsoft.DevOps | [Azure DevOps](/azure/devops/) |
| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) | | Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) | | Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) |
The resources providers that are marked with **- registered** are registered by
| Microsoft.TimeSeriesInsights | [Azure Time Series Insights](../../time-series-insights/index.yml) | | Microsoft.Token | Token | | Microsoft.VirtualMachineImages | [Azure Image Builder](../../virtual-machines/image-builder-overview.md) |
-| microsoft.visualstudio | [Azure DevOps](/azure/devops/) |
+| microsoft.visualstudio | [Azure DevOps](/azure/devops/) |
| Microsoft.VMware | [Azure VMware Solution](../../azure-vmware/index.yml) | | Microsoft.VMwareCloudSimple | [Azure VMware Solution by CloudSimple](../../vmware-cloudsimple/index.md) | | Microsoft.VSOnline | [Azure DevOps](/azure/devops/) |
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
In this article, you'll implement disaster recovery for on-premises VMware vSphe
> [!NOTE]
-> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.5.0.3.
+> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.7.0.3.
## Supported scenarios VMware SRM helps you plan, test, and run the recovery of VMs between a protected VMware vCenter Server site and a recovery VMware vCenter Server site. You can use VMware SRM with Azure VMware Solution with the following two DR scenarios:
Make sure you've explicitly provided the remote user the VMware VRM administrato
> [!NOTE] > The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.7.0.3.+ 1. From the **Disaster Recovery Solution** drop-down, select **VMware Site Recovery Manager (SRM) – vSphere Replication**. :::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with VMware Site Recovery Manager (SRM) - vSphere replication selected." border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png":::
VMware and Microsoft support teams will engage each other as needed to troublesh
- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769) +
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md
Last updated 04/29/2022-+ # Quickstart: Integrate an Azure Storage account with Azure CDN
communication-services Phone Number Management For Czech Republic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-czech-republic.md
+
+ Title: Phone Number Management for Czech Republic
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Czech Republic.
+++++ Last updated : 09/29/2023+++++
+# Phone number management for Czech Republic
+Use the below tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Czech Republic.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+|Alphanumeric Sender ID\*|General Availability |-|-|-|
+
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go|
+
+## Azure subscription billing locations where Czech Republic phone numbers are available
+| Country/Region |
+| :- |
+|Czech Republic|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Finland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-finland.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : | | Toll-Free | - | - | - | Public Preview\* |-
+|Alphanumeric Sender ID\**|General Availability |-|-|-|
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+ ## Subscription eligibility
More details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- | | Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go|
\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
communication-services Phone Number Management For Ireland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md
More details on eligible subscription types are as follows:
## Azure subscription billing locations where Ireland phone numbers are available | Country/Region | | :- |
-|Canada|
|Denmark| |Ireland| |Italy|
-|Puerto Rico|
|Sweden|
-|United Kingdom|
-|United States|
+ ## Find information about other countries/regions
communication-services Phone Number Management For Italy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-italy.md
More details on eligible subscription types are as follows:
| :- | |Canada| |Denmark|
+|France|
|Ireland| |Italy| |Puerto Rico|
communication-services Phone Number Management For Slovakia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovakia.md
Use the below tables to find all the relevant information on number availability
| :- | :- | :- | :- | : | | Toll-Free |- | - | General Availability | General Availability\* | | Local | - | - | General Availability | General Availability\* |-
+|Alphanumeric Sender ID\**|General Availability |-|-|-|
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
More details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- | | Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go|
## Azure subscription billing locations where Slovakia phone numbers are available
communication-services Phone Number Management For Slovenia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovenia.md
+
+ Title: Phone Number Management for Slovenia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Slovenia.
+++++ Last updated : 09/29/2023+++++
+# Phone number management for Slovenia
+Use the below tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Slovenia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+|Alphanumeric Sender ID\*|General Availability |-|-|-|
+
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go|
+
+## Azure subscription billing locations where Slovenia phone numbers are available
+| Country/Region |
+| :- |
+|Slovenia|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Previously updated : 03/20/2023 Last updated : 07/10/2023 + # SMS overview
Sending SMS to any recipient requires getting a phone number. Choosing the right
|Factors | Toll-Free| Short Code | Alphanumeric Sender ID| ||-||--|
-|**Description**|Toll free numbers are telephone numbers with distinct three-digit codes that can be used for business to consumer communication without any charge to the consumer| Short codes are 5-6 digit numbers used for business to consumer messaging such as alerts, notifications, and marketing | Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for a variety of use cases like one-time passcodes, marketing alerts, and flight status notifications. |
+|**Description**|Toll free numbers are telephone numbers with distinct three-digit codes that can be used for business to consumer communication without any charge to the consumer| Short codes are 5-6 digit numbers used for business to consumer messaging such as alerts, notifications, and marketing | Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for a variety of use cases like one-time passcodes, marketing alerts, and flight status notifications. There are two types of alphanumeric sender IDs: **Dynamic alphanumeric sender ID:** Supported in countries that do not require registration for use. Dynamic alphanumeric sender IDs can be instantly provisioned. **Pre-registered alphanumeric sender ID:** Supported in countries that require registration for use. Pre-registered alphanumeric sender IDs are typically provisioned in 4-5 weeks. |
|**Format**|+1 (8XX) XYZ PQRS| 12345 | CONTOSO* | |**SMS support**|Two-way SMS| Two-way SMS | One-way outbound SMS | |**Calling support**|Yes| No | No | |**Provisioning time**| 5-6 weeks| 6-8 weeks | Instant | |**Throughput** | 200 messages/min (can be increased upon request)| 6000 messages/ min (can be increased upon request) | 600 messages/ min (can be increased upon request)|
-|**Supported Destinations**| United States, Canada, Puerto Rico| United States | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia |
+|**Supported Destinations**| United States, Canada, Puerto Rico| United States | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia, Norway, Finland, Slovakia, Slovenia, Czech Republic|
|**Get started**|[Get a toll-free number](../../quickstarts/telephony/get-phone-number.md)|[Get a short code](../../quickstarts/sms/apply-for-short-code.md) | [Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) | \* See [Alphanumeric sender ID FAQ](./sms-faq.md#alphanumeric-sender-id) for detailed formatting requirements.
communication-services Add Multiple Senders Mgmt Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders-mgmt-sdks.md
In this quick start, you will learn how to add and remove sender addresses in Az
::: zone-end ::: zone pivot="programming-language-javascript" ::: zone-end ::: zone pivot="programming-language-java"
communication-services Enable Alphanumeric Sender Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/enable-alphanumeric-sender-id.md
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- [An active Communication Services resource.](../create-communication-resource.md)
+- [An eligible subscription address.](../../concepts/numbers/sub-eligibility-number-capability.md)
## Alphanumeric sender ID
-To enable alphanumeric sender ID, go to your Communication Services resource on the [Azure portal](https://portal.azure.com).
+Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for various use cases like one-time passcodes, marketing alerts, and flight status notifications.
+
+There are two types of alphanumeric sender IDs:
+- Dynamic alphanumeric sender ID: Supported in countries that don't require registration for use. Dynamic alphanumeric sender IDs can be instantly provisioned.
+- Preregistered alphanumeric sender ID: Supported in countries that require registration for use. Preregistered alphanumeric sender IDs are typically provisioned in 4-5 weeks.
+
+Refer to [SMS overview page](../../concepts/sms/concepts.md) for list of countries that are supported.
+
+## Enable dynamic alphanumeric sender ID
+To enable dynamic alphanumeric sender ID, go to your Communication Services resource on the [Azure portal](https://portal.azure.com).
:::image type="content" source="./media/enable-alphanumeric-sender-id/manage-phone-azure-portal-start-1.png"alt-text="Screenshot showing a Communication Services resource's main page.":::
-## Enable alphanumeric sender ID
-Navigate to the Alphanumeric Sender ID blade in the resource menu and click on "Enable Alphanumeric Sender ID" button to enable alphanumeric sender ID service. If the enable button is not available for your subscription and your [subscription address](../../concepts/numbers/sub-eligibility-number-capability.md) is supported for alphanumeric sender ID, [create a support ticket](https://aka.ms/ACS-Support).
+Navigate to the Alphanumeric Sender ID blade in the resource menu, select the dynamic tab, and then select the "Enable Alphanumeric Sender ID" button to enable the alphanumeric sender ID service. If the enable button isn't available for your subscription and your [subscription address](../../concepts/numbers/sub-eligibility-number-capability.md) is supported for alphanumeric sender ID, [create a support ticket](https://aka.ms/ACS-Support).
:::image type="content" source="./media/enable-alphanumeric-sender-id/enable-alphanumeric-sender-id.png"alt-text="Screenshot showing an Alphanumeric senderID blade.":::
+## Enable preregistered alphanumeric sender ID
+To enable preregistered alphanumeric sender ID, go to your Communication Services resource on the [Azure portal](https://portal.azure.com).
++
+Navigate to the Alphanumeric Sender ID blade in the resource menu, select the preregistered tab, and then select the "Submit an application" button to submit the registration form.
++ ## Next steps > [!div class="nextstepaction"]
communications-gateway Manage Enterprise Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise-operator-connect.md
The Operator Connect and Teams Phone Mobile programs don't allow you to use the
## Prerequisites
-Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
+Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
If you're assigning new numbers to an enterprise customer:
If you're assigning new numbers to an enterprise customer:
|Country | The country for the number. Only required if you're uploading a North American Toll-Free number, otherwise optional.| |Ticket number (optional) |The ID of any ticket or other request that you want to associate with this range of numbers. Up to 64 characters. |
-## 1. Go to your Communications Gateway resource
+## Go to your Communications Gateway resource
1. Sign in to the [Azure portal](https://azure.microsoft.com/). 1. In the search bar at the top of the page, search for your Communications Gateway resource. 1. Select your Communications Gateway resource.
-## 2. Select an enterprise customer to manage
+## Select an enterprise customer to manage
When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a **consent**. This consent represents the relationship between you and the enterprise.
The Number Management Portal allows you to update the status of these consents.
1. Find the enterprise that you want to manage. 1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason.
-## 3. Manage numbers for the enterprise
+## Manage numbers for the enterprise
Assigning numbers to an enterprise allows IT administrators at the enterprise to allocate those numbers to their users. 1. Go to the number management page for the enterprise.
- * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
+ * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
* Otherwise, select **Numbers** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID. 1. To add new numbers for an enterprise: 1. Select **Upload numbers**.
Assigning numbers to an enterprise allows IT administrators at the enterprise to
1. Select **Release numbers**. 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
-## 4. View civic addresses for an enterprise
+## View civic addresses for an enterprise
You can view civic addresses for an enterprise. The enterprise configures the details of each civic address, so you can't configure these details. 1. Go to the civic address page for the enterprise.
- * If you followed [2. Select an enterprise customer to manage](#2-select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
+ * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
* Otherwise, select **Civic addresses** in the sidebar and search for the enterprise using the enterprise's Azure Active Directory tenant ID. 1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address. 1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
Familiarize yourself with the Azure user roles relevant to Azure Communications
A list of all available defined Azure roles is available in [Azure built-in roles](../role-based-access-control/built-in-roles.md).
-## 1. Understand the user roles required for Azure Communications Gateway
+## Understand the user roles required for Azure Communications Gateway
Your staff might need different user roles, depending on the tasks they need to carry out.
Your staff might need different user roles, depending on the tasks they need to
| Deploying Azure Communications Gateway |**Contributor** access to your subscription| | Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level| |Monitoring logs and metrics | **Reader** access to your subscription|
-|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for the Project Synergy enterprise application and **Reader** access to the Azure portal for your subscription|
+|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for the Project Synergy enterprise application and **Reader** access to your subscription|
-## 2. Configure user roles
+## Configure user roles
You need to use the Azure portal to configure user roles.
-### 2.1 Prepare to assign a user role
+### Prepare to assign a user role
1. Read through [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) and ensure that you: - Know who needs access.
You need to use the Azure portal to configure user roles.
- Are signed in with a user account with a role that can change role assignments for the subscription, such as **Owner** or **User Access Administrator**. 1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user account that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
-### 2.2 Assign a user role
+### Assign a user role
-1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [1. Understand the user roles required for Azure Communications Gateway](#1-understand-the-user-roles-required-for-azure-communications-gateway).
+1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [Understand the user roles required for Azure Communications Gateway](#understand-the-user-roles-required-for-azure-communications-gateway).
1. If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for each user in the Project Synergy application. ## Next steps
confidential-computing Tdx Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/tdx-confidential-vm-overview.md
Title: Preview of DCesv5 & ECesv5 confidential VMs description: Learn about Azure DCesv5 and ECesv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements. -+
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
item.setSessionId("0000-11-0000-1111");
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item); ```
-##### [Javascript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Create a new item
PartitionKey partitionKey = new PartitionKeyBuilder()
Mono<CosmosItemResponse<UserSession>> createResponse = container.createItem(item, partitionKey); ```
-##### [Javascript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript const item: UserSession = {
PartitionKey partitionKey = new PartitionKeyBuilder()
// Perform a point read Mono<CosmosItemResponse<UserSession>> readResponse = container.readItem(id, partitionKey, UserSession.class); ```
-##### [Javascript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Store the unique identifier
pagedResponse.byPage().flatMap(fluxResponse -> {
return Flux.empty(); }).blockLast(); ```
-##### [Javascript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Define a single-partition query that specifies the full partition key path
pagedResponse.byPage().flatMap(fluxResponse -> {
}).blockLast(); ```
-##### [Javascript SDK v4](#tab/javascript-v4)
+##### [JavaScript SDK v4](#tab/javascript-v4)
```javascript // Define a targeted cross-partition query specifying prefix path[s]
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
To perform a vector search, use the `$search` aggregation pipeline stage in a Mo
``` To retrieve the similarity score (`searchScore`) along with the documents found by the vector search, use the `$project` operator to include `searchScore` and rename it as `<custom_name_for_similarity_score>` in the results. The document is then also projected as a nested object. Note that the similarity score is calculated using the metric defined in the vector index.
-### Query a vector index by using $search
+### Query vectors and vector distances (also known as similarity scores) using $search
Continuing with the last example, create another vector, `queryVector`. Vector search measures the distance between `queryVector` and the vectors in the `vectorContent` path of your documents. You can set the number of results that the search returns by setting the parameter `k`, which is set to `2` here.
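A minimal sketch of such a query follows; the collection name `exampleCollection` and the projected field names are illustrative:

```javascript
db.exampleCollection.aggregate([
  {
    "$search": {
      "cosmosSearch": {
        "vector": queryVector,   // the query vector created above
        "path": "vectorContent", // the indexed vector path in your documents
        "k": 2                   // number of nearest results to return
      },
      "returnStoredSource": true
    }
  },
  {
    // Rename searchScore and project the document as a nested object.
    "$project": {
      "similarityScore": { "$meta": "searchScore" },
      "document": "$$ROOT"
    }
  }
])
```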
cost-management-billing Ai Powered Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/ai-powered-cost-management.md
Title: Understand and optimize your cloud costs with AI-powered functionality in Cost Management
-description: This article helps you to understand and concepts about optimizing your cloud costs with AI-powered functionality in Cost Management.
+description: This article helps you to understand the concepts about optimizing your cloud costs with AI-powered functionality in Cost Management.
Previously updated : 05/22/2023 Last updated : 10/04/2023
-# Understand and optimize your cloud costs with AI-powered functionality in Cost Management
+# Understand and optimize costs with AI-powered Cost Management - Preview
-Today, we're pleased to announce the preview of Microsoft Cost Management's new AI-powered functionality. This interactive experience, available through the Azure portal, provides users with quick analysis, insights, and recommendations to help them better understand, analyze, manage, and forecast their cloud costs and bills.
+AI-powered Microsoft Cost Management is available in preview. This interactive experience, available through the Azure portal, provides users with quick analysis, insights, and recommendations to help them better understand, analyze, manage, and forecast their cloud costs and bills.
Whether you're part of a large organization, a budding developer, or a student you can use the new experience. With it you can gain greater control over your cloud spending and ensure that your investments are utilized in the most effective way possible.
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
Title: Understand Cost Management data
-description: This article helps you better understand data that's included in Cost Management and how frequently it's processed, collected, shown, and closed.
+description: This article helps you better understand data that's included in Cost Management. It also explains how frequently it's processed, collected, shown, and closed.
Previously updated : 12/06/2022 Last updated : 10/04/2023
_¹ For data before May 2014, visit the [Azure Enterprise portal](https://ea.azu
_² Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
-_³ Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See [Historical data may not match invoice](#historical-data-might-not-match-invoice) below._
+_³ Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See the following [Historical data may not match invoice](#historical-data-might-not-match-invoice) section._
_⁴ Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic CSP subscriptions are not supported._
The following tables show data that's included or isn't in Cost Management. All
| **Included** | **Not included** | | | |
-| Azure service usage (including deleted resources)⁵ | Unbilled services (e.g., free tier resources) |
+| Azure service usage (including deleted resources)⁵ | Unbilled services (for example, free tier resources) |
| Marketplace offering usage⁶ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | | Marketplace purchases⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | | Reservation purchases⁷ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
_⁷ Reservation purchases are only available for Enterprise Agreement (EA) and
_⁸ Only available for specific offers._
-Please note Cost Management data only includes the usage and purchases from services and resources that are actively running. Cost data is historical and will include resources, resource groups, and subscriptions that have been stopped, deleted, or cancelled and may not reflect the same resources, resource groups, and subscriptions you see in other tools, like Azure Resource Manager or Azure Resource Graph, which only show the current resources that are deployed in your subscriptions. Not all resources emit usage and therefore may not be represented in the cost data. Similarly, some resources are not tracked by Azure Resource Manager and may not be represented in subscription resources.
+Cost Management data only includes the usage and purchases from services and resources that are actively running. Cost data is historical and includes resources, resource groups, and subscriptions that have been stopped, deleted, or canceled. It may not reflect the same resources, resource groups, and subscriptions that you see in other tools, like Azure Resource Manager or Azure Resource Graph, which only show the current resources deployed in your subscriptions. Not all resources emit usage, so they may not be represented in the cost data. Similarly, Azure Resource Manager doesn't track some resources, so they may not be represented in subscription resources.
## How tags are used in cost and usage data
Cost Management receives tags as part of each usage record submitted by the indi
- Resource tags are only included in usage data while the tag is applied ΓÇô tags aren't applied to historical data. - Resource tags are only available in Cost Management after the data is refreshed. - Resource tags are only available in Cost Management when the resource is active/running and producing usage records. For example, when a VM is deallocated.-- Managing tags requires contributor access to each resource or the [tag contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) RBAC role.
+- Managing tags requires contributor access to each resource or the [tag contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) Azure RBAC role.
- Managing tag policies requires either owner or policy contributor access to a management group, subscription, or resource group. If you don't see a specific tag in Cost Management, consider the following questions:
If you don't see a specific tag in Cost Management, consider the following quest
Here are a few tips for working with tags: - Plan ahead and define a tagging strategy that allows you to break down costs by organization, application, environment, and so on.-- [Group and allocate costs using tag inheritance](enable-tag-inheritance.md) to apply resource group and subscription tags to child resource usage records. If you were using Azure policy to enforce tagging for cost reporting, consider enabling the tag inheritance setting for easier management and more flexibility.
+- [Group and allocate costs using tag inheritance](enable-tag-inheritance.md) to apply resource group and subscription tags to child resource usage records. If you're using Azure policy to enforce tagging for cost reporting, consider enabling the tag inheritance setting for easier management and more flexibility.
- Use the Tags API with either Query or UsageDetails to get all cost based on the current tags. ## Cost and usage data updates and retention
Cost and usage data is typically available in Cost Management within 8-24 hours.
The following examples illustrate how billing periods could end:
-* Enterprise Agreement (EA) subscriptions ΓÇô If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4.
+* Enterprise Agreement (EA) subscriptions ΓÇô If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4. There are uncommon circumstances where it may take longer than 72 hours to finalize a billing period.
* Pay-as-you-go subscriptions ΓÇô If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-After your billing period ends and your invoice is created, it can take up to 48 hours later for the usage data to get finalized. If the usage file isn't ready, you'll see a message on the Invoices page in the Azure portal stating `Your usage and charges file is not ready`. After the usage file is available, you can download it.
+Usage charges can continue to accrue and can change until the fifth day of the month after your current billing period ends, as Azure completes processing all data. If the usage file isn't ready, you see a message on the Invoices page in the Azure portal stating `Your usage and charges file is not ready`. After the usage file is available, you can download it.
-Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months are available from the portal. For historical data before 13 months, please use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
+Once cost and usage data becomes available in Cost Management, it's retained for at least seven years. Only the last 13 months are available from the portal. For historical data before 13 months, use [Exports](tutorial-export-acm-data.md) or the [Cost Details API](../automate/usage-details-best-practices.md#cost-details-api).
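For example, a sketch of a Cost Details report request, where the scope and dates are placeholders:

```rest
POST https://management.azure.com/{scope}/providers/Microsoft.CostManagement/generateCostDetailsReport?api-version=2022-05-01

{
  "metric": "ActualCost",
  "timePeriod": {
    "start": "2023-08-01",
    "end": "2023-08-31"
  }
}
```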
### Rerated data
data-factory Airflow Sync Github Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-sync-github-repository.md
Response Body:
Here are some API payload examples: -- Git sync properties for Github with PAT:
+- Git sync properties for GitHub with PAT:
```rest "gitSyncProperties": { "gitServiceType": "Github",
Here are some API payload examples:
"tenantId": <service principal tenant id> }``` -- Git sync properties for Github public repo:
+- Git sync properties for GitHub public repo:
```rest "gitSyncProperties": { "gitServiceType": "Github",
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
This Google Cloud Storage connector is supported for the following capabilities:
| Supported capabilities|IR | || --| |[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;| |[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;| |[Delete activity](delete-activity.md)|&#9312; &#9313;|
Assume that you have the following source folder structure and want to copy the
| | | | | bucket<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- Bucket: `bucket`<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `bucket/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset. |
+## Mapping data flow properties
+
+When you're transforming data in mapping data flows, you can read files from Google Cloud Storage in the following formats:
+
+- [Avro](format-avro.md#mapping-data-flow-properties)
+- [Delta](format-delta.md#mapping-data-flow-properties)
+- [CDM](format-common-data-model.md#mapping-data-flow-properties)
+- [Delimited text](format-delimited-text.md#mapping-data-flow-properties)
+- [Excel](format-excel.md#mapping-data-flow-properties)
+- [JSON](format-json.md#mapping-data-flow-properties)
+- [ORC](format-orc.md#mapping-data-flow-properties)
+- [Parquet](format-parquet.md#mapping-data-flow-properties)
+- [XML](format-xml.md#mapping-data-flow-properties)
+
+Format specific settings are located in the documentation for that format. For more information, see [Source transformation in mapping data flow](data-flow-source.md).
+
+### Source transformation
+
+In source transformation, you can read from a container, folder, or individual file in Google Cloud Storage. Use the **Source options** tab to manage how the files are read.
++
+**Wildcard paths:** Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.
+
+From your source container, choose a series of files that match a pattern. Only a container can be specified in the dataset. Your wildcard path must therefore also include your folder path from the root folder.
+
+Wildcard examples:
+
+- `*` Represents any set of characters.
+- `**` Represents recursive directory nesting.
+- `?` Replaces one character.
+- `[]` Matches one character from the set in the brackets.
+
+- `/data/sales/**/*.csv` Gets all .csv files under /data/sales.
+- `/data/sales/20??/**/` Gets all files recursively under year folders from 2000 through 2099.
+- `/data/sales/*/*/*.csv` Gets .csv files two levels under /data/sales.
+- `/data/sales/2004/*/12/[XY]1?.csv` Gets all .csv files in December 2004 whose names start with X or Y followed by a two-digit number beginning with 1.
+
+**Partition root path:** If you have partitioned folders in your file source with a `key=value` format (for example, `year=2019`), then you can assign the top level of that partition folder tree to a column name in your data flow's data stream.
+
+First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read.
++
+Use the **Partition root path** setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that the service will add the resolved partitions found in each of your folder levels.
++
+**List of files:** This is a file set. Create a text file that includes a list of relative path files to process. Point to this text file.
+
+**Column to store file name:** Store the name of the source file in a column in your data. Enter a new column name here to store the file name string.
+
+**After completion:** Choose to do nothing with the source file after the data flow runs, delete the source file, or move the source file. The paths for the move are relative.
+
+To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.
+
+If you have a source path with a wildcard, your syntax looks like this:
+
+`/data/sales/20??/**/*.csv`
+
+You can specify "from" as:
+
+`/data/sales`
+
+And you can specify "to" as:
+
+`/backup/priorSales`
+
+In this case, all files that were sourced under `/data/sales` are moved to `/backup/priorSales`.
+
+> [!NOTE]
+> File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations *do not* run in Data Flow debug mode.
+
+**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
+ ## Lookup activity properties To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
data-factory How To Diagnostic Logs And Metrics For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-diagnostic-logs-and-metrics-for-managed-airflow.md
+
+ Title: Diagnostic logs and metrics for Managed Airflow
+
+description: This article explains how to use diagnostic logs and metrics to monitor Airflow IR.
++++ Last updated : 09/28/2023++
+# Diagnostic logs and metrics for Managed Airflow
+
+This guide walks you through the following:
+
+1. How to enable diagnostic logs and metrics for Managed Airflow.
+
+2. How to view logs and metrics.
+
+3. How to run a query.
+
+4. How to monitor metrics and set up alerts on DAG failure.
+
+## How to enable diagnostic logs and metrics for Managed Airflow
+
+1. Open your Azure Data Factory resource -> Select **Diagnostic settings** on the left navigation pane -> Select “Add Diagnostic setting.”
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png" alt-text="Screenshot that shows where diagnostic logs tab is located in data factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/start-with-diagnostic-logs.png":::
+
+2. Fill out the Diagnostic settings name -> Select the following categories for the Airflow Logs
+
+ - Airflow task execution logs
+ - Airflow worker logs
+ - Airflow dag processing logs
+ - Airflow scheduler logs
+ - Airflow web logs
+ - If you select **AllMetrics**, various Data Factory metrics are made available for you to monitor or raise alerts on. These metrics include the metrics for Data Factory activity and Managed Airflow IR such as AirflowIntegrationRuntimeCpuUsage, AirflowIntegrationRuntimeMemory.
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png" alt-text="Screenshot that shows which logs to select for Airflow environment." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-category-and-all-metrics.png":::
+
+3. Select the destination details, Log Analytics workspace:
+
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png" alt-text="Screenshot that shows select log analytics workspace as destination for diagnostic logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/select-log-analytics-workspace.png":::
+
+4. Click on Save.
+
+## How to view logs
+
+1. After adding Diagnostic settings, you can find them listed in the "**Diagnostic settings**" section. To access and view logs, simply click on the Log Analytics workspace that you've configured.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png" alt-text="Screenshot that shows click on log analytics workspace url." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/01-click-on-log-analytics-workspace.png":::
+
+2. Click on **View Logs**, under the "Maximize your Log Analytics experience" section.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png" alt-text="Screenshot that shows click on view logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/02-view-logs.png":::
+
+3. You are directed to your log analytics workspace, where the chosen tables are imported into the workspace automatically.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png" alt-text="Screenshot that shows logs analytics workspace." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/03-log-analytics-workspace.png":::
+
+Other useful links for the schema:
+
+1. [Azure Monitor Logs reference - ADFAirflowSchedulerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/ADFAirflowSchedulerLogs)
+
+2. [Azure Monitor Logs reference - ADFAirflowTaskLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowtasklogs)
+
+3. [Azure Monitor Logs reference - ADFAirflowWebLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowweblogs)
+
+4. [Azure Monitor Logs reference - ADFAirflowWorkerLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/adfairflowworkerlogs)
+
+5. [Azure Monitor Logs reference - AirflowDagProcessingLogs | Microsoft Learn](/azure/azure-monitor/reference/tables/AirflowDagProcessingLogs)
+
+## How to write a query
+
+1. Let's start with the simplest query, which returns all the records in ADFAirflowTaskLogs.
+   You can double-click the table name to add it to the query window, or type the table name directly in the window, as in the following sketch.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/simple-query.png" alt-text="Screenshot that shows kusto query to retrieve all logs." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/simple-query.png":::
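A minimal sketch of that query; the `take` limit mentioned in the comment is optional and arbitrary:

```kusto
// Return every record in the table (append "| take 100" to sample a subset).
ADFAirflowTaskLogs
```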
+
+2. To narrow down your search results, such as filtering them based on a specific task ID, you can use the following query:
+
+```kusto
+ADFAirflowTaskLogs
+| where DagId == "<your_dag_id>"
+and TaskId == "<your_task_id>"
+```
+
+Similarly, you can create custom queries according to your needs using any tables available in LogManagement.
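For example, a minimal sketch that counts task-log entries per DAG per day, using only the columns already shown in the query above:

```kusto
ADFAirflowTaskLogs
| where TimeGenerated > ago(7d)
| summarize EntryCount = count() by DagId, bin(TimeGenerated, 1d)
| sort by EntryCount desc
```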
+
+For more information:
+
+1. [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial)
+
+2. [Kusto Query Language (KQL) overview - Azure Data Explorer | Microsoft Learn](/azure/data-explorer/kusto/query/)
+
+## How to monitor metrics
+
+Azure Data Factory offers comprehensive metrics for Airflow Integration Runtimes (IR), allowing you to effectively monitor the performance of your Airflow IR and establish alerting mechanisms as needed.
+
+1. Open your Azure Data Factory Resource.
+
+2. In the left navigation pane, select **Metrics** under the Monitoring section.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png" alt-text="Screenshot that shows where metrics tab is located in data factory." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/metrics-in-data-factory-studio.png":::
+
+3. Select the scope -> Metric Namespace -> Metric you want to monitor.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png" alt-text="Screenshot that shows metrics to select." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/monitor-metrics.png":::
+
+4. For example, we created a multi-line chart to visualize the Airflow Integration Runtime CPU Percentage and the Airflow Integration Runtime DAG Bag Size.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/multi-line-chart.png" alt-text="Screenshot that shows multiline chart of metrics." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/multi-line-chart.png":::
+
+5. You can set up an alert rule that triggers when specific conditions are met by your metrics; see the CLI sketch after this list.
+   For more information, see [Overview of Azure Monitor alerts - Azure Monitor | Microsoft Learn](/azure/azure-monitor/alerts/alerts-overview).
+
+6. Once your chart is complete, select **Save to Dashboard**; otherwise, your chart disappears.
+ :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png" alt-text="Screenshot that shows save to dashboard." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png":::
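+
+Following up on step 5, here's a hedged Azure CLI sketch that creates a metric alert. The resource IDs and names are placeholders, and the exact metric name (`AirflowIntegrationRuntimeCpuPercentage` is an assumption here) should be verified against the supported-metrics reference linked below:
+
+```bash
+# Alert when average Airflow IR CPU exceeds 80% over a 5-minute window.
+# <resource-group>, <subscription-id>, <factory-name>, and the metric name are placeholders.
+az monitor metrics alert create \
+  --name airflow-ir-cpu-alert \
+  --resource-group <resource-group> \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>" \
+  --condition "avg AirflowIntegrationRuntimeCpuPercentage > 80" \
+  --window-size 5m \
+  --evaluation-frequency 1m
+```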
+
+For more information, see [Supported metrics for Microsoft.DataFactory/factories](/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics).
data-manager-for-agri Faq Agriculture Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/faq-agriculture-data-services.md
+
+ Title: Microsoft Azure Data Manager for Agriculture FAQs
+description: Get answers to common questions about Azure Data Manager for Agriculture.
++ Last updated : 10/03/2023++++
+# Common questions about Azure Data Manager for Agriculture
+
+This article answers commonly asked questions about Azure Data Manager for Agriculture.
+
+## Service Updates
+
+### Can I move an Azure Data Manager for Agriculture resource from one subscription or resource group to another?
+
+Today, you can't move an Azure Data Manager for Agriculture resource from one subscription or resource group to another. Instead, delete the existing Azure Data Manager for Agriculture instance in the current resource group or subscription, create a new instance in the target resource group or subscription, and then reingest the data into the new instance.
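+
+For example, a hedged Azure CLI sketch of the delete step; the resource ID, and the `Microsoft.AgFoodPlatform/farmBeats` resource type assumed here, should be verified against your actual deployment:
+
+```bash
+# Delete the existing Azure Data Manager for Agriculture instance by resource ID.
+az resource delete \
+  --ids "/subscriptions/<subscription-id>/resourceGroups/<current-resource-group>/providers/Microsoft.AgFoodPlatform/farmBeats/<instance-name>"
+```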
databox-online Azure Stack Edge Gpu 2309 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md
Previously updated : 10/01/2023 Last updated : 10/02/2023
The 2309 release has the following new features and enhancements:
- Beginning this release, you can create VM images starting from an image from Azure Marketplace or an image in your Storage account. For more information, see [Create a VM image from Azure Marketplace or Azure storage account](azure-stack-edge-create-a-vm-from-azure-marketplace.md). - Several security, supportability, diagnostics, resiliency, and performance improvements were made in this release. - You can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).-- In this release, a precheck that verifies if the Azure Resource Manager certificate has expired, was added to the Azure Kubernetes Service (AKS) update.-- The `Set-HcsKubernetesAzureMonitorConfiguration` PowerShell cmdlet that enables the Azure Monitor is fixed in this release. Though the cmdlet is available to use, we recommend that you configure Azure Monitor for Azure Stack Edge via the Azure Arc portal. - Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
The 2309 release has the following new features and enhancements:
|**3.**|Virtual machines and virtual network |In the earlier releases, the primary network interface on the VM was not validated to have a reachable gateway IP. <br><br>In this release, this issue is fixed. The validation of the gateway IP helps identify any potential network configuration issues before the VM provision timeout occurs. | |**4.**|Virtual machines and virtual network |In the earlier releases, the MAC address allocation algorithm only considered the last two octets whereas the MAC address range actually spanned last three octets. This discrepancy led to allocation conflicts in certain corner cases, resulting in overlapping MAC addresses. <br><br>The MAC address allocation is revised in this release to fix the above issue.| |**5.**|Azure Kubernetes Service (AKS) |In previous releases, if there was a host power failure, the pod with SRIOV capable CNIs also failed with the following error: <br><br>`Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "<GUID>": plugin type="multus" name="multus-cni-network" failed (add): [core/core-upf-pp-0:n3-dpdk]: error adding container to network "n3-dpdk": failed to rename netvsc link: failed to rename link "001dd81c9dd4_VF" to "001dd81c9dd4": file exists.`<br><br>This failure of pod with SRIOV capable CNIs is fixed in this release. For AKS SRIOV CNI plugin, the driver name returned from ethtool is used to determine device is VF or netvsc. |
+|**6.**|Azure Monitor |The `Set-HcsKubernetesAzureMonitorConfiguration` PowerShell cmdlet that enables the Azure Monitor is fixed in this release. Though the cmdlet is available to use, we recommend that you configure Azure Monitor for Azure Stack Edge via the Azure Arc portal. |
+|**7.**|Update |In this release, a precheck that verifies if the Azure Resource Manager certificate has expired, was added to the Azure Kubernetes Service (AKS) update. |
-<!--## Known issues in this release
+## Known issues in this release
| No. | Feature | Issue | Workaround/comments | | | | | |
-|**1.**|Need known issues in 2303 |-->
+|**1.**|AKS Update |The AKS Kubernetes update may fail if one of the AKS VMs isn't running. This issue may be seen in a two-node cluster. |If the AKS update fails, [connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md). Check the state of the Kubernetes VMs by running the `Get-VM` cmdlet. If a VM is off, run the `Start-VM` cmdlet to restart it. Once the Kubernetes VM is running, reapply the update. |
+ ## Known issues from previous releases
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
FastPath Private endpoint/Private Link connectivity is supported for the followi
> [!NOTE] > * Enabling FastPath Private endpoint/Link support for limited GA scenarios may take upwards of 2 weeks to complete. Please plan your deployment(s) in advance. > * Connections associated to ExpressRoute partner circuits aren't eligible for this preview. Both IPv4 and IPv6 connectivity is supported.
+> * FastPath connectivity to a Private endpoint/Link service deployed to a spoke Virtual Network, peered to the Hub Virtual Network (where the ExpressRoute Virtual Network Gateway is deployed), is not supported. FastPath only supports connectivity to Private Endpoints/Link services deployed to the Hub Virtual Network.
> * Private Link pricing will not apply to traffic sent over ExpressRoute FastPath. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). > * FastPath supports a max of 100Gbps connectivity to a single Availability Zone (Az).
hdinsight Apache Hbase Migrate Hdinsight 5 1 New Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1-new-storage-account.md
+
+ Title: Migrate an HBase cluster to an HDInsight 5.1 and new storage account - Azure HDInsight
+description: Learn how to migrate an Apache HBase cluster in Azure HDInsight to an HDInsight 5.1 with a different Azure Storage account.
+++ Last updated : 06/30/2023++
+# Migrate Apache HBase to an HDInsight 5.1 and new storage account
+
+This article discusses how to update your Apache HBase cluster on Azure HDInsight to a newer version with a different Azure Storage account.
+
+This article applies only if you need to use different Storage accounts for your source and destination clusters. To upgrade versions with the same Storage account for your source and destination clusters, see [Migrate Apache HBase to a new version](./apache-hbase-migrate-hdinsight-5-1.md).
+
+The downtime while upgrading can be more than 20 minutes. The downtime comes from the steps to flush all in-memory data, wait for all procedures to complete, and configure and restart the services on the new cluster. Your results vary, depending on the number of nodes, amount of data, and other variables.
++
+## Review Apache HBase compatibility
+
+Before upgrading Apache HBase, ensure the HBase versions on the source and destination clusters are compatible. Review the HBase version compatibility matrix and release notes in the [HBase Reference Guide](https://hbase.apache.org/book.html#upgrading) to make sure your application is compatible with the new version.
+
+Here's an example compatibility matrix. **Y** indicates compatibility and **N** indicates a potential incompatibility:
+
+| Compatibility type | Major version| Minor version | Patch |
+| | | | |
+| Client-Server wire compatibility | N | Y | Y |
+| Server-Server compatibility | N | Y | Y |
+| File format compatibility | N | Y | Y |
+| Client API compatibility | N | Y | Y |
+| Client binary compatibility | N | N | Y |
+| **Server-side limited API compatibility** | | | |
+| Stable | N | Y | Y |
+| Evolving | N | N | Y |
+| Unstable | N | N | N |
+| Dependency compatibility | N | Y | Y |
+| Operational compatibility | N | N | Y |
+
+The HBase version release notes should describe any breaking incompatibilities. Test your application in a cluster running the target version of HDInsight and HBase.
+
+For more information about HDInsight versions and compatibility, see [Azure HDInsight versions](../hdinsight-component-versioning.md).
+
+## Apache HBase cluster migration overview
+
+To upgrade and migrate your Apache HBase cluster on Azure HDInsight to a new storage account, you complete the following basic steps. For detailed instructions, see the detailed steps and commands.
+
+Prepare the source cluster:
+1. Stop data ingestion.
+1. Check cluster health.
+1. Stop replication if needed.
+1. Flush `memstore` data.
+1. Stop HBase.
+1. For clusters with accelerated writes, back up the Write Ahead Log (WAL) directory.
+
+Prepare the destination cluster:
+1. Create the destination cluster.
+1. Stop HBase from Ambari.
+1. Clean Zookeeper data.
+1. Switch user to HBase.
+
+Complete the migration:
+1. Clean the destination file system, migrate the data, and remove `/hbase/hbase.id`.
+1. Clean and migrate the WAL.
+1. Start all services from the Ambari destination cluster.
+1. Verify HBase.
+1. Delete the source cluster.
+
+## Detailed migration steps and commands
+
+Use these detailed steps and commands to migrate your Apache HBase cluster with a new storage account.
+
+### Prepare the source cluster
+
+1. Stop ingestion to the source HBase cluster.
+
+1. Check HBase `hbck` to verify cluster health.
+
+    1. Verify the HBCK Report page on the HBase UI. A healthy cluster doesn't show any inconsistencies.
+
+    :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-hbck-report.png" alt-text="Screenshot showing how to verify HBCK report." lightbox="./media/apache-hbase-migrate-new-version/verify-hbck-report.png":::
+
+    1. If any inconsistencies exist, fix them using [hbase hbck2](/azure/hdinsight/hbase/how-to-use-hbck2-tool/).
+
+1. Note down the number of regions online at the source cluster, so that you can compare the number at the destination cluster after the migration.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/total-number-of-regions.png" alt-text="Screenshot showing count of number of regions." lightbox="./media/apache-hbase-migrate-new-version/total-number-of-regions.png":::
+
+1. If replication is enabled on the cluster, stop it, and reenable replication on the destination cluster after the migration. For more information, see the [HBase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/).
+
+1. Flush the source HBase cluster you're upgrading.
+
+ HBase writes incoming data to an in-memory store called a *`memstore`*. After the `memstore` reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the `memstore`s. To retain the data, manually flush each table's `memstore` to disk before upgrading.
+
+ You can flush the `memstore` data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+
+ You can also flush the `memstore` data by running the following HBase shell command from inside the HDInsight cluster:
+
+ ```bash
+ hbase shell
+ flush "<table-name>"
+ ```
+1. Wait for 15 minutes and verify that all the procedures are completed and the masterProcWal files don't have any pending procedures.
+
+ 1. Verify the Procedures page to confirm that there are no pending procedures.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-master-process.png" alt-text="Screenshot showing how to verify master process." lightbox="./media/apache-hbase-migrate-new-version/verify-master-process.png":::
+1. Stop HBase.
+
+    1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the source cluster with `https://<OLDCLUSTERNAME>.azurehdinsight.net`.
+    1. Turn on maintenance mode for HBase.
+    1. Stop the HBase masters first: stop the standby masters, and then stop the active HBase master last.
+
+       :::image type="content" source="./media/apache-hbase-migrate-new-version/stop-master-services.png" alt-text="Screenshot showing how to stop master services." lightbox="./media/apache-hbase-migrate-new-version/stop-master-services.png":::
+
+    1. Stop the HBase service, which stops the remaining servers.
+
+    > [!NOTE]
+    > HBase 2.4.11 doesn't support some of the old procedures.
+    >
+    > For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
+    >
+    > Stop HBase in the order described in the previous steps to avoid creating new master proc WALs.
+
+1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any source cluster Zookeeper node or worker node.
+
+ ```bash
+ hdfs dfs -mkdir /hbase-wal-backup
+ hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup
+ ```
+
+### Prepare the destination cluster
+
+1. In the Azure portal, [set up a new destination HDInsight cluster](../hdinsight-hadoop-provision-linux-clusters.md) that uses a different storage account than your source cluster.
+
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the new cluster at `https://<NEWCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. Clean the Zookeeper data on the destination cluster by running the following commands in any Zookeeper node or worker node:
+
+ ```bash
+ hbase zkcli
+ rmr /hbase-unsecure
+ quit
+ ```
+
+1. Switch the user to HBase by running `sudo su hbase`.
+
+### Clean and migrate the file system and WAL
+
+Run the following commands, depending on your source HDInsight version and whether the source and destination clusters have Accelerated Writes. The destination cluster is always HDInsight version 4.0, since HDInsight 3.6 is in Basic support and isn't recommended for new clusters.
+
+- [The source cluster is HDInsight 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdinsight-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdinsight-40-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdinsight-40-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+
+The `<container-endpoint-url>` for the storage account is `https://<storageaccount>.blob.core.windows.net/<container-name>`. Pass the SAS token for the storage account at the very end of the URL.
+
+- The `<container-fullpath>` for storage type WASB is `wasbs://<container-name>@<storageaccount>.blob.core.windows.net`
+- The `<container-fullpath>` for storage type Azure Data Lake Storage Gen2 is `abfs://<container-name>@<storageaccount>.dfs.core.windows.net`.
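+
+For example, with a hypothetical storage account named `mystorageacct` and a container named `hbase-data`, the values would look like this sketch (all names are placeholders):
+
+```bash
+# WASB container full path
+SRC_CONTAINER_FULLPATH="wasbs://hbase-data@mystorageacct.blob.core.windows.net"
+# Data Lake Storage Gen2 container full path
+DEST_CONTAINER_FULLPATH="abfs://hbase-data@mystorageacct.dfs.core.windows.net"
+# Container endpoint URL, with the SAS token appended at the end
+CONTAINER_ENDPOINT_URL="https://mystorageacct.blob.core.windows.net/hbase-data?<sas-token>"
+```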
+
+#### Copy commands
+
+The HDFS copy command is `hdfs dfs <copy properties starting with -D> -cp`
+
+Use `hadoop distcp` for better performance when copying files not in a page blob: `hadoop distcp <copy properties starting with -D>`
+
+To pass the key of the storage account, use:
+- `-Dfs.azure.account.key.<storageaccount>.blob.core.windows.net='<storage account key>'`
+- `-Dfs.azure.account.keyprovider.<storageaccount>.blob.core.windows.net=org.apache.hadoop.fs.azure.SimpleKeyProvider`
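+
+Putting these together, a full `hadoop distcp` invocation might look like the following sketch; the storage account name, key, and container are placeholders, and the destination `/` mirrors the copy commands later in this article:
+
+```bash
+# Copy the HBase data folder from the source container to the cluster's default file system.
+hadoop distcp \
+  -Dfs.azure.account.key.<storageaccount>.blob.core.windows.net='<storage-account-key>' \
+  -Dfs.azure.account.keyprovider.<storageaccount>.blob.core.windows.net=org.apache.hadoop.fs.azure.SimpleKeyProvider \
+  wasbs://<container-name>@<storageaccount>.blob.core.windows.net/hbase /
+```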
+
+You can also use [AzCopy](../../storage/common/storage-ref-azcopy.md) for better performance when copying HBase data files.
+
+1. Run the AzCopy command:
+
+ ```bash
+ azcopy cp "<source-container-endpoint-url>/hbase" "<target-container-endpoint-url>" --recursive
+ ```
+
+1. If the destination storage account is Azure Blob storage, do this step after the copy. If the destination storage account is Data Lake Storage Gen2, skip this step.
+
+   The Hadoop WASB driver uses special zero-sized blobs corresponding to every directory. AzCopy skips these files when doing the copy. Some WASB operations use these blobs, so you must create them in the destination cluster. To create the blobs, run the following Hadoop command from any node in the destination cluster:
+
+ ```bash
+ sudo -u hbase hadoop fs -chmod -R 0755 /hbase
+ ```
+
+You can download AzCopy from [Get started with AzCopy](../../storage/common/storage-use-azcopy-v10.md). For more information about using AzCopy, see [azcopy copy](../../storage/common/storage-ref-azcopy-copy.md).
++
+#### The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hadoop distcp <source-container-fullpath>/hbase /
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r hdfs://<destination-cluster>/hbasewal
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals hdfs://<destination-cluster>/hbasewal
+ ```
+
+#### The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hadoop distcp <source-container-fullpath>/hbase /
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase-wals/*
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals /
+ ```
+
+### Complete the migration
+
+1. On the destination cluster, save your changes and restart all required services as indicated by Ambari.
+
+1. Point your application to the destination cluster.
+
+ > [!NOTE]
+ > The static DNS name for your application changes when you upgrade. Rather than hard-coding this DNS name, you can configure a CNAME in your domain name's DNS settings that points to the cluster's name. Another option is to use a configuration file for your application that you can update without redeploying.
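+
+    For example, a hedged sketch of the CNAME approach using Azure DNS; the resource group, zone, and record names are placeholders:
+
+    ```bash
+    # Point a stable CNAME at the new cluster's DNS name.
+    az network dns record-set cname set-record \
+      --resource-group <dns-resource-group> \
+      --zone-name <your-domain.com> \
+      --record-set-name <app-record-name> \
+      --cname <destination-cluster>.azurehdinsight.net
+    ```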
+
+1. Start the ingestion.
+
+1. Verify HBase consistency and simple Data Definition Language (DDL) and Data Manipulation Language (DML) operations.
+
+1. If the destination cluster is satisfactory, delete the source cluster.
+
+## Troubleshooting
+
+### Use case 1
+If the HBase masters and region servers are up, but regions are stuck in transition, or only one region (the `hbase:meta` region) is assigned and the cluster is waiting for other regions to be assigned:
+
+**Solution:**
+
+1. SSH into any ZooKeeper node of the original cluster. If this is an ESP cluster, run `kinit -k -t /etc/security/keytabs/hbase.service.keytab hbase/<zk FQDN>`.
+1. Run `echo "scan 'hbase:meta'" | hbase shell > meta.out` to read the `hbase:meta` table into a file.
+1. Run `grep "info:sn" meta.out | awk '{print $4}' | sort | uniq` to get all the region server instance names where the regions were present in the old cluster. The output should look like `value=<wn FQDN>,16020,........`.
+1. Create a dummy WAL directory for each `wn` value.
+
+    If the cluster is an Accelerated Writes cluster:
+    ```bash
+    hdfs dfs -mkdir hdfs://mycluster/hbasewal/WALs/<wn FQDN>,16020,.........
+    ```
+    If the cluster isn't an Accelerated Writes cluster:
+    ```bash
+    hdfs dfs -mkdir /hbase-wals/WALs/<wn FQDN>,16020,.........
+    ```
+1. Restart the active `HMaster`.
+## Next steps
+
+To learn more about [Apache HBase](https://hbase.apache.org/) and upgrading HDInsight clusters, see the following articles:
+
+- [Upgrade an HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
+- [Monitor and manage Azure HDInsight using the Apache Ambari Web UI](../hdinsight-hadoop-manage-ambari.md)
+- [Azure HDInsight versions](../hdinsight-component-versioning.md)
+- [Optimize Apache HBase](../optimize-hbase-ambari.md)
hdinsight Apache Hbase Migrate Hdinsight 5 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1.md
+
+ Title: Migrate an HBase cluster to an HDInsight 5.1 - Azure HDInsight
+description: Learn how to migrate Apache HBase clusters in Azure HDInsight to an HDInsight 5.1.
+++ Last updated : 10/03/2023++
+# Migrate an Apache HBase cluster to an HDInsight 5.1
+
+This article discusses how to update your Apache HBase cluster on Azure HDInsight to a newer version.
+
+This article applies only if you use the same Azure Storage account for your source and destination clusters. To upgrade with a new or different Storage account for your destination cluster, see [Migrate Apache HBase to HDInsight 5.1 with a new Storage account](./apache-hbase-migrate-hdinsight-5-1-new-storage-account.md).
+
+The downtime while upgrading may be more than 20 minutes. The downtime comes from the steps to flush all in-memory data, wait for all procedures to complete, and configure and restart the services on the new cluster. Your results vary, depending on the number of nodes, amount of data, and other variables.
+
+## Review Apache HBase compatibility
+
+Before upgrading Apache HBase, ensure the HBase versions on the source and destination clusters are compatible. Review the HBase version compatibility matrix and release notes in the [HBase Reference Guide](https://hbase.apache.org/book.html#upgrading) to make sure your application is compatible with the new version.
+
+Here's an example compatibility matrix. **Y** indicates compatibility and **N** indicates a potential incompatibility:
+
+| Compatibility type | Major version| Minor version | Patch |
+| | | | |
+| Client-Server wire compatibility | N | Y | Y |
+| Server-Server compatibility | N | Y | Y |
+| File format compatibility | N | Y | Y |
+| Client API compatibility | N | Y | Y |
+| Client binary compatibility | N | N | Y |
+| **Server-side limited API compatibility** | | | |
+| Stable | N | Y | Y |
+| Evolving | N | N | Y |
+| Unstable | N | N | N |
+| Dependency compatibility | N | Y | Y |
+| Operational compatibility | N | N | Y |
+
+For more information about HDInsight versions and compatibility, see [Azure HDInsight versions](../hdinsight-component-versioning.md).
+
+## Apache HBase cluster migration overview
+
+To upgrade your Apache HBase cluster on Azure HDInsight, complete the following basic steps. For detailed instructions, see the detailed steps and commands, or use the scripts from the section [Migrate HBase using scripts](#migrate-hbase-using-scripts) for automated migration.
+
+Prepare the source cluster:
+1. Stop data ingestion.
+1. Check cluster health.
+1. Stop replication if needed.
+1. Flush `memstore` data.
+1. Stop HBase.
+1. For clusters with accelerated writes, back up the Write Ahead Log (WAL) directory.
+
+Prepare the destination cluster:
+1. Create the destination cluster.
+1. Stop HBase from Ambari.
+1. Update `fs.defaultFS` in HDFS service configs to refer to the original source cluster container.
+1. For clusters with accelerated writes, update `hbase.rootdir` in HBase service configs to refer to the original source cluster container.
+1. Clean Zookeeper data.
+
+Complete the migration:
+1. Clean and migrate the WAL.
+1. Copy apps from the destination cluster's default container to the original source container.
+1. Start all services from the Ambari destination cluster.
+1. Verify HBase.
+1. Delete the source cluster.
+
+## Detailed migration steps and commands
+
+Use these detailed steps and commands to migrate your Apache HBase cluster.
+
+### Prepare the source cluster
+
+1. Stop ingestion to the source HBase cluster.
+
+1. Check HBase `hbck` to verify cluster health.
+
+    1. Verify the HBCK Report page on the HBase UI. A healthy cluster doesn't show any inconsistencies.
+    :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-hbck-report.png" alt-text="Screenshot showing how to verify HBCK report." lightbox="./media/apache-hbase-migrate-new-version/verify-hbck-report.png":::
+    1. If any inconsistencies exist, fix them using [hbase hbck2](/azure/hdinsight/hbase/how-to-use-hbck2-tool/).
+
+1. Note down the number of regions online at the source cluster, so that you can compare the number at the destination cluster after the migration.
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/total-number-of-regions.png" alt-text="Screenshot showing total number of regions." lightbox="./media/apache-hbase-migrate-new-version/total-number-of-regions.png":::
+
+1. If replication is enabled on the cluster, stop it, and reenable replication on the destination cluster after the migration. For more information, see the [HBase replication guide](/azure/hdinsight/hbase/apache-hbase-replication/).
+
+1. Flush the source HBase cluster you're upgrading.
+
+   HBase writes incoming data to an in-memory store called a *`memstore`*. After the `memstore` reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the `memstore`s. To retain the data, manually flush each table's `memstore` to disk before upgrading.
+
+   You can flush the `memstore` data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [Azure hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+
+   You can also flush the `memstore` data by running the following HBase shell command from the HDInsight cluster:
+
+ ```bash
+ hbase shell
+ flush "<table-name>"
+ ```
+1. Wait for 15 minutes and verify that all the procedures are completed and the masterProcWal files don't have any pending procedures.
+
+ 1. Verify the Procedures page to confirm that there are no pending procedures.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-master-process.png" alt-text="Screenshot showing how to verify master process." lightbox="./media/apache-hbase-migrate-new-version/verify-master-process.png":::
+
+1. Stop HBase.
+
+    1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the source cluster with `https://<OLDCLUSTERNAME>.azurehdinsight.net`.
+    1. Turn on maintenance mode for HBase.
+    1. Stop the HBase masters first: stop the standby masters, and then stop the active HBase master last.
+
+       :::image type="content" source="./media/apache-hbase-migrate-new-version/stop-master-services.png" alt-text="Screenshot showing how to stop master services." lightbox="./media/apache-hbase-migrate-new-version/stop-master-services.png":::
+
+    1. Stop the HBase service, which stops the remaining servers.
+
+    > [!NOTE]
+    > HBase 2.4.11 doesn't support some of the old procedures.
+    >
+    > For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
+    >
+    > Stop HBase in the order described in the previous steps to avoid creating new master proc WALs.
+
+1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any of the Zookeeper nodes or worker nodes of the source cluster.
+
+ ```bash
+ hdfs dfs -mkdir /hbase-wal-backup
+ hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup
+ ```
+
+### Prepare the destination cluster
+
+1. In the Azure portal, [set up a new destination HDInsight cluster](../hdinsight-hadoop-provision-linux-clusters.md) using the same storage account as the source cluster, but with a different container name.
+
+
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the new cluster at `https://<NEWCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. Under **Services** > **HDFS** > **Configs** > **Advanced** > **Advanced core-site**, change the `fs.defaultFS` HDFS setting to point to the original source cluster container name. For example, the setting in the following screenshot should be changed to `wasbs://hbase-upgrade-old-2021-03-22`.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/hdfs-advanced-settings.png" alt-text="Screenshot shows in Ambari, select Services > HDFS > Configs > Advanced > Advanced core-site and change the container name." lightbox="./media/apache-hbase-migrate-new-version/hdfs-advanced-settings.png":::
+
+1. If your destination cluster has the Accelerated Writes feature, change the `hbase.rootdir` path to point to the original source cluster container name. For example, the following path should be changed to `hbase-upgrade-old-2021-03-22`. If your cluster doesn't have Accelerated Writes, skip this step.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/change-container-name-for-hbase-rootdir.png" alt-text="Screenshot shows in Ambari, change the container name for the HBase rootdir." border="true" lightbox="./media/apache-hbase-migrate-new-version/change-container-name-for-hbase-rootdir.png":::
+
+1. Clean the Zookeeper data on the destination cluster by running the following commands in any Zookeeper node or worker node:
+
+ ```bash
+ hbase zkcli
+ rmr /hbase-unsecure
+ quit
+ ```
+
+### Clean and migrate WAL
+
+Run the following commands, depending on your source HDInsight version and whether the source and destination clusters have Accelerated Writes.
+
+1. The destination cluster is always HDInsight version 4.0, since HDInsight 3.6 is in Basic support and isn't recommended for new clusters.
+1. The HDFS copy command is `hdfs dfs <copy properties starting with -D> -cp <source> <destination> # Serial execution`.
+
+> [!NOTE]
+> - The `<source-container-fullpath>` for storage type WASB is `wasbs://<source-container-name>@<storageaccountname>.blob.core.windows.net`.
+> - The `<source-container-fullpath>` for storage type Azure Data Lake Storage Gen2 is `abfs://<source-container-name>@<storageaccountname>.dfs.core.windows.net`.
+
+- [The source cluster is HDInsight 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdinsight-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdinsight-40-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdinsight-40-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+
+#### The source cluster is HDInsight 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the WAL directory from the source cluster into the destination cluster's HDFS. Copy the directory by running the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r hdfs://mycluster/hbasewal
+sudo -u hbase hdfs dfs -cp <source-container-fullpath>/hbase-wal-backup/hbasewal hdfs://mycluster/
+```
+
+#### The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the WAL directory from the source cluster into the destination cluster's HDFS. Copy the directory by running the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r hdfs://mycluster/hbasewal
+sudo -u hbase hdfs dfs -cp <source-container-fullpath>/hbase-wals/* hdfs://mycluster/hbasewal
+ ```
+#### The source cluster is HDInsight 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the source cluster WAL directory into the destination cluster's HDFS. To copy the directory, run the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r /hbase-wals/*
+sudo -u hbase hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals /
+```
+### Complete the migration
+
+1. Using the `sudo -u hdfs` user context, copy the folder `/hdp/apps/<new-version-name>` and its contents from the `<destination-container-fullpath>` to the `/hdp/apps` folder under `<source-container-fullpath>`. You can copy the folder by running the following commands on the destination cluster:
+
+ ```bash
+ sudo -u hdfs hdfs dfs -cp /hdp/apps/<hdi-version> <source-container-fullpath>/hdp/apps
+ ```
+
+ For example:
+ ```bash
+ sudo -u hdfs hdfs dfs -cp /hdp/apps/4.1.3.6 wasbs://hbase-upgrade-old-2021-03-22@hbaseupgrade.blob.core.windows.net/hdp/apps
+ ```
+
+1. On the destination cluster, save your changes, and restart all required services as Ambari indicates.
+
+1. Point your application to the destination cluster.
+
+ > [!NOTE]
+ > The static DNS name for your application changes when you upgrade. Rather than hard-coding this DNS name, you can configure a CNAME in your domain name's DNS settings that points to the cluster's name. Another option is to use a configuration file for your application that you can update without redeploying.
+
+1. Start the ingestion.
+
+1. Verify HBase consistency and simple Data Definition Language (DDL) and Data Manipulation Language (DML) operations.
+
+1. If the destination cluster is satisfactory, delete the source cluster.
+
+## Migrate HBase using scripts
+
+1. Note down the number of regions online at the source cluster, so that you can compare the number at the destination cluster after the migration.
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/total-number-of-regions.png" alt-text="Screenshot showing count of number of regions." lightbox="./media/apache-hbase-migrate-new-version/total-number-of-regions.png":::
+
+1. Flush the source HBase cluster you're upgrading.
+
+   HBase writes incoming data to an in-memory store called a *`memstore`*. After the `memstore` reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the `memstore`s. To retain the data, manually flush each table's `memstore` to disk before upgrading.
+
+   You can flush the `memstore` data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [Azure hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+
+   You can also flush the `memstore` data by running the following HBase shell command from the HDInsight cluster:
+
+ ```bash
+ hbase shell
+ flush "<table-name>"
+ ```
+1. Wait for 15 minutes and verify that all the procedures are completed and the masterProcWal files don't have any pending procedures.
+
+ 1. Verify the Procedures page to confirm that there are no pending procedures.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-master-process.png" alt-text="Screenshot showing how to verify master process." lightbox="./media/apache-hbase-migrate-new-version/verify-master-process.png":::
++
+1. Execute the script [migrate-to-HDI5.1-hbase-source.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/migrate-to-HDI5.1-hbase-source.sh) on the source cluster and [migrate-hbase-dest.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/migrate-hbase-dest.sh) on the destination cluster. Use the following instructions to execute these scripts.
+   > [!NOTE]
+   > These scripts don't copy the HBase old WALs as part of the migration, so don't use them on clusters that have the HBase Backup or Replication feature enabled.
+
+2. On the source cluster:
+ ```bash
+ sudo bash migrate-to-HDI5.1-hbase-source.sh
+ ```
+
+3. On the destination cluster:
+ ```bash
+ sudo bash migrate-hbase-dest.sh -f <src_default_Fs>
+ ```
+
+Mandatory argument for the above command:
+
+```
+ -f, --src-fs
+ The fs.defaultFS of the source cluster
+ For example:
+ -f wasb://anynamehbase0316encoder-2021-03-17t01-07-55-935z@anynamehbase0hdistorage.blob.core.windows.net
+```
++
+## Troubleshooting
+
+### Use case 1
+If the HBase masters and region servers are up, but regions are stuck in transition, or only one region (for example, the `hbase:meta` region) is assigned and the cluster is waiting for other regions to be assigned:
+
+**Solution:**
+
+1. SSH into any ZooKeeper node of the original cluster. If this is an ESP cluster, run `kinit -k -t /etc/security/keytabs/hbase.service.keytab hbase/<zk FQDN>`.
+1. Run `echo "scan 'hbase:meta'" | hbase shell > meta.out` to read the `hbase:meta` table into a file.
+1. Run `grep "info:sn" meta.out | awk '{print $4}' | sort | uniq` to get all the region server instance names where the regions were present in the old cluster. The output should look like `value=<wn FQDN>,16020,........`.
+1. Create a dummy WAL directory for each `wn` value.
+
+    If the cluster is an Accelerated Writes cluster:
+    ```bash
+    hdfs dfs -mkdir hdfs://mycluster/hbasewal/WALs/<wn FQDN>,16020,.........
+    ```
+    If the cluster isn't an Accelerated Writes cluster:
+    ```bash
+    hdfs dfs -mkdir /hbase-wals/WALs/<wn FQDN>,16020,.........
+    ```
+1. Restart the active `HMaster`.
+
+## Next steps
+
+To learn more about [Apache HBase](https://hbase.apache.org/) and upgrading HDInsight clusters, see the following articles:
+
+- [Upgrade an HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
+- [Monitor and manage Azure HDInsight using the Apache Ambari Web UI](../hdinsight-hadoop-manage-ambari.md)
+- [Azure HDInsight versions](../hdinsight-component-versioning.md)
+- [Optimize Apache HBase](../optimize-hbase-ambari.md)
healthcare-apis Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-faq.md
Azure API for FHIR will be retired on September 30, 2026.
## Are new deployments of Azure API for FHIR allowed?
-Due to the retirement of Azure API for FHIR after April 1, 2025 customers will not be able to create new deployments of Azure API of FHIR. Until April 1, 2025 new deployments are allowed.
+Due to the retirement of Azure API for FHIR after April 1, 2025 customers won't be able to create new deployments of Azure API of FHIR. Until April 1, 2025 new deployments are allowed.
## Why is Microsoft retiring Azure API for FHIR?
Azure API for FHIR is a service that was purpose built for protected health info
## What are the benefits of migrating to Azure Health Data Services FHIR service?
-AHDS FHIR service offers a rich set of capabilities such as:
+Azure Health Data Services FHIR service offers a rich set of capabilities such as:
- Consumption-based pricing model where customers pay only for used storage and throughput - Support for transaction bundles
AHDS FHIR service offers a rich set of capabilities such as:
SMART on FHIR proxy is retiring. Organizations need to transition to the SMART on FHIR (Enhanced), which uses Azure Health Data and AI OSS samples by **September 21, 2026**. After September 21, 2026, applications relying on SMART on FHIR proxy will report errors when accessing the FHIR service.
-SMART on FHIR (Enhanced) provides more capabilities than SMART on FHIR proxy, and meets requirements in the SMART on FHIR Implementation Guide (v 1.0.0) and §170.315(g)(10) Standardized API for patient and population services criterion.
+SMART on FHIR (Enhanced) provides more capabilities than SMART on FHIR proxy and meets requirements in the SMART on FHIR Implementation Guide (v 1.0.0) and §170.315(g)(10) Standardized API for patient and population services criterion.
## What will happen after the service is retired on September 30, 2026?
Check out these resources if you need further assistance:
- Get answers from community experts in [Microsoft Q&A](/answers/questions/1377356/retirement-announcement-azure-api-for-fhir). - If you have a support plan and require technical support, [contact us](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).++
internet-peering Howto Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-direct-portal.md
Title: Create or modify a Direct peering - Azure portal
-description: Create or modify a Direct peering using the Azure portal.
-+
+description: Learn how to create or modify a Direct peering using the Azure portal.
+ Previously updated : 01/23/2023-- Last updated : 10/04/2023 # Create or modify a Direct peering using the Azure portal
As an Internet Service Provider or Internet Exchange Provider, you can create a
3. For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
- >[!NOTE]
- >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
- 4. Name corresponds to the resource name and can be anything you choose. 5. Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
As an Internet Service Provider or Internet Exchange Provider, you can create a
## <a name="delete"></a>Deprovision a Direct peering [!INCLUDE [peering-direct-delete-portal](./includes/delete.md)]
-## Next steps
+## Related content
- [Create or modify Exchange peering by using the portal](howto-exchange-portal.md). - [Convert a legacy Exchange peering to an Azure resource by using the portal](howto-legacy-exchange-portal.md).
internet-peering Howto Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-portal.md
Previously updated : 10/03/2023 Last updated : 10/04/2023 # Create or modify an Exchange peering using the Azure portal
As an Internet Exchange Provider, you can create an exchange peering request by
## <a name="delete"></a>Deprovision an Exchange peering [!INCLUDE [peering-exchange-delete-portal](./includes/delete.md)]
-## Next steps
+## Related content
- [Create or modify a Direct peering by using the portal](howto-direct-portal.md). - [Convert a legacy Direct peering to an Azure resource by using the portal](howto-legacy-direct-portal.md).
internet-peering Howto Legacy Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-portal.md
Previously updated : 10/03/2023 Last updated : 10/04/2023 # Convert a legacy Direct peering to an Azure resource using the Azure portal
As an Internet Service Provider, you can convert legacy direct peering connectio
### <a name=get></a>Verify Direct peering [!INCLUDE [peering-direct-get-portal](./includes/direct-portal-get.md)]
-## Next steps
+## Related content
- [Create or modify a Direct peering by using the portal](howto-direct-portal.md). - [Internet peering frequently asked questions (FAQ)](faqs.md).
internet-peering Howto Legacy Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-portal.md
Previously updated : 10/03/2023 Last updated : 10/04/2023 # Convert a legacy Exchange peering to an Azure resource using the Azure portal
As an Internet Exchange Provider, you can create an exchange peering request by
### <a name=get></a>Verify Exchange peering [!INCLUDE [peering-exchange-get-portal](./includes/exchange-portal-get.md)]
-## Next steps
+## Related content
- [Create or modify an Exchange peering by using the portal](howto-exchange-portal.md). - [Internet peering frequently asked questions (FAQ)](faqs.md).
key-vault Access Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/access-behind-firewall.md
Depending on your configuration and environment, there are some variations.
## Ports
-All traffic to a key vault for all three functions (authentication, management, and data plane access) goes over HTTPS: port 443. However, there will occasionally be HTTP (port 80) traffic for CRL. Clients that support OCSP shouldn't reach CRL, but may occasionally reach [http://cdp1.public-trust.com/CRL/Omniroot2025.crl](http://cdp1.public-trust.com/CRL/Omniroot2025.crl).
+All traffic to a key vault for all three functions (authentication, management, and data plane access) goes over HTTPS: port 443. However, there will occasionally be HTTP (port 80) traffic for CRL. Clients that support OCSP shouldn't reach CRL, but may occasionally reach the CRL endpoints listed in [Azure Certificate Authority details](../../security/fundamentals/azure-ca-details.md#certificate-downloads-and-revocation-lists).
## Authentication
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and imp
:::image type="content" source="./media/load-balancer-outbound-connections/default-outbound-access.png" alt-text="Diagram of default outbound access.":::
->[!NOTE]
-> This method is **NOT recommended** for production workloads as it adds risk of exhausting ports. Please refrain from using this method for production workloads to avoid potential connection failures.
+In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as [default outbound access](../virtual-network/ip-services/default-outbound-access.md). This method of access is **not recommended** as it is insecure and the IP addresses are subject to change.
-Default outbound access is when An Azure resource is allocated a minimal number of ports for outbound. This access occurs when the resource meets any of the following conditions:
--- doesn't have a public IP associated to it.-- doesn't have a load balancer with outbound Rules in front of it.-- isn't part of Virtual Machine Scale Sets flexible orchestration mode.-- doesn't have a NAT gateway resource associated to its subnet. -
-Some other examples of default outbound access are:
--- Use of a basic SKU load balancer-- A virtual machine in Azure (without the associations mentioned above). In this case, outbound connectivity is provided by the default outbound access IP. This IP is a dynamic IP assigned by Azure that you can't control. Default SNAT isn't recommended for production workloads and can cause connectivity failures.-- A virtual machine in the backend pool of a load balancer without outbound rules. As a result, you use the frontend IP address of a load balancer for outbound and inbound and are more prone to connectivity failures from SNAT port exhaustion.
+>[!Important]
+>On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It's recommended to use one of the explicit forms of connectivity as shown in options 1-3 above.
### What are SNAT ports?
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
To configure a deployment:
```python model = Model(path="../model-1/model/sklearn_regression_model.pkl") env = Environment(
- conda_file="../model-1/environment/conda.yml",
+ conda_file="../model-1/environment/conda.yaml",
image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest", )
machine-learning How To Use Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-retrieval-augmented-generation.md
In your Azure Machine Learning workspace, you can enable prompt flow by turn-on
## Next steps
-[Get started with RAG using a prompt flow sample (preview)](how-to-use-pipelines-prompt-flow.md)
+[Use Azure Machine Learning pipelines with no code to construct RAG pipelines (preview)](how-to-use-pipelines-prompt-flow.md)
[How to create vector index in Azure Machine Learning prompt flow (preview).](how-to-create-vector-index.md)
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
You might need to select **Refresh** to see the new folder and script in your **
:::image type="content" source="media/tutorial-azure-ml-in-a-day/refresh.png" alt-text="Screenshot shows the refresh icon.":::
-### [Optional] Enable Intel® Extension for Scikit-Learn optimizations for more performance on Intel hardware
-
-Want to speed up your scikit-learn scripts on Intel hardware? Try enabling [Intel® Extension for Scikit-Learn](https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html) in your training script. Intel® Extension for Scikit-Learn is already installed in the Azure Machine Learning curated environment used in this tutorial, so no additional installation is needed.
-
-To learn more about Intel® Extension for Scikit-Learn, visit the package's [documentation](https://intel.github.io/scikit-learn-intelex/).
-
-If you want to use Intel® Extension for Scikit-Learn as part of the training script described above, you can enable the performance optimizations by adding the two lines of code to the top of the script file, as shown below.
--
-```python
-%%writefile {train_src_dir}/main.py
-import os
-import argparse
-
-# Import and enable Intel Extension for Scikit-learn optimizations
-# where possible
-from sklearnex import patch_sklearn
-patch_sklearn()
-
-import pandas as pd
-import mlflow
-import mlflow.sklearn
-from sklearn.ensemble import GradientBoostingClassifier
-from sklearn.metrics import classification_report
-from sklearn.model_selection import train_test_split
-
-def main():
- """Main function of the script."""
-
- # input and output arguments
- parser = argparse.ArgumentParser()
- parser.add_argument("--data", type=str, help="path to input data")
- parser.add_argument("--test_train_ratio", type=float, required=False, default=0.25)
- parser.add_argument("--n_estimators", required=False, default=100, type=int)
- parser.add_argument("--learning_rate", required=False, default=0.1, type=float)
- parser.add_argument("--registered_model_name", type=str, help="model name")
- args = parser.parse_args()
-
- # Start Logging
- mlflow.start_run()
-
- # enable autologging
- mlflow.sklearn.autolog()
-
- ###################
- #<prepare the data>
- ###################
- print(" ".join(f"{k}={v}" for k, v in vars(args).items()))
-
- print("input data:", args.data)
-
- credit_df = pd.read_csv(args.data, header=1, index_col=0)
-
- mlflow.log_metric("num_samples", credit_df.shape[0])
- mlflow.log_metric("num_features", credit_df.shape[1] - 1)
-
- train_df, test_df = train_test_split(
- credit_df,
- test_size=args.test_train_ratio,
- )
- ####################
- #</prepare the data>
- ####################
-
- ##################
- #<train the model>
- ##################
- # Extracting the label column
- y_train = train_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_train = train_df.values
-
- # Extracting the label column
- y_test = test_df.pop("default payment next month")
-
- # convert the dataframe values to array
- X_test = test_df.values
-
- print(f"Training with data of shape {X_train.shape}")
-
- clf = GradientBoostingClassifier(
- n_estimators=args.n_estimators, learning_rate=args.learning_rate
- )
- clf.fit(X_train, y_train)
-
- y_pred = clf.predict(X_test)
-
- print(classification_report(y_test, y_pred))
- ###################
- #</train the model>
- ###################
-
- ##########################
- #<save and register model>
- ##########################
- # Registering the model to the workspace
- print("Registering the model via MLFlow")
- mlflow.sklearn.log_model(
- sk_model=clf,
- registered_model_name=args.registered_model_name,
- artifact_path=args.registered_model_name,
- )
-
- # Saving the model to a file
- mlflow.sklearn.save_model(
- sk_model=clf,
- path=os.path.join(args.registered_model_name, "trained_model"),
- )
- ###########################
- #</save and register model>
- ###########################
-
- # Stop Logging
- mlflow.end_run()
-
-if __name__ == "__main__":
- main()
-```
-- ## Create a compute cluster, a scalable way to run a training job > [!NOTE]
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
- DECLARE @SID NVARCHAR(MAX) = N'';
+ DECLARE @SID NVARCHAR(MAX) = N'';
CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS; SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT' IF (ISNULL(@SID,'') != '')
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID with SQL login.
+ -- After the account is created in one of the member instances, copy the SID output from the script and include
+ -- this value when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
DECLARE @SID NVARCHAR(MAX) = N''; IF (@SID = N'') BEGIN
The following are sample scripts for creating a login and provisioning it with t
END ELSE BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- , SID = @SID
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = '+@SID
+ EXEC SP_EXECUTESQL @SQLString
END SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator' IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '+@SID
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
ELSE PRINT N'Login creation failed' GO
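+
+For illustration, a minimal sketch of how the copied SID might be supplied when running the script against another replica; the SID value below is a placeholder, not a real value:
+
+```sql
+-- On each remaining replica, paste the SID printed by the first run
+-- (placeholder shown; replace with the actual SID output)
+DECLARE @SID NVARCHAR(MAX) = N'0x59B662112A43D24585DFE2E464C89DA3';
+```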
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
- DECLARE @SID NVARCHAR(MAX) = N'';
+ DECLARE @SID NVARCHAR(MAX) = N'';
CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS; SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT' IF (ISNULL(@SID,'') != '')
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- After the account is created in one of the members, copy the SID output from the script and include this value
+ -- when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
DECLARE @SID NVARCHAR(MAX) = N''; IF (@SID = N'') BEGIN
The following are sample scripts for creating a login and provisioning it with t
END ELSE BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- , SID = @SID
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = '+@SID
+ EXEC SP_EXECUTESQL @SQLString
END
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR(35), sid, 2) FROM sys.syslogins where name = 'evaluator'
IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '+@SID
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
ELSE PRINT N'Login creation failed' GO
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
- DECLARE @SID NVARCHAR(MAX) = N'';
+ DECLARE @SID NVARCHAR(MAX) = N'';
CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS; SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT' IF (ISNULL(@SID,'') != '')
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- -- If a SID needs to be specified, add here
- DECLARE @SID NVARCHAR(MAX) = N'';
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- After the account is created in one of the members, copy the SID output from the script and include this value
+ -- when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
+ DECLARE @SID NVARCHAR(MAX) = N'';
IF (@SID = N'') BEGIN CREATE LOGIN [evaluator]
The following are sample scripts for creating a login and provisioning it with t
END ELSE BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- , SID = @SID
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = ' + @SID
+ EXEC SP_EXECUTESQL @SQLString
END
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR(100), sid, 2) FROM sys.syslogins where name = 'evaluator'
IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '+@SID
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
ELSE PRINT N'Login creation failed' GO
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
This section describes networking services in Azure that help protect your netwo
Azure DDoS Protection consists of two tiers: -- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection) Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network.-- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection) DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
+- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection), combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network.
+- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection) is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
:::image type="content" source="./media/networking-overview/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application.":::
networking Nva Accelerated Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md
# Accelerated connections and NVAs (Limited GA)
-This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on the virtual network interface (vNIC) with Accelerated Networking, networking performance is improved. This high-performance feature is available on Network Virtual Appliances (NVAs) deployed from Azure Marketplace and offers competitive performance in Connections Per Second (CPS) optimization, along with improvements to handling large amounts of simultaneous connections. To access this feature during limited GA, use the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+This article helps you understand the **Accelerated Connections** feature. When Accelerated Connections is enabled on a virtual network interface (vNIC) that has Accelerated Networking, networking efficiency improves significantly, resulting in enhanced overall performance. This high-performance feature offers industry-leading Connections Per Second (CPS) optimization, along with improvements in handling large numbers of simultaneous connections. The feature also increases the number of total active connections for network-intensive workloads. Accelerated Connections is configured at the network interface level, which gives you the flexibility to size performance per vNIC; this especially benefits smaller VM sizes. These benefits are available for Network Virtual Appliances (NVAs) with large numbers of connections. To access this feature during limited General Availability (limited GA), use the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
> [!IMPORTANT]
-> This feature is currently in limited general availability (GA), and customer sign-up is needed to use it.
+> This feature is currently in limited General Availability (GA) and customer sign-up is needed to use it.
>
-Accelerated Connections supports the workloads that can send large amounts of active connections simultaneously. It supports these connections bursts with negligible degradation to VM throughput, latency or connections per second performance. The data path for the network traffic is highly optimized to offload the Software-defined networks (SDN) policies evaluation. The goal is to eliminate any bottlenecks in the cloud implementation and networking performance.
+Accelerated Connections supports workloads that use large numbers of simultaneous active connections. It handles these connection bursts with negligible degradation to Virtual Machine (VM) throughput, latency, or Connections Per Second (CPS) performance. The data path for the network traffic is highly optimized to offload the Software-Defined Networking (SDN) policy evaluation. The goal is to eliminate any bottlenecks in the cloud implementation and networking performance.
-Accelerated Connections is implemented at the network interface level to allow maximum flexibility of network capacity. Multiple vNICs can be configured with this enhancement, the number depends on the supported VM family. Network Virtual Appliances (NVAs) on Azure Marketplace will be the first workloads to leverage this technology.
+Feature enablement is at the vNIC level and is independent of the VM size, making it available for VMs as small as four vCPUs. After you enable the feature, Connections Per Second (CPS) performance can improve by up to twenty-five times (25x), especially for high numbers of simultaneous active connections. This essentially lets you enhance an existing VM's network capabilities without resizing to a larger VM size.
-Network Virtual Appliances (NVAs) with most large scale solutions requiring v-firewall, v-switch, load balancers and other critical network features would experience higher CPS performance with Accelerated Connections.
+There are four performance tiers at the vNIC level, which gives you the flexibility to size the networking capability. Each tier provides a different level of networking capability. Instructions on how to select a performance tier based on VM size are provided after you sign up for the feature.
+
+Accelerated Connections is implemented at the network interface level to allow maximum flexibility of network capacity. Multiple vNICs can be configured with this enhancement; the number depends on the supported VM family. Network Virtual Appliances (NVAs) on Azure Marketplace are the first workloads to be offered this feature.
+
+Network Virtual Appliances (NVAs) running the largest-scale workloads, requiring virtual firewalls, virtual switches, load balancers, and other critical network features, will experience dramatically improved CPS performance with Accelerated Connections.
> [!NOTE] > During limited GA, this feature is only supported for NVAs available on the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=network%20virtual%20appliance&page=1&filters=virtual-machine-images%3Bpartners). >
-**Diagram 1**
+**Architecture diagram**
:::image type="content" source="./media/nva-accelerated-connections/accelerated-connections-diagram.png" alt-text="Diagram of the connection performance optimization feature."::: ### Benefits
-* Increased Connections Per Second (CPS)
-* Consistent active connections
-* Increased CPU capacity/stability for high traffic network optimized VM
-* Reduced jitter
-* Decreased CPU utilization
+* Industry-leading Connections Per Second (CPS)
+* Increased number of total connections
+* Consistent throughput across a very large number of active connections
+* Reduced jitter on connection creation
+* Cost savings from using fewer VMs and licenses while achieving industry-leading network connection performance
### Considerations and limitations * This feature is available only for NVAs deployed from Azure Marketplace during limited GA. * To enable this feature, you must sign up using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
-* During limited GA, this feature is only available on Resource Groups created after sign-up.
-* This feature can be enabled and is supported on new deployments using an Azure Resource Manager (ARM) template and preview instructions.
+* This feature can be enabled and is supported only on new deployments.
* Feature support may vary as per the NVAs available on Marketplace.
-* Detaching and attaching a network interface on a running VM isn't supported as other Azure features.
-* Marketplace portal isn't supported for the limited GA.
-* This feature is free during the limited GA, and chargeable after GA.
+* Detaching and attaching a network interface requires the VM to be stopped and deallocated first.
+* The Marketplace portal isn't supported during the limited GA. Other tools, such as templates, the CLI, Terraform, and other multi-cloud tools, are supported.
+* This feature is free during the limited GA, but chargeable after limited GA.
## Prerequisites
The following section lists the required prerequisites:
* [Accelerated networking](../virtual-network/accelerated-networking-overview.md) must be enabled on the traffic network interfaces of the NVA. * Custom tags must be added to the resources during deployment (instructions will be provided).
-* The data path tags should be added to the vNIC properties.
-* You've signed up for the limited GA using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
+* You must be registered for the limited GA by using the [sign-up form](https://go.microsoft.com/fwlink/?linkid=2223706).
## Supported regions
operator-nexus Quickstarts Kubernetes Cluster Deployment Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-powershell.md
+
+ Title: Create an Azure Nexus Kubernetes cluster by using Azure PowerShell
+description: Learn how to create an Azure Nexus Kubernetes cluster by using Azure PowerShell.
+++++ Last updated : 09/26/2023++
+# Quickstart: Create an Azure Nexus Kubernetes cluster by using Azure PowerShell
+
+ Deploy an Azure Nexus Kubernetes cluster using Azure PowerShell.
+
+This quickstart guide helps you get started with Nexus Kubernetes clusters. By following the steps in this guide, you can quickly and easily create a customized Nexus Kubernetes cluster that meets your specific needs, whether you're a beginner or an expert in Nexus networking.
+
+## Before you begin
++
+## Create an Azure Nexus Kubernetes cluster
+
+The following example creates a cluster named *myNexusK8sCluster* in resource group *myResourceGroup* in the *eastus* location.
+
+Before you run the commands, you need to set several variables to define the configuration for your cluster. Here are the variables you need to set, along with some default values you can use for certain variables:
+
+| Variable | Description |
+| -- | |
+| LOCATION | The Azure region where you want to create your cluster. |
+| RESOURCE_GROUP | The name of the Azure resource group where you want to create the cluster. |
+| SUBSCRIPTION_ID | The ID of your Azure subscription. |
+| CUSTOM_LOCATION | This argument specifies a custom location of the Nexus instance. |
+| CSN_ARM_ID | CSN ID is the unique identifier for the cloud services network you want to use. |
+| CNI_ARM_ID | CNI ID is the unique identifier for the network interface to be used by the container runtime. |
+| AAD_ADMIN_GROUP_OBJECT_ID | The object ID of the Azure Active Directory group that should have admin privileges on the cluster. |
+| CLUSTER_NAME | The name you want to give to your Nexus Kubernetes cluster. |
+| K8S_VERSION | The version of Kubernetes you want to use. |
+| ADMIN_USERNAME | The username for the cluster administrator. |
+| SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the cluster. |
+| CONTROL_PLANE_COUNT | The number of control plane nodes for the cluster. |
+| CONTROL_PLANE_VM_SIZE | The size of the virtual machine for the control plane nodes. |
+| INITIAL_AGENT_POOL_NAME | The name of the initial agent pool. |
+| INITIAL_AGENT_POOL_COUNT | The number of nodes in the initial agent pool. |
+| INITIAL_AGENT_POOL_VM_SIZE | The size of the virtual machine for the initial agent pool. |
+| MODE | The mode of the agent pool. Allowed values: System, User, or NotApplicable. |
+| AGENT_POOL_CONFIGURATION | The parameter specifies the agent pools created for running critical system services and workloads. |
+| POD_CIDR | The network range for the Kubernetes pods in the cluster, in CIDR notation. |
+| SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. |
+| DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. |
+
+Once you've defined these variables, you can run the Azure PowerShell command to create the cluster. Add the ```-Debug``` flag at the end to provide more detailed output for troubleshooting purposes.
+
+To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
+
+```azurepowershell-interactive
+# Azure parameters
+$RESOURCE_GROUP="myResourceGroup"
+$SUBSCRIPTION="<Azure subscription ID>"
+$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+$CUSTOM_LOCATION_TYPE="CustomLocation"
+$LOCATION="<ClusterAzureRegion>"
+
+# Network parameters
+$CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+$CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+$POD_CIDR="10.244.0.0/16"
+$SERVICE_CIDR="10.96.0.0/16"
+$DNS_SERVICE_IP="10.96.0.10"
+
+# AgentPoolConfiguration parameters
+$INITIAL_AGENT_POOL_COUNT="1"
+$MODE="System"
+$INITIAL_AGENT_POOL_NAME="agentpool1"
+$INITIAL_AGENT_POOL_VM_SIZE="NC_P10_56_v1"
+
+# NAKS Cluster Parameters
+$CLUSTER_NAME="myNexusK8sCluster"
+$SSH_PUBLIC_KEY = @{
+ KeyData = "$(cat ~/.ssh/id_rsa.pub)"
+}
+$K8S_VERSION="1.24.9"
+$AAD_ADMIN_GROUP_OBJECT_ID="3d4c8620-ac8c-4bd6-9a92-f2b75923ef9f"
+$ADMIN_USERNAME="azureuser"
+$CONTROL_PLANE_COUNT="1"
+$CONTROL_PLANE_VM_SIZE="NC_G6_28_v1"
+
+$AGENT_POOL_CONFIGURATION = New-AzNetworkCloudInitialAgentPoolConfigurationObject `
+-Count $INITIAL_AGENT_POOL_COUNT `
+-Mode $MODE `
+-Name $INITIAL_AGENT_POOL_NAME `
+-VmSkuName $INITIAL_AGENT_POOL_VM_SIZE
+```
+
+> [!IMPORTANT]
+> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands.
+
+After defining these variables, you can create the Kubernetes cluster by executing the following Azure PowerShell command:
+
+```azurepowershell-interactive
+New-AzNetworkCloudKubernetesCluster -KubernetesClusterName $CLUSTER_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-SubscriptionId $SUBSCRIPTION `
+-Location $LOCATION `
+-ExtendedLocationName $CUSTOM_LOCATION `
+-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
+-KubernetesVersion $K8S_VERSION `
+-AadConfigurationAdminGroupObjectId $AAD_ADMIN_GROUP_OBJECT_ID `
+-AdminUsername $ADMIN_USERNAME `
+-SshPublicKey $SSH_PUBLIC_KEY `
+-ControlPlaneNodeConfigurationCount $CONTROL_PLANE_COUNT `
+-ControlPlaneNodeConfigurationVMSkuName $CONTROL_PLANE_VM_SIZE `
+-InitialAgentPoolConfiguration $AGENT_POOL_CONFIGURATION `
+-NetworkConfigurationCloudServicesNetworkId $CSN_ARM_ID `
+-NetworkConfigurationCniNetworkId $CNI_ARM_ID `
+-NetworkConfigurationPodCidr $POD_CIDR `
+-NetworkConfigurationDnsServiceIP $DNS_SERVICE_IP `
+-NetworkConfigurationServiceCidr $SERVICE_CIDR
+```
+
+After a few minutes, the command completes and returns information about the cluster. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md).
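+
+To confirm the deployment afterward, a minimal sketch that checks the cluster's provisioning state; the parameter names assume the standard `Get-` pattern in the `Az.NetworkCloud` module:
+
+```azurepowershell-interactive
+# Retrieve the cluster and inspect its provisioning state (sketch)
+$cluster = Get-AzNetworkCloudKubernetesCluster -Name $CLUSTER_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-SubscriptionId $SUBSCRIPTION
+$cluster.ProvisioningState
+```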
+
+## Review deployed resources
++
+## Connect to the cluster
++
+## Add an agent pool
+
+The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ```New-AzNetworkCloudAgentPool``` create command. The following example creates an agent pool named ```myNexusK8sCluster-nodepool-2```:
+
+You can also use the default values for some of the variables, as shown in the following example:
+
+```azurepowershell-interactive
+$RESOURCE_GROUP="myResourceGroup"
+$SUBSCRIPTION="<Azure subscription ID>"
+$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+$CUSTOM_LOCATION_TYPE="CustomLocation"
+$LOCATION="<ClusterAzureRegion>"
+$CLUSTER_NAME="myNexusK8sCluster"
+$AGENT_POOL_NAME="myNexusK8sCluster-nodepool-2"
+$AGENT_POOL_VM_SIZE="NC_P10_56_v1"
+$AGENT_POOL_COUNT="1"
+$AGENT_POOL_MODE="User"
+```
+
+After defining these variables, you can add an agent pool by executing the following Azure PowerShell command:
+
+```azurepowershell-interactive
+New-AzNetworkCloudAgentPool -KubernetesClusterName $CLUSTER_NAME `
+-Name $AGENT_POOL_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-SubscriptionId $SUBSCRIPTION `
+-ExtendedLocationName $CUSTOM_LOCATION `
+-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
+-Location $LOCATION `
+-Count $AGENT_POOL_COUNT `
+-Mode $AGENT_POOL_MODE `
+-VMSkuName $AGENT_POOL_VM_SIZE
+```
+
+After a few minutes, the command completes and returns information about the agent pool. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md).
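+
+To verify the new pool, a minimal sketch, assuming the matching `Get-AzNetworkCloudAgentPool` cmdlet in the `Az.NetworkCloud` module:
+
+```azurepowershell-interactive
+# List the agent pool details, including its provisioning state (sketch)
+Get-AzNetworkCloudAgentPool -KubernetesClusterName $CLUSTER_NAME `
+-Name $AGENT_POOL_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-SubscriptionId $SUBSCRIPTION
+```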
++
+## Clean up resources
++
+## Next steps
+
orbital Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/get-started.md
Azure Orbital Ground Station can be used to communicate with a private satellite
## Learn about resources
-Azure Orbital Ground Station uses three different types of resources:
+Azure Orbital Ground Station uses three different types of Azure resources:
- [Spacecraft](spacecraft-object.md) - [Contact profile](concepts-contact-profile.md) - [Contact](concepts-contact.md)
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
You have the flexibility to choose between managed modem or virtual RF functiona
### Prerequisites - Managed modem: a modem configuration file-- Virtual RF: GNU radio or your own software radio
+- Virtual RF: GNU radio or software radio
## Managed modems vs virtual RF delivery
playwright-testing Concept Determine Optimal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/concept-determine-optimal-configuration.md
+
+ Title: Optimal test suite configuration
+description: Learn about the factors that affect test completion time in Microsoft Playwright Testing. Get practical steps to determine the optimal Playwright test project configuration.
+ Last updated : 10/04/2023+++
+# Determine the optimal test suite configuration
+
+Microsoft Playwright Testing Preview enables you to speed up your Playwright test execution by increasing parallelism at cloud scale. Several factors affect the completion time for your test suite. Determining the optimal configuration for reducing test suite completion time is application-specific and requires experimentation. This article explains the different levels to configure parallelism for your tests, the factors that influence test duration, and how to determine your optimal configuration to minimize test completion time.
+
+In Playwright, you can run tests in parallel by using worker processes. By using Microsoft Playwright Testing, you can further increase parallelism by using cloud-hosted browsers. In general, adding more parallelism reduces the time to complete your test suite. However, adding more worker processes doesn't always result in shorter test suite completion times. For example, the client machine computing resources, network latency, or test complexity might also affect test duration.
+
+The following chart gives an example of running a test suite. By running the test suite with Microsoft Playwright Testing instead of locally, you can significantly increase the parallelism and reduce the test completion time. Notice that, when running with the service, the completion time reaches a minimum limit, after which adding more workers only has a minimal effect. The chart also shows how using more computing resources on the client machine positively affects the test completion time for tests running with the service.
++
+## Worker processes
+
+In [Playwright](https://playwright.dev/docs/intro), all tests run in worker processes. These processes are OS processes, running independently, in parallel, orchestrated by the Playwright Test runner. All workers have identical environments and each process starts its own browser.
+
+Generally, increasing the number of parallel workers can reduce the time it takes to complete the full test suite. You can learn more about [Playwright Test parallelism](https://playwright.dev/docs/test-parallel) in the Playwright documentation.
+
+As previously shown in the chart, the test suite completion time doesn't continue to decrease as you add more worker processes. There are [other factors that influence the test suite duration](#factors-that-influence-completion-time).
+
+### Run tests locally
+
+By default, `@playwright/test` limits the number of workers to 1/2 of the number of CPU cores on your machine. You can override the number of workers for running your test.
+
+When you run tests locally, the achievable parallelism is effectively limited by the number of CPU cores on your machine. Beyond a certain point, adding more workers leads to resource contention, which slows down each worker and introduces test flakiness.
+
+To override the number of workers using the [`--workers` command line flag](https://playwright.dev/docs/test-cli#reference):
+
+```bash
+npx playwright test --workers=10
+```
+
+To specify the number of workers in `playwright.config.ts` using the `workers` setting:
+
+```typescript
+export default defineConfig({
+ ...
+ workers: 10,
+ ...
+});
+```
+
+### Run tests with the service
+
+When you use Microsoft Playwright Testing, you can increase the number of workers at cloud-scale to larger numbers. When you use the service, the worker processes continue to run locally, but the resource-intensive browser instances are now running remotely in the cloud.
+
+Because the worker processes still run on the client machine (developer workstation or CI agent machine), the client machine might still become a bottleneck for scalable execution as you add more workers. Learn how you can [determine the optimal configuration](#workflow-for-determining-your-optimal-configuration).
+
+You can specify the number of workers on the command line with the `--workers` flag:
+
+```bash
+npx playwright test --config=playwright.service.config.ts --workers=30
+```
+
+Alternatively, you can specify the number of workers in `playwright.service.config.ts` using the `workers` setting:
+
+```typescript
+export default defineConfig({
+ ...
+ workers: 30,
+ ...
+});
+```
+
+## Factors that influence completion time
+
+In addition to the number of parallel worker processes, there are several factors that influence the test suite completion time.
+
+| Factor | Effects on test duration |
+|-|-|
+| **Client machine compute resources** | The worker processes still run on the client machine (developer workstation or CI agent machine) and need to communicate with the remote browsers. Increasing the number of parallel workers might result in resource contention on the client machine, and slow down tests. |
+| **Complexity of the test code** | As the complexity of the test code increases, the time to complete the tests might also increase. |
+| **Latency between the client machine and the remote browsers** | The workers run on the client machine and communicate with the remote browsers. Depending on the Azure region where the browsers are hosted, the network latency might increase. Learn how you can [optimize regional latency in Microsoft Playwright Testing](./how-to-optimize-regional-latency.md). |
+| **Playwright configuration settings** | Playwright settings such as service timeouts, retries, or tracing can adversely affect the test completion time. Experiment with the optimal configuration for these settings when running your tests in the cloud. |
+| **Target application's load-handling capacity** | Running tests with Microsoft Playwright Testing enables you to run with higher parallelism, which results in a higher load on the target application. Verify that the application can handle the load that is generated by running your Playwright tests. |
+
+Learn more about the [workflow for determining the optimal configuration](#workflow-for-determining-your-optimal-configuration) for minimizing the test suite duration.
+
+## Workflow for determining your optimal configuration
+
+The optimal configuration for minimizing the test suite completion time is specific to your application and environment. To determine your optimal configuration, experiment with different levels of parallelization, client machine hardware configuration, or test suite setup.
+
+The following approach can help you find the optimal configuration for running your tests with Microsoft Playwright Testing:
+
+### 1. Determine your test completion time goal
+
+Determine what is an acceptable test suite completion time and associated cost per test run.
+
+Depending on the scenario, your requirements for test completion might be different. When you're running your end-to-end tests with every code change, as part of a continuous integration (CI) workflow, minimizing test completion time is essential. When you schedule your end-to-end tests in a (nightly) batch run, you might have requirements that are less demanding.
+
+### 2. Verify that your tests run correctly on the client machine
+
+Before you run your Playwright test suite with Microsoft Playwright Testing, make sure that your tests run correctly on your client machine. If you run your tests as part of a CI workflow, validate that your tests run correctly on the CI agent machine. Ensure that you run your tests with a minimum of two parallel workers to verify that your tests are properly configured for parallel execution. Learn more about [parallelism in Playwright](https://playwright.dev/docs/test-parallel).
+
+### 3. Run with cloud-hosted browsers on Microsoft Playwright Testing
+
+Once your tests run correctly, add the service configuration to run your tests on cloud-hosted browsers with the service. Validate that your tests continue to run correctly from your client machine (developer workstation or CI agent machine).
+
+Get started with the [Quickstart: run Playwright tests at scale with Microsoft Playwright Testing](./quickstart-run-end-to-end-tests.md).
+
+### 4. Verify the Azure region remote browsers
+
+Microsoft Playwright Testing can use remote browsers in the Azure region that's nearest to your client machine, or use the fixed region where your workspace was created.
+
+Learn how you can [optimize regional latency for your workspace](./how-to-optimize-regional-latency.md).
+
+### 5. Experiment with the number of parallel workers
+
+Experiment with the number of parallel workers to run your tests. Measure the test completion time and compare against the target goal you set previously.
+
+Notice at which point the test completion time no longer reduces as you add more workers. Move to the next step to further optimize your setup.
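+
+As a minimal sketch, you can compare wall-clock completion times with the standard Unix `time` utility while varying the worker count:
+
+```bash
+# Measure completion time at increasing worker counts (sketch)
+time npx playwright test --config=playwright.service.config.ts --workers=10
+time npx playwright test --config=playwright.service.config.ts --workers=20
+time npx playwright test --config=playwright.service.config.ts --workers=30
+```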
+
+> [!NOTE]
+> While the service is in preview, the number of [parallel workers per workspace is limited](./resource-limits-quotas-capacity.md) to 50. You can [request an increase of this limit for your workspace](https://aka.ms/mpt/feedback).
+
+### 6. Scale the client
+
+As you increase parallelism, the client machine might experience compute resource contention. Increase the computing resources on the client machine, for example by selecting [larger GitHub-hosted runners](https://docs.github.com/actions/using-github-hosted-runners/about-larger-runners).
+
+Alternatively, if you have hardware limitations, you can [shard](https://playwright.dev/docs/test-sharding) your client tests.
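+
+For illustration, a minimal sketch that combines Playwright's built-in `--shard` option with service workers; run the other shards on separate machines:
+
+```bash
+# Run shard 1 of 3 on this machine (sketch)
+npx playwright test --config=playwright.service.config.ts --shard=1/3 --workers=20
+```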
+
+Rerun your tests and experiment with the number of parallel workers.
+
+### 7. Update your Playwright test configuration settings
+
+Configure your Playwright test configuration settings, such as test [timeouts](https://playwright.dev/docs/test-timeouts), [trace](https://playwright.dev/docs/api/class-testoptions#test-options-trace) settings, or [retries](https://playwright.dev/docs/test-retries).
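+
+As a sketch (the values are illustrative, not recommendations), such a service configuration might look like:
+
+```typescript
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config';
+
+export default defineConfig(config, {
+  timeout: 60_000,           // per-test timeout in milliseconds
+  retries: 2,                // retry failed tests to absorb flakiness
+  use: {
+    trace: 'on-first-retry', // record traces only when a test is retried
+  },
+});
+```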
+
+## Related content
+
+- [Run your Playwright tests at scale with Microsoft Playwright Testing](./quickstart-run-end-to-end-tests.md)
+- [What is Microsoft Playwright Testing](./overview-what-is-microsoft-playwright-testing.md)
playwright-testing How To Configure Visual Comparisons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-configure-visual-comparisons.md
+
+ Title: Configure visual comparisons
+description: Learn how to configure visual comparisons with Microsoft Playwright Testing.
+ Last updated : 10/04/2023+++
+# Configure visual comparisons with Microsoft Playwright Testing Preview
+
+In this article, you learn how to properly configure Playwright's visual comparison tests when using Microsoft Playwright Testing Preview. Unexpected test failures may occur because Playwright's snapshots differ between local and remote browsers.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Background
+
+The Playwright Test runner uses the host OS as a part of the expected screenshot path. If you're running tests using remote browsers on a different OS than your host machine, the visual comparison tests fail. Our recommendation is to only run visual comparisons when using the service. If you're taking screenshots on the service, there's no need to compare them to your local setup since they don't match.
+
+## Configure ignoreSnapshots
+
+You can use the [`ignoreSnapshots` option](https://playwright.dev/docs/api/class-testconfig#test-config-ignore-snapshots) to only run visual comparisons when using Microsoft Playwright Testing.
+
+1. Set `ignoreSnapshots: true` in the original `playwright.config.ts` that doesn't use the service.
+1. Set `ignoreSnapshots: false` in `playwright.service.config.ts`.
+
+When you're using the service, its configuration overrides `playwright.config.ts`, and runs visual comparisons.
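+
+A minimal sketch of the service-side override, assuming the base `playwright.config.ts` sets `ignoreSnapshots: true`:
+
+```typescript
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config'; // base config sets ignoreSnapshots: true
+
+export default defineConfig(config, {
+  ignoreSnapshots: false, // re-enable visual comparisons for service runs
+});
+```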
+
+## Configure the snapshot path
+
+To configure snapshot paths for a particular project or the whole config, you can set the [`snapshotPathTemplate` option](https://playwright.dev/docs/api/class-testproject#test-project-snapshot-path-template).
+
+```js
+// This path is exactly like the default path, but replaces OS with hardcoded value that is used on the service (linux).
+config.snapshotPathTemplate = '{snapshotDir}/{testFileDir}/{testFileName}-snapshots/{arg}{-projectName}-linux{ext}'
+
+// This is an alternative path where you keep screenshots in a separate directory, one per service OS (linux in this case).
+config.snapshotPathTemplate = '{testDir}/__screenshots__/{testFilePath}/linux/{arg}{ext}';
+```
+
+## Example service config
+
+Example service config that runs visual comparisons and configures the path for `snapshotPathTemplate`:
+
+```typescript
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config';
+import dotenv from 'dotenv';
+
+dotenv.config();
+
+// Name the test run if it's not named yet.
+process.env.PLAYWRIGHT_SERVICE_RUN_ID = process.env.PLAYWRIGHT_SERVICE_RUN_ID || new Date().toISOString();
+
+// Can be 'linux' or 'windows'.
+const os = process.env.PLAYWRIGHT_SERVICE_OS || 'linux';
+
+export default defineConfig(config, {
+ workers: 20,
+
+ // Enable screenshot testing and configure directory with expectations.
+ ignoreSnapshots: false,
+ snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${os}/{arg}{ext}`,
+
+ use: {
+ // Specify the service endpoint.
+ connectOptions: {
+ wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
+ os,
+ runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
+ })}`,
+ timeout: 30000,
+ headers: {
+ 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
+ },
+ // Allow service to access the localhost.
+ exposeNetwork: '<loopback>'
+ }
+ }
+});
+```
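+
+For example, to take and compare snapshots against the service's Windows browsers instead, you can override the environment variable that this config reads:
+
+```bash
+# Run visual comparisons against Windows service browsers (sketch)
+PLAYWRIGHT_SERVICE_OS=windows npx playwright test --config=playwright.service.config.ts
+```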
+
+## Related content
+
+- Learn more about [Playwright Visual Comparisons](https://playwright.dev/docs/test-snapshots).
playwright-testing How To Manage Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-manage-access-tokens.md
+
+ Title: Manage workspace access tokens
+description: Learn how to create & manage access tokens to authenticate requests to Microsoft Playwright Testing Preview. Access tokens provide secure access to run tests on the service, and to the Microsoft Playwright Testing API.
+ Last updated : 10/04/2023+++
+# Manage workspace access tokens in Microsoft Playwright Testing Preview
+
+In this article, you learn how to manage workspace access tokens in Microsoft Playwright Testing Preview. You use access tokens to authenticate and authorize access to your workspace.
+
+Access tokens are associated with a user account and workspace. When you use an access token for running Playwright tests, the service checks your Azure role-based access control (Azure RBAC) role to verify if you're granted access to run tests on the service. Learn more about [workspace access in Microsoft Playwright Testing](./how-to-manage-workspace-access.md).
+
+You can create multiple access tokens per workspace, for example to distinguish between running tests interactively or as part of your continuous integration (CI) workflow. When you create an access token, the token has a limited lifespan.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Microsoft Playwright Testing workspace. To create a workspace, see [Quickstart: Run Playwright tests at scale](./quickstart-run-end-to-end-tests.md).
+- To create or delete access tokens, your Azure account needs to have the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role at the workspace level. Learn more about [managing access to a workspace](./how-to-manage-workspace-access.md).
+
+## Protect your access tokens
+
+Your workspace access tokens are similar to a password for your Microsoft Playwright Testing workspace. Always be careful to protect your access tokens. Avoid distributing access tokens to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others.
+
+Revoke and recreate your tokens if you believe they may have been compromised.
+
+## View all workspace access tokens
+
+You can view the list of access tokens for your workspace in the Playwright portal. For each token, the list displays the token name, status, and expiration date. You can't retrieve the access token value after it has been created.
+
+You can only view the list of access tokens for the workspaces you have access to.
+
+To view the list of workspace access tokens:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select your workspace.
+
+1. Select the settings icon on the home page to go to the workspace settings.
+
+1. Select the **Access tokens** page to view the list of access tokens.
+
+ The **Access tokens** page displays the list of access tokens for the workspace.
+
+ :::image type="content" source="./media/how-to-manage-access-tokens/playwright-testing-view-tokens.png" alt-text="Screenshot that shows the access tokens settings page in the Playwright portal." lightbox="./media/how-to-manage-access-tokens/playwright-testing-view-tokens.png":::
+
+## Generate a workspace access token
+
+Create an access token to authorize access to your Microsoft Playwright Testing workspace, and to run existing Playwright tests in your workspace. You can create multiple access tokens for your workspace. When you create an access token, you have to specify an expiration date for the token. After a token expires, you need to create a new access token.
+
+When you use an access token, the service checks the Azure RBAC role of the user that is associated with the access token to verify that the required permissions are granted. For example, if you have the Reader role, you can't run Playwright tests but you can view the test results. When there are role assignment changes, the service checks the permissions at the time you perform the action.
+
+To create a new workspace access token:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select your workspace.
+
+1. Select the settings icon on the home page to go to the workspace settings.
+
+1. On the **Access tokens** page, select **Generate new token**.
+
+ :::image type="content" source="./media/how-to-manage-access-tokens/playwright-testing-generate-new-access-token.png" alt-text="Screenshot that shows the access tokens settings page in the Playwright Testing portal, highlighting the 'Generate new token' button." lightbox="./media/how-to-manage-access-tokens/playwright-testing-generate-new-access-token.png":::
+
+1. Enter the access token details, and then select **Generate token**.
+
+ :::image type="content" source="./media/how-to-manage-access-tokens/playwright-testing-generate-token.png" alt-text="Screenshot that shows setup guide in the Playwright Testing portal, highlighting the 'Generate token' button." lightbox="./media/how-to-manage-access-tokens/playwright-testing-generate-token.png":::
+
+1. Copy the access token for the workspace.
+
+ You can save the access token in a CI/CD secrets store or use it in an environment variable for running tests interactively, as shown in the example after these steps.
+
+ :::image type="content" source="./media/how-to-manage-access-tokens/playwright-testing-copy-access-token.png" alt-text="Screenshot that shows how to copy the generated access token in the Playwright Testing portal." lightbox="./media/how-to-manage-access-tokens/playwright-testing-copy-access-token.png":::
+
+ > [!IMPORTANT]
+ > You can only access the token value immediately after you've created it. You can't access the token value anymore at a later time.
+
+> [!NOTE]
+> The number of access tokens per user and per workspace is limited. For more information, see the [Microsoft Playwright Testing service limits](./resource-limits-quotas-capacity.md).
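+
+For example, for interactive runs you might export the token as the environment variable that the sample service configuration reads; the token value is a placeholder:
+
+```bash
+# Store the token in the environment variable used by playwright.service.config.ts
+export PLAYWRIGHT_SERVICE_ACCESS_TOKEN=<your-access-token>
+npx playwright test --config=playwright.service.config.ts
+```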
+
+## Delete an access token
+
+You can only delete access tokens that you created in a workspace. To delete an access token:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select your workspace.
+
+1. Select the settings icon on the home page to go to the workspace settings.
+
+1. On the **Access tokens** page, select **Delete** next to the access token that you want to delete.
+
+ :::image type="content" source="./media/how-to-manage-access-tokens/playwright-testing-delete-token.png" alt-text="Screenshot that shows how to delete an access token in the Playwright portal." lightbox="./media/how-to-manage-access-tokens/playwright-testing-delete-token.png":::
+
+1. Select **Delete** on the deletion confirmation page.
+
+> [!CAUTION]
+> You can't undo the deletion of an access token. Any existing scripts that run tests with this token will fail after you delete it.
+
+## Related content
+
+- Learn more about [managing access to a workspace](./how-to-manage-workspace-access.md).
playwright-testing How To Manage Playwright Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-manage-playwright-workspace.md
+
+ Title: Manage workspaces
+description: Learn how to create and manage Microsoft Playwright Testing workspaces. Use the Playwright portal or Azure portal to manage workspaces.
+ Last updated : 10/04/2023+++
+# Manage workspaces in Microsoft Playwright Testing Preview
+
+In this article, you create, view, and delete Microsoft Playwright Testing Preview workspaces. You can access and manage a workspace in the Azure portal or in the Playwright portal.
+
+The following table lists the differences in functionality, based on how you access Microsoft Playwright Testing:
+
+| Functionality | Azure portal | Playwright portal | Learn more |
+|-|-|-|-|
+| Create a workspace | Yes | Yes | [Quickstart: run Playwright test in the cloud](./quickstart-run-end-to-end-tests.md) |
+| View the list of workspaces | Yes | Yes | [View all workspaces](#display-the-list-of-workspaces) |
+| View the workspace activity log | No | Yes | [Display activity log](#display-the-workspace-activity-log) |
+| Delete a workspace | Yes | Yes | [Delete a workspace](#delete-a-workspace) |
+| Configure region affinity | Yes | No | [Configure region affinity](./how-to-optimize-regional-latency.md) |
+| Grant or revoke access to a workspace | Yes | No | [Manage workspace access](./how-to-manage-workspace-access.md)|
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a workspace
+
+To get started with running your Playwright tests on cloud browsers, you first create a Microsoft Playwright Testing workspace. You can create a workspace in either the Azure portal or the Playwright portal.
+
+# [Playwright portal](#tab/playwright)
+
+When you create a workspace in the Playwright portal, the service creates a new resource group and a Microsoft Playwright Testing Azure resource in your Azure subscription. The name of the new resource group is based on the workspace name.
++
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select the menu button in the upper-left corner of the portal, and then select **Create a resource**.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal menu to create a new resource." lightbox="./media/how-to-manage-playwright-workspace/azure-portal-create-resource.png":::
+
+1. Enter *Microsoft Playwright Testing* in the search box.
+1. Select the **Microsoft Playwright Testing (Preview)** card, and then select **Create**.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/azure-portal-search-playwright-resource.png" alt-text="Screenshot that shows the Azure Marketplace search page with the Microsoft Playwright Testing search result." lightbox="./media/how-to-manage-playwright-workspace/azure-portal-search-playwright-resource.png":::
+
+1. Provide the following information to configure a new Microsoft Playwright Testing workspace:
+
+ |Field |Description |
+ |||
+ |**Subscription** | Select the Azure subscription that you want to use for this Microsoft Playwright Testing workspace. |
+ |**Resource group** | Select an existing resource group. Or select **Create new**, and then enter a unique name for the new resource group. |
+ |**Name** | Enter a unique name to identify your workspace.<BR>The name can only consist of alphanumeric characters and must be between 3 and 64 characters long. |
+ |**Location** | Select a geographic location to host your workspace. <BR>This location also determines where the test execution results and related artifacts are stored. |
+
+ > [!NOTE]
+ > Optionally, you can configure more details on the **Tags** tab. Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
+
+1. After you're finished configuring the resource, select **Review + Create**.
+
+1. Review all the configuration settings and select **Create** to start the deployment of the Microsoft Playwright Testing workspace.
+
+ When the process has finished, a deployment success message appears.
+
+1. To view the new workspace, select **Go to resource**.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/create-resource-deployment-complete.png" alt-text="Screenshot that shows the deployment completion information in the Azure portal." lightbox="./media/how-to-manage-playwright-workspace/create-resource-deployment-complete.png":::
+++
+## Display the list of workspaces
+
+To get the list of Playwright workspaces that you have access to:
+
+# [Playwright portal](#tab/playwright)
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select your current workspace at the top of the screen, and then select **Manage all workspaces**.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/playwright-portal-manage-all-workspaces.png" alt-text="Screenshot that shows the Playwright portal, highlighting the Manage all workspaces menu item." lightbox="./media/how-to-manage-playwright-workspace/playwright-portal-manage-all-workspaces.png":::
+
+1. On the **Workspaces** page, you can now see all the workspaces that you have access to.
+
+ The page shows a card for each of the workspaces in the currently selected Azure subscription. You can switch to another subscription by selecting a subscription from the list.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/playwright-portal-workspaces.png" alt-text="Screenshot that shows the list of all workspaces in the Playwright portal." lightbox="./media/how-to-manage-playwright-workspace/playwright-portal-workspaces.png":::
+
+ > [!TIP]
+ > Notice that the workspace card indicates if the workspace is included in a free trial.
+
+1. Select a workspace to view the workspace details and activity log.
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the top search field, enter **Microsoft Playwright Testing**.
+
+1. Select **Microsoft Playwright Testing Preview** from the **Services** section to view all your workspaces.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/azure-portal-search-playwright-workspaces.png" alt-text="Screenshot that shows the search box in the Azure portal, to search for Microsoft Playwright Testing resources." lightbox="./media/how-to-manage-playwright-workspace/azure-portal-search-playwright-workspaces.png":::
+
+1. Look through the list of workspaces found. You can filter based on subscription, resource groups, and locations.
+
+1. Select a workspace to display its details.
+
+ You can navigate to the workspace in the Playwright portal by selecting the dashboard URL in the workspace **Overview** page.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/azure-portal-workspace-details-dashboard-url.png" alt-text="Screenshot that shows the workspace Overview page in the Azure portal, highlighting the Playwright portal URL." lightbox="./media/how-to-manage-playwright-workspace/azure-portal-workspace-details-dashboard-url.png":::
+++
+## Display the workspace activity log
+
+You can view the list of test runs for the workspace in the Playwright portal. Microsoft Playwright Testing only stores test run metadata, and doesn't store the test code, test results, trace files, or other artifacts.
+
+For each test run, the workspace activity log lists the following details:
+
+- Total test duration of the test suite
+- Maximum number of parallel browsers
+- Total time across all parallel browsers. This is the time that you're billed for the test run.
+
+To view the list of test runs in the Playwright portal:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Optionally, switch to another workspace by selecting your current workspace at the top of the screen, and then selecting **Manage all workspaces**.
+
+1. On the workspace home page, you can view the workspace activity log.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/playwright-testing-activity-log.png" alt-text="Screenshot that shows the activity log for a workspace in the Playwright Testing portal." lightbox="./media/how-to-manage-playwright-workspace/playwright-testing-activity-log.png":::
+
+## Delete a workspace
+
+To delete a Playwright workspace:
+
+# [Playwright portal](#tab/playwright)
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select your current workspace at the top of the screen, and then select **Manage all workspaces**.
+
+1. On the **Workspaces** page, select the ellipsis (**...**) next to your workspace, and then select **Delete workspace**.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/playwright-portal-delete-workspace.png" alt-text="Screenshot that shows the Workspaces page in the Playwright portal, highlighting the Delete workspace menu item." lightbox="./media/how-to-manage-playwright-workspace/playwright-portal-delete-workspace.png":::
+
+1. On the **Delete Workspace** page, select **Delete** to confirm the deletion of the workspace.
+
+ > [!WARNING]
+ > Deleting a workspace is an irreversible action. The workspace and the activity log can't be recovered.
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Playwright workspace.
+
+1. Select **Delete** to delete the workspace.
+
+ :::image type="content" source="./media/how-to-manage-playwright-workspace/azure-portal-delete-workspace.png" alt-text="Screenshot that shows the delete workspace functionality in the Azure portal." lightbox="./media/how-to-manage-playwright-workspace/azure-portal-delete-workspace.png":::
+
+ > [!WARNING]
+ > Deleting a workspace is an irreversible action. The workspace and the activity log can't be recovered.
+++
+## Related content
+
+- [Optimize regional latency for a workspace](./how-to-optimize-regional-latency.md)
+
+- [Manage workspace access](./how-to-manage-workspace-access.md)
+
+- Get started with [running Playwright tests at scale](./quickstart-run-end-to-end-tests.md)
+- Learn more about the [Microsoft Playwright Testing resource limits](./resource-limits-quotas-capacity.md)
playwright-testing How To Manage Workspace Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-manage-workspace-access.md
+
+ Title: Manage workspace access
+description: Learn how to manage access to a Microsoft Playwright Testing workspace by using Azure role-based access control (Azure RBAC). Grant user permissions for a workspace by assigning roles.
+ Last updated : 10/04/2023+++
+# Manage access to a workspace in Microsoft Playwright Testing Preview
+
+In this article, you learn how to manage access to a workspace in Microsoft Playwright Testing Preview. The service uses [Azure role-based access control](/azure/role-based-access-control/overview) (Azure RBAC) to authorize access rights to your workspace. Role assignments are the way you control access to resources using Azure RBAC.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- To assign roles in Azure, your account needs the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role, the [Owner](/azure/role-based-access-control/built-in-roles#owner) role, or one of the [classic administrator roles](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles).
+
+ To verify your permissions in the Azure portal:
+
+ 1. In the [Azure portal](https://portal.azure.com), go to your Microsoft Playwright Testing workspace.
+ 1. On the left pane, select **Access Control (IAM)**, and then select **View my access**.
+
+## Default roles
+
+Microsoft Playwright Testing workspaces use three Azure built-in roles. To grant users access to a workspace, you assign them one of the following roles:
+
+| Role | Access level |
+| | |
+| **Reader** | - Read-only access to the workspace in the Playwright portal.<br/>- View test results for the workspace.<br/>- Can't [create or delete workspace access tokens](./how-to-manage-access-tokens.md).<br/>- Can't run Playwright tests on the service. |
+| **Contributor** | - Full access to manage the workspace in the Azure portal but can't assign roles in Azure RBAC.<br/>- Full access to the workspace in the Playwright portal.<br/>- [Create and delete their access tokens](./how-to-manage-access-tokens.md).<br/>- Run Playwright tests on the service. |
+| **Owner** | - Full access to manage the workspace in the Azure portal, including assigning roles in Azure RBAC.<br/>- Full access to the workspace in the Playwright portal.<br/>- [Create and delete their access tokens](./how-to-manage-access-tokens.md).<br/>- Run Playwright tests on the service. |
+
+> [!IMPORTANT]
+> Before you assign an Azure RBAC role, determine the scope of access that's needed. As a best practice, grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them. For more information about scope for Azure RBAC role assignments, see [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview).
+
+## Grant access to a user
+
+You can grant a user access to a Microsoft Playwright Testing workspace by using the Azure portal:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select the workspace settings icon, and then go to the **Users** page.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/playwright-testing-user-settings.png" alt-text="Screenshot that shows the Users page in the workspace settings in the Playwright Testing portal." lightbox="media/how-to-manage-workspace-access/playwright-testing-user-settings.png":::
+
+1. Select **Manage users for your workspace in the Azure portal** to go to your workspace in the Azure portal.
+
+ Alternately, you can go directly to the Azure portal and select your workspace:
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com/).
+ 1. Enter **Playwright Testing** in the search box, and then select **Playwright Testing** in the **Services** category.
+ 1. Select your Microsoft Playwright Testing workspace from the list.
+ 1. On the left pane, select **Access Control (IAM)**.
+
+1. On the **Access Control (IAM)** page, select **Add > Add role assignment**.
+
+ If you don't have permissions to assign roles, the **Add role assignment** option is disabled.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/add-role-assignment.png" alt-text="Screenshot that shows how to add a role assignment to your workspace in the Azure portal." lightbox="media/how-to-manage-workspace-access/add-role-assignment.png":::
+
+1. On the **Role** tab, select **Privileged administrator roles**.
+
+1. Select one of the Microsoft Playwright Testing [default roles](#default-roles), and then select **Next**.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/add-role-assignment-select-role.png" alt-text="Screenshot that shows the list of roles when adding a role assignment in the Azure portal." lightbox="media/how-to-manage-workspace-access/add-role-assignment-select-role.png":::
+
+1. On the **Members** tab, make sure **User, group, or service principal** is selected.
+
+1. Select **Select members**, and then find and select the users, groups, or service principals.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/add-role-assignment-select-members.png" alt-text="Screenshot that shows the member selection interface when adding a role assignment in the Azure portal." lightbox="media/how-to-manage-workspace-access/add-role-assignment-select-members.png":::
+
+1. Select **Review + assign** to assign the role.
+
+ For more information about how to assign roles, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
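+
+If you prefer scripting over the portal, a role assignment with the Azure CLI might look like the following sketch. The user name and the workspace resource ID scope are placeholder assumptions:
+
+```bash
+# Sketch: assign a built-in role on the workspace scope (placeholder values).
+az role assignment create \
+    --assignee "user@contoso.com" \
+    --role "Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzurePlaywrightService/accounts/<workspace-name>"
+```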
+
+## Revoke access for a user
+
+You can revoke a user's access to a Microsoft Playwright Testing workspace using the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Microsoft Playwright Testing workspace.
+
+1. On the left pane, select **Access Control (IAM)**, and then select **Role assignments**.
+
+1. In the list of role assignments, add a checkmark next to the user and role you want to remove, and then select **Remove**.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/remove-role-assignment.png" alt-text="Screenshot that shows the list of role assignments and how to delete an assignment in the Azure portal." lightbox="media/how-to-manage-workspace-access/remove-role-assignment.png":::
+
+1. Select **Yes** in the confirmation window to remove the role assignment.
+
+ For more information about how to remove role assignments, see [Remove Azure role assignments](/azure/role-based-access-control/role-assignments-remove).
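+
+Similarly, a sketch of revoking the same assignment with the Azure CLI, again with placeholder values:
+
+```bash
+# Sketch: remove the role assignment created earlier (placeholder values).
+az role assignment delete \
+    --assignee "user@contoso.com" \
+    --role "Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzurePlaywrightService/accounts/<workspace-name>"
+```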
+
+## (Optional) Use Azure AD security groups to manage workspace access
+
+Instead of granting or revoking access for individual users, you can manage access for groups of users by using Azure AD security groups. This approach has the following benefits:
+
+- Avoid the need to grant team or project leaders the Owner role on the workspace. Instead, grant them ownership of the security group, which lets them manage access to the workspace.
+- You can organize, manage, and revoke users' permissions on a workspace and other resources as a group, without having to manage permissions on a user-by-user basis.
+- Using Azure AD groups helps you to avoid reaching the [subscription limit](/azure/role-based-access-control/troubleshooting#limits) on role assignments.
+
+To use Azure AD security groups:
+
+1. [Create a security group](/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal).
+
+1. [Add a group owner](/azure/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners). This user has permissions to add or remove group members. The group owner isn't required to be a group member, or to have a direct RBAC role on the workspace.
+
+1. Assign the group an RBAC role on the workspace, such as Reader or Contributor.
+
+1. [Add group members](/azure/active-directory/fundamentals/active-directory-groups-members-azure-portal). The added members can now access the workspace.
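+
+As a sketch, the same flow with the Azure CLI might look like the following; the group name, member object ID, role, and workspace scope are placeholder assumptions:
+
+```bash
+# Sketch: create a security group, add a member, and assign a role to the group.
+az ad group create --display-name "playwright-testers" --mail-nickname "playwright-testers"
+az ad group member add --group "playwright-testers" --member-id "<user-object-id>"
+az role assignment create \
+    --assignee "$(az ad group show --group playwright-testers --query id -o tsv)" \
+    --role "Reader" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzurePlaywrightService/accounts/<workspace-name>"
+```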
+
+## Create a custom role for restricted tenants
+
+If you're using Azure Active Directory [tenant restrictions](/azure/active-directory/external-identities/tenant-restrictions-v2) and users with temporary access, you can create a custom role in Azure RBAC to manage permissions and grant access to run tests.
+
+Perform the following steps to manage permissions with a custom role:
+
+1. Follow these steps to [create an Azure custom role](/azure/role-based-access-control/custom-roles-portal).
+
+1. Select **Add permissions**, enter *Playwright* in the search box, and then select **Microsoft.AzurePlaywrightService**.
+
+1. Select the `microsoft.playwrightservice/accounts/write` permission, and then select **Add**.
+
+ :::image type="content" source="media/how-to-manage-workspace-access/custom-role-permissions.png" alt-text="Screenshot that shows the list of permissions for adding to the custom role in the Azure portal, highlighting the permission record to add." lightbox="media/how-to-manage-workspace-access/custom-role-permissions.png":::
+
+1. Follow these steps to [add a role assignment](/azure/role-based-access-control/role-assignments-portal) for the custom role to the user account.
+
+ The user can now continue to run tests in the workspace.
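+
+If you script your role setup, a custom role definition with the Azure CLI might look like the following sketch. The role name, description, and assignable scope are placeholder assumptions; the action string matches the permission selected in the preceding steps:
+
+```bash
+# Sketch: create a custom role with only the workspace write permission.
+az role definition create --role-definition '{
+    "Name": "Playwright Test Runner",
+    "Description": "Allows running tests in a Microsoft Playwright Testing workspace.",
+    "Actions": ["microsoft.playwrightservice/accounts/write"],
+    "AssignableScopes": ["/subscriptions/<subscription-id>"]
+}'
+```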
+
+## Troubleshooting
+
+Here are a few things to be aware of while you use Azure role-based access control (Azure RBAC):
+
+- When you create a resource in Azure, such as a workspace, you aren't automatically the owner of the resource. Your role is inherited from the highest-scope role that you're authorized for in that subscription. As an example, if you're a Contributor for the subscription, you have the permissions to create a Microsoft Playwright Testing workspace. However, you're assigned the Contributor role against that workspace, not the Owner role.
+
+- When there are two role assignments to the same Azure Active Directory user with conflicting sections of Actions/NotActions, your operations listed in NotActions from one role might not take effect if they're also listed as Actions in another role. To learn more about how Azure parses role assignments, read [How Azure RBAC determines if a user has access to a resource](/azure/role-based-access-control/overview#how-azure-rbac-determines-if-a-user-has-access-to-a-resource).
+
+- It can sometimes take up to 1 hour for your new role assignments to take effect over cached permissions.
+
+## Related content
+
+- Get started with [running Playwright tests at scale](./quickstart-run-end-to-end-tests.md)
+
+- Learn how to [manage Playwright Testing workspaces](./how-to-manage-playwright-workspace.md)
playwright-testing How To Optimize Regional Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-optimize-regional-latency.md
+
+ Title: Optimize regional latency
+description: Learn how to optimize regional latency for a Microsoft Playwright Testing Preview workspace. Choose to run tests on remote browsers in an Azure region nearest to you, or in a fixed region.
+ Last updated : 10/04/2023+++
+# Optimize regional latency for a workspace in Microsoft Playwright Testing Preview
+
+Learn how to minimize the network latency between the client machine and the remote browsers for a Microsoft Playwright Testing Preview workspace.
+
+Microsoft Playwright Testing lets you run your Playwright tests on hosted browsers in the Azure region that's nearest to your client machine. The service collects the test results in the Azure region of the remote browsers, and then transfers the results to the workspace region.
+
+By default, when you create a new workspace, the service runs tests in an Azure region closest to the client machine. When you disable this setting on the workspace, the service uses remote browsers in the Azure region of the workspace.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Configure regional settings for a workspace
+
+You can configure the regional settings for your workspace in the Azure portal.
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select the workspace settings icon, and then go to the **General** page to view the workspace settings.
+
+ :::image type="content" source="./media/how-to-optimize-regional-latency/playwright-testing-general-settings.png" alt-text="Screenshot that shows the workspace settings page in the Playwright Testing portal." lightbox="./media/how-to-optimize-regional-latency/playwright-testing-general-settings.png":::
+
+1. Select **Select region** to go to your workspace in the Azure portal.
+
+ Alternately, you can go directly to the Azure portal and select your workspace:
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com/).
+ 1. Enter **Playwright Testing** in the search box, and then select **Playwright Testing** in the **Services** category.
+ 1. Select your Microsoft Playwright Testing workspace from the list.
+
+1. In your workspace, select **Region Management** in the left pane.
+
+ :::image type="content" source="./media/how-to-optimize-regional-latency/configure-workspace-region-management.png" alt-text="Screenshot that shows the Region Management page in the Azure portal." lightbox="./media/how-to-optimize-regional-latency/configure-workspace-region-management.png":::
+
+1. Choose whether to run tests in an Azure region that's closest to the client machine, or in the fixed Azure region of the workspace.
+
+ By default, the service uses remote browsers in the Azure region that's closest to the client machine to minimize latency.
+
+## Related content
+
+- Learn more about how to [determine the optimal configuration for optimizing test suite completion](./concept-determine-optimal-configuration.md).
+
+- [Manage a Microsoft Playwright Testing workspace](./how-to-manage-playwright-workspace.md)
+
+- [Understand how Microsoft Playwright Testing works](./overview-what-is-microsoft-playwright-testing.md#how-it-works)
playwright-testing How To Test Local Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-test-local-applications.md
+
+ Title: Use remote browsers for local applications
+description: Learn how to run end-to-end for locally deployed applications with Microsoft Playwright Testing Preview. Use cloud-hosted browsers to test apps on localhost or private networks.
+ Last updated : 10/04/2023+++
+# Use cloud-hosted browsers for locally deployed apps with Microsoft Playwright Testing Preview
+
+Learn how to use Microsoft Playwright Testing Preview to run end-to-end tests for locally deployed applications. Microsoft Playwright Testing uses cloud-hosted, remote browsers for running Playwright tests at scale. You can use the service to run tests for apps on localhost, or for apps that you host on your own infrastructure.
+
+Playwright enables you to expose networks that are available on the client machine to remote browsers. When you expose a network, you can connect to local resources from your Playwright test code without having to configure additional firewall settings.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Configure Playwright to expose local networks
+
+To expose local networks and resources to remote browsers, you can use the `exposeNetwork` option in Playwright. Learn more about the [`exposeNetwork` option](https://playwright.dev/docs/next/api/class-browsertype#browser-type-connect-option-expose-network) in the Playwright documentation.
+
+You can specify one or multiple networks by using a list of rules. For example, to expose test/staging deployments and [localhost](https://en.wikipedia.org/wiki/Localhost): `*.test.internal-domain,*.staging.internal-domain,<loopback>`.
+
+You can configure the `exposeNetwork` option in `playwright.service.config.ts`. The following example shows how to expose the `localhost` network by using the [`<loopback>`](https://en.wikipedia.org/wiki/Loopback) rule:
+
+```typescript
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config';
+import dotenv from 'dotenv';
+
+dotenv.config();
+
+export default defineConfig(config, {
+ workers: 20,
+ use: {
+ // Specify the service endpoint.
+ connectOptions: {
+ wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
+ // Can be 'linux' or 'windows'.
+ os: process.env.PLAYWRIGHT_SERVICE_OS || 'linux',
+ runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
+ })}`,
+ timeout: 30000,
+ headers: {
+ 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
+ },
+ // Allow service to access the localhost.
+ exposeNetwork: '<loopback>'
+ }
+ }
+});
+```
+
+You can now reference `localhost` in the Playwright test code, and run the tests on cloud-hosted browsers with Microsoft Playwright Testing:
+
+```bash
+npx playwright test --config=playwright.service.config.ts --workers=20
+```
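+
+For illustration, a test can then target the local app directly. This sketch assumes a dev server on port 3000 and a page title to assert on; both are placeholders to adapt to your app:
+
+```typescript
+import { test, expect } from '@playwright/test';
+
+test('local app renders', async ({ page }) => {
+  // The remote browser reaches localhost through the exposed <loopback> network.
+  await page.goto('http://localhost:3000'); // assumed local dev server port
+  await expect(page).toHaveTitle(/My App/); // assumed page title pattern
+});
+```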
+
+## Related content
+
+- [Run Playwright tests at scale with Microsoft Playwright Testing](./quickstart-run-end-to-end-tests.md)
+- Learn more about [writing Playwright tests](https://playwright.dev/docs/intro) in the Playwright documentation
playwright-testing How To Try Playwright Testing Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/how-to-try-playwright-testing-free.md
+
+ Title: Microsoft Playwright Testing free trial
+description: Learn how to get started for free with Microsoft Playwright Testing Preview free trial.
+ Last updated : 10/04/2023+++
+# Try Microsoft Playwright Testing Preview for free
+
+Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With the free trial, you can try Microsoft Playwright Testing for free for 30 days, with up to 100 test minutes. In this article, you learn about the limits of the free trial, how to get started, and how to track your free trial usage.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner) role, the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role, or one of the [classic administrator roles](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles).
+
+## Limits to free trial
+
+The following table lists the limits for the Microsoft Playwright Testing free trial.
+
+| Resource | Limit |
+|-|-|
+| Duration of trial | 30 days |
+| Total test minutes¹ | 100 minutes |
+| Number of workspaces¹²³ | 1 |
+
+¹ If you run a test that exceeds the free trial test minute limit, only the overage test minutes count toward the pay-as-you-go billing model.
+
+² These limits only apply to the *first* workspace you create in your Azure subscription. Any subsequent workspaces you create in the subscription automatically use the pay-as-you-go billing model.
+
+³ If you delete the free trial workspace, you can't create a new free trial workspace anymore.
+
+If you exceed any of these limits, the workspace is automatically converted to the pay-as-you-go billing model. Learn more about the [Microsoft Playwright Testing pricing](https://aka.ms/mpt/pricing).
+
+## Create a workspace
+
+The first time you create a workspace in your Azure subscription, the workspace is automatically enrolled in the free trial.
+
+To create a workspace in the Playwright portal:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select **Create workspace**.
+
+ If this is the first workspace you create in the Azure subscription, you see a message that the workspace is eligible for the free trial.
+
+ :::image type="content" source="./media/how-to-try-playwright-testing-free/playwright-testing-create-free-trial.png" alt-text="Screenshot that shows the create workspace experience in the Playwright portal, showing the free trial message." lightbox="./media/how-to-try-playwright-testing-free/playwright-testing-create-free-trial.png":::
+
+1. Provide the following information to create the workspace:
+
+ |Field |Description |
+ |||
+    |**Workspace name** | Enter a unique name to identify your workspace.<BR>The name must consist of only alphanumeric characters, and be between 3 and 64 characters long. |
+ |**Azure subscription** | Select the Azure subscription that you want to use for this Microsoft Playwright Testing workspace. |
+ |**Region** | Select a geographic location to host your workspace. <BR>This is the location where the test run data is stored for the workspace. |
+
+1. Select **Create workspace**.
+
+## Track your free trial usage
+
+You can track the usage of the free trial for a workspace in either of these ways:
+
+- Select the settings icon and then select **Billing**.
+
+ :::image type="content" source="./media/how-to-try-playwright-testing-free/playwright-testing-billing-details.png" alt-text="Screenshot that shows the Billing setting page in the Playwright portal." lightbox="./media/how-to-try-playwright-testing-free/playwright-testing-billing-details.png":::
+
+- Select the **In free trial** menu item.
+
+ :::image type="content" source="./media/how-to-try-playwright-testing-free/playwright-testing-free-trial-menu.png" alt-text="Screenshot that shows the free trial status menu item in the Playwright portal, to track the free trial usage." lightbox="./media/how-to-try-playwright-testing-free/playwright-testing-free-trial-menu.png":::
+
+In the list of all workspaces, you can view a banner message that indicates if a workspace is in the free trial.
+++
+## Upgrade your workspace
+
+When you exceed any of the limits of the free trial, your workspace is automatically converted to the pay-as-you-go billing model.
+
+All test runs, access tokens, and other artifacts linked to your workspace remain available.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Run Playwright tests at scale](quickstart-run-end-to-end-tests.md)
playwright-testing Monitor Playwright Testing Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/monitor-playwright-testing-reference.md
+
+ Title: Monitor Microsoft Playwright Testing data reference
+description: Important reference material needed when you monitor Microsoft Playwright Testing Preview.
+ Last updated : 10/04/2023++
+# Monitor Microsoft Playwright Testing Preview data reference
+
+Learn about the data and resources collected by Azure Monitor from your workspace in Microsoft Playwright Testing Preview. See [Monitor Microsoft Playwright Testing](monitor-playwright-testing.md) for details on collecting and analyzing monitoring data.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for Microsoft Playwright Testing.
+
+### Operational logs
+
+Operational log entries include elements listed in the following table:
+
+|Name |Description |
+|||
+|Time | Date and time when the record was created |
+|ResourceId | Azure Resource Manager resource ID |
+|Location | Azure Resource Manager resource location |
+|OperationName | Name of the operation attempted on the resource |
+|Category | Category of the emitted log |
+|ResultType | Indicates if the request was successful or failed |
+|ResultSignature | HTTP status code of the API response |
+|ResultDescription | Additional details about the result |
+|DurationMs | The duration of the operation in milliseconds |
+|CorrelationId | Unique identifier to be used to correlate logs |
+|Level | Severity level of the log |
+
+## Related content
+
+- See [Monitor Microsoft Playwright Testing](./monitor-playwright-testing.md) for a description of monitoring Microsoft Playwright Testing.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
playwright-testing Monitor Playwright Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/monitor-playwright-testing.md
+
+ Title: Monitoring Microsoft Playwright Testing
+description: Learn about the monitoring data generated by Microsoft Playwright Testing Preview.
+ Last updated : 10/04/2023+++
+# Monitoring Microsoft Playwright Testing Preview
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Microsoft Playwright Testing Preview.
+
+Microsoft Playwright Testing creates monitoring data using [Azure Monitor](/azure/azure-monitor/overview), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises. Learn more about [monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+> [!TIP]
+> To understand costs associated with Azure Monitor, see [Usage and estimated costs](/azure/azure-monitor/usage-estimated-costs). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](/azure/azure-monitor/logs/data-ingestion-time).
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Monitoring data
+
+Microsoft Playwright Testing collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitor Microsoft Playwright Testing data reference](./monitor-playwright-testing-reference.md) for detailed information on the logs and metrics created by Microsoft Playwright Testing.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
+
+The log categories for Microsoft Playwright Testing are listed in [Monitor Microsoft Playwright Testing data reference](./monitor-playwright-testing-reference.md#resource-logs).
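+
+As a sketch, creating a diagnostic setting with the Azure CLI might look like the following. The setting name, resource ID, Log Analytics workspace ID, and the log category string are placeholder assumptions to verify against your workspace:
+
+```bash
+# Sketch: route resource logs to a Log Analytics workspace (placeholder values).
+az monitor diagnostic-settings create \
+    --name "playwright-logs" \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzurePlaywrightService/accounts/<workspace-name>" \
+    --workspace "<log-analytics-workspace-resource-id>" \
+    --logs '[{"category": "<log-category>", "enabled": true}]'
+```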
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema).
+
+You can find the schema for Microsoft Playwright Testing resource logs in the [Monitor Microsoft Playwright Testing data reference](monitor-playwright-testing-reference.md#resource-logs).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of resource logs types collected for Microsoft Playwright Testing, see [Monitor Microsoft Playwright Testing data reference](monitor-playwright-testing-reference.md#resource-logs).
+
+## Related content
+
+- See [Monitor Microsoft Playwright Testing data reference](monitor-playwright-testing-reference.md) for a reference of the metrics, logs, and other important values created by Microsoft Playwright Testing.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
playwright-testing Overview What Is Microsoft Playwright Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/overview-what-is-microsoft-playwright-testing.md
+
+ Title: What is Microsoft Playwright Testing?
+description: 'Microsoft Playwright Testing is a fully managed service for end-to-end testing built on top of Playwright. Run Playwright tests with high parallelization across different operating system and browser combinations simultaneously.'
+ Last updated : 10/04/2023+++
+# What is Microsoft Playwright Testing Preview?
+
+Microsoft Playwright Testing Preview is a fully managed service for end-to-end testing built on top of Playwright. With Playwright, you can automate end-to-end tests to ensure your web applications work the way you expect them to, across different web browsers and operating systems. The service abstracts the complexity and infrastructure for running Playwright tests with high parallelization.
+
+Run your Playwright test suite in the cloud, without changes to your test code or modifications to your tooling setup. Use the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright) for a rich editor experience, or use the Playwright CLI to add automation within your continuous integration (CI) workflow.
+
+Get started with [Quickstart: run your Playwright tests at scale with Microsoft Playwright Testing](./quickstart-run-end-to-end-tests.md).
+
+To learn more about how to create end-to-end tests with the Playwright framework, visit the [Getting started documentation](https://playwright.dev/docs/intro) on the Playwright website.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Accelerate tests with parallel remote browsers
+
+As your application becomes more complex, your test suite increases in size. The time to complete your test suite also grows accordingly. Use parallel remote browsers to shorten the overall test suite completion time.
+
+- Distribute your tests across many parallel browsers, hosted on cloud infrastructure.
+
+- Scale your tests beyond the processing power of your developer workstation, local infrastructure, or CI agent machines.
+
+- Get consistent regional performance by running your tests on browsers in an Azure region that's closest to your client machine.
+
+Learn more about how you can [configure for optimal performance](./concept-determine-optimal-configuration.md).
+
+## Test consistently across multiple operating systems and browsers
+
+Modern web apps need to work flawlessly across numerous browsers, operating systems, and devices.
+
+- Run tests simultaneously across all modern browsers on Windows, Linux, and mobile emulation of Google Chrome for Android and Mobile Safari.
+
+- Using service-managed browsers ensures consistent and reliable results for both functional and visual regression testing, whether tests are run from your team's developer workstations or CI pipeline.
+
+- Microsoft Playwright Testing supports all [browsers supported by Playwright](https://playwright.dev/docs/release-notes).
+
+## Endpoint testing
+
+Use cloud-hosted remote browsers to test web applications regardless of where they're hosted, without having to allow inbound connections on your firewall.
+
+- Test publicly and privately hosted applications.
+
+- During the development phase, [run tests against a localhost development server](./how-to-test-local-applications.md).
+
+## Playwright support
+
+Microsoft Playwright Testing is built on top of the Playwright framework.
+
+- Get support for multiple versions of Playwright with each new Playwright release.
+
+- Integrate your existing Playwright test suite without changing your test code.
+
+- Use the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright) for a rich editor experience.
+
+- Set up continuous end-to-end testing by using the Playwright CLI to [integrate with continuous integration (CI) tools](./quickstart-automate-end-to-end-testing.md).
+
+## How it works
+
+Microsoft Playwright Testing instantiates cloud-hosted browsers across different operating systems. Playwright runs on the client machine and interacts with Microsoft Playwright Testing to run your Playwright tests on the hosted browsers. The client machine can be your developer workstation or a CI agent machine if you run your tests as part of your CI workflow. The Playwright test code remains on the client machine during the test run.
++
+After a test run completes, Playwright sends the test run metadata to the service. The test results, trace files, and other test run files are available on the client machine.
+
+Running existing tests with Microsoft Playwright Testing requires no changes to your test code. Add a service configuration file to your test project, and specify your workspace settings, such as the access token and the service endpoint.
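+
+As a minimal sketch, such a service configuration file might look like the following; it mirrors the configuration used in the quickstarts, and the `os` capability value is an illustrative choice:
+
+```typescript
+import { defineConfig } from '@playwright/test';
+import config from './playwright.config';
+
+export default defineConfig(config, {
+  use: {
+    connectOptions: {
+      // The workspace endpoint and access token come from environment variables.
+      wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({ os: 'linux' })}`,
+      headers: {
+        'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
+      }
+    }
+  }
+});
+```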
+
+Learn more about how to [determine the optimal configuration for optimizing test suite completion](./concept-determine-optimal-configuration.md).
+
+## In-region data residency & data at rest
+
+Microsoft Playwright Testing doesn't store or process customer data outside the region you deploy the workspace in. When you use the regional affinity feature, the metadata is transferred from the cloud-hosted browser region to the workspace region in a secure and compliant manner.
+
+Microsoft Playwright Testing automatically encrypts all data stored in your workspace with keys managed by Microsoft (service-managed keys). For example, this data includes workspace details and Playwright test run metadata, like test start and end time, test minutes, and who ran the test.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Run Playwright tests at scale](quickstart-run-end-to-end-tests.md)
playwright-testing Quickstart Automate End To End Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-automate-end-to-end-testing.md
+
+ Title: 'Quickstart: Continuous end-to-end testing'
+description: In this quickstart, you learn how to run your Playwright tests at scale in your CI pipeline with Microsoft Playwright Testing. Continuously validate that your web app runs correctly across browsers and operating systems.
+ Last updated : 10/04/2023+++
+# Quickstart: Set up continuous end-to-end testing with Microsoft Playwright Testing Preview
+
+In this quickstart, you set up continuous end-to-end testing with Microsoft Playwright Testing Preview to validate that your web app runs correctly across different browsers and operating systems with every code commit. Learn how to add your Playwright tests to a continuous integration (CI) workflow, such as GitHub Actions, Azure Pipelines, or other CI platforms.
+
+After you complete this quickstart, you have a CI workflow that runs your Playwright test suite at scale with Microsoft Playwright Testing.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* A Microsoft Playwright Testing workspace. Complete the [quickstart: run Playwright tests at scale](./quickstart-run-end-to-end-tests.md) to create a workspace.
+
+# [GitHub Actions](#tab/github)
+- A GitHub account. If you don't have a GitHub account, you can [create one for free](https://github.com/).
+- A GitHub repository that contains your Playwright test specifications and GitHub Actions workflow. To create a repository, see [Creating a new repository](https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-new-repository).
+- A GitHub Actions workflow. If you need help with getting started with GitHub Actions, see [create your first workflow](https://docs.github.com/en/actions/quickstart).
+
+# [Azure Pipelines](#tab/pipelines)
+- An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/organizations/projects/create-project).
+- A pipeline definition. If you need help with getting started with Azure Pipelines, see [create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
+++
+## Configure a service access token
+
+Microsoft Playwright Testing uses access tokens to authorize users to run Playwright tests with the service. You can generate a service access token in the Playwright portal, and then specify the access token in the service configuration file.
+
+To generate an access token and store it as a CI workflow secret, perform the following steps:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. Select the workspace settings icon, and then go to the **Access tokens** page.
+
+ :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-new-access-token.png" alt-text="Screenshot that shows the access tokens settings page in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-new-access-token.png":::
+
+1. Select **Generate new token** to create a new access token for your CI workflow.
+
+1. Enter the access token details, and then select **Generate token**.
+
+ :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-token.png" alt-text="Screenshot that shows setup guide in the Playwright Testing portal, highlighting the 'Generate token' button." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-generate-token.png":::
+
+ :::image type="content" source="./media/quickstart-automate-end-to-end-testing/playwright-testing-copy-access-token.png" alt-text="Screenshot that shows how to copy the generated access token in the Playwright Testing portal." lightbox="./media/quickstart-automate-end-to-end-testing/playwright-testing-copy-access-token.png":::
+
+1. Store the access token in a CI workflow secret to avoid specifying the token in clear text in the workflow definition:
+
+ # [GitHub Actions](#tab/github)
+
+ 1. Go to your GitHub repository, and select **Settings** > **Secrets and variables** > **Actions**.
+ 1. Select **New repository secret**.
+ 1. Enter the secret details, and then select **Add secret** to create the CI/CD secret.
+
+ | Parameter | Value |
+ | -- | |
+ | **Name** | *PLAYWRIGHT_SERVICE_ACCESS_TOKEN* |
+ | **Value** | Paste the workspace access token you copied previously. |
+
+
+ # [Azure Pipelines](#tab/pipelines)
+
+ 1. Go to your Azure DevOps project.
+ 1. Go to the **Pipelines** page, select the appropriate pipeline, and then select **Edit**.
+ 1. Locate the **Variables** for this pipeline.
+ 1. Add a new variable.
+ 1. Enter the variable details, and then select **Add secret** to create the CI/CD secret.
+
+ | Parameter | Value |
+ | -- | |
+ | **Name** | *PLAYWRIGHT_SERVICE_ACCESS_TOKEN* |
+ | **Value** | Paste the workspace access token you copied previously. |
+ | **Keep this value secret** | Check this value |
+
+ 1. Select **OK**, and then **Save** to create the workflow secret.
+
+
+
+## Get the service region endpoint URL
+
+In the service configuration, you have to provide the region-specific service endpoint. The endpoint depends on the Azure region you selected when creating the workspace.
+
+To get the service endpoint URL and store it as a CI workflow secret, perform the following steps:
+
+1. Sign in to the [Playwright portal](https://aka.ms/mpt/portal) with your Azure account.
+
+1. On the workspace home page, select **View setup guide**.
+
+ > [!TIP]
+    > If you have multiple workspaces, you can switch to another workspace by selecting the workspace name at the top of the page, and then selecting **Manage all workspaces**.
+
+1. In **Add region endpoint in your setup**, copy the service endpoint URL.
+
+ The endpoint URL matches the Azure region that you selected when creating the workspace.
+
+1. Store the service endpoint URL in a CI workflow secret:
+
+ | Secret name | Value |
+ | -- | |
+ | *PLAYWRIGHT_SERVICE_URL* | Paste the endpoint URL you copied previously. |
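+
+    If you use GitHub Actions, you can also set both workflow secrets from the command line with the GitHub CLI. This is a sketch; it assumes the `gh` CLI is installed and authenticated against your repository:
+
+    ```bash
+    # Sketch: store both values as repository secrets (paste your own values).
+    gh secret set PLAYWRIGHT_SERVICE_URL --body "<your-region-endpoint>"
+    gh secret set PLAYWRIGHT_SERVICE_ACCESS_TOKEN --body "<your-access-token>"
+    ```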
+
+## Add service configuration file
+
+If you haven't yet configured your Playwright tests to run on cloud-hosted browsers, add a service configuration file to your repository. In the next step, you pass this service configuration file to the Playwright CLI.
+
+1. Create a new file `playwright.service.config.ts` alongside the `playwright.config.ts` file.
+
+1. Add the following content to it:
+
+ ```typescript
+ import { defineConfig } from '@playwright/test';
+ import config from './playwright.config';
+ import dotenv from 'dotenv';
+
+ dotenv.config();
+
+ // Name the test run if it's not named yet.
+ process.env.PLAYWRIGHT_SERVICE_RUN_ID = process.env.PLAYWRIGHT_SERVICE_RUN_ID || new Date().toISOString();
+
+ // Can be 'linux' or 'windows'.
+ const os = process.env.PLAYWRIGHT_SERVICE_OS || 'linux';
+
+ export default defineConfig(config, {
+ // Define more generous timeout for the service operation if necessary.
+ // timeout: 60000,
+ // expect: {
+ // timeout: 10000,
+ // },
+ workers: 20,
+
+ // Enable screenshot testing and configure directory with expectations.
+ // https://learn.microsoft.com/azure/playwright-testing/how-to-configure-visual-comparisons
+ ignoreSnapshots: false,
+ snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${os}/{arg}{ext}`,
+
+ use: {
+ // Specify the service endpoint.
+ connectOptions: {
+ wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
+ // Can be 'linux' or 'windows'.
+ os,
+ runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
+ })}`,
+ timeout: 30000,
+ headers: {
+ 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
+ },
+ // Allow service to access the localhost.
+ exposeNetwork: '<loopback>'
+ }
+ }
+ });
+ ```
+
+1. Save and commit the file to your source code repository.
+
+## Update the workflow definition
+
+Update the CI workflow definition to run your Playwright tests with the Playwright CLI. Pass the [service configuration file](#add-service-configuration-file) as an input parameter for the Playwright CLI. You configure your environment by specifying environment variables.
+
+1. Open the CI workflow definition.
+
+1. Add the following steps to run your Playwright tests in Microsoft Playwright Testing.
+
+ The following steps describe the workflow changes for GitHub Actions or Azure Pipelines. Similarly, you can run your Playwright tests by using the Playwright CLI in other CI platforms.
+
+ # [GitHub Actions](#tab/github)
+
+ ```yml
+ - name: Install dependencies
+ working-directory: path/to/playwright/folder # update accordingly
+ run: npm ci
+ - name: Run Playwright tests
+ working-directory: path/to/playwright/folder # update accordingly
+ env:
+ # Access token and regional endpoint for Microsoft Playwright Testing
+ PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
+ PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
+ PLAYWRIGHT_SERVICE_RUN_ID: ${{ github.run_id }}-${{ github.run_attempt }}-${{ github.sha }}
+ run: npx playwright test -c playwright.service.config.ts --workers=20
+ ```
+
+ # [Azure Pipelines](#tab/pipelines)
+
+ ```yml
+ - task: PowerShell@2
+ enabled: true
+ displayName: "Install dependencies"
+ inputs:
+ targetType: 'inline'
+ script: 'npm ci'
+ workingDirectory: path/to/playwright/folder # update accordingly
+
+ - task: PowerShell@2
+ enabled: true
+ displayName: "Run Playwright tests"
+ env:
+ PLAYWRIGHT_SERVICE_ACCESS_TOKEN: $(PLAYWRIGHT_SERVICE_ACCESS_TOKEN)
+ PLAYWRIGHT_SERVICE_URL: $(PLAYWRIGHT_SERVICE_URL)
+ inputs:
+ targetType: 'inline'
+ script: 'npx playwright test -c playwright.service.config.ts --workers=20'
+ workingDirectory: path/to/playwright/folder # update accordingly
+ ```
+
+
+
+1. Save and commit your changes.
+
+    When the CI workflow is triggered, your Playwright tests run in your Microsoft Playwright Testing workspace on cloud-hosted browsers, across 20 parallel workers.
+
+## Related content
+
+You've successfully set up a continuous end-to-end testing workflow to run your Playwright tests at scale on cloud-hosted browsers.
+
+- [Grant users access to the workspace](./how-to-manage-workspace-access.md)
+
+- [Manage your workspaces](./how-to-manage-playwright-workspace.md)
playwright-testing Quickstart Run End To End Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-run-end-to-end-tests.md
+
+ Title: 'Quickstart: Run Playwright tests at scale'
+description: 'This quickstart shows how to run your Playwright tests with highly parallel cloud browsers using Microsoft Playwright Testing Preview. The cloud-hosted browsers support multiple operating systems and all modern browsers.'
+ Last updated : 10/04/2023+++
+# Quickstart: Run end-to-end tests at scale with Microsoft Playwright Testing Preview
+
+In this quickstart, you learn how to run your Playwright tests with highly parallel cloud browsers using Microsoft Playwright Testing Preview. Use cloud infrastructure to validate your application across multiple browsers, devices, and operating systems.
+
+After you complete this quickstart, you have a Microsoft Playwright Testing workspace to run your Playwright tests at scale.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner) role, the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role, or one of the [classic administrator roles](/azure/role-based-access-control/rbac-and-directory-admin-roles#classic-subscription-administrator-roles).
+
+## Create a workspace
+
+To get started with running your Playwright tests at scale on cloud browsers, you first create a Microsoft Playwright Testing workspace in the Playwright portal.
++
+When the workspace creation finishes, you're redirected to the setup guide.
+
+## Create an access token for service authentication
+
+Microsoft Playwright Testing uses access tokens to authorize users to run Playwright tests with the service. You first generate a service access token in the Playwright portal, and then store the value in an environment variable.
+
+To generate the access token, perform the following steps:
+
+1. In the workspace setup guide, in **Create an access token**, select **Generate token**.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-generate-token.png" alt-text="Screenshot that shows setup guide in the Playwright Testing portal, highlighting the 'Generate token' button." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-generate-token.png":::
+
+1. Copy the access token for the workspace.
+
+ You need the access token value for configuring your environment in a later step.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-copy-access-token.png" alt-text="Screenshot that shows how to copy the generated access token in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-copy-access-token.png":::
+
+## Configure the service region endpoint
+
+In the service configuration, you have to provide the region-specific service endpoint. The endpoint depends on the Azure region you selected when creating the workspace.
+
+To get the service endpoint URL, perform the following steps:
+
+1. In **Add region endpoint in your setup**, copy the region endpoint for your workspace.
+
+ The endpoint URL matches the Azure region that you selected when creating the workspace.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-region-endpoint.png" alt-text="Screenshot that shows how to copy the workspace region endpoint in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-region-endpoint.png":::
+
+## Set up your environment
+
+To set up your environment, you have to configure the `PLAYWRIGHT_SERVICE_ACCESS_TOKEN` and `PLAYWRIGHT_SERVICE_URL` environment variables with the values you obtained in the previous steps.
+
+We recommend that you use the `dotenv` module to manage your environment. With `dotenv`, you define your environment variables in the `.env` file.
+
+1. Add the `dotenv` module to your project:
+
+ ```shell
+ npm i --save-dev dotenv
+ ```
+
+1. Create a `.env` file and replace the `{MY-ACCESS-TOKEN}` and `{MY-REGION-ENDPOINT}` text placeholders:
+
+ ```
+ PLAYWRIGHT_SERVICE_ACCESS_TOKEN={MY-ACCESS-TOKEN}
+ PLAYWRIGHT_SERVICE_URL={MY-REGION-ENDPOINT}
+ ```
+
+> [!CAUTION]
+> Make sure that you don't add the `.env` file to your source code repository to avoid leaking your access token value.
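+
+One way to enforce this is to ignore the file in Git. A minimal sketch:
+
+```bash
+# Keep the .env file out of version control.
+echo ".env" >> .gitignore
+```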
+
+## Add Microsoft Playwright Testing configuration
+
+To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to add a service configuration file alongside your Playwright configuration file. The service configuration file references the environment variables to get the workspace endpoint and your access token. In the next step, you pass this service configuration file to the Playwright CLI.
+
+To add the service configuration to your project:
+
+1. Create a new file `playwright.service.config.ts` alongside the `playwright.config.ts` file.
+
+1. Add the following content to it:
+
+ ```typescript
+ import { defineConfig } from '@playwright/test';
+ import config from './playwright.config';
+ import dotenv from 'dotenv';
+
+ dotenv.config();
+
+ // Name the test run if it's not named yet.
+ process.env.PLAYWRIGHT_SERVICE_RUN_ID = process.env.PLAYWRIGHT_SERVICE_RUN_ID || new Date().toISOString();
+
+ // Can be 'linux' or 'windows'.
+ const os = process.env.PLAYWRIGHT_SERVICE_OS || 'linux';
+
+ export default defineConfig(config, {
+ // Define more generous timeout for the service operation if necessary.
+ // timeout: 60000,
+ // expect: {
+ // timeout: 10000,
+ // },
+ workers: 20,
+
+ // Enable screenshot testing and configure directory with expectations.
+ // https://learn.microsoft.com/azure/playwright-testing/how-to-configure-visual-comparisons
+ ignoreSnapshots: false,
+ snapshotPathTemplate: `{testDir}/__screenshots__/{testFilePath}/${os}/{arg}{ext}`,
+
+ use: {
+ // Specify the service endpoint.
+ connectOptions: {
+ wsEndpoint: `${process.env.PLAYWRIGHT_SERVICE_URL}?cap=${JSON.stringify({
+ // Can be 'linux' or 'windows'.
+ os,
+ runId: process.env.PLAYWRIGHT_SERVICE_RUN_ID
+ })}`,
+ timeout: 30000,
+ headers: {
+ 'x-mpt-access-key': process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN!
+ },
+ // Allow service to access the localhost.
+ exposeNetwork: '<loopback>'
+ }
+ }
+ });
+ ```
+
+1. Save the file.
+
+## Run your tests at scale with Microsoft Playwright Testing
+
+You've now prepared the configuration for running your Playwright tests in the cloud with Microsoft Playwright Testing. To run your Playwright tests, you use the Playwright CLI and specify the service configuration file and the number of workers on the command line.
+
+Perform the following steps to run your Playwright tests:
+
+1. Open a terminal window and enter the following command to run your Playwright tests on remote browsers in your workspace:
+
+ Depending on the size of your test suite, the tests run on up to 20 parallel workers.
+
+ ```bash
+ npx playwright test --config=playwright.service.config.ts --workers=20
+ ```
+
+ You should see a similar output when the tests complete:
+
+ ```output
+ Running 6 tests using 6 workers
+ 6 passed (18.2s)
+
+ To open last HTML report run:
+
+ npx playwright show-report
+ ```
+
+1. Go to the [Playwright portal](https://aka.ms/mpt/portal) to view your test run.
+
+    For each test run, the activity log lists the following details: the total test completion time, the number of parallel workers, and the number of test minutes.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png" alt-text="Screenshot that shows the activity log for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png":::
+
+## Optimize parallel worker configuration
+
+Once your tests are running smoothly with the service, experiment with varying the number of parallel workers to determine the optimal configuration that minimizes test completion time. With Microsoft Playwright Testing, you can run with up to 50 parallel workers. Several factors influence the best configuration for your project, such as the CPU, memory, and network resources of your client machine, the target application's load-handling capacity, and the type of actions carried out in your tests.
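+
+For example, you might sweep a few worker counts and compare the wall-clock completion times. This sketch assumes a bash shell and the service configuration file from the earlier steps:
+
+```bash
+# Sketch: compare total completion time across worker counts.
+for workers in 10 20 30 50; do
+    echo "--- workers=$workers ---"
+    time npx playwright test --config=playwright.service.config.ts --workers=$workers
+done
+```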
+
+Learn more about how to [determine the optimal configuration for optimizing test suite completion](./concept-determine-optimal-configuration.md).
+
+## Next step
+
+You've successfully created a Microsoft Playwright Testing workspace in the Playwright portal and run your Playwright tests on cloud browsers.
+
+Advance to the next quickstart to set up continuous end-to-end testing by running your Playwright tests in your CI/CD workflow.
+
+> [!div class="nextstepaction"]
+> [Set up continuous end-to-end testing in CI/CD](./quickstart-automate-end-to-end-testing.md)
playwright-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/resource-limits-quotas-capacity.md
+
+ Title: Service limits
+description: 'Service limitations and quotas for running Playwright testing with Microsoft Playwright Testing Preview.'
+ Last updated : 10/04/2023+++
+# Service limits in Microsoft Playwright Testing Preview
+
+Azure uses limits and quotas to prevent budget overruns due to fraud, and to honor Azure capacity constraints. Consider these limits as you scale for production workloads. In this article, you learn about:
+
+- Default limits on Azure resources related to Microsoft Playwright Testing Preview.
+- Limitations for the Playwright test code.
+- Supported operating systems and browsers.
+- Requesting quota increases.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Default resource quotas
+
+While the service is in preview, the following limits apply on a per-subscription basis.
+
+| Resource | Limit |
+| --- | --- |
+| Workspaces per region per subscription | 2 |
+| Parallel workers per workspace | 50 |
+| Access tokens per user per workspace | 10 |
+
+## Test code limitations
+
+- Only Playwright version 1.37 and higher is supported.
+- Only the Playwright runner and test code written in JavaScript or TypeScript are supported.
+
+## Supported operating systems and browsers
+
+- The service supports running hosted browsers on Linux and Windows.
+- The service supports all [browsers that Playwright supports](https://playwright.dev/docs/browsers).
+
+## Other limitations
+
+- Moving a workspace to another resource group is not yet supported.
+- The Playwright portal is only available in English. Localization in other languages is currently in progress.
+
+## Request quota increases
+
+To raise the resource quota above the default limit for your subscription, [create an issue in the Playwright Testing GitHub repository](https://github.com/microsoft/playwright-testing-service/issues/new/choose).
+
+## Related content
+
+- Get started and [run Playwright tests at scale](quickstart-run-end-to-end-tests.md)
playwright-testing Troubleshoot Test Run Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/troubleshoot-test-run-failures.md
+
+ Title: Troubleshoot test run issues
+description: Learn how to troubleshoot issues when running Playwright tests with Microsoft Playwright Testing Preview.
+ Last updated : 10/04/2023++
+# Troubleshoot issues with running tests with Microsoft Playwright Testing preview
+
+This article addresses issues that might arise when you run Playwright tests at scale with Microsoft Playwright Testing Preview.
+
+> [!IMPORTANT]
+> Microsoft Playwright Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Tests are failing with a `401 Unauthorized` error
+
+Your access token may be invalid or expired. Make sure you're using the correct access token or [generate a new access token](./how-to-manage-access-tokens.md#generate-a-workspace-access-token).
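
If your setup supplies the token through an environment variable, as in the quickstart, a quick guard at the top of `playwright.service.config.ts` can surface the problem before any tests run. This is a sketch; the variable name is an assumption based on the quickstart setup, so adjust it if yours differs:

```typescript
// Fail fast with a clear message instead of many individual 401 failures.
if (!process.env.PLAYWRIGHT_SERVICE_ACCESS_TOKEN) {
  throw new Error(
    'PLAYWRIGHT_SERVICE_ACCESS_TOKEN is not set. Generate a new access token ' +
    'in the Playwright portal and export it before running your tests.'
  );
}
```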
+
+## Tests run slow
+
+Microsoft Playwright Testing hosts the remote browsers in specific Azure regions. If your client machine or target web application is outside these regions, you might experience increased network latency. Learn how you can [optimize regional latency for your workspace](./how-to-optimize-regional-latency.md).
+
+## Tests seem to hang
+
+Your tests might hang because of code that unintentionally pauses the test execution. For example, you might have added pause statements while debugging your test.
+
+Search for any instances of `pause()` statements in your code and comment them out, as in the following example.
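
A minimal sketch with hypothetical test and selector names; the commented-out `pause()` call is the kind of leftover debugging statement that can block a remote run indefinitely:

```typescript
import { test, expect } from '@playwright/test';

test('owner can update contact details', async ({ page }) => {
  await page.goto('https://example.com/owners/1/edit');

  // Left over from local debugging; blocks execution on remote browsers.
  // await page.pause();

  await page.getByLabel('Telephone').fill('555-0100');
  await page.getByRole('button', { name: 'Update owner' }).click();
  await expect(page.getByText('Owner updated')).toBeVisible();
});
```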
+
+## Tests are failing because of a timeout
+
+Your tests could be timing out because of the following reasons:
+
+- Your client machine is in a different region than the browsers.
+
+   Connecting to service-hosted browsers introduces network latency. You might need to increase your [timeout settings in the Playwright configuration](https://playwright.dev/docs/test-timeouts). Start with increasing the *test timeout* setting in `playwright.service.config.ts` (see the sketch after this list).
+
+- Trace files cause performance issues (currently a known problem).
+
+   Sending the Playwright trace files from the service to the client machine can create congestion, which can cause tests to fail due to a timeout. You can [disable tracing in the Playwright configuration file](https://playwright.dev/docs/api/class-testoptions#test-options-trace), as shown in the sketch after this list.
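
The following sketch shows both adjustments in `playwright.service.config.ts`, assuming the file extends your base `playwright.config` as in the quickstart setup; the specific timeout value is illustrative only:

```typescript
import { defineConfig } from '@playwright/test';
import config from './playwright.config';

export default defineConfig(config, {
  // Give each test extra headroom for the network round trip to the
  // service-hosted browsers (the Playwright default is 30 seconds).
  timeout: 90_000,

  use: {
    // Work around the known trace-transfer issue by turning tracing off.
    trace: 'off',
  },
});
```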
+
+## Unable to test web applications hosted behind a firewall
+
+Ensure that you set the `exposeNetwork` option in the `playwright.service.config.ts` file to make the network available on the client machine to the cloud browser. Example values for this option are: `<loopback>` for the localhost network, `*` to expose all networks, or the IP address/DNS of the application endpoint.
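
The following is a minimal sketch of where the option goes, assuming your service config passes connect options through `use.connectOptions` and that the service endpoint comes from an environment variable set up earlier; adjust the names to match your actual file:

```typescript
import { defineConfig } from '@playwright/test';
import config from './playwright.config';

export default defineConfig(config, {
  use: {
    connectOptions: {
      // Assumed wiring; your generated service config may differ.
      wsEndpoint: process.env.PLAYWRIGHT_SERVICE_URL!,
      // '<loopback>' exposes localhost only; use '*' for all networks,
      // or the IP address/DNS name of your application endpoint.
      exposeNetwork: '<loopback>',
    },
  },
});
```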
+
+Learn more about how to [test locally deployed applications](./how-to-test-local-applications.md).
+
+## The time displayed in the browser is different from my local time
+
+Web applications often display the time based on the user's location. When you run tests with Microsoft Playwright Testing, the client machine and the service browsers may be in different regions.
+
+You can mitigate the issue by [specifying the time zone in the Playwright configuration file](https://playwright.dev/docs/emulation#locale--timezone).
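
A minimal sketch of pinning the time zone (and optionally the locale) in the Playwright configuration; the values are illustrative only:

```typescript
import { defineConfig } from '@playwright/test';
import config from './playwright.config';

export default defineConfig(config, {
  use: {
    // Make the remote browser report a fixed time zone and locale so
    // displayed times match what your assertions expect.
    timezoneId: 'America/New_York',
    locale: 'en-US',
  },
});
```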
+
+## Test fails with `Path is not available when connecting remotely`
+
+You might encounter the `Path is not available when connecting remotely` error when you run your Playwright tests on remote browsers with Microsoft Playwright Testing. For example, this error can occur when your test code verifies the functionality to download a file.
+
+The cause of this issue is that the `path()` function on the download file instance isn't available when running on remote browsers.
+
+To resolve this issue, you should use the `saveAs()` function to save a local copy of the file on your client machine. Learn more about [downloads in the Playwright documentation](https://playwright.dev/docs/downloads).
+
+The following code snippet gives an example of how to use `saveAs()` instead of `path()` for reading the contents of a downloaded file:
+
+```typescript
+const downloadPromise = page.waitForEvent('download');
+await page.getByText('Download file').click();
+
+const download = await downloadPromise;
+
+// FAILS: download.path() fails when connecting to a remote browser
+// const result = fs.readFileSync(await download.path(), 'utf-8');
+
+// FIX: use saveAs() to download the file, when connecting to a remote browser
+await download.saveAs('/path/to/save/at/' + download.suggestedFilename());
+```
+
+## Related content
+
+- [Manage workspace access](./how-to-manage-workspace-access.md)
+- [Optimize regional latency for a workspace](./how-to-optimize-regional-latency.md)
playwright-testing Troubleshoot Unable Sign Into Playwright Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/troubleshoot-unable-sign-into-playwright-portal.md
+
+ Title: '[Resolved] Trouble signing into Playwright portal'
+description: 'How to resolve the issue with signing into the Playwright portal, which results in error code AADSTS7000112.'
+ Last updated : 10/04/2023++
+# [Resolved] AADSTS7000112: Application 'b1fd4ebf-2bed-4162-be84-97e0fe523f64'(PlaywrightServiceAADLogin) is disabled.
+
+## Symptoms
+
+When using Microsoft Playwright Testing, you fail to sign into the Playwright portal. You receive the following error message:
+
+**AADSTS7000112: Application 'b1fd4ebf-2bed-4162-be84-97e0fe523f64'(PlaywrightServiceAADLogin) is disabled.**
+
+## Cause
+
+This issue occurs if the service principal for Microsoft Playwright Testing is disabled for the tenant.
+
+## Resolution
+
+To resolve this issue, you need to enable the service principal for Microsoft Playwright Testing for the tenant.
+
+> [!IMPORTANT]
+> To enable the service principal, you need to be a tenant admin.
+
+Follow these steps to enable the Microsoft Playwright Testing service principal:
+
+1. Open an elevated Windows PowerShell command prompt (run Windows PowerShell as an administrator).
+
+1. Install the Microsoft Azure Active Directory Module for Windows PowerShell by running the following cmdlet:
+
+ ```powershell
+ Install-Module MSOnline
+ ```
+
+1. Connect to Azure AD for your Microsoft 365 subscription by running the following cmdlet:
+
+ ```powershell
+ Connect-MsolService
+ ```
+
+1. Check the current status of the service principal for Microsoft Playwright Testing by running the following cmdlet:
+
+ ```powershell
+ (Get-MsolServicePrincipal -AppPrincipalId b1fd4ebf-2bed-4162-be84-97e0fe523f64).accountenabled
+ ```
+
+1. Enable the service principal for Microsoft Playwright Testing by running the following cmdlet:
+
+ ```powershell
+ Get-MsolServicePrincipal -AppPrincipalId b1fd4ebf-2bed-4162-be84-97e0fe523f64 | Set-MsolServicePrincipal -AccountEnabled $true
+ ```
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
If in-place major version upgrade pre-check operations fail then it aborts with
- In-place major version upgrade doesn't support certain extensions and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, **orafce** and **postgres_fdw** are unsupported for all PostgreSQL versions. -- Please ensure that the **PostGIS** extensions, installed within a specific schema, are included in your search_path server parameter. It is necessary to update this server parameter to encompass those schemas before proceeding with major version upgrade.
+- When upgrading servers with the PostGIS extension installed, set the `search_path` server parameter to explicitly include the schemas of the PostGIS extension, of extensions that depend on PostGIS, and of extensions on which those extensions depend.
+
+  For example: `postgis, postgis_raster, postgis_sfcgal, postgis_tiger_geocoder, postgis_topology, address_standardizer, address_standardizer_data_us, fuzzystrmatch` (required for postgis_tiger_geocoder).
+ - Servers configured with logical replication slots aren't supported.
postgresql How To Restore To Different Subscription Or Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-to-different-subscription-or-resource-group.md
+
+ Title: Cross Subscription and Cross Resource Group Restore in Azure Database for PostgreSQL - Flexible Server
+description: This article describes how to restore to a different Subscription or resource group server in Azure Database for PostgreSQL - Flexible Server using the Azure portal.
++++++ Last updated : 09/30/2023++
+# Cross Subscription and Cross Resource Group Restore in Azure Database for PostgreSQL Flexible Server
++
+This article provides a step-by-step procedure for using the Azure portal to restore a flexible server to a different subscription or resource group through automated backups. You can restore to the latest restore point or to a custom restore point within your retention period.
+
+## Prerequisites
+
+To complete this how-to guide, you need an Azure Database for PostgreSQL flexible server. The procedure is also applicable to a flexible server that's configured with zone redundancy.
+
+## Restore to a different Subscription or Resource group
++
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore.
+
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/cross-restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
+
+3. From the **Subscription** dropdown list, select a different subscription. If you also want to change the resource group, continue to the next step; otherwise, skip to step 5.
+
+4. From the **Resource group** dropdown list, select a different resource group.
+
+5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/choose-different-subscription-or-resource-group.png" alt-text="Screenshot that shows selections for restoring to different subscription or resource group.":::
+
+6. Select **Review + create**, and then select **Create**. A notification shows that the restore operation has started.
+
+## Geo Restore to a different Subscription or Resource group
+
+If your source server is configured with geo-redundant backup, you can restore the server to the paired region in a different resource group or subscription by using the following steps:
+
+> [!NOTE]
+> The first time that you perform a geo-restore, wait at least one hour after you create the source server.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/cross-restore-overview.png" alt-text="Screenshot that shows a server overview.":::
+
+3. From the **Subscription** dropdown list, select a different subscription. If you also want to change the resource group, continue to the next step; otherwise, skip to step 5.
+
+4. From the **Resource group** dropdown list, select a different resource group.
+
+5. Select the **Restore to paired region** option.
+
+6. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-different-subscription-or-resource-group.png" alt-text="Screenshot that shows selections for restoring to the latest point.":::
+
+7. Select **Review + create**, and then select **Create**. A notification shows that the restore operation has started.
++++
+## Next steps
+
+- Learn about [business continuity](./concepts-business-continuity.md).
+- Learn about [zone-redundant high availability](./concepts-high-availability.md).
+- Learn about [backup and recovery](./concepts-backup-restore.md).
+
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Follow these steps to increase your storage size.
## Storage Autogrow
-Follow these steps to increase your storage size.
+
+Use the following steps to enable storage autogrow for your flexible server so that your storage scales automatically in most cases.
1. In the [Azure portal](https://portal.azure.com/), choose the flexible server for which you want to increase the storage size.
2. Click **Compute+storage**.
3. A page with current settings is displayed.
Follow these steps to increase your storage size.
:::image type="content" source="./media/how-to-scale-compute-storage-portal/storage-autogrow.png" alt-text="Screenshot that shows storage autogrow.":::
-6. click **Save**.
-7. You receive a notification that storage autogrow is in progress.
+5. Click **Save**.
+6. You receive a notification that storage autogrow enablement is in progress.
+> [!IMPORTANT]
+> Storage autogrow always triggers disk scaling operations online. In specific scenarios where the disk scaling process can't be performed online, storage autogrow isn't triggered, and you need to increase the storage manually. These scenarios include reaching, starting at, or crossing the 4,096-GiB limit.
-## Next steps
+### Next steps
- Learn about [business continuity](./concepts-business-continuity.md)
- Learn about [high availability](./concepts-high-availability.md)
-- Learn about [Compute and Storage](./concepts-compute-storage.md)
+- Learn about [Compute and Storage](./concepts-compute-storage.md)
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
Last updated 02/03/2023-+ #Customer intent: As a developer, I want to register my existing SAP system so that I can use the system with Azure Center for SAP solutions.
sap Hana Tiering Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-tiering-guidance.md
+
+ Title: Managing SAP HANA data footprint for balancing cost and performance
+description: Learn about HANA database archiving strategies to manage data footprint and reduce costs.
++++ Last updated : 09/27/2023++++
+# Managing SAP HANA data footprint for balancing cost and performance
+
+Data archiving has always been a critical decision-making item, and many companies use it heavily to organize their legacy data for cost benefits, balancing the need to comply with regulations and retain data for a certain period against the cost of storing that data. Customers planning to migrate to S/4HANA or another HANA-based solution, or to reduce their existing data storage footprint, can take advantage of the various data tiering options supported on Azure.
+
+This article describes the data tiering options on Azure, with emphasis on classifying data by usage pattern.
+
+## Overview
+
+SAP HANA is an in-memory database and is supported on SAP-certified servers. Azure provides more than 100 solutions [certified](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24) to run SAP HANA. The in-memory capabilities of SAP HANA allow customers to execute business transactions at incredible speed. But do you need fast access to all data, at any given point in time? Food for thought.
+
+Most organizations choose to offload less frequently accessed SAP data to the HANA storage tier, or to archive legacy data to an extended solution, to attain maximum performance out of their investment. This tiering of data helps balance the SAP HANA footprint and effectively reduces cost and complexity throughout.
+
+Refer to the following table for data tier characteristics, and choose the temperature tier to move data to based on the desired usage.
+
+| Classification | Hot Data | Warm Data | Cold Data |
+| --- | --- | --- | --- |
+| Frequently accessed | High | Medium | Low |
+| Expected performance | High | Medium | Low |
+| Business critical | High | Medium | Low |
+
+Frequently accessed, high-value data is classified as "hot" and is stored in-memory on the SAP HANA database. Less frequently accessed "warm" data is offloaded from memory and stored on the HANA storage tier, making it a unified part of the SAP HANA system. Finally, legacy or rarely accessed data is stored on low-cost storage tiers like disk or Hadoop, where it remains accessible at any time.
+
+A "one size fits all" approach doesn't work here. After data characterization is done, the next step is to map the SAP solution to a data tiering solution that SAP supports on Azure.
+
+| SAP Solution | Hot | Warm | Cold |
+| --- | --- | --- | --- |
+| Native SAP HANA | SAP certified VMs | HANA Dynamic Tiering, HANA extension Node, NSE | DLM with Data Intelligence, DLM with Hadoop |
+| SAP S/4HANA | SAP certified VMs | Data aging via NSE | SAP IQ |
+| SAP Business Suite on HANA | SAP certified VMs | Data aging via NSE | SAP IQ |
+| SAP BW/4 HANA | SAP certified VMs | NSE, HANA extension Node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |
+| SAP BW on HANA | SAP certified VMs | NSE, HANA extension Node | NLS with SAP IQ and Hadoop, Data Intelligence with ADLS |
+
+[2462641 - Is HANA Dynamic Tiering supported for Business Suite on HANA, or other SAP applications ( S/4, BW ) ? - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2462641)
+
+[2140959 - SAP HANA Dynamic Tiering - Additional Information - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2140959)
+
+[2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2799997)
+
+[2816823 - Use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2816823)
+
+## Configuration
+
+### Warm Data Tiering
+
+#### SAP HANA Dynamic Tiering for Azure Virtual Machines
+
+[SAP HANA infrastructure configurations and operations on Azure - Azure Virtual Machines | Microsoft Learn](./hana-vm-operations.md#sap-hana-dynamic-tiering-20-for-azure-virtual-machines)
+
+#### SAP HANA Native Storage Extension
+
+SAP HANA Native Storage Extension (NSE) is a native technology available starting with SAP HANA 2.0 SPS 04. NSE is a built-in disk-based extension to the in-memory column store data of SAP HANA. Customers don't need special hardware or certification for NSE. Any HANA-certified Azure virtual machine is valid for implementing NSE.
+
+##### Overview
+
+The capacity of an SAP HANA database with NSE is the amount of hot data in memory plus the warm data stored on disk. NSE allocates a buffer cache in HANA main memory that is sized separately from SAP HANA hot and working memory. As per SAP documentation, the buffer cache is enabled by default and is sized by default at 10% of HANA memory. Note that NSE isn't a replacement for data archiving, because it doesn't reduce the HANA disk size. Unlike data archiving, activation of NSE can be reversed.
+
+[SAP HANA Native Storage Extension | SAP Help Portal](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/4efaa94f8057425c8c7021da6fc2ddf5.html)
+
+[2799997 - FAQ: SAP HANA Native Storage Extension (NSE) - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2799997)
+
+[2973243 - Guidance for use of SAP HANA Native Storage Extension in SAP S/4HANA and SAP Business Suite powered by SAP HANA - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2973243)
+
+NSE is supported for scale-up and scale-out systems. Availability for scale-out systems starts with SAP HANA 2.0 SPS 04. Refer to SAP Note 2927591 to understand the functional restrictions.
+
+[2927591 - SAP HANA Native Storage Extension 2.0 SPS 05 Functional Restrictions - SAP ONE Support Launchpad](https://launchpad.support.sap.com/#/notes/2927591)
+
+SAP HANA NSE disaster recovery on Azure can be achieved using a variety of methods, including:
+
+- HANA System Replication: HANA System Replication allows you to create a copy of your SAP HANA NSE system in another Azure zone or region of choice. This copy is periodically synchronized with your production SAP HANA NSE system. In the event of a disaster, failover can be triggered to the disaster recovery SAP HANA NSE system.
+
+- Backup and restore: You can also use backup and restore to protect your SAP HANA NSE system from disaster. You can back up your SAP HANA NSE system to Azure Backup, and then restore it to a new SAP HANA NSE system in the event of a disaster. Native Azure backup capabilities can be leveraged here.
+
+- Azure Site Recovery: Azure Site Recovery is a disaster recovery service that can be used to replicate and recover your SAP HANA NSE system to another Azure region. Azure Site Recovery provides several features that make it a good choice for SAP HANA NSE disaster recovery, such as:
+
+ - Asynchronous replication, which can reduce the impact of replication on your production SAP HANA NSE system.
+
+ - Point-in-time restore, which allows you to restore your SAP HANA NSE system to a specific point in time.
+
+ - Automated failover and failback, which can help you to quickly recover your SAP HANA NSE system in the event of a disaster.
+
+The best method for SAP HANA NSE disaster recovery on Azure will depend on your specific needs and requirements.
+
+[Restore SAP HANA database instances on Azure VMs - Azure Backup | Microsoft Learn](/azure/backup/sap-hana-database-instances-restore)
+
+#### SAP HANA Extension Node
+
+HANA extension nodes are supported for BW on HANA, BW/4HANA, and SAP HANA native applications. For SAP BW on HANA, you need SAP HANA 1.0 SP 12 as the minimum HANA release and BW 7.4 SP12 as the minimum BW release. For SAP HANA native applications, you need HANA 2.0 SPS 03 as the minimum HANA release.
+
+The extension node setup is based on the HANA scale-out offering. Customers with a scale-up architecture need to extend to a scale-out deployment. Apart from the HANA standard license, no additional license is required. An extension node can't share the same OS, network, and disk with a HANA standard node.
+
+##### Networking Configuration
+
+Configure the networking settings for the Azure VMs to ensure proper communication between the SAP HANA primary node and the extension nodes. This includes configuring Azure virtual network (VNet) settings, subnets, and network security groups (NSGs) to allow the necessary network traffic.
+
+##### High Availability and Monitoring
+
+Implement high availability mechanisms, such as clustering or replication, to ensure that the SAP HANA system remains resilient in case of node failures. Additionally, set up monitoring and alerting mechanisms to keep track of the health and performance of the SAP HANA system on Azure.
+
+##### Data Backup and Recovery
+
+Implement a robust backup and recovery strategy to protect your SAP HANA data. Azure offers various backup options, including Azure Backup or SAP HANA-specific backup tools. Configure regular backups of both the primary and extension nodes to ensure data integrity and availability.
+
+##### Advantages of SAP HANA Extension Node
+
+[Data tiering and extension nodes for SAP HANA on Azure (Large Instances) - Azure Virtual Machines | Microsoft Learn](/azure/virtual-machines/workloads/sap/hana-data-tiering-extension-nodes)
+
+### Cold Data Tiering
+
+SAP Data Lifecycle Management (DLM) provides tools and methodologies from SAP to manage the lifecycle of data, moving it from SAP HANA to low-cost storage.
+
+Let's explore three common scenarios for SAP HANA data tiering using Azure services.
+
+#### Data Tiering with SAP Data Intelligence
+
+SAP Data Intelligence enables organizations to discover, integrate, orchestrate, and govern data from various sources, both within and outside the enterprise.
+
+SAP Data Intelligence enables the integration of SAP HANA with Azure Data Lake Storage. Cold data can be seamlessly moved from the in-memory tier to ADLS, leveraging its cost-effective storage capabilities. SAP
+Data Intelligence facilitates the orchestration of data pipelines, allowing for transparent access and query execution on data residing in ADLS.
+
+You can leverage the capabilities and services offered by Azure in conjunction with SAP Data Intelligence. Here are a few integration options:
+
+##### Azure Data Lake Storage integration
+
+SAP Data Intelligence supports integration with Azure Data Lake Storage, which is a scalable and secure data storage solution in Azure. You can configure connections
+in SAP Data Intelligence to access and process data stored in Azure Data Lake Storage. This allows you to leverage the power of SAP Data Intelligence for data ingestion, data transformation, and advanced analytics on data residing in Azure.
+
+SAP Data Intelligence provides a wide range of connectors and transformations that facilitate data movement and transformation tasks. You can configure SAP Data Intelligence pipelines to extract cold data
+from SAP HANA, transform it if necessary, and load it into Azure Blob Storage. This ensures seamless data transfer and enables further processing or analysis on the tiered data.
+
+SAP HANA provides query federation capabilities that seamlessly combine data from different storage tiers. With SAP HANA Smart Data Access (SDA) and SAP Data Intelligence, you can federate queries to access data
+stored in SAP HANA and Azure Blob Storage as if it were in a single location. This transparent data access allows users and applications to retrieve and analyze data from both tiers without the need for manual
+data movement or complex integration.
+
+##### Azure Synapse Analytics integration
+Azure Synapse Analytics is a cloud-based analytics service that combines big data and data warehousing capabilities. You can integrate SAP Data Intelligence with Azure Synapse Analytics to perform advanced analytics and data processing on large volumes of data. SAP Data Intelligence can connect to Azure Synapse Analytics to execute data pipelines, transformations, and machine learning tasks leveraging the power of Azure Synapse Analytics.
+
+##### Azure services integration
+SAP Data Intelligence can also integrate with other Azure services like Azure Blob Storage, Azure SQL
+Database, Azure Event Hubs, and more. This allows you to leverage the capabilities of these Azure services within your data workflows and processing tasks in SAP Data Intelligence.
+
+#### Data Tiering with SAP IQ
+
+SAP IQ (formerly Sybase IQ), a highly scalable columnar database, can be utilized as a storage option for cold data in the SAP HANA Data Tiering landscape. With SAP Data Intelligence, organizations can set up data pipelines to move cold data from SAP HANA to SAP IQ. This approach provides efficient compression and query performance for historical or less frequently accessed data.
+
+You can provision virtual machines (VMs) in Azure and install SAP IQ on those VMs. Azure Blob Storage is a scalable and cost-effective cloud storage service provided by Microsoft Azure. With SAP HANA Data Tiering,
+organizations can integrate SAP IQ with Azure Blob Storage to store the data that has been tiered off from SAP HANA.
+
+SAP HANA Data Tiering enables organizations to define policies and rules to automatically move cold data from SAP HANA to SAP IQ in Azure Blob Storage. This data movement can be performed based on data aging criteria or business rules. Once the data is in SAP IQ, it can be efficiently compressed and stored, optimizing storage utilization.
+
+SAP HANA provides query federation capabilities, allowing queries to seamlessly access and combine data from SAP HANA and SAP IQ as if it were in a single location. This transparent data access ensures that users and applications can retrieve and analyze data from both tiers without the need for manual data movement or complex integration.
+
+It's important to note that the specific steps and configurations may vary based on your requirements, SAP IQ version, and Azure deployment options. Therefore, referring to the official documentation and consulting with SAP and Azure experts is highly recommended for a successful deployment of SAP IQ on Azure with data tiering.
+
+#### Data Tiering with NLS on Hadoop
+
+Near-Line Storage (NLS) on Hadoop offers a cost-effective solution for managing cold data with SAP HANA. SAP Data Intelligence enables seamless integration between SAP HANA and Hadoop-based storage systems, such as
+Hadoop Distributed File System (HDFS). Data pipelines can be established to move cold data from SAP HANA to NLS on Hadoop, allowing for efficient data archiving and retrieval.
+
+[Implement SAP BW NLS with SAP IQ on Azure | Microsoft Learn](dbms-guide-sapiq.md)
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Select an area for resources about how to integrate SAP and Azure in that space.
| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services and more. | | [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. | | [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. |
-| [Threat Monitoring and Response Automation with Microsoft Security Services for SAP](#microsoft-security-for-sap) | Learn how to best secure your SAP workload with Microsoft Defender for Cloud and the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution. Prevent incidents from happening, detect and respond to threats in real-time. |
+| [Threat Monitoring and Response Automation with Microsoft Security Services for SAP](#microsoft-security-for-sap) | Learn how to best secure your SAP workload with Microsoft Defender for Cloud, the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution, and immutable vault for Azure Backup. Prevent incidents from happening, and detect and respond to threats in real time. |
| [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. | ### Azure OpenAI service
Complimenting that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_
Learn more about identity focused integration capabilities that power the analysis on Defender and Sentinel via the [Microsoft Entra ID section](#microsoft-entra-id-formerly-azure-ad).
+Leverage the [immutable vault for Azure Backup](/azure/backup/backup-azure-immutable-vault-concept) to protect your SAP data from ransomware attacks.
+ #### Microsoft Defender for Cloud

The [Defender product family](../../defender-for-cloud/defender-for-cloud-introduction.md) consists of multiple products tailored to provide "cloud security posture management" (CSPM) and "cloud workload protection" (CWPP) for the various workload types. The following excerpt serves as an entry point to start securing your SAP system.
For more information about using Microsoft Defender for Endpoint (MDE) via Micro
Also see the following SAP resources:
+- [3356389 - Antivirus or other security software affecting SAP operations](https://me.sap.com/notes/3356389)
- [2808515 - Installing security software on SAP servers running on Linux](https://me.sap.com/notes/2808515) - [1730997 - Unrecommended versions of antivirus software](https://me.sap.com/notes/1730997)
See below video to experience the SAP security orchestration, automation and res
> [!VIDEO https://www.youtube.com/embed/b-AZnR-nQpg]
+#### Immutable vault for Azure Backup for SAP
+
+For more information about [immutable vault for Azure Backup](/azure/backup/backup-azure-immutable-vault-concept), see the following Azure documentation:
+
+- [Backup and restore plan to protect against ransomware](/azure/security/fundamentals/backup-plan-to-protect-against-ransomware)
+- [Back up SAP HANA System Replication databases on Azure VMs](/azure/backup/sap-hana-database-with-hana-system-replication-backup#create-a-recovery-services-vault)
+ ### SAP BTP For more information about Azure integration with SAP Business Technology Platform (BTP), see the following SAP resources:
search Search Add Autocomplete Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-add-autocomplete-suggestions.md
Title: Add autocomplete to a search box
+ Title: Autocomplete or typeahead
-description: Enable search-as-you-type query actions in Azure Cognitive Search by creating suggesters and formulating requests that autocomplete a search box with finished terms or phrases. You can also return suggested matches.
+description: Enable search-as-you-type query actions in Azure Cognitive Search by creating suggesters and queries that autocomplete a search string with finished terms or phrases. You can also return suggested matches.
Previously updated : 09/12/2022 Last updated : 10/03/2023
-# Add autocomplete and suggestions to client apps using Azure Cognitive Search
+# How to add autocomplete and search suggestions in client apps
Search-as-you-type is a common technique for improving query productivity. In Azure Cognitive Search, this experience is supported through *autocomplete*, which finishes a term or phrase based on partial input (completing "micro" with "microsoft"). A second user experience is *suggestions*, or a short list of matching documents (returning book titles with an ID so that you can link to a detail page about that book). Both autocomplete and suggestions are predicated on a match in the index. The service won't offer queries that return zero results.
-To implement these experiences in Azure Cognitive Search, you will need:
+To implement these experiences in Azure Cognitive Search:
-+ A *suggester* definition that's embedded in the index schema.
-+ A *query* specifying [Autocomplete](/rest/api/searchservice/autocomplete) or [Suggestions](/rest/api/searchservice/suggestions) API on the request.
-+ A *UI control* to handle search-as-you-type interactions in your client app. We recommend using an existing JavaScript library for this purpose.
++ Add a `suggester` to an index schema.
++ Build a query that calls the [Autocomplete](/rest/api/searchservice/autocomplete) or [Suggestions](/rest/api/searchservice/suggestions) API on the request.
++ Add a UI control to handle search-as-you-type interactions in your client app. We recommend using an existing JavaScript library for this purpose.

In Azure Cognitive Search, autocompleted queries and suggested results are retrieved from the search index, from selected fields that you have registered with a suggester. A suggester is part of the index, and it specifies which fields will provide content that either completes a query, suggests a result, or does both. When the index is created and loaded, a suggester data structure is created internally to store prefixes used for matching on partial queries. For suggestions, choosing suitable fields that are unique, or at least not repetitive, is essential to the experience. For more information, see [Create a suggester](index-add-suggesters.md).
-The remainder of this article is focused on queries and client code. It uses JavaScript and C# to illustrate key points. REST API examples are used to concisely present each operation. For links to end-to-end code samples, see [Next steps](#next-steps).
+The remainder of this article is focused on queries and client code. It uses JavaScript and C# to illustrate key points. REST API examples are used to concisely present each operation. For end-to-end code samples, see [Next steps](#next-steps).
## Set up a request
spring-apps Application Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/application-observability.md
+
+ Title: Optimize application observability for Azure Spring Apps
+description: Learn how to observe applications deployed on Azure Spring Apps.
++++ Last updated : 10/02/2023+++
+# Optimize application observability for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Java ❌ C#
+
+**This article applies to:** <br>
+❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise
+
+This article shows you how to observe your production applications deployed on Azure Spring Apps, and how to diagnose and investigate production issues. Observability is the ability to collect insights, analytics, and actionable intelligence through logs, metrics, traces, and alerts.
+
+To find out if your applications meet expectations and to discover and predict issues in all applications, focus on the following areas:
+
+- **Availability**: Check that the application is available and accessible to the user.
+- **Reliability**: Check that the application is reliable and can be used normally.
+- **Failure**: Understand why the application isn't working properly and what further fixes are required.
+- **Performance**: Understand which performance issues the application encounters that need further attention and find out the root cause of the problem.
+- **Alerts**: Know the current state of the application. Proactively notify others and take necessary actions when the application isn't working properly.
+
+This article uses the well-known [PetClinic](https://github.com/azure-samples/spring-petclinic-microservices) sample app as the production application. For more information on how to deploy PetClinic to Azure Spring Apps and use MySQL as the persistent store, see the following articles:
+
+- [Deploy microservice applications to Azure Spring Apps](./quickstart-deploy-microservice-apps.md)
+- [Integrate Azure Spring Apps with Azure Database for MySQL](./quickstart-integrate-azure-database-mysql.md)
+
+Log Analytics and Application Insights are deeply integrated with Azure Spring Apps. You can use Log Analytics to diagnose your application with various log queries and use Application Insights to investigate production issues. For more information, see the following articles:
+
+- [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
+- [Azure Monitor Insights overview](../azure-monitor/insights/insights-overview.md)
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
++
+## Query logs to diagnose an application problem
+
+If you encounter production issues, you need to do a root cause analysis. Finding logs is an important part of this analysis, especially for distributed applications with logs spread across multiple applications. The trace data collected by Application Insights can help you find the log information for all related links, including the exception stack information.
+
+This section explains how to use Log Analytics to query the application logs and use Application Insights to investigate request failures. For more information, see the following articles:
+
+- [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md)
+- [Application Map: Triage distributed applications](../azure-monitor/app/app-map.md)
+
+### Log queries
+
+This section explains how to query application logs from the `AppPlatformLogsforSpring` table hosted by Azure Spring Apps. You can use the [Kusto Query Language](/azure/data-explorer/kusto/query/) to customize your queries for application logs.
+
+To see the built-in example query statements or to write your own queries, open the Azure Spring Apps instance and go to the **Logs** menu.
+
+#### Show the application logs that contain the "error" or "exception" terms
+
+To see the application logs containing the terms "error" or "exception", select **Alerts** on the **Queries** page, and then select **Run** in the **Show the application logs which contain the "error" or "exception" terms** section.
+
+The following query shows the application logs from the last hour that contain the terms "error" or "exception". You can customize the query with any keyword that you want to search for.
+
+```sql
+AppPlatformLogsforSpring
+| where TimeGenerated > ago(1h)
+| where Log contains "error" or Log contains "exception"
+| project TimeGenerated , ServiceName , AppName , InstanceName , Log , _ResourceId
+```
++
+#### Show the error and exception number of each application
+
+To see the error and exception number of an application, select **Alerts** on the **Queries** page, and then select **Run** in the **Show the error and exception number of each application** section.
+
+The following query shows a pie chart of the number of logs in the last 24 hours that contain the terms "error" or "exception". To view the results in a table format, select **Result**.
+
+```sql
+AppPlatformLogsforSpring
+| where TimeGenerated > ago(24h)
+| where Log contains "error" or Log contains "exception"
+| extend FullAppName = strcat(ServiceName, "/", AppName)
+| summarize count_per_app = count() by FullAppName, ServiceName, AppName, _ResourceId
+| sort by count_per_app desc
+| render piechart
+```
++
+#### Query the customers service log with a keyword
+
+Use the following query to see a list of logs in the `customers-service` app that contain the term "root cause". Update the query to use the keyword that you're looking for.
+
+```sql
+AppPlatformLogsforSpring
+| where AppName == "customers-service"
+| where Log contains "root cause"
+| project-keep InstanceName, Log
+```
++
+### Investigate request failures
+
+Use the following steps to investigate request failures in the application cluster and to view the failed request list and specific examples of the failed requests:
+
+1. Go to the Azure Spring Apps instance overview page.
+
+1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Failures**.
+
+ :::image type="content" source="media/application-observability/application-insights-failures.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Failures page." lightbox="media/application-observability/application-insights-failures.png":::
+
+1. On the **Failure** page, select the `PUT` operation that has the most failed requests count, select **1 Samples** to go into the details, and then select the suggested sample.
+
+ :::image type="content" source="media/application-observability/application-insights-failure-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested failure sample." lightbox="media/application-observability/application-insights-failure-suggested-sample.png":::
+
+1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel.
+
+ :::image type="content" source="media/application-observability/application-insights-e2e-exception.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with Application Insights failures." lightbox="media/application-observability/application-insights-e2e-exception.png":::
+
+## Improve the application performance using Application Insights
+
+If there's a performance issue, the trace data collected by Application Insights can help you find the log information for all relevant links, including the execution time of each link, so that you can locate the performance bottleneck.
+
+To use Application Insights to investigate the performance issues, use the following steps:
+
+1. Go to the Azure Spring Apps instance overview page.
+
+1. On the navigation menu, select **Application Insights** to go to the Application Insights overview page. Then, select **Performance**.
+
+ :::image type="content" source="media/application-observability/application-insights-performance.png" alt-text="Screenshot of the Azure portal that shows the Application Insights Performance page." lightbox="media/application-observability/application-insights-performance.png":::
+
+1. On the **Performance** page, select the slowest `GET /api/gateway/owners/{ownerId}` operation, select **3 Samples** to go into the details, and then select the suggested sample.
+
+ :::image type="content" source="media/application-observability/application-insights-performance-suggested-sample.png" alt-text="Screenshot of the Azure portal that shows the Select a sample operation pane with the suggested performance sample." lightbox="media/application-observability/application-insights-performance-suggested-sample.png":::
+
+1. Go to the **End-to-end transaction details** page to view the full call stack in the right panel.
+
+ :::image type="content" source="media/application-observability/application-insights-e2e-performance.png" alt-text="Screenshot of the Azure portal that shows the End-to-end transaction details page with the Application Insights performance issue." lightbox="media/application-observability/application-insights-e2e-performance.png":::
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up a staging environment](../spring-apps/how-to-staging-environment.md)
+
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Use TLS/SSL certificates](./how-to-use-tls-certificate.md)
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Previously updated : 09/07/2023 Last updated : 10/03/2023
At the end of your billing cycle, the charges for each meter are summed. Your bi
Data storage and metadata are billed per GB on a monthly basis. For data and metadata stored for less than a month, you can estimate the impact on your monthly bill by calculating the cost of each GB per day. You can use a similar approach to estimating the cost of encryption scopes that are in use for less than a month. The number of days in any given month varies. Therefore, to obtain the best approximation of your costs in a given month, make sure to divide the monthly cost by the number of days that occur in that month.
+#### Storage units
+
+Azure Blob Storage uses the following base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, TiB, and PiB. Line items on your bill that contain GB as a unit of measurement (for example, per GB per month) are calculated by Azure Blob Storage as binary GB (GiB). For example, a line item on your bill that shows **1** for **Data Stored (GB/month)** corresponds to 1 GiB per month of usage. The following table describes each base-2 unit:
+
+| Acronym | Unit | Definition |
+| --- | --- | --- |
+| KiB | kibibyte | 1,024 bytes |
+| MiB | mebibyte | 1,024 KiB (1,048,576 bytes) |
+| GiB | gibibyte | 1,024 MiB (1,073,741,824 bytes) |
+| TiB | tebibyte | 1,024 GiB (1,099,511,627,776 bytes) |
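
As a quick sanity check, the following sketch converts a raw byte count into the binary units that the billing line items correspond to; it's illustrative only, not an official billing calculation:

```typescript
// A "GB" quantity on the bill corresponds to binary gibibytes (GiB).
const GIB = 1024 ** 3;

function billedGibPerMonth(bytes: number): number {
  return bytes / GIB;
}

// Example: 5 TiB stored for a full month.
const fiveTiB = 5 * 1024 ** 4;
console.log(billedGibPerMonth(fiveTiB)); // 5120 GiB => 5120 "GB/month" units
```
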
### Finding the unit price for each meter

To find unit prices, open the correct pricing page and select the appropriate file structure. Then, apply the appropriate redundancy, region, and currency filters. Prices for each meter appear in a table. Prices differ based on other settings in your account, such as data redundancy options, access tier, and performance tier.
storage File Sync Server Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md
description: Learn how to register and unregister a Windows Server with an Azure
Previously updated : 06/15/2022 Last updated : 10/04/2023
Because Azure File Sync will rarely be the only service running in your datacent
You can throttle the network utilization of Azure File Sync by using the `StorageSyncNetworkLimit` cmdlets. > [!NOTE]
-> Network limits do not apply when a tiered file is accessed.
+> Network limits do not apply to the following scenarios:
+> - When a tiered file is accessed.
+> - Sync metadata that is exchanged between the registered server and Storage Sync Service.
+>
+> Because this network traffic is not throttled, Azure File Sync may exceed the configured network limit. We recommend that you monitor the network traffic and adjust the limit to account for the traffic that is not throttled.
For example, you can create a new throttle limit to ensure that Azure File Sync does not use more than 10 Mbps between 9 am and 5 pm (17:00h) during the work week:
synapse-analytics Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/connect-overview.md
Get connected to the Synapse SQL capability in Azure Synapse Analytics.
[Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5; you can use it only to connect and query.
-> [!NOTE]
-> If an AAD login has a connection open for more than 1 hour at time of query execution, any query that relies on AAD will fail. This includes querying storage using AAD pass-through and statements that interact with AAD (like CREATE EXTERNAL PROVIDER). This affects every tool that keeps connections open, like in query editor in SSMS and ADS. Tools that open new connections to execute a query, like Synapse Studio, are not affected.
-
-> You can restart SSMS or connect and disconnect in ADS to mitigate this issue.
- ## Find your server name The server name for the dedicated SQL pool in the following example is: showdemoweu.sql.azuresynapse.net.
Synapse SQL standardizes some settings during connection and object creation. Th
For executing **serverless SQL pool** queries, recommended tools are [Azure Data Studio](get-started-azure-data-studio.md) and Azure Synapse Studio. ## Next steps
-To connect and query with Visual Studio, see [Query with Visual Studio](../sql-data-warehouse/sql-data-warehouse-query-visual-studio.md?context=/azure/synapse-analytics/context/context). To learn more about authentication options, see [Authentication to Synapse SQL](sql-authentication.md?tabs=provisioned).
+To connect and query with Visual Studio, see [Query with Visual Studio](../sql-data-warehouse/sql-data-warehouse-query-visual-studio.md?context=/azure/synapse-analytics/context/context). To learn more about authentication options, see [Authentication to Synapse SQL](sql-authentication.md?tabs=provisioned).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are reasons why this error code can happen:
#### [0x80070005](#tab/x80070005)
-This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires.
+This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires. This can happen if you sign in for the first time in more than 90 days and, at the same time, you're inactive in the session for more than one hour.
The error message might also resemble: `File {path} cannot be opened because it does not exist or it is used by another process.`
To read or download a blob in the Archive tier, rehydrate it to an online tier.
#### [0x80070057](#tab/x80070057)
-This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires.
+This error can occur when the authentication method is user identity, which is also known as Azure AD pass-through, and the Azure AD access token expires. This can happen if you sign in for the first time in more than 90 days and, at the same time, you're inactive in the session for more than one hour.
The error message might also resemble the following pattern: `File {path} cannot be opened because it does not exist or it is used by another process.`
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
The following table is the list of URLs your session host VMs need to access for
||||| | `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services | | `*.wvd.microsoft.com` | 443 | Service traffic | WindowsVirtualDesktop |
-| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic<br /><br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
+| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
| `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend | | `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud | | `kms.core.windows.net` | 1688 | Windows activation | Internet |
The following table lists optional URLs that your session host virtual machines
|--|--|--|--| | `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services | | `*.wvd.azure.us` | 443 | Service traffic | WindowsVirtualDesktop |
-| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic<br /><br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
+| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor |
| `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | | `kms.core.usgovcloudapi.net` | 1688 | Windows activation | Internet | | `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud |
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 10/03/2023 Last updated : 10/04/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) -- Enhanced web to Microsoft Remote Desktop Client launch capabilities by adding multiple monitor configuration parameters to support internal and external customers.
+- Added new parameters for multiple monitor configuration when connecting to a remote resource using the [Uniform Resource Identifier (URI) scheme](uri-scheme.md).
- Added support for the following languages: Czech (Czechia), Hungarian (Hungary), Indonesian (Indonesia), Korean (Korea), Portuguese (Portugal), Turkish (Turkey).
- Fixed a bug that caused a crash when using Teams Media Optimization.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
virtual-machines B Series Cpu Credit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md
Similarly, utilizing the example of a Standard_B32as_v2 VM size, if the workload
## Credit monitoring
-To monitor B-series specific credit metrics, customers can utilize the Azure monitor data platform, see [Overview of metrics in Microsoft Azure](../../azure-monitor/data-platform.md). Azure monitor data platform can be accessed via Azure portal and other orchestration paths, and via programmatic API calls to Azure monitor.
-Via Azure monitor data platform, customers can access B-series credit model specific metrics such as 'CPU Credits Consumed', 'CPU Credits Remaining' and 'Percentage CPU' for their given B-series size in real time.
+To monitor B-series specific credit metrics, customers can use the Azure Monitor data platform; see [Overview of metrics in Microsoft Azure](../../azure-monitor/data-platform.md). The Azure Monitor data platform can be accessed via the Azure portal and other orchestration paths, and via programmatic API calls to Azure Monitor.
+Via the Azure Monitor data platform, customers can access B-series credit model-specific metrics such as 'CPU Credits Consumed', 'CPU Credits Remaining', and 'Percentage CPU' for their given B-series size in real time.
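As a sketch of the programmatic path, the credit metrics can be pulled with the Azure CLI's Azure Monitor commands (the resource ID and VM name are placeholders; metric names are as listed above):

```azurecli
# Query B-series credit metrics for a VM at a 5-minute grain.
az monitor metrics list \
    --resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myB32asVm \
    --metric "CPU Credits Consumed" "CPU Credits Remaining" "Percentage CPU" \
    --interval PT5M
```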
## Other sizes and information
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Title: Enable Trusted Launch on existing VMs
-description: Enable Trusted Launch on existing Azure VMs.
+ Title: Enable Trusted launch on existing VMs
+description: Enable Trusted launch on existing Azure VMs.
Last updated 08/13/2023
-# Enable Trusted Launch on existing Azure VMs
+# Enable Trusted launch on existing Azure VMs
**Applies to:** :heavy_check_mark: Linux VM :heavy_check_mark: Windows VM :heavy_check_mark: Generation 2 VM
-Azure Virtual Machines supports enabling Trusted Launch on existing [Azure Generation 2](generation-2.md) VMs by upgrading to [Trusted launch](trusted-launch.md) security type.
+Azure Virtual Machines supports enabling Trusted launch on existing [Azure Generation 2](generation-2.md) VMs by upgrading to [Trusted launch](trusted-launch.md) security type.
[Trusted launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md). Trusted launch protects your Virtual Machines against advanced and persistent attack techniques like boot kits and rootkits by combining infrastructure technologies like Secure Boot, vTPM, and Boot Integrity Monitoring on your VM.

> [!IMPORTANT]
-> Enabling Trusted Launch on existing virtual machines (VMs) is currently not supported for following scenarios:
+> Enabling Trusted launch on existing virtual machines (VMs) is currently not supported for the following scenarios:
>
> - Azure Generation 1 VMs are currently not supported.
-> - Azure Virtual Machine Scale Sets (VMSS) Uniform & Flex is currently not supported.
+> - Azure Virtual Machine Scale Sets (VMSS) Uniform & Flex are currently not supported.
## Prerequisites

- Azure Generation 2 VM(s) are configured with:
- - [Trusted Launch supported size family](trusted-launch.md#virtual-machines-sizes)
- - [Trusted Launch supported OS Image](trusted-launch.md#operating-systems-supported). For custom OS image or disks, the base image should be **Trusted Launch capable**.
-- Azure Generation 2 VM(s) is not using [features currently not supported with Trusted Launch](trusted-launch.md#unsupported-features).
-- Azure Generation 2 VM(s) should be **stopped and deallocated** before enabling Trusted Launch security type.
-- Azure Backup if enabled for Generation 2 VM(s) should be configured with [Enhanced Backup Policy](../backup/backup-azure-vms-enhanced-policy.md). Trusted Launch security type cannot be enabled for Generation 2 VM(s) configured with *Standard Policy* backup protection.
+ - [Trusted launch supported size family](trusted-launch.md#virtual-machines-sizes)
+ - [Trusted launch supported OS Image](trusted-launch.md#operating-systems-supported). For custom OS image or disks, the base image should be **Trusted launch capable**.
+- Azure Generation 2 VM(s) is not using [features currently not supported with Trusted launch](trusted-launch.md#unsupported-features).
+- Azure Generation 2 VM(s) should be **stopped and deallocated** before enabling Trusted launch security type.
+- Azure Backup if enabled for Generation 2 VM(s) should be configured with [Enhanced Backup Policy](../backup/backup-azure-vms-enhanced-policy.md). Trusted launch security type cannot be enabled for Generation 2 VM(s) configured with *Standard Policy* backup protection.
## Best practices

-- [Create restore point](create-restore-points.md) for Azure Generation 2 VM(s) before enabling Trusted Launch security type. You can use the Restore Point to re-create the disks and Generation 2 VM with the previous well-known state.
-- Enable Trusted launch on a test Generation 2 VM and ensure if any changes are required to meet the prerequisites before enabling Trusted Launch on Generation 2 VMs running production workloads.
+- Enable Trusted launch on a test Generation 2 VM and determine whether any changes are required to meet the prerequisites before enabling Trusted launch on Generation 2 VMs associated with production workloads.
+- [Create restore point](create-restore-points.md) for Azure Generation 2 VM(s) associated with production workloads before enabling Trusted launch security type. You can use the restore point to re-create the disks and the Generation 2 VM in their previous well-known state.
-## Enable Trusted Launch on existing VM
+## Enable Trusted launch on existing VM
+
+> [!NOTE]
+>
+> - After you enable Trusted launch, virtual machines currently can't be rolled back to the **Standard** security type (non-Trusted launch configuration).
+> - **vTPM** is enabled by default.
+> - Enabling **Secure Boot** is recommended (it isn't enabled by default) if you aren't using a custom unsigned kernel or drivers. Secure Boot preserves boot integrity and enables foundational security for the VM.
+
+### [Portal](#tab/portal)
+
+This section steps through using the Azure portal to enable Trusted launch on an existing Azure Generation 2 VM.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Validate that the virtual machine generation is **V2**, and then **Stop** the VM.
++
+3. On the **Overview** page, under the VM **Properties**, select **Standard** under **Security type**. This opens the **Configuration** page for the VM.
++
+4. Select the **Security type** drop-down in the **Security type** section of the **Configuration** page.
++
+5. Select **Trusted launch** in the drop-down, and select the checkboxes to enable **Secure Boot** and **vTPM**. Select **Save** after making the required changes.
+
+> [!NOTE]
+>
+> - Generation 2 VMs created using [Azure Compute Gallery (ACG)](azure-compute-gallery.md), [Managed Image](capture-image-resource.md), or [OS Disk](./scripts/create-vm-from-managed-os-disks.md) cannot be upgraded to Trusted launch using the portal. Ensure the [OS version is supported for Trusted launch](trusted-launch.md#operating-systems-supported) and use PowerShell, the CLI, or an ARM template to perform the upgrade.
++
+6. Close the **Configuration** page after the update completes successfully, and validate the **Security type** under the VM properties on the **Overview** page.
++
+7. Start the upgraded Trusted launch VM, ensure that it starts successfully, and verify that you can sign in to the VM using either RDP (for Windows VMs) or SSH (for Linux VMs).
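To double-check the result outside the portal, a quick validation sketch (assuming the Azure CLI; resource names are placeholders):

```azurecli
# The security profile should report securityType TrustedLaunch with UEFI settings.
az vm show --resource-group myResourceGroup --name myVm \
    --query "securityProfile" --output json
```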
### [CLI](#tab/cli)
-This section steps through using the Azure CLI to enable Trusted Launch on existing Azure Generation 2 VM.
+This section steps through using the Azure CLI to enable Trusted launch on an existing Azure Generation 2 VM.
Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli2) and are logged in to an Azure account with [az login](/cli/azure/reference-index).
az vm deallocate \
    --resource-group myResourceGroup --name myVm
```
-3. Enable Trusted Launch by setting `--security-type` to `TrustedLaunch`.
-
-> [!NOTE]
->
-> - After enabling Trusted Launch, currently virtual machine cannot be rolled back to security type **Standard** (Non-Trusted Launch configuration).
-> - **vTPM** is enabled by default.
-> - **Secure Boot** is recommended to be enabled (not enabled by default) if you are not using custom unsigned kernel or drivers. Secure Boot preserves boot integrity and enables foundational security for VM.
+3. Enable Trusted launch by setting `--security-type` to `TrustedLaunch`.
```azurecli-interactive
az vm update \
az vm start \
    --resource-group myResourceGroup --name myVm
```
-6. Start the upgraded Trusted Launch VM and ensure that it has started successfully and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+6. Start the upgraded Trusted launch VM, ensure that it starts successfully, and verify that you can sign in to the VM using either RDP (for Windows VMs) or SSH (for Linux VMs).
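Taken together, the CLI flow is short. A minimal end-to-end sketch, assuming the `--enable-secure-boot` and `--enable-vtpm` flags available in recent CLI releases (resource names are placeholders):

```azurecli-interactive
# Deallocate, upgrade the security type, and start the VM again.
az vm deallocate --resource-group myResourceGroup --name myVm

az vm update --resource-group myResourceGroup --name myVm \
    --security-type TrustedLaunch \
    --enable-secure-boot true --enable-vtpm true

az vm start --resource-group myResourceGroup --name myVm
```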
### [PowerShell](#tab/powershell)
+This section steps through using Azure PowerShell to enable Trusted launch on an existing Azure Generation 2 VM.
+This section steps through using the Azure PowerShell to enable Trusted launch on existing Azure Generation 2 VM.
Make sure that you've installed the latest [Azure PowerShell](/powershell/azure/install-azps-windows) and are logged in to an Azure account with [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
Connect-AzAccount -SubscriptionId 00000000-0000-0000-0000-000000000000
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm
```
-3. Enable Trusted Launch by setting `--security-type` to `TrustedLaunch`.
-
-> [!NOTE]
->
-> - After enabling Trusted Launch, currently virtual machine cannot be rolled back to security type **Standard** (Non-Trusted Launch configuration).
-> - **vTPM** is enabled by default.
-> - **Secure Boot** is recommended to be enabled (not enabled by default) if you are not using custom unsigned kernel or drivers. Secure Boot preserves boot integrity and enables foundational security for VM.
+3. Enable Trusted launch by setting `-SecurityType` to `TrustedLaunch`.
```azurepowershell-interactive
Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm `
Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm `
Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
```
-6. Start the upgraded Trusted Launch VM and ensure that it has started successfully and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+6. Start the upgraded Trusted launch VM, ensure that it starts successfully, and verify that you can sign in to the VM using either RDP (for Windows VMs) or SSH (for Linux VMs).
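For reference, a condensed PowerShell sketch of the same flow, assuming `Update-AzVM` exposes the `-SecurityType`, `-EnableSecureBoot`, and `-EnableVtpm` parameters (present in recent Az.Compute releases; resource names are placeholders):

```azurepowershell-interactive
# Deallocate, upgrade the security type, and start the VM again.
Stop-AzVM -ResourceGroupName myResourceGroup -Name myVm

$vm = Get-AzVM -ResourceGroupName myResourceGroup -VMName myVm
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm `
    -SecurityType TrustedLaunch -EnableSecureBoot $true -EnableVtpm $true

Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
```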
### [Template](#tab/template)
+This section steps through using an ARM template to enable Trusted launch on an existing Azure Generation 2 VM.
+This section steps through using an ARM template to enable Trusted launch on existing Azure Generation 2 VM.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
This section steps through using an ARM template to enable Trusted Launch on exi
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
- "vmsToMigrate": {
+ "vmsToUpgrade": {
"type": "object", "metadata": {
- "description": "Specifies the list of Gen2 virtual machines to be migrated to Trusted Launch."
+ "description": "Specifies the list of Gen2 virtual machines to be upgraded to Trusted launch."
} }, "vTpmEnabled": {
This section steps through using an ARM template to enable Trusted Launch on exi
{ "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2022-11-01",
- "name": "[parameters('vmsToMigrate').virtualMachines[copyIndex()].vmName]",
- "location": "[parameters('vmsToMigrate').virtualMachines[copyIndex()].location]",
+ "name": "[parameters('vmsToUpgrade').virtualMachines[copyIndex()].vmName]",
+ "location": "[parameters('vmsToUpgrade').virtualMachines[copyIndex()].location]",
"properties": { "securityProfile": { "uefiSettings": {
- "secureBootEnabled": "[parameters('vmsToMigrate').virtualMachines[copyIndex()].secureBootEnabled]",
+ "secureBootEnabled": "[parameters('vmsToUpgrade').virtualMachines[copyIndex()].secureBootEnabled]",
"vTpmEnabled": "[parameters('vTpmEnabled')]" }, "securityType": "TrustedLaunch"
This section steps through using an ARM template to enable Trusted Launch on exi
}, "copy": { "name": "vmCopy",
- "count": "[length(parameters('vmsToMigrate').virtualMachines)]"
+ "count": "[length(parameters('vmsToUpgrade').virtualMachines)]"
  }
}
]
This section steps through using an ARM template to enable Trusted Launch on exi
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": {
- "vmsToMigrate": {
+ "vmsToUpgrade": {
"value": { "virtualMachines": [ {
Property | Description of Property | Example template value
-|-|-
vmName | Name of Azure Generation 2 VM | "myVm"
location | Location of Azure Generation 2 VM | "westus3"
-secureBootEnabled | Enable secure boot with Trusted Launch security type | true
-
-> [!NOTE]
->
-> - After enabling Trusted Launch, currently virtual machine cannot be rolled back to security type **Standard** (Non-Trusted Launch configuration).
-> - **vTPM** is enabled by default.
-> - **Secure Boot** is recommended to be enabled (not enabled by default) if you are not using custom unsigned kernel or drivers. Secure Boot preserves boot integrity and enables foundational security for VM.
+secureBootEnabled | Enable secure boot with Trusted launch security type | true
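For illustration, a parameter-file fragment matching the table above might look like the following sketch (the values are the example values from the table; the separate `vTpmEnabled` parameter is supplied elsewhere in the file):

```json
"vmsToUpgrade": {
    "value": {
        "virtualMachines": [
            {
                "vmName": "myVm",
                "location": "westus3",
                "secureBootEnabled": true
            }
        ]
    }
}
```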
3. **Deallocate** all Azure Generation 2 VM(s) to be updated.
New-AzResourceGroupDeployment `
5. Verify that the deployment is successful. Check the security type and UEFI settings of the VM in the Azure portal, under the **Security type** section of the **Overview** page.
-6. Start the upgraded Trusted Launch VM and ensure that it has started successfully and verify that you are able to log in to the VM using either RDP (for Windows VM) or SSH (for Linux VM).
+6. Start the upgraded Trusted launch VM, ensure that it starts successfully, and verify that you can sign in to the VM using either RDP (for Windows VMs) or SSH (for Linux VMs).
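Putting the template steps together, a minimal deployment sketch (the template and parameter file names are placeholders):

```azurepowershell-interactive
# Deploy the upgrade template with its parameter file, then start the VM.
New-AzResourceGroupDeployment `
    -ResourceGroupName myResourceGroup `
    -TemplateFile .\vmsToUpgrade.json `
    -TemplateParameterFile .\vmsToUpgrade.parameters.json

Start-AzVM -ResourceGroupName myResourceGroup -Name myVm
```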
## Next steps
-**(Recommended)** Post-Upgrades enable [Boot Integrity Monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM using Microsoft Defender for Cloud.
+**(Recommended)** After upgrading, enable [Boot integrity monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM using Microsoft Defender for Cloud.
-Learn more about [trusted launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md)
+Learn more about [Trusted launch](trusted-launch.md) and review the [frequently asked questions](trusted-launch-faq.md).
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
:::image type="content" source="./media/default-outbound-access/default-outbound-access.png" alt-text="Diagram of default outbound access.":::
+>[!Important]
+>On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It is recommended to use one of the explicit forms of connectivity discussed below.
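For example, a NAT gateway is one common form of explicit outbound connectivity. A minimal sketch, assuming the Azure CLI and placeholder resource names:

```azurecli
# Create a Standard public IP and a NAT gateway, then attach the gateway to a subnet.
az network public-ip create --resource-group myResourceGroup \
    --name myNatIp --sku Standard

az network nat gateway create --resource-group myResourceGroup \
    --name myNatGateway --public-ip-addresses myNatIp

az network vnet subnet update --resource-group myResourceGroup \
    --vnet-name myVnet --name mySubnet --nat-gateway myNatGateway
```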
+
## Why is disabling default outbound access recommended?

* Secure by default
virtual-network Remove Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/remove-public-ip-address-vm.md
The following example dissociates a public IP address named *myVMPublicIP* from
```azurepowershell
$nic = Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup
-$nic.IpConfigurations.publicipaddress.id = $null
+$nic.IpConfigurations[0].PublicIpAddress = $null
Set-AzNetworkInterface -NetworkInterface $nic
```
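To confirm the dissociation took effect, a quick check (a sketch; the property is empty once no public IP is attached):

```azurepowershell
# Should return nothing after the public IP is dissociated.
(Get-AzNetworkInterface -Name myVMNic -ResourceGroupName myResourceGroup).IpConfigurations[0].PublicIpAddress
```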
virtual-network Routing Preference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-overview.md
The price difference between both options is reflected in the internet egress da
### Regional availability
-Internet routing preference is available in all regions except:
-
-* Australia Central
-
-* Austria East
-
-* Brazil Southeast
-
-* Germany Central
-
-* Germany NorthEast
-
-* Norway West
-
-* Sweden Central
-
-* West US 3
+Internet routing preference is available in the following regions:
+
+- Australia Central
+- Australia Central 2
+- Australia East
+- Australia Southeast
+- Brazil South
+- Brazil Southeast
+- Canada Central
+- Canada East
+- Central India
+- Central US
+- Central US EUAP
+- East Asia
+- East US
+- East US 2
+- East US 2 EUAP
+- France Central
+- France South
+- Germany North
+- Germany West Central
+- Japan East
+- Japan West
+- Korea Central
+- Korea South
+- North Central US
+- North Europe
+- Norway East
+- Norway West
+- South Africa North
+- South Africa West
+- South Central US
+- South India
+- Southeast Asia
+- Sweden Central
+- Switzerland North
+- Switzerland West
+- UAE Central
+- UAE North
+- UK South
+- UK West
+- West Central US
+- West Europe
+- West India
+- West US
+- West US 2
+- West US 3
## Next steps
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Once you have a working profile and need to distribute it to other users, you ca
You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available optional settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
+## Azure VPN Client version information
+
+Version 3.2.0.0
+
+New in this release:
+
+- AAD Authentication is now available from the settings page.
+- Server High Availability (HA), releasing on a rolling basis until October 20.
+- Accessibility improvements
+- Connection logs in UTC
+- Minor bug fixes
+
## Next steps

For more information, see [Create an Azure AD tenant for P2S Open VPN connections that use Azure AD authentication](openvpn-azure-ad-tenant.md).
+
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
Previously updated : 08/16/2022
Last updated : 10/04/2023
If bot protection is enabled, incoming requests that match bot rules are logged.
## Configuration
-You can configure and deploy all WAF policies by using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale by using Firewall Manager integration (preview). For more information, see [Use Azure Firewall Manager to manage Azure Web Application Firewall policies (preview)](../shared/manage-policies.md).
+You can configure and deploy all WAF policies by using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale by using Firewall Manager integration. For more information, see [Use Azure Firewall Manager to manage Azure Web Application Firewall policies](../shared/manage-policies.md).
## Monitoring
-Monitoring for a WAF on Azure Front Door is integrated with Azure Monitor to track alerts and easily monitor traffic trends.
+Monitoring for a WAF on Azure Front Door is integrated with Azure Monitor to track alerts and easily monitor traffic trends. For more information, see [Azure Web Application Firewall monitoring and logging](waf-front-door-monitor.md).
## Next steps