Updates from: 10/18/2023 01:17:05
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 08/23/2023 Last updated : 10/16/2023
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**2.1.21**
-- Additional sanitization of script tags to avoid XSS attacks.
+- Additional sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow).
**2.1.20**
- Fixed Enter event trigger on MFA.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
**2.1.10**
-- Added additional sanitization of script tags to avoid XSS attacks.
+- Added additional sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow).
**2.1.9**
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Content-type: application/json
### Retrieve the template for the provisioning connector
-Applications in the gallery that are enabled for provisioning have templates to streamline configuration. Use the request below to [retrieve the template for the provisioning configuration](/graph/api/synchronization-synchronizationtemplate-list?tabs=http&view=graph-rest-beta&preserve-view=true). Note that you will need to provide the ID. The ID refers to the preceding resource, which in this case is the servicePrincipal resource.
+Applications in the gallery that are enabled for provisioning have templates to streamline configuration. Use the request below to [retrieve the template for the provisioning configuration](/graph/api/synchronization-synchronization-list-templates?preserve-view=true&tabs=http&view=graph-rest-beta). Note that you will need to provide the ID. The ID refers to the preceding resource, which in this case is the servicePrincipal resource.
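For reference, a minimal PowerShell sketch of this call, assuming the Microsoft.Graph module; the service principal object ID is a hypothetical placeholder and the scope shown is an assumption:
```powershell
# Sketch only: retrieve the provisioning templates for a gallery app's
# service principal. $servicePrincipalId is a hypothetical placeholder.
Connect-MgGraph -Scopes "Application.ReadWrite.All"   # scope is an assumption
$servicePrincipalId = "<service-principal-object-id>"
$templates = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/templates"
# Each template's id becomes the templateId used when creating the job.
$templates.value | ForEach-Object { $_.id }
```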
#### Request
HTTP/1.1 200 OK
```
### Create the provisioning job
-To enable provisioning, you'll first need to [create a job](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta&preserve-view=true). Use the following request to create a provisioning job. Use the templateId from the previous step when specifying the template to be used for the job.
+To enable provisioning, you'll first need to [create a job](/graph/api/synchronization-synchronization-post-jobs?preserve-view=true&tabs=http&view=graph-rest-beta). Use the following request to create a provisioning job. Use the templateId from the previous step when specifying the template to be used for the job.
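A matching PowerShell sketch, assuming $servicePrincipalId and the templateId from the previous step:
```powershell
# Sketch only: create the provisioning job from a template.
$body = @{ templateId = "<templateId-from-previous-step>" } | ConvertTo-Json
$job = Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/jobs" `
    -Body $body -ContentType "application/json"
$job.id   # keep this jobId for later requests
```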
#### Request
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Provisioning integrates with Azure Monitor logs and Log Analytics. With Azure mo
## Enabling provisioning logs
-You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them, and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them, and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](/azure/azure-monitor/overview). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview).
Once you've configured Azure monitoring, you can enable logs for application provisioning. The option is located on the **Diagnostics settings** page.
The underlying data stream that Provisioning sends log viewers is almost identic
## Azure Monitor workbooks
-Azure Monitor workbooks provide a flexible canvas for data analysis. They also provide for the creation of rich visual reports within the Azure portal. To learn more, see [Azure Monitor Workbooks overview](../../azure-monitor/visualize/workbooks-overview.md).
+Azure Monitor workbooks provide a flexible canvas for data analysis. They also provide for the creation of rich visual reports within the Azure portal. To learn more, see [Azure Monitor Workbooks overview](/azure/azure-monitor/visualize/workbooks-overview).
Application provisioning comes with a set of prebuilt workbooks. You can find them on the Workbooks page. To view the data, ensure that all the filters (timeRange, jobID, appName) are populated. Also confirm the app was provisioned, otherwise there isn't any data in the logs.
Application provisioning comes with a set of prebuilt workbooks. You can find th
## Custom queries
-You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](../../azure-monitor/logs/get-started-queries.md). Also, be sure to check out [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](/azure/azure-monitor/logs/get-started-queries). Also, be sure to check out [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview).
Here are some samples to get started with application provisioning.
AADProvisioningLogs
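For example, here's a hedged sketch of running a query like this from PowerShell, assuming the Az.OperationalInsights module and a workspace that receives AADProvisioningLogs; the summarize column is an assumption to verify against your workspace schema:
```powershell
# Sketch only: count provisioning events by result over the last week.
$workspaceId = "<log-analytics-workspace-id>"   # hypothetical placeholder
$query = @"
AADProvisioningLogs
| where TimeGenerated > ago(7d)
| summarize count() by ResultType
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```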
Azure Monitor lets you configure custom alerts so that you can get notified about key events related to Provisioning. For example, you might want to receive an alert on spikes in failures. Or perhaps spikes in disables or deletes. Another example of where you might want to be alerted is a lack of any provisioning, which indicates something is wrong.
-To learn more about alerts, see [Azure Monitor Log Alerts](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+To learn more about alerts, see [Azure Monitor Log Alerts](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
Alert when there's a spike in failures. Replace the jobID with the jobID for your application.
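A hedged sketch of a query that could back such an alert rule; the column names are assumptions to verify in your workspace, and the jobID placeholder is hypothetical:
```powershell
# Sketch only: failures for one provisioning job in the last hour. Wire this
# query into a log alert rule with a threshold that represents a "spike".
# Reuses $workspaceId from the earlier sketch.
$query = @"
AADProvisioningLogs
| where TimeGenerated > ago(1h)
| where JobId contains "<your-jobID>"
| where ResultType == "Failure"
| summarize failureCount = count()
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```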
We're taking an open source and community-based approach to application provisio
## Next steps
- [Log analytics](../reports-monitoring/howto-analyze-activity-logs-log-analytics.md)
-- [Get started with queries in Azure Monitor logs](../../azure-monitor/logs/get-started-queries.md)
-- [Create and manage alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md)
-- [Install and use the log analytics views for Microsoft Entra ID](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md)
+- [Get started with queries in Azure Monitor logs](/azure/azure-monitor/logs/get-started-queries)
+- [Create and manage alert groups in the Azure portal](/azure/azure-monitor/alerts/action-groups)
+- [Install and use the log analytics views for Microsoft Entra ID](/azure/azure-monitor/visualize/workbooks-view-designer-conversion-overview)
- [Provisioning logs API](/graph/api/resources/provisioningobjectsummary?preserve-view=true&view=graph-rest-beta)
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
After you've resolved the issue, restart the provisioning job. Certain changes t
POST /servicePrincipals/{id}/synchronization/jobs/{jobId}/restart
```
-Replace "{ID}" with the value of the Application ID, and replace "{jobId}" with the [ID of the synchronization job](/graph/api/resources/synchronization-configure-with-directory-extension-attributes?tabs=http&view=graph-rest-beta&preserve-view=true#list-synchronization-jobs-in-the-context-of-the-service-principal).
+Replace "{ID}" with the value of the Application ID, and replace "{jobId}" with the [ID of the synchronization job](/graph/synchronization-configure-with-directory-extension-attributes?preserve-view=true&tabs=http&view=graph-rest-beta#list-synchronization-jobs-in-the-context-of-the-service-principal).
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
When you're editing the list of supported attributes, the following properties a
- **Multi-value?** - Whether the attribute supports multiple values.
- **Exact case?** - Whether the attribute's values are evaluated in a case-sensitive way.
- **API Expression** - Don't use, unless instructed to do so by the documentation for a specific provisioning connector (such as Workday).
-- **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and can't be edited using the Microsoft Entra admin center. However, you can edit them using the [Microsoft Graph API](/graph/api/resources/synchronization-configure-with-custom-target-attributes).
+- **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and can't be edited using the Microsoft Entra admin center. However, you can edit them using the [Microsoft Graph API](/graph/synchronization-configure-with-custom-target-attributes).
#### Provisioning a custom extension attribute to a SCIM compliant application
active-directory Inbound Provisioning Api Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-logic-apps.md
From an implementation perspective:
### Integration scenario variations
-While this tutorial uses a CSV file as a system of record, you can customize the sample Azure Logic Apps workflow to read data from any system of record. Azure Logic Apps provides a wide range of [built-in connectors](/azure/logic-apps/connectors/built-in/reference) and [managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) with pre-built triggers and actions that you can use in your integration workflow.
+While this tutorial uses a CSV file as a system of record, you can customize the sample Azure Logic Apps workflow to read data from any system of record. Azure Logic Apps provides a wide range of [built-in connectors](/azure/logic-apps/connectors/built-in/reference/) and [managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) with pre-built triggers and actions that you can use in your integration workflow.
Here's a list of enterprise integration scenario variations, where API-driven inbound provisioning can be implemented with a Logic Apps workflow.
The Logic Apps deployment template published in the [Microsoft Entra inbound pro
|# | Automation task | Implementation guidance | Advanced customization |
|---|---|---|---|
|1 | Read worker data from the CSV file. | The Logic Apps workflow uses an Azure Function to read the CSV file stored in an Azure File Share. The Azure Function converts CSV data into JSON format. If your CSV file format is different, update the workflow step "Parse JSON" and "Construct SCIMUser". | If your system of record is different, check guidance provided in the section [Integration scenario variations](#integration-scenario-variations) on how to customize the Logic Apps workflow by using an appropriate connector. |
-|2 | Pre-process and convert data to SCIM format. | By default, the Logic Apps workflow converts each record in the CSV file to a SCIM Core User + Enterprise User representation. If you plan to use custom SCIM schema extensions, update the step "Construct SCIMUser" to include your custom SCIM schema extensions. | If you want to run C# code for advanced formatting and data validation, use [custom Azure Functions](../../logic-apps/logic-apps-azure-functions.md).|
+|2 | Pre-process and convert data to SCIM format. | By default, the Logic Apps workflow converts each record in the CSV file to a SCIM Core User + Enterprise User representation. If you plan to use custom SCIM schema extensions, update the step "Construct SCIMUser" to include your custom SCIM schema extensions. | If you want to run C# code for advanced formatting and data validation, use [custom Azure Functions](/azure/logic-apps/logic-apps-azure-functions).|
|3 | Use the right authentication method | You can either [use a service principal](inbound-provisioning-api-grant-access.md#configure-a-service-principal) or [use managed identity](inbound-provisioning-api-grant-access.md#configure-a-managed-identity) to access the inbound provisioning API. Update the step "Send SCIMBulkPayload to API endpoint" with the right authentication method. | - |
|4 | Provision accounts in on-premises Active Directory or Microsoft Entra ID. | Configure [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md). This generates a unique [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. Update the step "Send SCIMBulkPayload to API endpoint" to use the right bulkUpload API endpoint (a sketch follows this table). | If you plan to use bulk request with custom SCIM schema, then extend the provisioning app schema to include your custom SCIM schema attributes. |
|5 | Scan the provisioning logs and retry provisioning for failed records. | This automation is not yet implemented in the sample Logic Apps workflow. To implement it, refer to the [provisioning logs Graph API](/graph/api/resources/provisioningobjectsummary). | - |
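For row 4, a hedged PowerShell sketch of posting a SCIM bulk request to the /bulkUpload endpoint; the payload is a trimmed illustration rather than a complete SCIM user, and the scim+json content type is an assumption to verify against the linked API:
```powershell
# Sketch only: send a minimal SCIM bulk request to the provisioning endpoint.
$uri = "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/jobs/$jobId/bulkUpload"
$payload = @{
    schemas    = @("urn:ietf:params:scim:api:messages:2.0:BulkRequest")
    Operations = @(
        @{
            method = "POST"
            bulkId = "00001"
            path   = "/Users"
            data   = @{
                schemas    = @("urn:ietf:params:scim:schemas:core:2.0:User")
                externalId = "EMP001"                  # hypothetical worker ID
                userName   = "jane.doe@contoso.com"    # hypothetical UPN
                active     = $true
            }
        }
    )
} | ConvertTo-Json -Depth 10
Invoke-MgGraphRequest -Method POST -Uri $uri -Body $payload -ContentType "application/scim+json"
```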
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
The PowerShell sample script published in the [Microsoft Entra inbound provision
|3 | Use a certificate for authentication to Microsoft Entra ID. | [Create a service principal that can access](inbound-provisioning-api-grant-access.md) the inbound provisioning API. Refer to steps in the section [Configure client certificate for service principal authentication](#configure-client-certificate-for-service-principal-authentication) to learn how to use client certificate for authentication (see the sketch after this table). | If you'd like to use managed identity instead of a service principal for authentication, then review the use of `Connect-MgGraph` in the sample script and update it to use [managed identities](/powershell/microsoftgraph/authentication-commands#using-managed-identity). |
|4 | Provision accounts in on-premises Active Directory or Microsoft Entra ID. | Configure [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md). This generates a unique [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint. Refer to the steps in the section [Generate and upload bulk request payload as admin user](#generate-and-upload-bulk-request-payload-as-admin-user) to learn how to upload data to this endpoint. Validate the attribute flow and customize the attribute mappings per your integration requirements. To run the script using a service principal with certificate-based authentication, refer to the steps in the section [Upload bulk request payload using client certificate authentication](#upload-bulk-request-payload-using-client-certificate-authentication). | If you plan to [use bulk request with custom SCIM schema](#generate-bulk-request-with-custom-scim-schema), then [extend the provisioning app schema](#extending-provisioning-job-schema) to include your custom SCIM schema elements.|
|5 | Scan the provisioning logs and retry provisioning for failed records. | Refer to the steps in the section [Get provisioning logs of the latest sync cycles](#get-provisioning-logs-of-the-latest-sync-cycles) to learn how to fetch and analyze provisioning log data. Identify failed user records and include them in the next upload cycle. | - |
-|6 | Deploy your PowerShell based automation to production. | Once you have verified your API-driven provisioning flow and customized the PowerShell script to meet your requirements, you can deploy the automation as a [PowerShell Workflow runbook in Azure Automation](../../automation/learn/automation-tutorial-runbook-textual.md) or as a server process [scheduled to run on a Windows server](/troubleshoot/windows-server/system-management-components/schedule-server-process). | - |
+|6 | Deploy your PowerShell based automation to production. | Once you have verified your API-driven provisioning flow and customized the PowerShell script to meet your requirements, you can deploy the automation as a [PowerShell Workflow runbook in Azure Automation](/azure/automation/learn/automation-tutorial-runbook-textual) or as a server process [scheduled to run on a Windows server](/troubleshoot/windows-server/system-management-components/schedule-server-process). | - |
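For row 3, a minimal sketch of certificate-based service principal sign-in with the Microsoft.Graph module; all three values are hypothetical placeholders:
```powershell
# Sketch only: authenticate as a service principal using a client certificate.
Connect-MgGraph -ClientId "<app-client-id>" -TenantId "<tenant-id>" `
    -CertificateThumbprint "<certificate-thumbprint>"
```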
## Download the PowerShell script
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
When two users in the source tenant have the same mail, and they both need to be
### Usage of Microsoft Entra B2B collaboration for cross-tenant access
- B2B users are unable to manage certain Microsoft 365 services in remote tenants (such as Exchange Online), as there's no directory picker.
-- To learn about Azure Virtual Desktop support for B2B users, see [Prerequisites for Azure Virtual Desktop](../../virtual-desktop/prerequisites.md?tabs=portal).
+- To learn about Azure Virtual Desktop support for B2B users, see [Prerequisites for Azure Virtual Desktop](/azure/virtual-desktop/prerequisites?tabs=portal).
- B2B users with UserType Member aren't currently supported in Power BI. For more information, see [Distribute Power BI content to external guest users using Microsoft Entra B2B](/power-bi/guidance/whitepaper-azure-b2b-power-bi).
- Converting a guest account into a Microsoft Entra member account or converting a Microsoft Entra member account into a guest isn't supported by Teams. For more information, see [Guest access in Microsoft Teams](/microsoftteams/guest-access).
::: zone-end
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Refer to the following links to troubleshoot any issues that may turn up during
* [Keep up to date on what's new with Microsoft Entra ID](https://azure.microsoft.com/updates/?product=active-directory)
-* [Microsoft Q&A Microsoft Entra forum](/answers/topics/azure-active-directory.html)
+* [Microsoft Q&A Microsoft Entra forum](/answers/tags/455/entra-id)
## Next steps
* [Configure Automatic User Provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
To review these events and all other activities performed by the provisioning se
All activities performed by the provisioning service are recorded in the Microsoft Entra audit logs. You can route Microsoft Entra audit logs to Azure Monitor logs for further analysis. With Azure Monitor logs (also known as Log Analytics workspace), you can query data to find events, analyze trends, and perform correlation across various data sources. Watch this [video](https://youtu.be/MP5IaCTwkQg) to learn the benefits of using Azure Monitor logs for Microsoft Entra logs in practical user scenarios.
-Install the [log analytics views for Microsoft Entra activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md) to get access to [prebuilt reports](https://github.com/AzureAD/Deployment-Plans/tree/master/Log%20Analytics%20Views) around provisioning events in your environment.
+Install the [log analytics views for Microsoft Entra activity logs](/azure/azure-monitor/visualize/workbooks-view-designer-conversion-overview) to get access to [prebuilt reports](https://github.com/AzureAD/Deployment-Plans/tree/master/Log%20Analytics%20Views) around provisioning events in your environment.
For more information, see how to [analyze the Microsoft Entra activity logs with your Azure Monitor logs](../reports-monitoring/howto-analyze-activity-logs-log-analytics.md).
active-directory Provisioning Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-workbook.md
This workbook:
## Enabling provisioning logs
-You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md) and [Provisioning Logs for troubleshooting cloud sync](../hybrid/cloud-sync/how-to-troubleshoot.md).
+You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](/azure/azure-monitor/overview). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview) and [Provisioning Logs for troubleshooting cloud sync](../hybrid/cloud-sync/how-to-troubleshoot.md).
## Source and Target
At the top of the workbook, using the drop-down, specify the source and target identities.
By clicking on the Source ID in the **Sync details** or the **Sync details by c
## Custom queries
-You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](../../azure-monitor/logs/get-started-queries.md). Also, be sure to check out [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+You can create custom queries and show the data on Azure dashboards. To learn how, see [Create and share dashboards of Log Analytics data](/azure/azure-monitor/logs/get-started-queries). Also, be sure to check out [Overview of log queries in Azure Monitor](/azure/azure-monitor/logs/log-query-overview).
## Custom alerts
Azure Monitor lets you configure custom alerts so that you can get notified about key events related to Provisioning. For example, you might want to receive an alert on spikes in failures. Or perhaps spikes in disables or deletes. Another example of where you might want to be alerted is a lack of any provisioning, which indicates something is wrong.
-To learn more about alerts, see [Azure Monitor Log Alerts](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+To learn more about alerts, see [Azure Monitor Log Alerts](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
## Next steps
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
# Tutorial: Develop a sample SCIM endpoint in Microsoft Entra ID
-This tutorial describes how to deploy the SCIM [reference code](https://aka.ms/scimreferencecode) with [Azure App Service](../../app-service/index.yml). Then, test the code by using Postman or by integrating with the Microsoft Entra provisioning service. The tutorial is intended for developers who want to get started with SCIM, or anyone interested in testing a [SCIM endpoint](./use-scim-to-provision-users-and-groups.md).
+This tutorial describes how to deploy the SCIM [reference code](https://aka.ms/scimreferencecode) with [Azure App Service](/azure/app-service/). Then, test the code by using Postman or by integrating with the Microsoft Entra provisioning service. The tutorial is intended for developers who want to get started with SCIM, or anyone interested in testing a [SCIM endpoint](./use-scim-to-provision-users-and-groups.md).
In this tutorial, you learn how to:
## Deploy your SCIM endpoint in Azure
-The steps here deploy the SCIM endpoint to a service by using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Visual Studio Code](https://code.visualstudio.com/) with [Azure App Service](../../app-service/index.yml). The SCIM reference code can run locally, hosted by an on-premises server, or deployed to another external service.
+The steps here deploy the SCIM endpoint to a service by using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Visual Studio Code](https://code.visualstudio.com/) with [Azure App Service](/azure/app-service/). The SCIM reference code can run locally, hosted by an on-premises server, or deployed to another external service.
### Get and deploy the sample app
Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from Gi
1. If not installed, add [Azure App Service for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) extension.
-1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Services](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vscode#2-publish-your-web-app).
+1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Services, [create a new App Services](/azure/app-service/quickstart-dotnetcore?tabs=net60&pivots=development-environment-vscode#2-publish-your-web-app).
1. In the Visual Studio Code terminal, run the .NET CLI command. This command generates a deployable publish folder for the app in the bin/debug/publish directory.
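The exact command is elided above; a publish of roughly this shape (an assumption, not the tutorial's verbatim command) produces the folder described:
```powershell
# Sketch only: publish the sample to a deployable folder.
dotnet publish Microsoft.SCIM.WebHostSample --configuration Debug --output bin/debug/publish
```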
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
TLS 1.2 Cipher Suites minimum bar:
### IP Ranges
-The Microsoft Entra provisioning service currently operates under the IP Ranges for Microsoft Entra ID as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the Microsoft Entra ID tag to allow traffic from the Microsoft Entra provisioning service into your application. You need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
+The Microsoft Entra provisioning service currently operates under the IP Ranges for Microsoft Entra ID as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the Microsoft Entra ID tag to allow traffic from the Microsoft Entra provisioning service into your application. You need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/service-tags/list).
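A minimal PowerShell sketch of that programmatic retrieval, assuming the Az.Network module; the service tag name to filter on is an assumption to verify:
```powershell
# Sketch only: list the address prefixes published under the Entra ID tag.
$tags = Get-AzNetworkServiceTag -Location "westus"
($tags.Values | Where-Object { $_.Name -eq "AzureActiveDirectory" }).Properties.AddressPrefixes
```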
Microsoft Entra ID also supports an agent based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Microsoft Entra ID without opening any inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md).
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
To configure additional XPATHs, refer to the section [Tutorial: Managing your co
## Custom XPATH values
The table below provides a list of other commonly used custom XPATH API expressions when provisioning workers from Workday to Active Directory or Microsoft Entra ID. Please test the XPATH API expressions provided here with your version of Workday by referring to the instructions captured in the section [Tutorial: Managing your configuration](../saas-apps/workday-inbound-tutorial.md#managing-your-configuration).
-To add more attributes to the XPATH table for the benefit of customers implementing this integration, please leave a comment below or directly [contribute](/contribute) to the article.
+To add more attributes to the XPATH table for the benefit of customers implementing this integration, please leave a comment below or directly [contribute](/contribute/) to the article.
> [!div class="mx-tdBreakAll"]
> | \# | Workday Attribute Name | Workday API version | Workday XPATH API expression |
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
To enable TLS 1.2:
1. Restart the server.
> [!NOTE]
-> Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates do not comply with one of the C).
+> Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates do not comply with one of the CA/Browser Forum Baseline requirements. For more information, see [Azure TLS certificate changes](/azure/security/fundamentals/tls-certificate-changes).
## Prepare your on-premises environment
active-directory Application Proxy Application Gateway Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-application-gateway-waf.md
The Application Gateway [Firewall logs][waf-logs] provide more details about the
## Next steps
-To prevent false positives, learn how to [Customize Web Application Firewall rules](../../web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md), configure [Web Application Firewall exclusion lists](../../web-application-firewall/ag/application-gateway-waf-configuration.md?tabs=portal), or [Web Application Firewall custom rules](../../web-application-firewall/ag/create-custom-waf-rules.md).
+To prevent false positives, learn how to [Customize Web Application Firewall rules](/azure/web-application-firewall/ag/application-gateway-customize-waf-rules-portal), configure [Web Application Firewall exclusion lists](/azure/web-application-firewall/ag/application-gateway-waf-configuration?tabs=portal), or [Web Application Firewall custom rules](/azure/web-application-firewall/ag/create-custom-waf-rules).
-[waf-overview]: ../../web-application-firewall/ag/ag-overview.md
-[appgw_quick]: ../../application-gateway/quick-create-portal.md
+[waf-overview]: /azure/web-application-firewall/ag/ag-overview
+[appgw_quick]: /azure/application-gateway/quick-create-portal
[appproxy-add-app]: ./application-proxy-add-on-premises-application.md
[appproxy-optimize]: ./application-proxy-network-topology.md
[appproxy-custom-domain]: ./application-proxy-configure-custom-domain.md
-[private-dns]: ../../dns/private-dns-getstarted-portal.md
-[waf-logs]: ../../application-gateway/application-gateway-diagnostics.md#firewall-log
+[private-dns]: /azure/dns/private-dns-getstarted-portal
+[waf-logs]: /azure/application-gateway/application-gateway-diagnostics#firewall-log
active-directory Application Proxy Azure Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-azure-front-door.md
Follow these steps to configure the Front Door Service (Standard):
## Next steps
-To prevent false positives, learn how to [Customize Web Application Firewall rules](../../web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md), configure [Web Application Firewall exclusion lists](../../web-application-firewall/ag/application-gateway-waf-configuration.md?tabs=portal), or [Web Application Firewall custom rules](../../web-application-firewall/ag/create-custom-waf-rules.md).
+To prevent false positives, learn how to [Customize Web Application Firewall rules](/azure/web-application-firewall/ag/application-gateway-customize-waf-rules-portal), configure [Web Application Firewall exclusion lists](/azure/web-application-firewall/ag/application-gateway-waf-configuration?tabs=portal), or [Web Application Firewall custom rules](/azure/web-application-firewall/ag/create-custom-waf-rules).
-[front-door-overview]: ../../frontdoor/front-door-overview.md
-[front-door-origin]: ../../frontdoor/origin.md?pivots=front-door-standard-premium#origin-host-header
-[front-door-tier]: ../../frontdoor/standard-premium/tier-comparison.md
-[front-door-custom-domain]: ../../frontdoor/standard-premium/how-to-add-custom-domain.md
+[front-door-overview]: /azure/frontdoor/front-door-overview
+[front-door-origin]: /azure/frontdoor/origin?pivots=front-door-standard-premium#origin-host-header
+[front-door-tier]: /azure/frontdoor/standard-premium/tier-comparison
+[front-door-custom-domain]: /azure/frontdoor/standard-premium/how-to-add-custom-domain
[appproxy-custom-domain]: ./application-proxy-configure-custom-domain.md
-[private-dns]: ../../dns/private-dns-getstarted-portal.md
-[waf-logs]: ../../application-gateway/application-gateway-diagnostics.md#firewall-log
+[private-dns]: /azure/dns/private-dns-getstarted-portal
+[waf-logs]: /azure/application-gateway/application-gateway-diagnostics#firewall-log
active-directory Application Proxy Back End Kerberos Constrained Delegation How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
If you still can't make progress, Microsoft support can assist you. Create a sup
## Next steps
-[Configure KCD on a managed domain](../../active-directory-domain-services/deploy-kcd.md).
+[Configure KCD on a managed domain](/entra/identity/domain-services/deploy-kcd).
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
If you see other response codes, such as 407 or 502, that means that the proxy i
## Next steps
* [Understand Microsoft Entra application proxy connectors](application-proxy-connectors.md)
-* If you have problems with connector connectivity issues, ask your question in the [Microsoft Q&A question page for Microsoft Entra ID](/answers/topics/azure-active-directory.html) or create a ticket with our support team.
+* If you have problems with connector connectivity issues, ask your question in the [Microsoft Q&A question page for Microsoft Entra ID](/answers/tags/455/entra-id) or create a ticket with our support team.
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
To publish your app through Application Proxy with a custom domain:
![Add CNAME DNS entry](./media/application-proxy-configure-custom-domain/dns-info.png)
-10. Follow the instructions at [Manage DNS records and record sets by using the Microsoft Entra admin center](../../dns/dns-operations-recordsets-portal.md) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain in Azure DNS. If a different DNS provider is used, please contact the vendor for the instructions.
+10. Follow the instructions at [Manage DNS records and record sets by using the Microsoft Entra admin center](/azure/dns/dns-operations-recordsets-portal) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain in Azure DNS. If a different DNS provider is used, please contact the vendor for the instructions.
> [!IMPORTANT]
> Ensure that you are properly using a CNAME record that points to the *msappproxy.net* domain. Do not point records to IP addresses or server DNS names since these are not static and may impact the resiliency of the service.
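When the zone is hosted in Azure DNS, a minimal sketch of step 10 with the Az.Dns module; the host names are hypothetical placeholders:
```powershell
# Sketch only: point the custom external URL at the msappproxy.net address.
New-AzDnsRecordSet -ResourceGroupName "contoso-rg" -ZoneName "contoso.com" `
    -Name "myapp" -RecordType CNAME -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "myapp-contoso.msappproxy.net")
```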
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
For detailed information on the topic, see [KCD for single sign-on](application-
* **DNS records for URLs**
- * Before using custom domains in Application Proxy you must create a CNAME record in public DNS, allowing clients to resolve the custom defined external URL to the pre-defined Application Proxy address. Failing to create a CNAME record for an application that uses a custom domain will prevent remote users from connecting to the application. Steps required to add CNAME records can vary from DNS provider to provider, so learn how to [manage DNS records and record sets by using the Microsoft Entra admin center](../../dns/dns-operations-recordsets-portal.md).
+ * Before using custom domains in Application Proxy you must create a CNAME record in public DNS, allowing clients to resolve the custom defined external URL to the pre-defined Application Proxy address. Failing to create a CNAME record for an application that uses a custom domain will prevent remote users from connecting to the application. Steps required to add CNAME records can vary from DNS provider to provider, so learn how to [manage DNS records and record sets by using the Microsoft Entra admin center](/azure/dns/dns-operations-recordsets-portal).
* Similarly, connector hosts must be able to resolve the internal URL of applications being published.
The connectors and the service take care of all the high availability tasks. You
#### Windows event logs and performance counters
-Connectors have both admin and session logs. The admin logs include key events and their errors. The session logs include all the transactions and their processing details. Logs and counters are located in Windows Event Logs. For more information, see [Understand Microsoft Entra application proxy Connectors](./application-proxy-connectors.md#under-the-hood). Follow this [tutorial to configure event log data sources in Azure Monitor](../../azure-monitor/agents/data-sources-windows-events.md).
+Connectors have both admin and session logs. The admin logs include key events and their errors. The session logs include all the transactions and their processing details. Logs and counters are located in Windows Event Logs. For more information, see [Understand Microsoft Entra application proxy Connectors](./application-proxy-connectors.md#under-the-hood). Follow this [tutorial to configure event log data sources in Azure Monitor](/azure/azure-monitor/agents/data-sources-windows-events).
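A hedged sketch for pulling recent connector admin events; the log name is an assumption to confirm on your connector host:
```powershell
# Sketch only: read the most recent connector admin-log events.
Get-WinEvent -LogName "Microsoft-AadApplicationProxy-Connector/Admin" -MaxEvents 20 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message
```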
### Troubleshooting guide and steps
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Connectors establish their connections based on principles for high availability
1. A user on a client device tries to access an on-premises application published through Application Proxy.
2. The request goes through an Azure Load Balancer to determine which Application Proxy service instance should take the request. There are tens of instances available to accept the requests for all traffic in the region. This method helps to evenly distribute the traffic across the service instances.
-3. The request is sent to [Service Bus](../../service-bus-messaging/index.yml).
+3. The request is sent to [Service Bus](/azure/service-bus-messaging/).
4. Service Bus signals to an available connector. The connector then picks up the request from Service Bus.
   - In step 2, requests go to different Application Proxy service instances, so connections are more likely to be made with different connectors. As a result, connectors are almost evenly used within the group.
5. The connector passes the request to the application's back-end server. Then the application sends the response back to the connector.
active-directory Application Proxy Integrate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-logic-apps.md
When a new Enterprise Application is created, a matching App Registration is als
- [How to configure an Application Proxy application](./application-proxy-config-how-to.md)
- [Access on-premises APIs with Microsoft Entra application proxy](./application-proxy-secure-api-access.md)
-- [Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps](../../logic-apps/logic-apps-examples-and-scenarios.md)
+- [Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps](/azure/logic-apps/logic-apps-examples-and-scenarios)
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
Here are some examples of the types of policies you can create with Defender for
- Use client certificates or device compliance to block access to specific applications from unmanaged devices.
- Restrict user sessions from non-corporate networks. You can give restricted access to users accessing an application from outside your corporate network. For example, this restricted access can block the user from downloading sensitive documents.
-For more information, see [Protect apps with Microsoft Defender for Cloud Apps Conditional Access App Control](/cloud-app-security/proxy-intro-aad).
+For more information, see [Protect apps with Microsoft Defender for Cloud Apps Conditional Access App Control](/defender-cloud-apps/proxy-intro-aad).
## Requirements
To configure your application with the Conditional Access Application Control, f
## Test Conditional Access App Control
-To test the deployment of Microsoft Entra applications with Conditional Access Application Control, follow the instructions in [Test the deployment for Microsoft Entra apps](/cloud-app-security/proxy-deployment-aad).
+To test the deployment of Microsoft Entra applications with Conditional Access Application Control, follow the instructions in [Test the deployment for Microsoft Entra apps](/defender-cloud-apps/proxy-deployment-aad).
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
You can use Microsoft Intune to manage the client apps that your company's workf
5. Under **APIs my organization uses**, search for "Microsoft Mobile Application Management" and select it.
6. Add the **DeviceManagementManagedApps.ReadWrite** permission to the application.
7. Click **Grant admin consent** to grant the permission access to the application.
-8. Configure the Intune policy you want by referring to [How to create and assign app protection policies](/intune/app-protection-policies).
+8. Configure the Intune policy you want by referring to [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies).
## Troubleshooting
-If the application returns an error page after trying to load a report for more than a few minutes, you might need to change the timeout setting. By default, Application Proxy supports applications that take up to 85 seconds to respond to a request. To lengthen this setting to 180 seconds, set the back-end timeout to **Long** in the App Proxy settings page for the application. For tips on how to create fast and reliable reports, see [Power BI Reports Best Practices](/power-bi/power-bi-reports-performance).
+If the application returns an error page after trying to load a report for more than a few minutes, you might need to change the timeout setting. By default, Application Proxy supports applications that take up to 85 seconds to respond to a request. To lengthen this setting to 180 seconds, set the back-end timeout to **Long** in the App Proxy settings page for the application. For tips on how to create fast and reliable reports, see [Power BI Reports Best Practices](/power-bi/guidance/power-bi-optimization).
Using Microsoft Entra application proxy to enable the Power BI mobile app to connect to on premises Power BI Report Server is not supported with Conditional Access policies that require the Microsoft Power BI app as an approved client app.
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-network-topology.md
Latency is not compromised because traffic is flowing over a dedicated connectio
Although the focus of this article is connector placement, you can also change the placement of the application to get better latency characteristics.
-Increasingly, organizations are moving their networks into hosted environments. This enables them to place their apps in a hosted environment that is also part of their corporate network, and still be within the domain. In this case, the patterns discussed in the preceding sections can be applied to the new application location. If you're considering this option, see [Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md).
+Increasingly, organizations are moving their networks into hosted environments. This enables them to place their apps in a hosted environment that is also part of their corporate network, and still be within the domain. In this case, the patterns discussed in the preceding sections can be applied to the new application location. If you're considering this option, see [Microsoft Entra Domain Services](/entra/identity/domain-services/overview).
Additionally, consider organizing your connectors using [connector groups](application-proxy-connector-groups.md) to target apps that are in different locations and networks.
active-directory Application Proxy Page Load Speed Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-load-speed-problem.md
This article helps you to understand why a Microsoft Entra application proxy app
## Overview Although your applications are working, they can experience a long latency. There might be network topology tweaks that you can make to improve speed. For an evaluation of different topologies, see the [network considerations document](application-proxy-network-topology.md).
-Besides network topology, there are currently no further recommendations for performance tuning. As the Application Proxy service expands, it might come to a data center that is physically closer. The closer proximity might help with latency. For a list of Azure data centers, see the [latency test page](http://www.azurespeed.com/Azure/Latency).
+Besides network topology, there are currently no further recommendations for performance tuning. As the Application Proxy service expands, it might come to a data center that is physically closer. The closer proximity might help with latency. For a list of Azure data centers, see the [latency test page](https://www.azurespeed.com/Azure/Latency).
## Next steps
[Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md)
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
The following diagram shows how you can use Microsoft Entra application proxy to
The Microsoft Entra application proxy forms the backbone of the solution, working as a public endpoint for API access, and providing authentication and authorization. You can access your APIs from a vast array of platforms by using the [Microsoft Authentication Library (MSAL)](../develop/reference-v2-libraries.md) libraries.
-Since Microsoft Entra application proxy authentication and authorization are built on top of Microsoft Entra ID, you can use Microsoft Entra Conditional Access to ensure only trusted devices can access APIs published through Application Proxy. Use Microsoft Entra join or Microsoft Entra hybrid joined for desktops, and Intune Managed for devices. You can also take advantage of Microsoft Entra ID P1 or P2 features like Microsoft Entra multifactor authentication, and the machine learning-backed security of [Azure Identity Protection](../identity-protection/overview-identity-protection.md).
+Since Microsoft Entra application proxy authentication and authorization are built on top of Microsoft Entra ID, you can use Microsoft Entra Conditional Access to ensure only trusted devices can access APIs published through Application Proxy. Use Microsoft Entra join or Microsoft Entra hybrid joined for desktops, and Intune Managed for devices. You can also take advantage of Microsoft Entra ID P1 or P2 features like Microsoft Entra multifactor authentication, and the machine learning-backed security of [Microsoft Entra ID Protection](../identity-protection/overview-identity-protection.md).
## Prerequisites
active-directory Application Proxy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-security.md
Apply richer policy controls before connections to your network are established.
With [Conditional Access](../conditional-access/concept-conditional-access-cloud-apps.md), you can define restrictions on how users are allowed to access your applications. You can create policies that restrict sign-ins based on location, strength of authentication, and user risk profile.
-You can also use Conditional Access to configure Multi-Factor Authentication policies, adding another layer of security to your user authentications. Additionally, your applications can also be routed to Microsoft Defender for Cloud Apps via Microsoft Entra Conditional Access to provide real-time monitoring and controls, via [access](/cloud-app-security/access-policy-aad) and [session](/cloud-app-security/session-policy-aad) policies.
+You can also use Conditional Access to configure Multi-Factor Authentication policies, adding another layer of security to your user authentications. Additionally, your applications can also be routed to Microsoft Defender for Cloud Apps via Microsoft Entra Conditional Access to provide real-time monitoring and controls, via [access](/defender-cloud-apps/access-policy-aad) and [session](/defender-cloud-apps/session-policy-aad) policies.
### Traffic termination
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
This sample requires the [Azure Active Directory PowerShell 2.0 for Graph module
| Command | Notes |
|---|---|
-| [New-AzureADGroupAppRoleAssignment](/powershell/module/AzureAD/New-azureadgroupapproleassignment) | Assigns a group to an application role. |
+| [New-AzureADGroupAppRoleAssignment](/powershell/module/azuread/new-azureadgroupapproleassignment) | Assigns a group to an application role. |
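A hedged usage sketch for the cmdlet above (AzureAD module); the group and app names are hypothetical, and the app role index assumes the app defines at least one role:
```powershell
# Sketch only: assign a group to an application's first defined app role.
$group = Get-AzureADGroup -SearchString "Sales Team"
$sp    = Get-AzureADServicePrincipal -SearchString "My App Proxy App"
New-AzureADGroupAppRoleAssignment -ObjectId $group.ObjectId -PrincipalId $group.ObjectId `
    -ResourceId $sp.ObjectId -Id $sp.AppRoles[0].Id
```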
## Next steps
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
This sample requires the [Azure Active Directory PowerShell 2.0 for Graph module
| Command | Notes |
|---|---|
-| [New-AzureADUserAppRoleAssignment](/powershell/module/AzureAD/new-azureaduserapproleassignment) | Assigns a user to an application role. |
+| [New-AzureADUserAppRoleAssignment](/powershell/module/azuread/new-azureaduserapproleassignment) | Assigns a user to an application role. |
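A hedged usage sketch (AzureAD module); the user and app names are hypothetical:
```powershell
# Sketch only: assign a user to an application's first defined app role.
$user = Get-AzureADUser -ObjectId "jane.doe@contoso.com"
$sp   = Get-AzureADServicePrincipal -SearchString "My App Proxy App"
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId -Id $sp.AppRoles[0].Id
```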
## Next steps
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
This sample requires the [Azure Active Directory PowerShell 2.0 for Graph module
| Command | Notes |
|---|---|
-| [Get-AzureADUser](/powershell/module/AzureAD/get-azureaduser)| Gets a user. |
-| [Get-AzureADGroup](/powershell/module/AzureAD/get-azureadgroup)| Gets a group. |
+| [Get-AzureADUser](/powershell/module/azuread/get-azureaduser)| Gets a user. |
+| [Get-AzureADGroup](/powershell/module/azuread/get-azureadgroup)| Gets a group. |
| [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) | Gets a service principal. |
-| [Get-AzureADUserAppRoleAssignment](/powershell/module/AzureAD/get-azureaduserapproleassignment) | Get a user application role assignment. |
-| [Get-AzureADGroupAppRoleAssignment](/powershell/module/AzureAD/get-azureadgroupapproleassignment) | Get a group application role assignment. |
+| [Get-AzureADUserAppRoleAssignment](/powershell/module/azuread/get-azureaduserapproleassignment) | Get a user application role assignment. |
+| [Get-AzureADGroupAppRoleAssignment](/powershell/module/azuread/get-azureadgroupapproleassignment) | Get a group application role assignment. |
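A hedged sketch tying the cmdlets above together; the app name is hypothetical, and enumerating every user is fine for small tenants but slow at scale:
```powershell
# Sketch only: list user assignments for one application.
$sp = Get-AzureADServicePrincipal -SearchString "My App Proxy App"
Get-AzureADUser -All $true | ForEach-Object {
    Get-AzureADUserAppRoleAssignment -ObjectId $_.ObjectId |
        Where-Object { $_.ResourceId -eq $sp.ObjectId }
}
```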
## Next steps
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/what-is-application-proxy.md
In today's cloud-first world, Microsoft Entra ID is best suited to control who a
## The future of remote access
-In today's digital workplace, users work anywhere with multiple devices and apps. The only constant is user identity. That's why the first step to a secure network today is to use [Microsoft Entra identity management](../../security/fundamentals/identity-management-overview.md) capabilities as your security control plane. A model that uses identity as your control plane is typically comprised of the following components:
+In today's digital workplace, users work anywhere with multiple devices and apps. The only constant is user identity. That's why the first step to a secure network today is to use [Microsoft Entra identity management](/azure/security/fundamentals/identity-management-overview) capabilities as your security control plane. A model that uses identity as your control plane is typically comprised of the following components:
* An identity provider to keep track of users and user-related information.
* Device directory to maintain a list of devices that have access to corporate resources. This directory includes corresponding device information (for example, type of device, integrity etc.).
The remote access solution offered by Application Proxy and Microsoft Entra ID s
* **Remote access as a service**. You don't have to worry about maintaining and patching on-premises servers to enable remote access. Application Proxy is an internet scale service that Microsoft owns, so you always get the latest security patches and upgrades. Unpatched software still accounts for a large number of attacks. According to the Department of Homeland Security, as many as [85 percent of targeted attacks are preventable](https://www.us-cert.gov/ncas/alerts/TA15-119A). With this service model, you don't have to carry the heavy burden of managing your edge servers anymore and scramble to patch them as needed.
-* **Intune integration**. With Intune, corporate traffic is routed separately from personal traffic. Application Proxy ensures that the corporate traffic is authenticated. [Application Proxy and the Intune Managed Browser](/intune/app-configuration-managed-browser#how-to-configure-application-proxy-settings-for-protected-browsers) capability can also be used together to enable remote users to securely access internal websites from iOS and Android devices.
+* **Intune integration**. With Intune, corporate traffic is routed separately from personal traffic. Application Proxy ensures that the corporate traffic is authenticated. [Application Proxy and the Intune Managed Browser](/mem/intune/apps/manage-microsoft-edge#how-to-configure-application-proxy-settings-for-protected-browsers) capability can also be used together to enable remote users to securely access internal websites from iOS and Android devices.
### Roadmap to the cloud
For more information about choosing where to install your connectors and optimiz
Up to this point, we've focused on using Application Proxy to publish on-premises apps externally while enabling single sign-on to all your cloud and on-premises apps. However, there are other use cases for App Proxy that are worth mentioning. They include:
-* **Securely publish REST APIs**. When you have business logic or APIs running on-premises or hosted on virtual machines in the cloud, Application Proxy provides a public endpoint for API access. API endpoint access lets you control authentication and authorization without requiring incoming ports. It provides additional security through Microsoft Entra ID P1 or P2 features such as multi-factor authentication and device-based Conditional Access for desktops, iOS, MAC, and Android devices using Intune. To learn more, see [How to enable native client applications to interact with proxy applications](./application-proxy-configure-native-client-application.md) and [Protect an API by using OAuth 2.0 with Microsoft Entra ID and API Management](../../api-management/api-management-howto-protect-backend-with-aad.md).
+* **Securely publish REST APIs**. When you have business logic or APIs running on-premises or hosted on virtual machines in the cloud, Application Proxy provides a public endpoint for API access. API endpoint access lets you control authentication and authorization without requiring incoming ports. It provides additional security through Microsoft Entra ID P1 or P2 features such as multi-factor authentication and device-based Conditional Access for desktops, iOS, MAC, and Android devices using Intune. To learn more, see [How to enable native client applications to interact with proxy applications](./application-proxy-configure-native-client-application.md) and [Protect an API by using OAuth 2.0 with Microsoft Entra ID and API Management](/azure/api-management/api-management-howto-protect-backend-with-aad).
* **Remote Desktop Services (RDS)**. Standard RDS deployments require open inbound connections. However, the [RDS deployment with Application Proxy](./application-proxy-integrate-with-remote-desktop-services.md) has a permanent outbound connection from the server running the connector service. This way, you can offer more applications to users by publishing on-premises applications through Remote Desktop Services. You can also reduce the attack surface of the deployment with a limited set of two-step verification and Conditional Access controls to RDS.
* **Publish applications that connect using WebSockets**. Support with [Qlik Sense](./application-proxy-qlik.md) is in Public Preview and will be expanded to other apps in the future.
* **Enable native client applications to interact with proxy applications**. You can use Microsoft Entra application proxy to publish web apps, but it also can be used to publish [native client applications](./application-proxy-configure-native-client-application.md) that are configured with Microsoft Authentication Library (MSAL). Native client applications differ from web apps because they're installed on a device, while web apps are accessed through a browser.
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/2-secure-access-current-state.md
Generally, users seeking external collaboration know the applications to use, an
To find collaborating users:
-* Microsoft 365 [Audit log activities](/microsoft-365/compliance/audit-log-activities?view=o365-worldwide&preserve-view=true) - search for events and discover activities audited in Microsoft 365
+* Microsoft 365 [Audit log activities](/purview/audit-log-activities?view=o365-worldwide&preserve-view=true) - search for events and discover activities audited in Microsoft 365
* [Auditing and reporting a B2B collaboration user](../external-identities/auditing-and-reporting.md) - verify guest user access, and see records of system and user activities

## Enumerate guest users and organizations
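One hedged way to start that enumeration with the AzureAD module is sketched below; grouping guests by their mail-domain suffix only approximates the partner organization:

```powershell
# Enumerate guest (B2B) accounts in the tenant
Connect-AzureAD
$guests = Get-AzureADUser -All $true -Filter "userType eq 'Guest'"

# Approximate the external organization from the mail domain
$guests | Where-Object Mail |
    Group-Object { ($_.Mail -split '@')[-1] } |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```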
Investigate access to your sensitive apps for awareness about external access. S
If your email and network plans are enabled, you can investigate content sharing through email or unauthorized software as a service (SaaS) apps.

* Identify, prevent, and monitor accidental sharing
- * [Learn about data loss prevention](/microsoft-365/compliance/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true)
+ * [Learn about data loss prevention](/purview/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true)
* Identify unauthorized apps
 * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps)
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/4-secure-access-groups.md
Use Microsoft Entra security groups to assign:
* Microsoft 365
* Dynamics 365
* Enterprise mobility and security
- * See, [What is group-based licensing in Microsoft Entra ID?](../fundamentals/licensing-whatis-azure-portal.md)
+ * See, [What is group-based licensing in Microsoft Entra ID?](../fundamentals/concept-group-based-licensing.md)
* Elevated permissions
 * See, [Use Microsoft Entra groups to manage role assignments](../roles/groups-concept.md)
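The group-based licensing item above can also be scripted. A sketch with the AzureAD module, where the group name and SKU part number are hypothetical:

```powershell
Connect-AzureAD

# Hypothetical group and SKU; substitute your own values
$group = Get-AzureADGroup -Filter "DisplayName eq 'All Sales Users'"
$sku   = Get-AzureADSubscribedSku | Where-Object SkuPartNumber -eq 'ENTERPRISEPREMIUM'

# Build the license assignment payload expected by the cmdlet
$license  = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = $sku.SkuId
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license

# Attach the license to the group; members inherit it
Set-AzureADGroupLicense -ObjectId $group.ObjectId -AssignedLicenses $licenses
```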
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/5-secure-access-b2b.md
If you use a self-service portal, use API connectors to collect user attributes
Learn more:

* [Use API connectors to customize and extend self-service sign-up](../external-identities/api-connectors-overview.md)
-* [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
+* [Manage Azure AD B2C with Microsoft Graph](/azure/active-directory-b2c/microsoft-graph-operations)
<a name='troubleshoot-invitation-redemption-to-azure-ad-users'></a>
By default, Teams allows external access. The organization can communicate with
Sharing through SharePoint and OneDrive adds users not in the entitlement management process.

* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
-* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office)
+* [Block OneDrive use from Office](/microsoft-365/troubleshoot/group-policy/block-onedrive-use-from-office)
### Emailed documents and sensitivity labels

Users send documents to external users by email. You can use sensitivity labels to restrict and encrypt access to documents.
-See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true).
+See, [Learn about sensitivity labels](/purview/sensitivity-labels?view=o365-worldwide&preserve-view=true).
### Unsanctioned collaboration tools
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/8-secure-access-sensitivity-labels.md
Team members who need to create sensitivity labels require permissions to:
* Microsoft 365 Defender portal,
* Microsoft Purview compliance portal, or
-* [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center?view=o365-worldwide&preserve-view=true)
+* [Microsoft Purview compliance portal](/purview/microsoft-365-compliance-center?view=o365-worldwide&preserve-view=true)
By default, tenant Global Administrators have access to admin centers and can provide access, without granting tenant Admin permissions. For this delegated limited admin access, add users to the following role groups:
Consider the content categories that external users can't have access to, such a
Sensitivity labels can be applied automatically or manually to content.
-See, [Apply a sensitivity label to content automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically?view=o365-worldwide&preserve-view=true)
+See, [Apply a sensitivity label to content automatically](/purview/apply-sensitivity-label-automatically?view=o365-worldwide&preserve-view=true)
#### Sensitivity labels on email and content
Sensitivity labels applied to a container, such as a SharePoint site, aren't app
Learn more:
-* [Enable sensitivity labels for Office files in SharePoint and OneDrive](/microsoft-365/compliance/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide&preserve-view=true).
-* [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 Groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)
+* [Enable sensitivity labels for Office files in SharePoint and OneDrive](/purview/sensitivity-labels-sharepoint-onedrive-files?view=o365-worldwide&preserve-view=true).
+* [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 Groups, and SharePoint sites](/purview/sensitivity-labels-teams-groups-sites)
* [Assign sensitivity labels to Microsoft 365 groups in Microsoft Entra ID](../enterprise-users/groups-assign-sensitivity-labels.md)

### Implement sensitivity labels

After you determine use of sensitivity labels, see the following documentation for implementation.
-* [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels?view=o365-worldwide&preserve-view=true)
-* [Create and publish sensitivity labels](/microsoft-365/compliance/create-sensitivity-labels?view=o365-worldwide&preserve-view=true)
-* [Restrict access to content by using sensitivity labels to apply encryption](/microsoft-365/compliance/encryption-sensitivity-labels?view=o365-worldwide&preserve-view=true)
+* [Get started with sensitivity labels](/purview/get-started-with-sensitivity-labels?view=o365-worldwide&preserve-view=true)
+* [Create and publish sensitivity labels](/purview/create-sensitivity-labels?view=o365-worldwide&preserve-view=true)
+* [Restrict access to content by using sensitivity labels to apply encryption](/purview/encryption-sensitivity-labels?view=o365-worldwide&preserve-view=true)
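As a rough sketch of the create-and-publish steps in Security & Compliance PowerShell (label and policy names are hypothetical; real deployments add encryption and scoping settings):

```powershell
# Requires the ExchangeOnlineManagement module
Connect-IPPSSession

# Create a label for externally shared content
New-Label -Name 'External-Confidential' `
    -DisplayName 'External - Confidential' `
    -Tooltip 'Restrict and encrypt content shared with external users'

# Publish the label so users and apps can apply it
New-LabelPolicy -Name 'External collaboration labels' `
    -Labels 'External-Confidential' `
    -ExchangeLocation All
```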
## Next steps
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/9-secure-access-teams-sharepoint.md
The External Identities collaboration feature in Microsoft Entra ID controls per
Learn more:
-* [Manage external meetings and chat in Microsoft Teams](/microsoftteams/manage-external-access)
-* [Step 1. Determine your cloud identity model](/microsoft-365/enterprise/about-microsoft-365-identity)
+* [Manage external meetings and chat in Microsoft Teams](/microsoftteams/trusted-organizations-external-meetings-chat)
+* [Step 1. Determine your cloud identity model](/microsoft-365/enterprise/deploy-identity-solution-identity-model)
* [Identity models and authentication for Microsoft Teams](/microsoftteams/identify-models-authentication)
* [Sensitivity labels for Microsoft Teams](/microsoftteams/sensitivity-labels)
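A small sketch of inspecting and tightening external access with the Teams PowerShell module; the boolean choices shown are illustrative, not a recommendation:

```powershell
Connect-MicrosoftTeams

# Review the current federation posture
Get-CsTenantFederationConfiguration

# Keep organization-to-organization federation, block consumer accounts
Set-CsTenantFederationConfiguration -AllowFederatedUsers $true -AllowPublicUsers $false
```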
SharePoint administrators can find organization-wide settings in the SharePoint
Learn more:

* [SharePoint admin center](https://microsoft-admin.sharepoint.com) - access permissions are required
-* [Get started with the SharePoint admin center](/sharepoint/get-started-new-admin-center)
+* [Get started with the SharePoint admin center](/sharepoint/manage-sites-in-new-admin-center)
* [External sharing overview](/sharepoint/external-sharing-overview)

<a name='integrating-sharepoint-and-onedrive-with-azure-ad-b2b'></a>
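A sketch of checking and constraining tenant-wide sharing from SharePoint Online PowerShell, assuming a hypothetical `contoso-admin` URL:

```powershell
# Requires the Microsoft.Online.SharePoint.PowerShell module
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'

# Inspect the tenant-level sharing posture
Get-SPOTenant | Select-Object SharingCapability, ShowPeoplePickerSuggestionsForGuestUsers

# Allow sharing with authenticated external users only (no anonymous links)
Set-SPOTenant -SharingCapability ExternalUserSharingOnly
```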
active-directory Architecture Icons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture-icons.md
Microsoft permits the use of these icons in architectural diagrams, training mat
## More icon sets from Microsoft

-- [Azure architecture icons](/azure/architecture/icons)
+- [Azure architecture icons](/azure/architecture/icons/)
- [Microsoft 365 architecture icons and templates](/microsoft-365/solutions/architecture-icons-templates)
- [Dynamics 365 icons](/dynamics365/get-started/icons)
- [Microsoft Power Platform icons](/power-platform/guidance/icons)
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-ldap.md
There is a need for an application or service to use LDAP authentication.
## Implement LDAP authentication with Microsoft Entra ID
-* [Create and configure a Microsoft Entra Domain Services instance](../../active-directory-domain-services/tutorial-create-instance.md)
+* [Create and configure a Microsoft Entra Domain Services instance](/entra/identity/domain-services/tutorial-create-instance)
-* [Configure virtual networking for a Microsoft Entra Domain Services instance](../../active-directory-domain-services/tutorial-configure-networking.md)
+* [Configure virtual networking for a Microsoft Entra Domain Services instance](/entra/identity/domain-services/tutorial-configure-networking)
-* [Configure Secure LDAP for a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md)
+* [Configure Secure LDAP for a Microsoft Entra Domain Services managed domain](/entra/identity/domain-services/tutorial-configure-ldaps)
-* [Create an outbound forest trust to an on-premises domain in Microsoft Entra Domain Services](../../active-directory-domain-services/tutorial-create-forest-trust.md)
+* [Create an outbound forest trust to an on-premises domain in Microsoft Entra Domain Services](/entra/identity/domain-services/tutorial-create-forest-trust)
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-oidc.md
There is a need for user consent and for web sign in.
* [Microsoft identity platform and OpenID Connect protocol](../develop/v2-protocols-oidc.md)
-* [Web sign-in with OpenID Connect in Azure Active Directory B2C](../../active-directory-b2c/openid-connect.md)
+* [Web sign-in with OpenID Connect in Azure Active Directory B2C](/azure/active-directory-b2c/openid-connect)
-* [Secure your application by using OpenID Connect and Microsoft Entra ID](/training/modules/secure-app-with-oidc-and-azure-ad/)
+* [Secure your application by using OpenID Connect and Microsoft Entra ID](../develop/v2-protocols-oidc.md)
active-directory Auth Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/auth-passwordless.md
Microsoft Entra ID enables integration with the following passwordless authentic
- [Overview of Microsoft Entra certificate-based authentication](../authentication/concept-certificate-based-authentication.md): Microsoft Entra certificate-based authentication (CBA) enables customers to allow or require users to authenticate directly with X.509 certificates against their Microsoft Entra ID for applications and browser sign-in. This feature enables customers to adopt phishing resistant authentication and authenticate with an X.509 certificate against their Public Key Infrastructure (PKI).
- [Enable passwordless security key sign-in](../authentication/howto-authentication-passwordless-security-key.md): For enterprises that use passwords and have a shared PC environment, security keys provide a seamless way for workers to authenticate without entering a username or password. Security keys provide improved productivity for workers, and have better security. This article explains how to sign in to web-based applications with your Microsoft Entra account using a FIDO2 security key.
-- [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-overview): Windows Hello for Business replaces passwords with strong two-factor authentication on devices. This authentication consists of a type of user credential that is tied to a device and uses a biometric or PIN.
+- [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/): Windows Hello for Business replaces passwords with strong two-factor authentication on devices. This authentication consists of a type of user credential that is tied to a device and uses a biometric or PIN.
- [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md): Microsoft Authenticator can be used to sign in to any Microsoft Entra account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. Windows Hello for Business uses a similar technology. Microsoft Authenticator can be used on any device platform, including mobile. Microsoft Authenticator can be used with any app or website that integrates with Microsoft Authentication Libraries.
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/automate-provisioning-to-applications-solutions.md
After users are provisioned into Microsoft Entra ID, use Lifecycle Workflows (LC
[Learn more about Microsoft Entra Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md)

> [!Note]
-> For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](../..//logic-apps/logic-apps-overview.md).
+> For scenarios not covered by LCW, customers can leverage the extensibility of [Logic Applications](/azure/logic-apps/logic-apps-overview).
### Reconcile changes made directly in the target system
active-directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/b2c-deployment-plans.md
Azure Active Directory B2C (Azure AD B2C) is an identity and access management s
### Requirements

- Assess the primary reason to turn off systems
- - See, [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
+ - See, [What is Azure Active Directory B2C?](/azure/active-directory-b2c/overview)
- For a new application, plan the design of the Customer Identity Access Management (CIAM) system
- - See, [Planning and design](../../active-directory-b2c/best-practices.md#planning-and-design)
+ - See, [Planning and design](/azure/active-directory-b2c/best-practices#planning-and-design)
- Identify customer locations and create a tenant in the corresponding datacenter
- - See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md)
+ - See, [Tutorial: Create an Azure Active Directory B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant)
- Confirm your application types and supported technologies:
 - [Overview of the Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
 - [Develop with open source languages, frameworks, databases, and tools in Azure](https://azure.microsoft.com/free/open-source/search/?OCID=AID2200277_SEM_f63bcafc4d5f1d7378bfaa2085b249f9:G:s&ef_id=f63bcafc4d5f1d7378bfaa2085b249f9:G:s&msclkid=f63bcafc4d5f1d7378bfaa2085b249f9).
 - For back-end services, use the [client credentials](../develop/msal-authentication-flows.md#client-credentials) flow
- To migrate from an identity provider (IdP):
- - [Seamless migration](../../active-directory-b2c/user-migration.md#seamless-migration)
+ - [Seamless migration](/azure/active-directory-b2c/user-migration#seamless-migration)
 - Go to [`user-migration`](https://github.com/azure-ad-b2c/user-migration)
- Select protocols
 - If you use Kerberos, Microsoft Windows NT LAN Manager (NTLM), and Web Services Federation (WS-Fed), see the video, [Application and identity migration to Azure AD B2C](https://www.bing.com/videos/search?q=application+migration+in+azure+ad+b2c&docid=608034225244808069&mid=E21B87D02347A8260128E21B87D02347A8260128&view=detail&FORM=VIRE)
Help set realistic expectations and make contingency plans to meet key milestone
### Deploy authentication and authorization

* Before your applications interact with Azure AD B2C, register them in a tenant you manage
- * See, [Tutorial: Create an Azure Active Directory B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md)
+ * See, [Tutorial: Create an Azure Active Directory B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant)
* For authorization, use the Identity Experience Framework (IEF) sample user journeys
 * See, [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples#local-account-policy-enhancements)
* Use policy-based control for cloud-native environments
Learn more with the Microsoft Identity PDF, [Gaining expertise with Azure AD B2C
Azure AD B2C projects start with one or more client applications.
-* [The new App registrations experience for Azure Active Directory B2C](../../active-directory-b2c/app-registrations-training-guide.md)
- * Refer to [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md) for implementation
+* [The new App registrations experience for Azure Active Directory B2C](/azure/active-directory-b2c/app-registrations-training-guide)
+ * Refer to [Azure Active Directory B2C code samples](/azure/active-directory-b2c/integrate-with-app-code-samples) for implementation
* Set up your user journey based on custom user flows
- * [Comparing user flows and custom policies](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies)
- * [Add an identity provider to your Azure Active Directory B2C tenant](../../active-directory-b2c/add-identity-provider.md)
- * [Migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md)
+ * [Comparing user flows and custom policies](/azure/active-directory-b2c/user-flow-overview#comparing-user-flows-and-custom-policies)
+ * [Add an identity provider to your Azure Active Directory B2C tenant](/azure/active-directory-b2c/add-identity-provider)
+ * [Migrate users to Azure AD B2C](/azure/active-directory-b2c/user-migration)
* [Azure Active Directory B2C: Custom CIAM User Journeys](https://github.com/azure-ad-b2c/samples) for advanced scenarios

### Application deployment checklist
Azure AD B2C projects start with one or more client applications.
* Determine where front-end and back-end applications are hosted: on-premises, cloud, or hybrid-cloud
* Confirm the platforms or languages in use:
 * For example ASP.NET, Java, and Node.js
- * See, [Quickstart: Set up sign in for an ASP.NET application using Azure AD B2C](../../active-directory-b2c/quickstart-web-app-dotnet.md)
+ * See, [Quickstart: Set up sign in for an ASP.NET application using Azure AD B2C](/azure/active-directory-b2c/quickstart-web-app-dotnet)
* Verify where user attributes are stored
 * For example, Lightweight Directory Access Protocol (LDAP) or databases
Azure AD B2C projects start with one or more client applications.
* Confirm the number of users accessing applications
* Determine the IdP types needed:
 * For example, Facebook, local account, and Active Directory Federation Services (AD FS)
- * See, [Active Directory Federation Services](/windows-server/identity/active-directory-federation-services)
+ * See, [Active Directory Federation Services](/windows-server/identity/ad-fs/ad-fs-overview)
* Outline the claim schema required from your application, Azure AD B2C, and IdPs if applicable
- * See, [ClaimsSchema](../../active-directory-b2c/claimsschema.md)
+ * See, [ClaimsSchema](/azure/active-directory-b2c/claimsschema)
* Determine the information to collect during sign-in and sign-up
- * [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow)
+ * [Set up a sign-up and sign-in flow in Azure Active Directory B2C](/azure/active-directory-b2c/add-sign-up-and-sign-in-policy?pivots=b2c-user-flow)
### Client application onboarding and deliverables
Use the following checklist for onboarding an application
|Application target user group | Select among end customers, business customers, or a digital service. </br>Determine a need for employee sign-in.|
|Application business value| Understand the business need and/or goal to determine the best Azure AD B2C solution and integration with other client applications.|
|Your identity groups| Cluster identities into groups with requirements, such as business-to-consumer (B2C), business-to-business (B2B), business-to-employee (B2E), and business-to-machine (B2M) for IoT device sign-in and service accounts.|
-|Identity provider (IdP)| See, [Select an identity provider](../../active-directory-b2c/add-identity-provider.md#select-an-identity-provider). For example, for a customer-to-customer (C2C) mobile app use an easy sign-in process. </br>B2C with digital services has compliance requirements. </br>Consider email sign-in. |
+|Identity provider (IdP)| See, [Select an identity provider](/azure/active-directory-b2c/add-identity-provider#select-an-identity-provider). For example, for a customer-to-customer (C2C) mobile app use an easy sign-in process. </br>B2C with digital services has compliance requirements. </br>Consider email sign-in. |
|Regulatory constraints | Determine a need for remote profiles or privacy policies. |
|Sign-in and sign-up flow | Confirm email verification or email verification during sign-up. </br>For check-out processes, see [How it works: Microsoft Entra multifactor authentication](../authentication/concept-mfa-howitworks.md). </br>See the video [Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=4). |
|Application and authentication protocol| Implement client applications such as Web application, single-page application (SPA), or native. </br>Authentication protocols for client application and Azure AD B2C: OAuth, OIDC, and SAML. </br>See the video [Protecting Web APIs with Microsoft Entra ID](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9).|
-| User migration | Confirm if you'll [migrate users to Azure AD B2C](../../active-directory-b2c/user-migration.md): Just-in-time (JIT) migration and bulk import/export. </br>See the video [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2).|
+| User migration | Confirm if you'll [migrate users to Azure AD B2C](/azure/active-directory-b2c/user-migration): Just-in-time (JIT) migration and bulk import/export. </br>See the video [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2).|
Use the following checklist for delivery.

|Area| Description|
|---|---|
|Protocol information| Gather the base path, policies, and metadata URL of both variants. </br>Specify attributes such as sample sign-in, client application ID, secrets, and redirects.|
-|Application samples | See, [Azure Active Directory B2C code samples](../../active-directory-b2c/integrate-with-app-code-samples.md).|
-|Penetration testing | Inform your operations team about pen tests, then test user flows including the OAuth implementation. </br>See, [Penetration testing](../../security/fundamentals/pen-testing.md) and [Penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
-| Unit testing | Unit test and generate tokens. </br>See, [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md). </br>If you reach the Azure AD B2C token limit, see [Azure AD B2C: File Support Requests](../../active-directory-b2c/find-help-open-support-ticket.md). </br>Reuse tokens to reduce investigation on your infrastructure. </br>[Set up a resource owner password credentials flow in Azure Active Directory B2C](../../active-directory-b2c/add-ropc-policy.md?pivots=b2c-user-flow&tabs=app-reg-ga).|
-| Load testing | Learn about [Azure AD B2C service limits and restrictions](../../active-directory-b2c/service-limits.md). </br>Calculate the expected authentications and user sign-ins per month. </br>Assess high load traffic durations and business reasons: holiday, migration, and event. </br>Determine expected peak rates for sign-up, traffic, and geographic distribution, for example per second.
+|Application samples | See, [Azure Active Directory B2C code samples](/azure/active-directory-b2c/integrate-with-app-code-samples).|
+|Penetration testing | Inform your operations team about pen tests, then test user flows including the OAuth implementation. </br>See, [Penetration testing](/azure/security/fundamentals/pen-testing) and [Penetration testing rules of engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+| Unit testing | Unit test and generate tokens. </br>See, [Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials](../develop/v2-oauth-ropc.md). </br>If you reach the Azure AD B2C token limit, see [Azure AD B2C: File Support Requests](/azure/active-directory-b2c/find-help-open-support-ticket). </br>Reuse tokens to reduce investigation on your infrastructure. </br>[Set up a resource owner password credentials flow in Azure Active Directory B2C](/azure/active-directory-b2c/add-ropc-policy?pivots=b2c-user-flow&tabs=app-reg-ga).|
+| Load testing | Learn about [Azure AD B2C service limits and restrictions](/azure/active-directory-b2c/service-limits). </br>Calculate the expected authentications and user sign-ins per month. </br>Assess high load traffic durations and business reasons: holiday, migration, and event. </br>Determine expected peak rates for sign-up, traffic, and geographic distribution, for example per second.
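The unit-testing row above points at the ROPC flow. A hedged sketch of a raw ROPC token request against a B2C user flow follows; every identifier and credential is a placeholder, and the body shape should be verified against the B2C ROPC documentation:

```powershell
# Hypothetical tenant, user flow, and app registration
$tenant   = 'contosob2c'
$policy   = 'B2C_1_ROPC_Auth'
$clientId = '00000000-0000-0000-0000-000000000000'

$body = @{
    grant_type    = 'password'
    client_id     = $clientId
    scope         = "openid $clientId"
    response_type = 'token id_token'
    username      = 'testuser@example.com'
    password      = 'placeholder-password'
}

$uri = "https://$tenant.b2clogin.com/$tenant.onmicrosoft.com/$policy/oauth2/v2.0/token"
$response = Invoke-RestMethod -Method Post -Uri $uri -Body $body

# Inspect the returned tokens
$response | Select-Object token_type, expires_in
```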
### Security
Use the following checklist to enhance application security.
* See, [What authentication and verification methods are available in Microsoft Entra ID?](../authentication/concept-authentication-methods.md)
* Confirm use of anti-bot mechanisms
* Assess the risk of attempts to create a fraudulent account or sign-in
- * See, [Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C](../../active-directory-b2c/partner-dynamics-365-fraud-protection.md)
+ * See, [Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C](/azure/active-directory-b2c/partner-dynamics-365-fraud-protection)
* Confirm needed conditional postures as part of sign-in or sign-up

#### Conditional Access and identity protection
Use the following checklist to enhance application security.
* The modern security perimeter now extends beyond an organization's network. The perimeter includes user and device identity.
 * See, [What is Conditional Access?](../conditional-access/overview.md)
* Enhance the security of Azure AD B2C with Microsoft Entra ID Protection
- * See, [Identity Protection and Conditional Access in Azure AD B2C](../../active-directory-b2c/conditional-access-identity-protection-overview.md)
+ * See, [Identity Protection and Conditional Access in Azure AD B2C](/azure/active-directory-b2c/conditional-access-identity-protection-overview)
### Compliance
To help comply with regulatory requirements and enhance back-end system security
Use the following checklist to help define user experience requirements.

* Identify integrations to extend CIAM capabilities and build seamless end-user experiences
- * [Azure Active Directory B2C ISV partners](../../active-directory-b2c/partner-gallery.md)
+ * [Azure Active Directory B2C ISV partners](/azure/active-directory-b2c/partner-gallery)
* Use screenshots and user stories to show the application end-user experience
 * For example, screenshots of sign-in, sign-up, sign-up/sign-in (SUSI), profile edit, and password reset
* Look for hints passed through by using queryString parameters in your CIAM solution
* For high user-experience customization, consider using a front-end developer
* In Azure AD B2C, you can customize HTML and CSS
- * See, [Guidelines for using JavaScript](../../active-directory-b2c/javascript-and-page-layout.md?pivots=b2c-custom-policy#guidelines-for-using-javascript)
+ * See, [Guidelines for using JavaScript](/azure/active-directory-b2c/javascript-and-page-layout?pivots=b2c-custom-policy#guidelines-for-using-javascript)
* Implement an embedded experience by using iframe support:
- * See, [Embedded sign-up or sign-in experience](../../active-directory-b2c/embedded-login.md?pivots=b2c-custom-policy)
+ * See, [Embedded sign-up or sign-in experience](/azure/active-directory-b2c/embedded-login?pivots=b2c-custom-policy)
* For a single-page application, use a second sign-in HTML page that loads into the `<iframe>` element

## Monitoring, auditing, and logging
Use the following checklist to help define user experience requirements.
Use the following checklist for monitoring, auditing, and logging.

* Monitoring
- * [Monitor Azure AD B2C with Azure Monitor](../../active-directory-b2c/azure-monitor.md)
+ * [Monitor Azure AD B2C with Azure Monitor](/azure/active-directory-b2c/azure-monitor)
 * See the video [Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1)
* Auditing and logging
- * [Accessing Azure AD B2C audit logs](../../active-directory-b2c/view-audit-logs.md)
+ * [Accessing Azure AD B2C audit logs](/azure/active-directory-b2c/view-audit-logs)
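Once the logs are routed to a Log Analytics workspace, they can also be pulled programmatically. A sketch using the Az.OperationalInsights module with a placeholder workspace ID:

```powershell
Connect-AzAccount

# Hypothetical workspace ID; copy yours from the workspace overview blade
$workspaceId = '00000000-0000-0000-0000-000000000000'

# Summarize the last week of B2C audit activity by operation
$kql = 'AuditLogs | where TimeGenerated > ago(7d) | summarize count() by OperationName'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results
```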
## Resources

-- [Register a Microsoft Graph application](../../active-directory-b2c/microsoft-graph-get-started.md)
-- [Manage Azure AD B2C with Microsoft Graph](../../active-directory-b2c/microsoft-graph-operations.md)
-- [Deploy custom policies with Azure Pipelines](../../active-directory-b2c/deploy-custom-policies-devops.md)
-- [Manage Azure AD B2C custom policies with Azure PowerShell](../../active-directory-b2c/manage-custom-policies-powershell.md)
+- [Register a Microsoft Graph application](/azure/active-directory-b2c/microsoft-graph-get-started)
+- [Manage Azure AD B2C with Microsoft Graph](/azure/active-directory-b2c/microsoft-graph-operations)
+- [Deploy custom policies with Azure Pipelines](/azure/active-directory-b2c/deploy-custom-policies-devops)
+- [Manage Azure AD B2C custom policies with Azure PowerShell](/azure/active-directory-b2c/manage-custom-policies-powershell)
## Next steps
-[Recommendations and best practices for Azure Active Directory B2C](../../active-directory-b2c/best-practices.md)
+[Recommendations and best practices for Azure Active Directory B2C](/azure/active-directory-b2c/best-practices)
active-directory Backup Authentication System Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/backup-authentication-system-apps.md
Native applications are public client applications that run directly on desktop
Native applications are protected by the backup authentication system when all the following are true:
-1. Your application persists the token cache for at least three days. Applications should use the deviceΓÇÖs token cache location or the [token cache serialization API](../develop/msal-net-token-cache-serialization.md) to persist the token cache even when the user closes the application.
-1. Your application makes use of the MSAL [AcquireTokenSilent API](../develop/msal-net-acquire-token-silently.md) to retrieve tokens using cached Refresh Tokens. The use of the [AcquireTokenInteractive API](../develop/scenario-desktop-acquire-token-interactive.md) may fail to acquire a token from the backup authentication system if user interaction is required.
+1. Your application persists the token cache for at least three days. Applications should use the deviceΓÇÖs token cache location or the [token cache serialization API](/entra/msal/dotnet/how-to/token-cache-serialization) to persist the token cache even when the user closes the application.
+1. Your application makes use of the MSAL [AcquireTokenSilent API](/entr) to retrieve tokens using cached Refresh Tokens. The use of the [AcquireTokenInteractive API](../develop/scenario-desktop-acquire-token-interactive.md) may fail to acquire a token from the backup authentication system if user interaction is required.
The backup authentication system doesn't currently support the [device authorization grant](../develop/v2-oauth2-device-code.md).
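To make the cache-then-silent pattern concrete, here's a sketch using the MSAL.PS module as a stand-in (an assumption; the same interactive-then-silent flow applies to MSAL in any language), with placeholder client and tenant values:

```powershell
# Requires the MSAL.PS module
$params = @{
    ClientId = '00000000-0000-0000-0000-000000000000'   # hypothetical public client app
    TenantId = 'contoso.onmicrosoft.com'
    Scopes   = 'https://graph.microsoft.com/.default'
}

# First run: interactive sign-in seeds the token cache
Get-MsalToken @params -Interactive

# Later runs: redeem the cached refresh token without user interaction,
# the pattern AcquireTokenSilent implements in MSAL
Get-MsalToken @params -Silent
```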
active-directory Backup Authentication System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/backup-authentication-system.md
Certain other types of policies don't support use of the backup authentication s
- Use of the [sign-in frequency control](../conditional-access/concept-conditional-access-session.md#sign-in-frequency) as part of a Conditional Access policy.
- Use of the [authentication methods policy](../conditional-access/concept-conditional-access-grant.md#require-authentication-strength).
-- Use of [classic Conditional Access policies](../conditional-access/policy-migration.md).
+- Use of [classic Conditional Access policies](../conditional-access/policy-migration-mfa.md).
## Workload identity resilience in the backup authentication system
The backup authentication system is supported in all cloud environments except M
- [Application requirements for the backup authentication system](backup-authentication-system-apps.md)
- [Introduction to the backup authentication system](https://azure.microsoft.com/blog/advancing-service-resilience-in-azure-active-directory-with-its-backup-authentication-service/)
- [Resilience Defaults for Conditional Access](../conditional-access/resilience-defaults.md)
-- [Microsoft Entra SLA performance reporting](../reports-monitoring/reference-azure-ad-sla-performance.md)
+- [Microsoft Entra SLA performance reporting](../reports-monitoring/reference-sla-performance.md)
active-directory Govern Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md
We recommend the following practices for service account privileges.
After you understand the purpose, scope, and permissions, create your service account. Use the instructions in the following articles.
-* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet)
+* [How to use managed identities for App Service and Azure Functions](/azure/app-service/overview-managed-identity?tabs=dotnet)
* [Create a Microsoft Entra application and service principal that can access resources](../develop/howto-create-service-principal-portal.md)

Use a managed identity when possible. If you can't use a managed identity, use a service principal. If you can't use a service principal, then use a Microsoft Entra user account.
Use one of the following monitoring methods:
* Microsoft Entra sign-in logs in the Azure portal
* Export the Microsoft Entra sign-in logs to
- * [Azure Storage documentation](../../storage/index.yml)
- * [Azure Event Hubs documentation](../../event-hubs/index.yml), or
- * [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)
+ * [Azure Storage documentation](/azure/storage/)
+ * [Azure Event Hubs documentation](/azure/event-hubs/), or
+ * [Azure Monitor Logs overview](/azure/azure-monitor/logs/data-platform-logs)
Use the following screenshot to see service principal sign-ins.
Regularly review service account permissions and accessed scopes to see if they
* See, [`AzureADAssessment`](https://github.com/AzureAD/AzureADAssessment) and confirm validity
* Don't set service principal credentials to **Never expire**
* Use certificates or credentials stored in Azure Key Vault, when possible
- * [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md)
+ * [What is Azure Key Vault?](/azure/key-vault/general/basic-concepts)
The free PowerShell sample collects service principal OAuth2 grants and credential information, records them in a comma-separated values (CSV) file, and a Power BI sample dashboard. For more information, see [`AzureADAssessment`](https://github.com/AzureAD/AzureADAssessment).
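A sketch of auditing for long-lived service principal password credentials with the AzureAD module; the two-year horizon is an arbitrary illustrative threshold:

```powershell
Connect-AzureAD
$horizon = (Get-Date).AddYears(2)

# Flag credentials with no expiry or an expiry far in the future
Get-AzureADServicePrincipal -All $true | ForEach-Object {
    $sp = $_
    Get-AzureADServicePrincipalPasswordCredential -ObjectId $sp.ObjectId |
        Where-Object { -not $_.EndDate -or $_.EndDate -gt $horizon } |
        ForEach-Object {
            [pscustomobject]@{
                ServicePrincipal = $sp.DisplayName
                KeyId            = $_.KeyId
                EndDate          = $_.EndDate
            }
        }
}
```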
active-directory Monitor Sign In Health For Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/monitor-sign-in-health-for-resilience.md
You can configure alerts based on the App sign-in health workbook. This workbook
- Compare trends over a period of time. Week over week is the workbook's default setting.

> [!NOTE]
-> See all available workbooks and the prerequisites for using them in [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
+> See all available workbooks and the prerequisites for using them in [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-workbooks.md).
During an impacting event, two things may happen:
During an impacting event, two things may happen:
- A Microsoft Entra tenant.
- A user with global administrator or security administrator role for the Microsoft Entra tenant.
-- A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-- Microsoft Entra logs integrated with Azure Monitor logs. Learn how to [Integrate Microsoft Entra sign-in logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+- A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs. Learn how to [create a Log Analytics workspace](/azure/azure-monitor/logs/quick-create-workspace).
+- Microsoft Entra logs integrated with Azure Monitor logs. Learn how to [Integrate Microsoft Entra sign-in logs with Azure Monitor Stream.](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md)
## Configure the App sign-in health workbook
Use the following instructions to create email alerts based on the queries refle
- The successful usage drops by 90% from the same hour two days ago, as shown in the preceding hourly usage graph example.
- The failure rate increases by 90% from the same hour two days ago, as shown in the preceding hourly failure rate graph example.
-To configure the underlying query and set alerts, complete the following steps using the sample query as the basis for your configuration. The query structure description appears at the end of this section. Learn how to create, view, and manage log alerts using Azure Monitor in [Manage log alerts](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+To configure the underlying query and set alerts, complete the following steps using the sample query as the basis for your configuration. The query structure description appears at the end of this section. Learn how to create, view, and manage log alerts using Azure Monitor in [Manage log alerts](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
1. In the workbook, select **Edit** as shown in the following screenshot. Select the **query icon** in the upper right corner of the graph.
After you set up queries and alerts, create business processes to manage the ale
## Next steps
-[Learn more about workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md)
+[Learn more about workbooks](../reports-monitoring/howto-use-workbooks.md)
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md
Microsoft Entra External ID pricing is based on monthly active users (MAU). The
## Office 365 considerations
-The following information addresses Office 365 in the context of this paper's scenarios. Detailed information is available at [Microsoft 365 inter-tenant collaboration 365 inter-tenant collaboration](/office365/enterprise/office-365-inter-tenant-collaboration) describes options that include using a central location for files and conversations, sharing calendars, using IM, audio/video calls for communication, and securing access to resources and applications.
+The following information addresses Office 365 in the context of this paper's scenarios. Detailed information is available at [Microsoft 365 inter-tenant collaboration](/microsoft-365/enterprise/microsoft-365-inter-tenant-collaboration), which describes options that include using a central location for files and conversations, sharing calendars, using IM, audio/video calls for communication, and securing access to resources and applications.
### Microsoft Exchange Online
For example:
```Set-MailUser externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false```

-- External users may be unhidden using [Azure AD PowerShell](/powershell/module/azuread). You can execute the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) PowerShell cmdlet to set the **ShowInAddressList** property to a value of **$true**.
+- External users may be unhidden using [Azure AD PowerShell](/powershell/module/azuread/). You can execute the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) PowerShell cmdlet to set the **ShowInAddressList** property to a value of **$true**.
For example:
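A plausible completion of that example, reusing the hypothetical external user from the Exchange Online command above:

```Set-AzureADUser -ObjectId externaluser1_contoso.com#EXT#@fabricam.onmicrosoft.com -ShowInAddressList $true```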
After you enable external sharing in SharePoint Online, the ability to search fo
- You can enable the ability to search for guest users in these ways:
 - Modify the **ShowPeoplePickerSuggestionsForGuestUsers** setting at the tenant and site collection level.
- - Set the feature using the [Set-SPOTenant](/powershell/module/sharepoint-online/Set-SPOTenant) and [Set-SPOSite](/powershell/module/sharepoint-online/set-sposite) [SharePoint Online PowerShell](/powershell/sharepoint/sharepoint-online/connect-sharepoint-online) cmdlets.
+ - Set the feature using the [Set-SPOTenant](/powershell/module/sharepoint-online/set-spotenant) and [Set-SPOSite](/powershell/module/sharepoint-online/set-sposite) [SharePoint Online PowerShell](/powershell/sharepoint/sharepoint-online/connect-sharepoint-online) cmdlets.
- Guest users that are visible in the Exchange Online GAL are also visible in the SharePoint Online people picker. The accounts are visible regardless of the setting for **ShowPeoplePickerSuggestionsForGuestUsers**.

### Microsoft Teams
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md
This approach only works when all tenants that you need to synchronize are in th
Use an external Identity and Access Management (IAM) solution such as [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) (MIM) as a synchronization engine.
-This advanced deployment uses MIM as a synchronization engine. MIM calls the [Microsoft Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud-hosted [Active Directory Synchronization Service](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Industry Solutions](https://www.microsoft.com/industrysolutions). There are non-Microsoft offerings that you can create from scratch with other IAM offerings (such as SailPoint, Omada, and OKTA).
+This advanced deployment uses MIM as a synchronization engine. MIM calls the [Microsoft Graph API](https://developer.microsoft.com/graph) and [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true). Alternative implementations can include the cloud-hosted [Active Directory Synchronization Service](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (ADSS) managed service offering from [Microsoft Industry Solutions](https://www.microsoft.com/industrysolutions). There are non-Microsoft offerings that you can create from scratch with other IAM offerings (such as SailPoint, Omada, and OKTA).
You perform a cloud-to-cloud synchronization of identity (users, contacts, and groups) from one tenant to another as illustrated in the following diagram.
active-directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-auth.md
Like a user in your organization, a device is a core identity you want to protec
You can carry out this goal by bringing device identities and managing them in Microsoft Entra ID by using one of the following methods:

-- Organizations can use [Microsoft Intune](/intune/what-is-intune) to manage the device and enforce compliance policies, attest device health, and set Conditional Access policies based on whether the device is compliant. Microsoft Intune can manage iOS devices, Mac desktops (Via JAMF integration), Windows desktops (natively using Mobile Device Management for Windows 10, and co-management with Microsoft Configuration Manager) and Android mobile devices.
+- Organizations can use [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) to manage the device and enforce compliance policies, attest device health, and set Conditional Access policies based on whether the device is compliant. Microsoft Intune can manage iOS devices, Mac desktops (Via JAMF integration), Windows desktops (natively using Mobile Device Management for Windows 10, and co-management with Microsoft Configuration Manager) and Android mobile devices.
- [Microsoft Entra hybrid join](../devices/how-to-hybrid-join.md) provides management with Group Policies or Microsoft Configuration Manager in an environment with Active Directory domain-joined computers. Organizations can deploy a managed environment either through PHS or PTA with Seamless SSO.

Bringing your devices to Microsoft Entra ID maximizes user productivity through SSO across your cloud and on-premises resources while enabling you to secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/overview.md) at the same time. If you have domain-joined Windows devices that aren't registered in the cloud, or domain-joined Windows devices that are registered in the cloud but without Conditional Access policies, then you should register the unregistered devices and, in either case, [use Microsoft Entra hybrid join as a control](../conditional-access/concept-conditional-access-grant.md) in your Conditional Access policies.
Conditional Access is an essential tool for improving the security posture of yo
#### Conditional Access recommended reading

- [Best practices for Conditional Access in Microsoft Entra ID](../conditional-access/overview.md)
-- [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations)
+- [Identity and device access configurations](/microsoft-365/security/office-365-security/microsoft-365-policies-configurations)
- [Microsoft Entra Conditional Access settings reference](../conditional-access/concept-conditional-access-conditions.md)
- [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
Legacy authentication is a term that refers to authentication protocols used by
Attackers strongly prefer these protocols - in fact, nearly [100% of password spray attacks](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Your-Pa-word-doesn-t-matter/ba-p/731984) use legacy authentication protocols! Hackers use legacy authentication protocols because they don't support interactive sign-in, which is needed for additional security challenges like multifactor authentication and device authentication.
-If legacy authentication is widely used in your environment, you should plan to migrate legacy clients to clients that support [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) as soon as possible. In the same token, if you have some users already using modern authentication but others that still use legacy authentication, you should take the following steps to lock down legacy authentication clients:
+If legacy authentication is widely used in your environment, you should plan to migrate legacy clients to clients that support [modern authentication](/microsoft-365/enterprise/modern-auth-for-office-2013-and-2016) as soon as possible. By the same token, if you have some users already using modern authentication but others that still use legacy authentication, you should take the following steps to lock down legacy authentication clients:
1. Use [Sign-In Activity reports](../reports-monitoring/concept-sign-ins.md) to identify users who are still using legacy authentication and plan remediation:
Below is a list of apps with permissions you might want to scrutinize for Micro
| Microsoft Graph API| Directory.AccessAsUser.All |
| Azure REST API | user_impersonation |
-To avoid this scenario, you should refer to [detect and remediate illicit consent grants in Office 365](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) to identify and fix any applications with illicit grants or applications that have more grants than are necessary. Next, [remove self-service altogether](../manage-apps/configure-user-consent.md) and [establish governance procedures](../manage-apps/configure-admin-consent-workflow.md). Finally, schedule regular reviews of app permissions and remove them when they are not needed.
+To avoid this scenario, you should refer to [detect and remediate illicit consent grants in Office 365](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants) to identify and fix any applications with illicit grants or applications that have more grants than are necessary. Next, [remove self-service altogether](../manage-apps/configure-user-consent.md) and [establish governance procedures](../manage-apps/configure-admin-consent-workflow.md). Finally, schedule regular reviews of app permissions and remove them when they are not needed.
#### Consent grants recommended reading
Attackers originate from various parts of the world. Manage this risk by using C
![Create a new named location](./media/ops-guide-auth/ops-img14.png)
-If available, use a security information and event management (SIEM) solution to analyze and find patterns of access across regions. If you don't use a SIEM product, or it isn't ingesting authentication information from Microsoft Entra ID, we recommend you use [Azure Monitor](../../azure-monitor/overview.md) to identify patterns of access across regions.
+If available, use a security information and event management (SIEM) solution to analyze and find patterns of access across regions. If you don't use a SIEM product, or it isn't ingesting authentication information from Microsoft Entra ID, we recommend you use [Azure Monitor](/azure/azure-monitor/overview) to identify patterns of access across regions.
## Access usage
If available, use a security information and event management (SIEM) solution t
### Microsoft Entra logs archived and integrated with incident response plans
-Having access to sign-in activity, audits and risk events for Microsoft Entra ID is crucial for troubleshooting, usage analytics, and forensics investigations. Microsoft Entra ID provides access to these sources through REST APIs that have a limited retention period. A security information and event management (SIEM) system, or equivalent archival technology, is key for long-term storage of audits and supportability. To enable long-term storage of Microsoft Entra logs, you must either add them to your existing SIEM solution or use [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md). Archive logs that can be used as part of your incident response plans and investigations.
+Having access to sign-in activity, audits, and risk events for Microsoft Entra ID is crucial for troubleshooting, usage analytics, and forensic investigations. Microsoft Entra ID provides access to these sources through REST APIs that have a limited retention period. A security information and event management (SIEM) system, or equivalent archival technology, is key for long-term storage of audits and supportability. To enable long-term storage of Microsoft Entra logs, you must either add them to your existing SIEM solution or use [Azure Monitor](../reports-monitoring/concept-log-monitoring-integration-options-considerations.md). Archive the logs so they can be used as part of your incident response plans and investigations.
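If the logs are routed to a Log Analytics workspace, investigators can also pull them programmatically. A minimal sketch using the `azure-monitor-query` and `azure-identity` Python packages follows; the workspace ID and the query are placeholders, and it assumes Microsoft Entra sign-in logs are already flowing into the workspace.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Summarize the last seven days of sign-ins by result code.
response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query="SigninLogs | summarize count() by ResultType | order by count_ desc",
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```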
#### Logs recommended reading
Having access to sign-in activity, audits and risk events for Microsoft Entra ID
- [Get data using the Microsoft Entra reporting API with certificates](../reports-monitoring/howto-configure-prerequisites-for-reporting-api.md)
- [Microsoft Graph for Microsoft Entra ID Protection](../identity-protection/howto-identity-protection-graph-api.md)
- [Office 365 Management Activity API reference](/office/office-365-management-api/office-365-management-activity-api-reference)
-- [How to use the Microsoft Entra ID Power BI Content Pack](../reports-monitoring/howto-use-azure-monitor-workbooks.md)
+- [How to use the Microsoft Entra ID Power BI Content Pack](../reports-monitoring/howto-use-workbooks.md)
## Summary
active-directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-iam.md
The [Microsoft Entra Connect Configuration Documenter](https://github.com/Micros
### Group-based licensing for Microsoft cloud services
-Microsoft Entra ID streamlines the management of licenses through [group-based licensing](../fundamentals/licensing-whatis-azure-portal.md) for Microsoft cloud services. This way, IAM provides the group infrastructure and delegated management of those groups to the proper teams in the organizations. There are multiple ways to set up the membership of groups in Microsoft Entra ID, including:
+Microsoft Entra ID streamlines the management of licenses through [group-based licensing](../fundamentals/concept-group-based-licensing.md) for Microsoft cloud services. This way, IAM provides the group infrastructure and delegates management of those groups to the proper teams in the organization. There are multiple ways to set up the membership of groups in Microsoft Entra ID, including:
- **Synchronized from on-premises** - Groups can come from on-premises directories, which could be a good fit for organizations that have established group management processes that can be extended to assign licenses in Microsoft 365.
The [default delta sync](../hybrid/connect/how-to-connect-sync-feature-scheduler
#### Microsoft Entra Connect troubleshooting recommended reading

-- [Prepare directory attributes for synchronization with Microsoft 365 by using the IdFix tool](/office365/enterprise/prepare-directory-attributes-for-synch-with-idfix)
+- [Prepare directory attributes for synchronization with Microsoft 365 by using the IdFix tool](/microsoft-365/enterprise/set-up-directory-synchronization)
- [Microsoft Entra Connect: Troubleshooting Errors during synchronization](../hybrid/connect/tshoot-connect-sync-errors.md)

## Summary
active-directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/ops-guide-ops.md
Adopting best practices can help the optimal operation of on-premises agents. Co
### Identity secure score
-The [identity secure score](./../fundamentals/identity-secure-score.md) provides a quantifiable measure of the security posture of your organization. It's key to constantly review and address findings reported and strive to have the highest score possible. The score helps you to:
+The [identity secure score](../reports-monitoring/concept-identity-secure-score.md) provides a quantifiable measure of the security posture of your organization. It's key to constantly review and address findings reported and strive to have the highest score possible. The score helps you to:
- Objectively measure your identity security posture
- Plan identity security improvements
If your organization currently has no program in place to monitor changes in Ide
### Notifications
-Microsoft sends email communications to administrators to notify various changes in the service, configuration updates that are needed, and errors that require admin intervention. It's important that customers set the notification email addresses so that notifications are sent to the proper team members who can acknowledge and act upon all notifications. We recommend you add multiple recipients to the [Message Center](/office365/admin/manage/message-center) and request that notifications (including Microsoft Entra Connect Health notifications) be sent to a distribution list or shared mailbox. If you only have one Global Administrator account with an email address, be sure to configure at least two email-capable accounts.
+Microsoft sends email communications to administrators to notify them of changes in the service, configuration updates that are needed, and errors that require admin intervention. It's important that customers set the notification email addresses so that notifications are sent to the proper team members who can acknowledge and act upon all notifications. We recommend you add multiple recipients to the [Message Center](/microsoft-365/admin/manage/message-center) and request that notifications (including Microsoft Entra Connect Health notifications) be sent to a distribution list or shared mailbox. If you only have one Global Administrator account with an email address, be sure to configure at least two email-capable accounts.
There are two "From" addresses used by Microsoft Entra ID: <o365mc@email2.microsoft.com>, which sends Message Center notifications; and <azure-noreply@microsoft.com>, which sends notifications related to:
Refer to the following table to learn the type of notifications that are sent an
#### Notifications recommended reading

-- [Change your organization's address, technical contact, and more](/office365/admin/manage/change-address-contact-and-more)
+- [Change your organization's address, technical contact, and more](/microsoft-365/admin/manage/change-address-contact-and-more)
## Operational surface area
The Active Directory administrative tier model was designed to protect identity
![Diagram showing the three layers of the Tier model](./media/ops-guide-auth/ops-img18.png)
-The [tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material) is composed of three levels and only includes administrative accounts, not standard user accounts.
+The [tier model](/security/privileged-access-workstations/privileged-access-access-model) is composed of three levels and only includes administrative accounts, not standard user accounts.
- **Tier 0** - Direct Control of enterprise identities in the environment. Tier 0 includes accounts, groups, and other assets that have direct or indirect administrative control of the Active Directory forest, domains, or domain controllers, and all the assets in it. The security sensitivity of all Tier 0 assets is equivalent as they're all effectively in control of each other.
- **Tier 1** - Control of enterprise servers and applications. Tier 1 assets include server operating systems, cloud services, and enterprise applications. Tier 1 administrator accounts have administrative control of a significant amount of business value that is hosted on these assets. A common example role is server administrators who maintain these operating systems with the ability to impact all enterprise services.
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/protect-m365-from-on-premises-attacks.md
In Microsoft Entra ID, users who have privileged roles, such as administrators,
- Deploy emergency access accounts. Do *not* use on-premises password vaults to store credentials. See [Manage emergency access accounts in Microsoft Entra ID](../roles/security-emergency-access.md).
-For more information, see [Securing privileged access](/security/compass/overview). Also, see [Secure access practices for administrators in Microsoft Entra ID](../roles/security-planning.md).
+For more information, see [Securing privileged access](/security/privileged-access-workstations/overview). Also, see [Secure access practices for administrators in Microsoft Entra ID](../roles/security-planning.md).
### Use cloud authentication
When used to provision hybrid accounts, the Microsoft Entra ID-from-cloud-HR sys
Cloud groups allow you to decouple your collaboration and access from your on-premises infrastructure.

-- **Collaboration**. Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in Outlook](/office365/admin/manage/upgrade-distribution-lists).
+- **Collaboration**. Use Microsoft 365 Groups and Microsoft Teams for modern collaboration. Decommission on-premises distribution lists, and [upgrade distribution lists to Microsoft 365 Groups in Outlook](/microsoft-365/admin/create-groups/office-365-groups).
- **Access**. Use Microsoft Entra security groups or Microsoft 365 Groups to authorize access to applications in Microsoft Entra ID.
- **Office 365 licensing**. Use group-based licensing to provision to Office 365 by using cloud-only groups. This method decouples control of group membership from on-premises infrastructure.
Owners of groups that are used for access should be considered privileged identi
Use Microsoft Entra capabilities to securely manage devices.
-Deploy Microsoft Entra joined Windows 10 workstations with mobile device management policies. Enable Windows Autopilot for a fully automated provisioning experience. See [Plan your Microsoft Entra join implementation](../devices/device-join-plan.md) and [Windows Autopilot](/mem/autopilot/windows-autopilot).
+Deploy Microsoft Entra joined Windows 10 workstations with mobile device management policies. Enable Windows Autopilot for a fully automated provisioning experience. See [Plan your Microsoft Entra join implementation](../devices/device-join-plan.md) and [Windows Autopilot](/autopilot/windows-autopilot).
- **Use Windows 10 workstations**.
  - Deprecate machines that run Windows 8.1 and earlier.
  - Don't deploy computers that have server operating systems as workstations.
- **Use Microsoft Intune as the authority for all device management workloads.** See [Microsoft Intune](https://www.microsoft.com/security/business/endpoint-management/microsoft-intune).
-- **Deploy privileged access devices.** For more information, see [Device roles and profiles](/security/compass/privileged-access-devices#device-roles-and-profiles).
+- **Deploy privileged access devices.** For more information, see [Device roles and profiles](/security/privileged-access-workstations/privileged-access-devices#device-roles-and-profiles).
### Workloads, applications, and resources
Deploy Microsoft Entra joined Windows 10 workstations with mobile device managem
- **Application and workload servers**
- Applications or resources that required servers can be migrated to Azure infrastructure as a service (IaaS). Use Microsoft Entra Domain Services to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Microsoft Entra Domain Services don't have a connection to corporate networks. See [Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md).
+   Applications or resources that require servers can be migrated to Azure infrastructure as a service (IaaS). Use Microsoft Entra Domain Services to decouple trust and dependency on on-premises instances of Active Directory. To achieve this decoupling, make sure virtual networks used for Microsoft Entra Domain Services don't have a connection to corporate networks. See [Microsoft Entra Domain Services](/entra/identity/domain-services/overview).
- Use credential tiering. Application servers are typically considered tier-1 assets. For more information, see [Enterprise access model](/security/compass/privileged-access-access-model#ADATM_BM).
+ Use credential tiering. Application servers are typically considered tier-1 assets. For more information, see [Enterprise access model](/security/privileged-access-workstations/privileged-access-access-model#ADATM_BM).
## Conditional Access policies
Monitor the following key scenarios, in addition to any scenarios specific to yo
- **User and Entity Behavioral Analytics (UEBA) alerts**
- Use UEBA to get insights on anomaly detection. Microsoft Defender for Cloud Apps provides UEBA in the cloud. See [Investigate risky users](/cloud-app-security/tutorial-ueba).
+ Use UEBA to get insights on anomaly detection. Microsoft Defender for Cloud Apps provides UEBA in the cloud. See [Investigate risky users](/defender-cloud-apps/tutorial-ueba).
- You can integrate on-premises UEBA from Azure Advanced Threat Protection (ATP). Microsoft Defender for Cloud Apps reads signals from Microsoft Entra ID Protection. See [Connect to your Active Directory Forest](/defender-for-identity/install-step2).
+ You can integrate on-premises UEBA from Azure Advanced Threat Protection (ATP). Microsoft Defender for Cloud Apps reads signals from Microsoft Entra ID Protection. See [Connect to your Active Directory Forest](/defender-for-identity/directory-service-accounts).
- **Emergency access accounts activity**
Define a log storage and retention strategy, design, and implementation to facil
- Audit logs
- Risk events
- Microsoft Entra ID provides Azure Monitor integration for the sign-in activity log and audit logs. See [Microsoft Entra activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md).
+ Microsoft Entra ID provides Azure Monitor integration for the sign-in activity log and audit logs. See [Microsoft Entra activity logs in Azure Monitor](../reports-monitoring/concept-log-monitoring-integration-options-considerations.md).
- Use the Microsoft Graph API to ingest risk events. See [Use the Microsoft Graph identity protection APIs](/graph/api/resources/identityprotection-root).
+ Use the Microsoft Graph API to ingest risk events. See [Use the Microsoft Graph identity protection APIs](/graph/api/resources/identityprotection-overview).
- You can stream Microsoft Entra logs to Azure Monitor logs. See [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+ You can stream Microsoft Entra logs to Azure Monitor logs. See [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md).
- **Hybrid infrastructure operating system security logs**. All hybrid identity infrastructure operating system logs should be archived and carefully monitored as a tier-0 system, because of the surface-area implications. Include the following elements:
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recover-from-deletions.md
This article addresses recovering from soft and hard deletions in your Microsoft
## Monitor for deletions
-The [Microsoft Entra audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. Export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md).
+The [Microsoft Entra audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete operations performed in your tenant. Export these logs to a security information and event management tool such as [Microsoft Sentinel](/azure/sentinel/overview).
You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on how to find deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
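For example, a minimal Python sketch of that Graph call, assuming a token with the `Directory.Read.All` permission (the output format is illustrative):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Directory.Read.All>"  # placeholder

# Soft-deleted users remain recoverable for 30 days after deletion.
url = f"{GRAPH}/directory/deletedItems/microsoft.graph.user"
while url:
    page = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
    for user in page.get("value", []):
        print(user["id"], user.get("displayName"), user.get("deletedDateTime"))
    url = page.get("@odata.nextLink")  # follow paging until exhausted
```

Running a check like this on a schedule and diffing the results gives you the "monitor differences over time" behavior described above.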
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recoverability-overview.md
The deletion of some objects can cause a ripple effect because of dependencies.
## Monitoring and data retention
-The [Microsoft Entra audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management tool such as [Microsoft Sentinel](../../sentinel/overview.md). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
+The [Microsoft Entra audit log](../reports-monitoring/concept-audit-logs.md) contains information on all delete and configuration operations performed in your tenant. We recommend that you export these logs to a security information and event management tool such as [Microsoft Sentinel](/azure/sentinel/overview). You can also use Microsoft Graph to audit changes and build a custom solution to monitor differences over time. For more information on finding deleted items by using Microsoft Graph, see [List deleted items - Microsoft Graph v1.0](/graph/api/directory-deleteditems-list?tabs=http).
### Audit logs
active-directory Resilience B2c Developer Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-b2c-developer-best-practices.md
Your business requirements and desired end-user experience will dictate your fre
- **SPAs**: A SPA may depend on access tokens to make calls to the APIs. A SPA traditionally uses the implicit flow that doesn't result in a refresh token. The SPA can use a hidden `iframe` to perform new token requests against the authorization endpoint if the browser still has an active session with the Azure AD B2C. For SPAs, there are a few options available to allow the user to continue to use the application.
  - Extend the access token's validity duration to meet your business requirements.
  - Build your application to use an API gateway as the authentication proxy. In this configuration, the SPA loads without any authentication and the API calls are made to the API gateway. The API gateway sends the user through a sign-in process using an [authorization code grant](https://oauth.net/2/grant-types/authorization-code/) based on a policy and authenticates the user. Then the authentication session between the API gateway and the client is maintained using an authentication cookie. The API gateway services the APIs using the token that is obtained by the API gateway (or some other direct authentication method such as certificates, client credentials, or API keys).
- - [Migrate your SPA from implicit grant](https://developer.microsoft.com/identity/blogs/msal-js-2-0-supports-authorization-code-flow-is-now-generally-available/) to [authorization code grant flow](../../active-directory-b2c/implicit-flow-single-page-application.md) with Proof Key for Code Exchange (PKCE) and Cross-origin Resource Sharing (CORS) support. Migrate your application from MSAL.js 1.x to MSAL.js 2.x to realize the resiliency of Web applications.
+  - [Migrate your SPA from implicit grant](https://developer.microsoft.com/identity/blogs/msal-js-2-0-supports-authorization-code-flow-is-now-generally-available/) to [authorization code grant flow](/azure/active-directory-b2c/implicit-flow-single-page-application) with Proof Key for Code Exchange (PKCE) and Cross-origin Resource Sharing (CORS) support. Migrate your application from MSAL.js 1.x to MSAL.js 2.x to improve the resiliency of your web applications.
  - For mobile applications, it's recommended to extend both the refresh and access token lifetimes.
- **Backend or microservice applications**: Because backend (daemon) applications are non-interactive and aren't in a user context, the prospect of token theft is greatly diminished. The recommendation is to strike a balance between security and token lifetime and set a long token lifetime.

## Configure Single sign-on
-With [Single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md), users sign in once with a single account and get access to multiple applications. The application can be a web, mobile, or a Single page application (SPA), regardless of platform or domain name. When the user initially signs in to an application, Azure AD B2C persists a [cookie-based session](../../active-directory-b2c/session-behavior.md).
+With [Single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md), users sign in once with a single account and get access to multiple applications. The application can be a web, mobile, or single-page application (SPA), regardless of platform or domain name. When the user initially signs in to an application, Azure AD B2C persists a [cookie-based session](/azure/active-directory-b2c/session-behavior).
Upon subsequent authentication requests, Azure AD B2C reads and validates the cookie-based session and issues an access token without prompting the user to sign in again. If SSO is configured with a limited scope at a policy or an application, later access to other policies and applications will require fresh authentication.
The most common disrupters of service are the code and configuration changes. Ad
Protect your applications against known vulnerabilities such as Distributed Denial of Service (DDoS) attacks, SQL injections, cross-site scripting, remote code execution, and many others as documented in [OWASP Top 10](https://owasp.org/www-project-top-ten/). Deployment of a Web Application Firewall (WAF) can defend against common exploits and vulnerabilities.

-- Use Azure [WAF](../../web-application-firewall/overview.md), which provides centralized protection against attacks.
-- Use WAF with Microsoft Entra [Identity Protection and Conditional Access to provide multi-layer protection](../../active-directory-b2c/conditional-access-identity-protection-overview.md) when using Azure AD B2C.
+- Use Azure [WAF](/azure/web-application-firewall/overview), which provides centralized protection against attacks.
+- Use WAF with Microsoft Entra [Identity Protection and Conditional Access to provide multi-layer protection](/azure/active-directory-b2c/conditional-access-identity-protection-overview) when using Azure AD B2C.
- Build resistance to bot-driven [sign-ups by integrating with a CAPTCHA system](https://github.com/azure-ad-b2c/samples/tree/master/policies/captcha-integration).

## Secrets rotation
Azure AD B2C uses secrets for applications, APIs, policies, and encryption. The
### How to implement secret rotation

- Use [managed identities](../managed-identities-azure-resources/overview.md) for supported resources to authenticate to any service that supports Microsoft Entra authentication. When you use managed identities, you can manage resources automatically, including rotation of credentials.
-- Take an inventory of all the [keys and certificates configured](../../active-directory-b2c/policy-keys-overview.md) in Azure AD B2C. This list is likely to include keys used in custom policies, [APIs](../../active-directory-b2c/secure-rest-api.md), signing ID token, and certificates for SAML.
+- Take an inventory of all the [keys and certificates configured](/azure/active-directory-b2c/policy-keys-overview) in Azure AD B2C. This list is likely to include keys used in custom policies, [APIs](/azure/active-directory-b2c/secure-rest-api), signing ID token, and certificates for SAML.
- Using CICD, rotate secrets that are about to expire within two months from the anticipated peak season. The recommended maximum cryptoperiod of private keys associated to a certificate is one year.
- Proactively monitor and rotate the API access credentials such as passwords and certificates. A minimal sketch of an expiry check follows.
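This sketch uses Microsoft Graph to list application password credentials that expire within the next two months. It assumes a token with a permission such as `Application.Read.All`; the 60-day window mirrors the two-month guidance above and is adjustable.

```python
from datetime import datetime, timedelta, timezone

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Application.Read.All>"  # placeholder

cutoff = datetime.now(timezone.utc) + timedelta(days=60)  # two-month horizon

url = f"{GRAPH}/applications?$select=displayName,passwordCredentials"
while url:
    page = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
    for app in page.get("value", []):
        for cred in app.get("passwordCredentials", []):
            expires = datetime.fromisoformat(cred["endDateTime"].replace("Z", "+00:00"))
            if expires <= cutoff:
                print(f"{app['displayName']}: secret expires {expires:%Y-%m-%d}")
    url = page.get("@odata.nextLink")
```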
In the context of resiliency, testing of REST APIs needs to include verification
### How to test APIs
-We recommend your test plan to include [comprehensive API tests](../../active-directory-b2c/best-practices.md#testing). If you're planning for an upcoming surge because of promotion or holiday traffic, you need to revise your load testing with the new estimates. Conduct load testing of your APIs and Content Delivery Network (CDN) in a developer environment and not in production.
+We recommend that your test plan include [comprehensive API tests](/azure/active-directory-b2c/best-practices#testing). If you're planning for an upcoming surge because of promotions or holiday traffic, revise your load testing with the new estimates. Conduct load testing of your APIs and Content Delivery Network (CDN) in a developer environment, not in production.
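As a starting point, here's a minimal Python load-test sketch aimed at a developer-environment endpoint. The URL, concurrency, and request count are illustrative placeholders; dedicated tooling such as Azure Load Testing is better suited for production-scale estimates.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

API_URL = "https://dev.example.com/api/health"  # developer environment only
REQUESTS, WORKERS = 200, 20

def hit(_):
    start = time.perf_counter()
    try:
        ok = requests.get(API_URL, timeout=10).status_code < 400
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

successes = sum(ok for ok, _ in results)
latencies = sorted(t for _, t in results)
print(f"success rate: {successes / REQUESTS:.1%}")
print(f"p95 latency:  {latencies[int(0.95 * len(latencies))]:.3f}s")
```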
## Next steps
active-directory Resilience B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-b2c.md
# Build resilience in your customer identity and access management with Azure Active Directory B2C
-[Azure AD B2C](../../active-directory-b2c/overview.md) is a Customer Identity and Access Management (CIAM) platform that is designed to help you launch your critical customer facing applications successfully. We have many built-in features for [resilience](https://azure.microsoft.com/blog/advancing-azure-active-directory-availability/) that are designed to help our service scale to your needs and improve resilience in the face of potential outage situations. In addition, when launching a mission critical application, it's important to consider various design and configuration elements in your application. Consider how the application is configured within Azure AD B2C to ensure that you get a resilient behavior in response to outage or failure scenarios. In this article, we'll discuss some of the best practices to help you increase resilience.
+[Azure AD B2C](/azure/active-directory-b2c/overview) is a Customer Identity and Access Management (CIAM) platform that is designed to help you launch your critical customer-facing applications successfully. We have many built-in features for [resilience](https://azure.microsoft.com/blog/advancing-azure-active-directory-availability/) that are designed to help our service scale to your needs and improve resilience in the face of potential outage situations. In addition, when launching a mission-critical application, it's important to consider various design and configuration elements in your application. Consider how the application is configured within Azure AD B2C to ensure that you get resilient behavior in response to outage or failure scenarios. In this article, we'll discuss some of the best practices to help you increase resilience.
A resilient service is one that continues to function despite disruptions. You can help improve resilience in your service by:
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-client-app.md
MSAL caches tokens and uses a silent token acquisition pattern. MSAL serializes
Learn more:

* [Token cache serialization](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization)
-* [Token cache serialization in MSAL.NET](../develop/msal-net-token-cache-serialization.md)
+* [Token cache serialization in MSAL.NET](/entra/msal/dotnet/how-to/token-cache-serialization)
* [Custom token cache serialization in MSAL for Java](/entra/msal/java/advanced/msal-java-token-cache-serialization)
* [Custom token cache serialization in MSAL for Python](/entra/msal/python/advanced/msal-python-token-cache-serialization)
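For MSAL Python, a minimal persistence sketch using the library's `SerializableTokenCache` follows. The client ID is a placeholder, and the plain-text cache file is for illustration only; protect or encrypt the cache in production.

```python
import atexit
import os

import msal

CACHE_FILE = "token_cache.json"  # illustration only; protect this file in production

cache = msal.SerializableTokenCache()
if os.path.exists(CACHE_FILE):
    with open(CACHE_FILE) as f:
        cache.deserialize(f.read())

# Persist on exit, but only if the cache changed during this run.
atexit.register(
    lambda: open(CACHE_FILE, "w").write(cache.serialize())
    if cache.has_state_changed
    else None
)

app = msal.PublicClientApplication(
    "<client-id>",  # placeholder
    authority="https://login.microsoftonline.com/common",
    token_cache=cache,
)

# acquire_token_silent() serves tokens from the cache before prompting the user.
accounts = app.get_accounts()
result = app.acquire_token_silent(["User.Read"], account=accounts[0]) if accounts else None
```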
active-directory Resilience Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-daemon-app.md
If you develop daemon apps on ASP.NET Core, use the Microsoft.Identity.Web libra
Learn more:
-* [Microsoft Identity Web authentication library](../develop/microsoft-identity-web.md)
+* [Microsoft Identity Web authentication library](/entra/msal/dotnet/microsoft-identity-web/)
* [Distributed token cache](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization#distributed-token-cache)

## Cache and store tokens
active-directory Resilience With Monitoring Alerting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilience-with-monitoring-alerting.md
Similarly, to detect failures or performance disruptions, setting up a good base
### How to implement monitoring and alerting

-- **Monitoring**: Use [Azure Monitor](../../active-directory-b2c/azure-monitor.md) to continuously monitor health against key Service Level Objectives (SLO) and get notification whenever a critical change happens. Begin by identifying Azure AD B2C policy or an application as a critical component of your business whose health needs to be monitored to maintain SLO. Identify key indicators that align with your SLOs.
+- **Monitoring**: Use [Azure Monitor](/azure/active-directory-b2c/azure-monitor) to continuously monitor health against key Service Level Objectives (SLOs) and get notified whenever a critical change happens. Begin by identifying an Azure AD B2C policy or application as a critical component of your business whose health needs to be monitored to maintain the SLO. Identify key indicators that align with your SLOs.
  For example, track the following metrics, since a sudden drop in either will lead to a loss in business.
  - **Total requests**: The total "n" number of requests sent to Azure AD B2C policy.
  - **Success rate (%)**: Successful requests/Total number of requests.
- Access the [key indicators](../../active-directory-b2c/view-audit-logs.md) in [application insights](../../active-directory-b2c/analytics-with-application-insights.md) where Azure AD B2C policy-based logs, [audit logs](../../active-directory-b2c/analytics-with-application-insights.md), and sign-in logs are stored.
+ Access the [key indicators](/azure/active-directory-b2c/view-audit-logs) in [application insights](/azure/active-directory-b2c/analytics-with-application-insights) where Azure AD B2C policy-based logs, [audit logs](/azure/active-directory-b2c/analytics-with-application-insights), and sign-in logs are stored.
- **Visualizations**: Using Log Analytics, build dashboards to visually monitor the key indicators.
For example, track the following metrics, since a sudden drop in either will lea
  - **Previous period**: Create temporal charts to show changes in the Total requests and Success rate (%) over some previous period for reference purposes, for example, last week.

-- **Alerting**: Using log analytics define [alerts](../../azure-monitor/alerts/alerts-create-new-alert-rule.md) that get triggered when there are sudden changes in the key indicators. These changes may negatively impact the SLOs. Alerts use various forms of notification methods including email, SMS, and webhooks. Start by defining a criterion that acts as a threshold against which alert will be triggered. For example:
+- **Alerting**: Using Log Analytics, define [alerts](/azure/azure-monitor/alerts/alerts-create-new-alert-rule) that are triggered when there are sudden changes in the key indicators. These changes may negatively impact the SLOs. Alerts use various notification methods including email, SMS, and webhooks. Start by defining a criterion that acts as a threshold against which the alert is triggered (a minimal sketch of such a threshold check appears after this list). For example:
  - Alert against abrupt drop in Total requests: Trigger an alert when the number of total requests drops abruptly. For example, when there's a 25% drop in the total number of requests compared to the previous period, raise an alert.
  - Alert against significant drop in Success rate (%): Trigger an alert when the success rate of the selected policy significantly drops.
- - Upon receiving an alert, troubleshoot the issue using [Log Analytics](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md), [Application Insights](../../active-directory-b2c/troubleshoot-with-application-insights.md), and [VS Code extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for Azure AD B2C. After you resolve the issue and deploy an updated application or policy, it continues to monitor the key indicators until they return back to normal range.
+  - Upon receiving an alert, troubleshoot the issue using [Log Analytics](/azure/azure-monitor/visualize/workbooks-view-designer-conversion-overview), [Application Insights](/azure/active-directory-b2c/troubleshoot-with-application-insights), and the [VS Code extension](https://marketplace.visualstudio.com/items?itemName=AzureADB2CTools.aadb2c) for Azure AD B2C. After you resolve the issue and deploy an updated application or policy, continue to monitor the key indicators until they return to the normal range.
-- **Service alerts**: Use the [Azure AD B2C service level alerts](../../service-health/service-health-overview.md) to get notified of service issues, planned maintenance, health advisory, and security advisory.
+- **Service alerts**: Use the [Azure AD B2C service level alerts](/azure/service-health/service-health-overview) to get notified of service issues, planned maintenance, health advisory, and security advisory.
-- **Reporting**: [By using log analytics](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md), build reports that help you gain understanding about user insights, technical challenges, and growth opportunities.
- - **Health Dashboard**: Create [custom dashboards using Azure Dashboard](../../azure-monitor/app/tutorial-app-dashboards.md) feature, which supports adding charts using Log Analytics queries. For example, identify pattern of successful and failed sign-ins, failure reasons and telemetry about devices used to make the requests.
+- **Reporting**: [Using Log Analytics](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md), build reports that help you gain an understanding of user insights, technical challenges, and growth opportunities.
+  - **Health Dashboard**: Create [custom dashboards by using the Azure Dashboard](/azure/azure-monitor/app/tutorial-app-dashboards) feature, which supports adding charts using Log Analytics queries. For example, identify patterns of successful and failed sign-ins, failure reasons, and telemetry about devices used to make the requests.
  - **Abandoned Azure AD B2C journeys**: Use the [workbook](https://github.com/azure-ad-b2c/siem#list-of-abandon-journeys) to track the list of abandoned Azure AD B2C journeys where a user started the sign-in or sign-up journey but never finished it. It provides details about the policy ID and a breakdown of the steps taken by the user before abandoning the journey.
  - **Azure AD B2C monitoring workbooks**: Use the [monitoring workbooks](https://github.com/azure-ad-b2c/siem) that include the Azure AD B2C dashboard, Multi-factor authentication (MFA) operations, Conditional Access report, and Search logs by correlationId. This practice provides better insights into the health of your Azure AD B2C environment.
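As referenced above, here's a minimal sketch of the 25%-drop threshold check. The request counts would come from your Log Analytics queries for the current and previous periods; the alerting hook is a placeholder.

```python
def should_alert(current_total: int, previous_total: int, threshold: float = 0.25) -> bool:
    """Return True when total requests dropped by more than `threshold`
    compared to the previous period (for example, the same window last week)."""
    if previous_total == 0:
        return False  # no baseline to compare against
    drop = (previous_total - current_total) / previous_total
    return drop >= threshold

# Example: 7,200 requests last week, 4,900 this week -> ~32% drop -> alert.
if should_alert(current_total=4900, previous_total=7200):
    print("ALERT: total requests dropped more than 25% vs. previous period")
```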
active-directory Resilient End User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-end-user-experience.md
The sign-up and sign-in end-user experience is made up of the following elements
## Choose between user flow and custom policy
-To help you set up the most common identity tasks, Azure AD B2C provides built-in configurable [user flows](../../active-directory-b2c/user-flow-overview.md). You can also build your own [custom policies](../../active-directory-b2c/custom-policy-overview.md) that offer you maximum flexibility. However, it's recommended to use custom policies only to address complex scenarios.
+To help you set up the most common identity tasks, Azure AD B2C provides built-in configurable [user flows](/azure/active-directory-b2c/user-flow-overview). You can also build your own [custom policies](/azure/active-directory-b2c/custom-policy-overview) that offer you maximum flexibility. However, it's recommended to use custom policies only to address complex scenarios.
### How to decide between user flow and custom policy

Choose built-in user flows if they meet your business requirements. Because Microsoft has extensively tested these identity user flows, you can minimize the policy-level functional, performance, and scale testing needed to validate them. You still need to test your applications for functionality, performance, and scale.
-Should you [choose custom policies](../../active-directory-b2c/user-flow-overview.md) because of your business requirements, make sure you perform policy-level testing for functional, performance, or scale in addition to application-level testing.
+If you [choose custom policies](/azure/active-directory-b2c/user-flow-overview) because of your business requirements, make sure you perform policy-level functional, performance, and scale testing in addition to application-level testing.
-See the article that [compares user flows and custom polices](../../active-directory-b2c/user-flow-overview.md#comparing-user-flows-and-custom-policies) to help you decide.
+See the article that [compares user flows and custom policies](/azure/active-directory-b2c/user-flow-overview#comparing-user-flows-and-custom-policies) to help you decide.
## Choose multiple IDPs
-When using an [external identity provider](../../active-directory-b2c/add-identity-provider.md) such as Facebook, make sure to have a fallback plan in case the external provider becomes unavailable.
+When using an [external identity provider](/azure/active-directory-b2c/add-identity-provider) such as Facebook, make sure to have a fallback plan in case the external provider becomes unavailable.
### How to set up multiple IDPs
As part of the external identity provider registration process, include a verifi
2. Configure a profile policy to allow users to [link the other identity to their account](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/tree/master/account-linking) after they sign in.
- 3. Notify and allow users to [switch to an alternate IDP](../../active-directory-b2c/customize-ui-with-html.md#configure-dynamic-custom-page-content-uri) during an outage.
+ 3. Notify and allow users to [switch to an alternate IDP](/azure/active-directory-b2c/customize-ui-with-html#configure-dynamic-custom-page-content-uri) during an outage.
## Availability of Multi-factor authentication
-When using a [phone service for Multi-factor authentication (MFA)](../../active-directory-b2c/phone-authentication-user-flows.md), make sure to consider an alternative service provider. The local Telco or phone service provider may experience disruptions in their service.
+When using a [phone service for Multi-factor authentication (MFA)](/azure/active-directory-b2c/phone-authentication-user-flows), make sure to consider an alternative service provider. The local Telco or phone service provider may experience disruptions in their service.
### How to choose an alternate MFA
active-directory Resilient External Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-external-processes.md
In this article, we provide you guidance on how to plan for and implement the RE
## Ensure correct placement of the APIs
-Identity experience framework (IEF) policies allow you to call an external system using a [RESTful API technical profile](../../active-directory-b2c/restful-technical-profile.md). External systems aren't controlled by the IEF runtime environment and are a potential failure point.
+Identity experience framework (IEF) policies allow you to call an external system using a [RESTful API technical profile](/azure/active-directory-b2c/restful-technical-profile). External systems aren't controlled by the IEF runtime environment and are a potential failure point.
### How to manage external systems using APIs
Identity experience framework (IEF) policies allow you to call an external syste
- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and disable your application. For example, using CAPTCHA in your sign-in and sign-up flows can help.

-- Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs either After federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
+- Use [API connectors of built-in sign-up user flow](/azure/active-directory-b2c/api-connectors-overview) wherever possible to integrate with web APIs either after federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
-- Azure AD B2C RESTful API [technical profiles](../../active-directory-b2c/restful-technical-profile.md) don't provide any caching behavior. Instead, RESTful API profile implements a retry logic and a timeout that is built into the policy.
+- Azure AD B2C RESTful API [technical profiles](/azure/active-directory-b2c/restful-technical-profile) don't provide any caching behavior. Instead, the RESTful API profile implements retry logic and a timeout that's built into the policy.
-- For APIs that need writing data, queue up a task to have such tasks executed by a background worker. Services like [Azure queues](../../storage/queues/storage-queues-introduction.md) can be used. This practice will make the API return efficiently and increase the policy execution performance.
+- For APIs that need to write data, queue up such tasks for execution by a background worker. Services like [Azure queues](/azure/storage/queues/storage-queues-introduction) can be used. This practice makes the API return quickly and improves policy execution performance.
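A minimal sketch of this pattern with the `azure-storage-queue` package follows; the connection string and queue name are placeholders. The API handler enqueues the write and returns immediately, while a separate worker drains the queue.

```python
import json

from azure.storage.queue import QueueClient

# Assumes a storage account with a queue named "profile-writes" (placeholders).
queue = QueueClient.from_connection_string("<connection-string>", "profile-writes")

def enqueue_profile_update(user_id: str, attributes: dict) -> None:
    """Called from the API path: enqueue the write and return quickly."""
    queue.send_message(json.dumps({"userId": user_id, "attributes": attributes}))

def drain_once() -> None:
    """Called by a background worker: apply queued writes, then delete them."""
    for message in queue.receive_messages(max_messages=32):
        payload = json.loads(message.content)
        # ... perform the actual write against the backing store here ...
        queue.delete_message(message)
```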
## API error handling
As the APIs live outside the Azure AD B2C system, it's needed to have proper err
### How to gracefully handle API errors

-- An API could fail for various reasons, make your application resilient to such failures. [Return an HTTP 4XX error message](../../active-directory-b2c/restful-technical-profile.md#returning-validation-error-message) if the API is unable to complete the request. In the Azure AD B2C policy, try to gracefully handle the unavailability of the API and perhaps render a reduced experience.
+- An API could fail for various reasons; make your application resilient to such failures. [Return an HTTP 4XX error message](/azure/active-directory-b2c/restful-technical-profile#returning-validation-error-message) if the API is unable to complete the request, as in the sketch after this list. In the Azure AD B2C policy, try to gracefully handle the unavailability of the API and perhaps render a reduced experience.
-- [Handle transient errors gracefully](../../active-directory-b2c/restful-technical-profile.md#error-handling). The RESTful API profile allows you to configure error messages for various [circuit breakers](/azure/architecture/patterns/circuit-breaker).
+- [Handle transient errors gracefully](/azure/active-directory-b2c/restful-technical-profile#error-handling). The RESTful API profile allows you to configure error messages for various [circuit breakers](/azure/architecture/patterns/circuit-breaker).
-- Proactively monitor and using Continuous Integration/Continuous Delivery (CICD), rotate the API access credentials such as passwords and certificates used by the [Technical profile engine](../../active-directory-b2c/restful-technical-profile.md).
+- Proactively monitor the API access credentials, such as passwords and certificates used by the [technical profile engine](/azure/active-directory-b2c/restful-technical-profile), and rotate them by using Continuous Integration/Continuous Delivery (CICD).
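For the validation-error case referenced above, here's a minimal Flask sketch of the response shape a RESTful technical profile expects on HTTP 409; the route and the business rule are illustrative placeholders.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/validate-email", methods=["POST"])
def validate_email():
    email = (request.get_json(silent=True) or {}).get("email", "")
    if email.endswith("@contoso.com"):  # illustrative business rule
        return jsonify({}), 200  # 200 lets the user journey continue

    # On 409, Azure AD B2C surfaces userMessage to the user as a validation error.
    return jsonify({
        "version": "1.0.0",
        "status": 409,
        "userMessage": "Please use your Contoso work email address.",
    }), 409
```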
## API management - best practices
While you deploy the REST APIs and configure the RESTful technical profile, foll
- API Management (APIM) publishes, manages, and analyzes your APIs. APIM also handles authentication to provide secure access to backend services and microservices. Use an API gateway to scale out API deployments, caching, and load balancing.

-- Recommendation is to get the right token at the beginning of the user journey instead of calling multiple times for each API and [secure an Azure APIM API](../../active-directory-b2c/secure-api-management.md?tabs=app-reg-ga).
+- We recommend that you get the right token at the beginning of the user journey instead of requesting one for each API call, and [secure an Azure APIM API](/azure/active-directory-b2c/secure-api-management?tabs=app-reg-ga).
## Next steps
active-directory Road To The Cloud Establish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-establish.md
If you're using Microsoft Office 365, Exchange Online, or Teams, then you're alr
* [Select authentication methods](../hybrid/connect/choose-ad-authn.md). We strongly recommend password hash synchronization.
-* Secure your hybrid identity infrastructure by following [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md).
+* Secure your hybrid identity infrastructure by following [Five steps to securing your identity infrastructure](/azure/security/fundamentals/steps-secure-identity).
## Optional tasks
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-implement.md
Client workstations are traditionally joined to Active Directory and managed via
[Windows Local Administrator Password Solution](../devices/howto-manage-local-admin-passwords.md) (LAPS) enables a cloud-first solution to manage the passwords of local administrator accounts.
-For more information, see [Learn more about cloud-native endpoints](/mem/cloud-native-endpoints-overview).
+For more information, see [Learn more about cloud-native endpoints](/mem/solutions/cloud-native-endpoints/cloud-native-endpoints-overview).
## Applications
The organization has a process to evaluate Microsoft Entra alternatives when it'
* Provide a recommendation to change the procurement policy and application development policy to require modern protocols (OIDC/OAuth2 and SAML) and authenticate by using Microsoft Entra ID. New apps should also support [Microsoft Entra app provisioning](../app-provisioning/what-is-hr-driven-provisioning.md) and have no dependency on LDAP queries. Exceptions require explicit review and approval.

> [!IMPORTANT]
- > Depending on the anticipated demands of applications that require legacy protocols, you can choose to deploy [Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md) when more current alternatives won't work.
+ > Depending on the anticipated demands of applications that require legacy protocols, you can choose to deploy [Microsoft Entra Domain Services](/entra/identity/domain-services/overview) when more current alternatives won't work.
* Provide a recommendation to create a policy to prioritize use of cloud-native alternatives. The policy should limit deployment of new application servers to the domain. Common cloud-native scenarios to replace Active Directory-joined servers include:
The organization has a process to evaluate Microsoft Entra alternatives when it'
* SharePoint or OneDrive provides collaboration support across Microsoft 365 solutions and built-in governance, risk, security, and compliance.
- * [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry-standard SMB or NFS protocol. Customers can use native [Microsoft Entra authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a domain controller.
+ * [Azure Files](/azure/storage/files/storage-files-introduction) offers fully managed file shares in the cloud that are accessible via the industry-standard SMB or NFS protocol. Customers can use native [Microsoft Entra authentication to Azure Files](/azure/virtual-desktop/create-profile-container-azure-ad) over the internet without line of sight to a domain controller.
* Microsoft Entra ID works with third-party applications in the Microsoft [application gallery](/microsoft-365/enterprise/integrated-apps-and-azure-ads).
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/road-to-the-cloud-migrate.md
To transform groups and distribution lists:
* For self-managed group capabilities provided by Microsoft Identity Manager, replace the capability with self-service group management.
-* You can [convert distribution lists to Microsoft 365 groups](/microsoft-365/admin/manage/upgrade-distribution-lists) in Outlook. This approach is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups.
+* You can [convert distribution lists to Microsoft 365 groups](/microsoft-365/admin/create-groups/office-365-groups) in Outlook. This approach is a great way to give your organization's distribution lists all the features and functionality of Microsoft 365 groups.
* Upgrade your [distribution lists to Microsoft 365 groups in Outlook](https://support.microsoft.com/office/7fb3d880-593b-4909-aafa-950dd50ce188) and [decommission your on-premises Exchange server](/exchange/decommission-on-premises-exchange).
You can integrate non-Windows workstations with Microsoft Entra ID to enhance th
* Deploy the [Microsoft Enterprise SSO (single sign-on) plug-in for Apple devices](../develop/apple-sso-plugin.md).
- * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-endpoint-manager-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319).
+ * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-intune-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319).
* For Linux, you can [sign in to a Linux virtual machine (VM) by using Microsoft Entra credentials](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
This project has two primary initiatives:
For more information, see:
-* [Deploy Microsoft Entra joined VMs in Azure Virtual Desktop](../../virtual-desktop/azure-ad-joined-session-hosts.md)
+* [Deploy Microsoft Entra joined VMs in Azure Virtual Desktop](/azure/virtual-desktop/azure-ad-joined-session-hosts)
* [Windows 365 planning guide](/windows-365/enterprise/planning-guide)
Use the following table to determine what Azure-based tools you can use to repla
| Management area | On-premises (Active Directory) feature | Equivalent Microsoft Entra feature |
| - | - | - |
| Security policy management| GPO, Microsoft Configuration Manager| [Microsoft 365 Defender for Cloud](https://azure.microsoft.com/services/security-center/) |
-| Update management| Microsoft Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](../../automation/update-management/overview.md) |
-| Configuration management| GPO, Microsoft Configuration Manager| [Azure Automation State Configuration](../../automation/automation-dsc-overview.md) |
-| Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) |
+| Update management| Microsoft Configuration Manager, Windows Server Update Services| [Azure Automation Update Management](/azure/automation/update-management/overview) |
+| Configuration management| GPO, Microsoft Configuration Manager| [Azure Automation State Configuration](/azure/automation/automation-dsc-overview) |
+| Monitoring| System Center Operations Manager| [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) |
Here's more information that you can use for application server management:
To reduce or eliminate those dependencies, you have three main approaches.
In the most preferred approach, you undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Microsoft Entra ID directly:
-1. Deploy Microsoft Entra Domain Services into an Azure virtual network and [extend the schema](/azure/active-directory-domain-services/concepts-custom-attributes) to incorporate additional attributes needed by the applications.
+1. Deploy Microsoft Entra Domain Services into an Azure virtual network and [extend the schema](/entra/identity/domain-services/concepts-custom-attributes) to incorporate additional attributes needed by the applications.
2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to Microsoft Entra Domain Services.
In the most preferred approach, you undertake projects to migrate from legacy ap
4. As legacy apps retire through attrition, eventually decommission Microsoft Entra Domain Services running in the Azure virtual network.

>[!NOTE]
->* Use Microsoft Entra Domain Services if the dependencies are aligned with [common deployment scenarios for Microsoft Entra Domain Services](../../active-directory-domain-services/scenarios.md).
+>* Use Microsoft Entra Domain Services if the dependencies are aligned with [common deployment scenarios for Microsoft Entra Domain Services](/entra/identity/domain-services/scenarios).
>* To validate if Microsoft Entra Domain Services is a good fit, you might use tools like [Service Map in Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [automatic dependency mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
>* Validate that your SQL Server instances can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
This approach enables you to decouple the app from the existing Active Directory
### Move VPN authentication
-This project focuses on moving your VPN authentication to Microsoft Entra ID. It's important to know that different configurations are available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN gateway design](../../vpn-gateway/design.md).
+This project focuses on moving your VPN authentication to Microsoft Entra ID. It's important to know that different configurations are available for VPN gateway connections. You need to determine which configuration best fits your needs. For more information on designing a solution, see [VPN gateway design](/azure/vpn-gateway/design).
Here are key points about usage of Microsoft Entra ID for VPN authentication:
Here are key points about usage of Microsoft Entra ID for VPN authentication:
* [Tutorial: Microsoft Entra SSO integration with Palo Alto Networks GlobalProtect](../saas-apps/palo-alto-networks-globalprotect-tutorial.md)
-* For Windows 10 devices, consider integrating [Microsoft Entra ID support into the built-in VPN client](/windows-server/remote/remote-access/vpn/ad-ca-vpn-connectivity-windows10).
+* For Windows 10 devices, consider integrating [Microsoft Entra ID support into the built-in VPN client](/windows-server/remote/remote-access/how-to-aovpn-conditional-access).
* After you evaluate this scenario, you can implement a solution to remove your dependency on on-premises infrastructure for VPN authentication.
To simplify your environment, you can use [Microsoft Entra application proxy](..
It's important to mention that enabling remote access to an application by using the preceding technologies is an interim step. You need to do more work to completely decouple the application from Active Directory.
-Microsoft Entra Domain Services allows you to migrate application servers to the cloud IaaS and decouple from Active Directory, while using Microsoft Entra application proxy to enable remote access. To learn more about this scenario, check [Deploy Microsoft Entra application proxy for Microsoft Entra Domain Services](../../active-directory-domain-services/deploy-azure-app-proxy.md).
+Microsoft Entra Domain Services allows you to migrate application servers to the cloud IaaS and decouple from Active Directory, while using Microsoft Entra application proxy to enable remote access. To learn more about this scenario, check [Deploy Microsoft Entra application proxy for Microsoft Entra Domain Services](/entra/identity/domain-services/deploy-azure-app-proxy).
## Next steps
active-directory Secure Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-best-practices.md
In the following sections are recommendations for Azure solutions. For general g
* Use [External identities cross-tenant access settings](../external-identities/cross-tenant-access-overview.md) to manage how they collaborate with other Microsoft Entra organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](../external-identities/cross-tenant-access-settings-b2b-direct-connect.md).
-* For specific device configuration and control, you can use device filters in Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md). This enables you to restrict access to Azure management tools from a designated secure admin workstation (SAW). Other approaches you can take include using [Azure Virtual desktop](../../virtual-desktop/terminology.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview).
+* For specific device configuration and control, you can use device filters in Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md). This enables you to restrict access to Azure management tools from a designated secure admin workstation (SAW). Other approaches you can take include using [Azure Virtual Desktop](/azure/virtual-desktop/terminology), [Azure Bastion](/azure/bastion/bastion-overview), or [Cloud PC](/graph/cloudpc-concept-overview).
* Billing management applications such as Azure EA portal or MCA billing accounts aren't represented as cloud applications for Conditional Access targeting. As a compensating control, define separate administration accounts and target Conditional Access policies to those accounts using an "All Apps" condition.
Below are some identity governance principles to consider across all the tenant
* **Least privileged access** - Identities should only be granted the permissions needed to perform the privileged operations per their role in the organization.
- * Azure RBAC [custom roles](../../role-based-access-control/custom-roles.md) allow designing least privileged roles based on organizational needs. We recommend that custom roles definitions are authored or reviewed by specialized security teams and mitigate risks of unintended excessive privileges. Authoring of custom roles can be audited through [Azure Policy](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json).
+ * Azure RBAC [custom roles](/azure/role-based-access-control/custom-roles) allow designing least-privileged roles based on organizational needs. We recommend that custom role definitions are authored or reviewed by specialized security teams to mitigate the risk of unintended excessive privileges (see the sketch after this list). Authoring of custom roles can be audited through [Azure Policy](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json).
* To mitigate accidental use of roles that aren't meant for wider use in the organization, use Azure Policy to define explicitly which role definitions can be used to assign access. Learn more from this [GitHub Sample](https://github.com/Azure/azure-policy/tree/master/samples/Authorization/allowed-role-definitions). * **Privileged access from secure workstations** - All privileged access should occur from secure, locked-down devices. Separating these sensitive tasks and accounts from daily-use workstations and devices protects privileged accounts from phishing attacks, application and OS vulnerabilities, various impersonation attacks, and credential theft attacks such as keystroke logging, [Pass-the-Hash](https://aka.ms/AzureADSecuredAzure/27a), and Pass-The-Ticket.
-Some approaches you can use for [using secure devices as part of your privileged access story](/security/compass/privileged-access-devices) include using Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md), using [Azure Virtual desktop](../../virtual-desktop/terminology.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview), or creating Azure-managed workstations or privileged access workstations.
+Approaches for [using secure devices as part of your privileged access story](/security/privileged-access-workstations/privileged-access-devices) include using Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md), using [Azure Virtual Desktop](/azure/virtual-desktop/terminology), [Azure Bastion](/azure/bastion/bastion-overview), or [Cloud PC](/graph/cloudpc-concept-overview), or creating Azure-managed workstations or privileged access workstations.
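As a minimal sketch of the custom-role guidance above (the role name, actions, and subscription ID are illustrative placeholders, not a definitive implementation):

```azurecli
# Hypothetical least-privileged custom role: can view and restart VMs, nothing else.
# Role name, actions, and subscription ID are placeholders for illustration.
cat > vm-operator-role.json <<'EOF'
{
  "Name": "Virtual Machine Operator (Read Only)",
  "Description": "Can view virtual machines and restart them, nothing else.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
EOF
az role definition create --role-definition vm-operator-role.json
```

Keeping `AssignableScopes` narrow is part of the least-privilege point: the role can't be assigned outside the scopes the security team approved.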
* **Privileged role process guardrails** - Organizations must define processes and technical guardrails to ensure that privileged operations can be executed whenever needed while complying with regulatory requirements. Examples of guardrails criteria include:
Some approaches you can use for [using secure devices as part of your privileged
* The Azure EA Enterprise portal doesn't provide an audit log. To mitigate this, consider an automated governed process to provision subscriptions with the considerations described above and use dedicated EA accounts and audit the authentication logs.
-* [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) roles don't integrate natively with PIM. To mitigate this, use dedicated MCA accounts and monitor usage of these accounts.
+* [Microsoft Customer Agreement](/azure/cost-management-billing/understand/mca-overview) (MCA) roles don't integrate natively with PIM. To mitigate this, use dedicated MCA accounts and monitor usage of these accounts.
* Monitoring IAM assignments outside Microsoft Entra PIM isn't automated through Azure Policies. The mitigation is to not grant Subscription Owner or User Access Administrator roles to engineering teams. Instead create groups assigned to least privileged roles such as Contributor and delegate the management of those groups to engineering teams.
Below are some considerations when designing a governed subscription lifecycle p
* Other aspects such as tagging, cross-charging, product-view usage, etc.
-* Don't allow ad-hoc subscription creation through the portals or by other means. Instead consider managing [subscriptions programmatically using Azure Resource Manager](../../cost-management-billing/manage/programmatically-create-subscription.md) and pulling consumption and billing reports [programmatically](/rest/api/consumption/). This can help limit subscription provisioning to authorized users and enforce your policy and taxonomy goals. Guidance on following [AZOps principals](https://github.com/azure/azops/wiki/introduction) can be used to help create a practical solution.
+* Don't allow ad-hoc subscription creation through the portals or by other means. Instead, consider managing [subscriptions programmatically using Azure Resource Manager](/azure/cost-management-billing/manage/programmatically-create-subscription) and pulling consumption and billing reports [programmatically](/rest/api/consumption/). This can help limit subscription provisioning to authorized users and enforce your policy and taxonomy goals. Guidance on following [AZOps principles](https://github.com/azure/azops/wiki/introduction) can be used to help create a practical solution.
* When a subscription is provisioned, create Microsoft Entra cloud groups to hold the standard Azure Resource Manager roles needed by application teams, such as Contributor, Reader, and approved custom roles (see the sketch below). This enables you to manage Azure RBAC role assignments with governed privileged access at scale.
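The following is a hedged sketch of the programmatic provisioning and group-based role assignment described in the preceding two bullets; it assumes the Azure CLI `account` extension, and every name, billing scope, and ID is a placeholder:

```azurecli
# Requires the 'account' extension: az extension add --name account
# Billing scope, names, and IDs below are illustrative placeholders.
az account alias create \
  --name "corp-app1-prod" \
  --display-name "Corp App1 Production" \
  --workload "Production" \
  --billing-scope "/providers/Microsoft.Billing/billingAccounts/<account>/enrollmentAccounts/<enrollment>"

# Create a cloud group to hold the Contributor assignment for the app team.
groupId=$(az ad group create --display-name "app1-prod-contributors" \
  --mail-nickname "app1-prod-contributors" --query id -o tsv)

# Assign the role to the group, not to individuals, so membership governs access.
az role assignment create --assignee-object-id "$groupId" \
  --assignee-principal-type Group --role "Contributor" \
  --scope "/subscriptions/<new-subscription-id>"
```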
Below are some considerations when designing a governed subscription lifecycle p
1. As a guardrail, don't assign product owners to User Access Administrator or Owner roles to avoid inadvertent direct assignment of roles outside Microsoft Entra PIM, or potentially changing the subscription to a different tenant altogether.
- 1. For customers who choose to enable cross-tenant subscription management in non-production tenants through Azure Lighthouse, make sure that the same access policies from the production privileged account (for example, privileged access only from [secured workstations](/security/compass/privileged-access-deployment)) are enforced when authenticating to manage subscriptions.
+ 1. For customers who choose to enable cross-tenant subscription management in non-production tenants through Azure Lighthouse, make sure that the same access policies from the production privileged account (for example, privileged access only from [secured workstations](/security/privileged-access-workstations/privileged-access-deployment)) are enforced when authenticating to manage subscriptions.
-* If your organization has pre-approved reference architectures, the subscription provisioning can be integrated with resource deployment tools such as [Azure Blueprints](../../governance/blueprints/overview.md) or [Terraform](https://www.terraform.io).
+* If your organization has pre-approved reference architectures, the subscription provisioning can be integrated with resource deployment tools such as [Azure Blueprints](/azure/governance/blueprints/overview) or [Terraform](https://www.terraform.io).
* Given the tenant affinity to Azure Subscriptions, subscription provisioning should be aware of multiple identities for the same human actor (employee, partner, vendor, etc.) across multiple tenants and assign access accordingly.
The following are additional operational considerations for Microsoft Entra ID,
### Inventory and visibility
-**Azure subscription discovery** - For each discovered tenant, a Microsoft Entra Global Administrator can [elevate access](../../role-based-access-control/elevate-access-global-admin.md) to gain visibility of all subscriptions in the environment. This elevation will assign the global administrator the User Access Administrator built-in role at the root management group.
+**Azure subscription discovery** - For each discovered tenant, a Microsoft Entra Global Administrator can [elevate access](/azure/role-based-access-control/elevate-access-global-admin) to gain visibility of all subscriptions in the environment. This elevation assigns the Global Administrator the User Access Administrator built-in role at the root management group scope (a sketch of the discovery flow follows the resource discovery paragraph below).
>[!NOTE] >This action is highly privileged and might give the admin access to subscriptions that hold extremely sensitive information if that data has not been properly isolated. **Enabling read access to discover resources** - Management groups enable RBAC assignment at scale across multiple subscriptions. Customers can grant a Reader role to a centralized IT team by configuring a role assignment in the root management group, which will propagate to all subscriptions in the environment.
-**Resource discovery** - After gaining resource Read access in the environment, [Azure Resource Graph](../../governance/resource-graph/overview.md) can be used to query resources in the environment.
+**Resource discovery** - After gaining resource Read access in the environment, [Azure Resource Graph](/azure/governance/resource-graph/overview) can be used to query resources in the environment.
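A minimal sketch of this discovery flow, assuming the `resource-graph` CLI extension; the tenant ID and group object ID are placeholders:

```azurecli
# 1. Temporarily elevate access (Global Administrator only; remove when done).
az rest --method post \
  --url "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# 2. Grant a central IT group Reader at the root management group
#    (the root management group ID equals the tenant ID).
az role assignment create --assignee "<central-it-group-object-id>" \
  --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/<tenant-id>"

# 3. Query resources across all visible subscriptions.
#    Requires: az extension add --name resource-graph
az graph query -q "Resources | summarize count() by type | order by count_ desc"
```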
### Logging and monitoring
-**Central security log management** - Ingest logs from each environment in a [centralized way](/security/benchmark/azure/security-control-logging-monitoring), following consistent best practices across environments (for example, diagnostics settings, log retention, SIEM ingestion, etc.). [Azure Monitor](../../azure-monitor/overview.md) can be used to ingest logs from different sources such as endpoint devices, network, operating systems' security logs, etc.
+**Central security log management** - Ingest logs from each environment in a [centralized way](/security/benchmark/azure/security-control-logging-monitoring), following consistent best practices across environments (for example, diagnostics settings, log retention, SIEM ingestion, etc.). [Azure Monitor](/azure/azure-monitor/overview) can be used to ingest logs from different sources such as endpoint devices, network, operating systems' security logs, etc.
Detailed information on using automated or manual processes and tools to monitor logs as part of your security operations is available at [Microsoft Entra security operation guide](https://github.com/azure/azops/wiki/introduction).
The log strategy must include the following Microsoft Entra logs for each tenant
* Risk events
-Microsoft Entra ID provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](/graph/tutorial-riskdetection-api).
+Microsoft Entra ID provides [Azure Monitor integration](../reports-monitoring/concept-log-monitoring-integration-options-considerations.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](/graph/tutorial-riskdetection-api).
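As a hedged sketch, once sign-in logs are routed to a Log Analytics workspace they can be queried from the command line; this assumes the `log-analytics` CLI extension and that diagnostic settings already send `SigninLogs` to the workspace, and the workspace GUID is a placeholder:

```azurecli
# Requires: az extension add --name log-analytics
# <workspace-guid> is the Log Analytics workspace customer ID (placeholder).
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "SigninLogs
    | where TimeGenerated > ago(1d)
    | summarize failures = countif(ResultType != \"0\") by UserPrincipalName
    | top 10 by failures desc"
```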
The following diagram shows the different data sources that need to be incorporated as part of the monitoring strategy:
-Azure AD B2C tenants can be [integrated with Azure Monitor](../../active-directory-b2c/azure-monitor.md). We recommend monitoring of Azure AD B2C using the same criteria discussed above for Microsoft Entra ID.
+Azure AD B2C tenants can be [integrated with Azure Monitor](/azure/active-directory-b2c/azure-monitor). We recommend monitoring of Azure AD B2C using the same criteria discussed above for Microsoft Entra ID.
-Subscriptions that have enabled cross-tenant management with Azure Lighthouse can enable cross-tenant monitoring if the logs are collected by Azure Monitor. The corresponding Log Analytics workspaces can reside in the resource tenant and can be analyzed centrally in the managing tenant using Azure Monitor workbooks. To learn more, check [Monitor delegated resources at scale - Azure Lighthouse](../../lighthouse/how-to/monitor-at-scale.md).
+Subscriptions that have enabled cross-tenant management with Azure Lighthouse can enable cross-tenant monitoring if the logs are collected by Azure Monitor. The corresponding Log Analytics workspaces can reside in the resource tenant and can be analyzed centrally in the managing tenant using Azure Monitor workbooks. To learn more, check [Monitor delegated resources at scale - Azure Lighthouse](/azure/lighthouse/how-to/monitor-at-scale).
### Hybrid infrastructure OS security logs
The following scenarios must be explicitly monitored and investigated:
* **Suspicious activity** - All [Microsoft Entra risk events](../identity-protection/overview-identity-protection.md) should be monitored for suspicious activity. All tenants should define the network [named locations](../conditional-access/location-condition.md) to avoid noisy detections on location-based signals. [Microsoft Entra ID Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Azure Security Center. It's recommended that any risk detection investigation includes all the environments in which the identity is provisioned (for example, if a human identity has an active risk detection in the corporate tenant, the team operating the customer-facing tenant should also investigate the activity of the corresponding account in that environment).
-* **User entity behavioral analytics (UEBA) alerts** - UEBA should be used to get insightful information based on anomaly detection. [Microsoft Microsoft 365 Defender for Cloud Apps](https://www.microsoft.com/security/business/siem-and-xdr/microsoft-defender-cloud-apps) provides [UEBA in the cloud](/defender-cloud-apps/tutorial-ueba). Customers can integrate [on-premises UEBA from Microsoft Microsoft 365 Defender for Identity](/defender-cloud-apps/mdi-integration). MCAS reads signals from Microsoft Entra ID Protection.
+* **User entity behavioral analytics (UEBA) alerts** - UEBA should be used to get insightful information based on anomaly detection. [Microsoft Defender for Cloud Apps](https://www.microsoft.com/security/business/siem-and-xdr/microsoft-defender-cloud-apps) provides [UEBA in the cloud](/defender-cloud-apps/tutorial-ueba). Customers can integrate [on-premises UEBA from Microsoft Defender for Identity](/microsoft-365/security/defender/microsoft-365-security-center-mdi). Defender for Cloud Apps reads signals from Microsoft Entra ID Protection.
* **Emergency access accounts activity** - Any access using [emergency access accounts](./security-operations-privileged-accounts.md) should be monitored and [alerts](../roles/security-emergency-access.md) created for investigations. This monitoring must include:
The following scenarios must be explicitly monitored and investigated:
**IT service management tools** - Organizations using IT Service Management (ITSM) systems such as ServiceNow should configure [Microsoft Entra PIM role activation settings](../privileged-identity-management/pim-how-to-change-default-settings.md) to require a ticket number as part of the role activation request.
-Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Service Management Connector](../../azure-monitor/alerts/itsmc-overview.md).
+Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Service Management Connector](/azure/azure-monitor/alerts/itsmc-overview).
**Operational practices** - Minimize operational activities that require direct access to the environment by human identities. Instead, model them as Azure Pipelines that execute common operations (for example, add capacity to a PaaS solution, run diagnostics, etc.) and restrict direct access to the Azure Resource Manager interfaces to "break glass" scenarios.
active-directory Secure Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-introduction.md
Applications that use Microsoft Entra ID require directory objects to be configu
### Access to Azure resources
-Users, groups, and service principal objects (workload identities) in the Microsoft Entra tenant are granted roles by using [Azure Role Based Access Control](../../role-based-access-control/overview.md) (RBAC) and [Azure attribute-based access control](../../role-based-access-control/conditions-overview.md) (ABAC).
+Users, groups, and service principal objects (workload identities) in the Microsoft Entra tenant are granted roles by using [Azure role-based access control](/azure/role-based-access-control/overview) (RBAC) and [Azure attribute-based access control](/azure/role-based-access-control/conditions-overview) (ABAC).
* Azure RBAC enables you to provide access based on role as determined by security principal, role definition, and scope.
Azure resources that [support Managed Identities](../managed-identities-azure-re
Applications using Microsoft Entra ID for sign-in may also use Azure resources such as compute or storage as part of its implementation. For example, a custom application that runs in Azure and trusts Microsoft Entra ID for authentication has directory objects and Azure resources.
-Lastly, all Azure resources in the Microsoft Entra tenant affect tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+Lastly, all Azure resources in the Microsoft Entra tenant affect tenant-wide [Azure Quotas and Limits](/azure/azure-resource-manager/management/azure-subscription-service-limits).
### Access to Directory Objects
Who should have the ability to administer the environment and its resources? The
Given the interdependence between a Microsoft Entra tenant and its resources, it's critical to understand the security and operational risks of compromise or error. If you're operating in a federated environment with synchronized accounts, an on-premises compromise can lead to a Microsoft Entra ID compromise.
-* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, given the one providing access has sufficient privileges. While the effect of compromised non-privileged identities is largely contained, compromised administrators can have broad implications. For example, if a Microsoft Entra Global Administrator account is compromised, Azure resources can become compromised. To mitigate risk of identity compromise, or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Microsoft Entra Administrator Roles](../roles/delegate-by-task.md). Similarly, ensure that you create Conditional Access policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy).
+* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, provided the identity granting access has sufficient privileges. While the effect of compromised non-privileged identities is largely contained, compromised administrators can have broad implications. For example, if a Microsoft Entra Global Administrator account is compromised, Azure resources can become compromised. To mitigate risk of identity compromise, or bad actors, implement [tiered administration](/security/privileged-access-workstations/privileged-access-access-model) and ensure that you follow principles of least privilege for [Microsoft Entra Administrator Roles](../roles/delegate-by-task.md). Similarly, ensure that you create Conditional Access policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/privileged-access-workstations/privileged-access-strategy).
* **Federated environment compromise**
active-directory Secure Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-multiple-tenants.md
In addition to the outcomes achieved with a single tenant architecture as descri
* **Object footprint** - Applications that write to Microsoft Entra ID and/or other Microsoft Online services through Microsoft Graph or other management interfaces can operate in a separate object space. This enables development teams to perform tests during the software development lifecycle without affecting other tenants.
-* **Quotas** - Consumption of tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) is separated from that of the other tenants.
+* **Quotas** - Consumption of tenant-wide [Azure Quotas and Limits](/azure/azure-resource-manager/management/azure-subscription-service-limits) is separated from that of the other tenants.
### Configuration separation
Another approach could have been to utilize the capabilities of Microsoft Entra
## Multi-tenant resource isolation
-With a new tenant, you have a separate set of administrators. Organizations can choose to use corporate identities through [Microsoft Entra B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions are managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Microsoft Intune. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+With a new tenant, you have a separate set of administrators. Organizations can choose to use corporate identities through [Microsoft Entra B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](/azure/lighthouse/overview) for cross-tenant management of Azure resources so that non-production Azure subscriptions are managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Microsoft Intune. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This will allow users to continue to use their corporate credentials, while achieving the benefits of separation.
active-directory Secure Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-resource-management.md
The following are some of the terms you should be familiar with:
**Resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources.
-**Resource group** - A container that holds related resources for an Azure solution such as a collection of virtual machines, associated VNets, and load balancers that require management by specific teams. The [resource group](../../azure-resource-manager/management/overview.md) includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. Resource groups can also be used to help with life-cycle management by deleting all resources that have the same lifespan at one time. This approach also provides security benefit by leaving no fragments that might be exploited.
+**Resource group** - A container that holds related resources for an Azure solution such as a collection of virtual machines, associated VNets, and load balancers that require management by specific teams. The [resource group](/azure/azure-resource-manager/management/overview) includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. Resource groups can also be used to help with life-cycle management by deleting all resources that have the same lifespan at one time. This approach also provides a security benefit by leaving no fragments that might be exploited.
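For example, a minimal sketch of that lifecycle pattern (names are placeholders): deleting the resource group removes every resource that shares its lifespan in one operation.

```azurecli
# Create a resource group for resources that share one lifespan (placeholder names).
az group create --name "rg-app1-test" --location "eastus"

# ... deploy VMs, VNets, and load balancers into rg-app1-test ...

# Deleting the group removes all contained resources at once, leaving no fragments.
az group delete --name "rg-app1-test" --yes --no-wait
```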
**Subscription** - From an organizational hierarchy perspective, a subscription is a billing and management container of resources and resource groups. An Azure subscription has a trust relationship with Microsoft Entra ID. A subscription trusts Microsoft Entra ID to authenticate users, services, and devices. >[!Note] >A subscription may trust only one Microsoft Entra tenant. However, each tenant may trust multiple subscriptions and subscriptions can be moved between tenants.
-**Management group** - [Azure management groups](../../governance/management-groups/overview.md) provide a hierarchical method of applying policies and compliance at different scopes above subscriptions. It can be at the tenant root management group (highest scope) or at lower levels in the hierarchy. You organize subscriptions into containers called "management groups" and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Note, policy definitions can be applied to a management group or subscription.
+**Management group** - [Azure management groups](/azure/governance/management-groups/overview) provide a hierarchical method of applying policies and compliance at different scopes above subscriptions. It can be at the tenant root management group (highest scope) or at lower levels in the hierarchy. You organize subscriptions into containers called "management groups" and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Note, policy definitions can be applied to a management group or subscription.
-**Resource provider** - A service that supplies Azure resources. For example, a common [resource provider](../../azure-resource-manager/management/resource-providers-and-types.md) is Microsoft. Compute, which supplies the virtual machine resource. Microsoft. Storage is another common resource provider.
+**Resource provider** - A service that supplies Azure resources. For example, a common [resource provider](/azure/azure-resource-manager/management/resource-providers-and-types) is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider.
-**Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, tenant, or management group. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../../azure-resource-manager/templates/overview.md). Additionally, the [Bicep language](../../azure-resource-manager/bicep/overview.md) can be used instead of JSON.
+**Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, tenant, or management group. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](/azure/azure-resource-manager/templates/overview). Additionally, the [Bicep language](/azure/azure-resource-manager/bicep/overview) can be used instead of JSON.
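A minimal sketch of deploying such a template with the Azure CLI (file, group, and parameter names are placeholders; the same command accepts Bicep files):

```azurecli
# Deploy an ARM/Bicep template to a resource group (placeholder names).
# The same template can be redeployed consistently and repeatedly.
az deployment group create \
  --resource-group "rg-app1-test" \
  --template-file "main.bicep" \
  --parameters environment=test

# Templates can also target other scopes:
# az deployment sub create / az deployment mg create / az deployment tenant create
```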
## Azure Resource Management Model
-Each Azure subscription is associated with controls used by [Azure Resource Manager](../../azure-resource-manager/management/overview.md) (ARM). Resource Manager is the deployment and management service for Azure, it has a trust relationship with Microsoft Entra ID for identity management for organizations, and the Microsoft Account (MSA) for individuals. Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features like access control, locks, and tags, to secure and organize your resources after deployment.
+Each Azure subscription is associated with controls used by [Azure Resource Manager](/azure/azure-resource-manager/management/overview) (ARM). Resource Manager is the deployment and management service for Azure. It has a trust relationship with Microsoft Entra ID for identity management for organizations, and with the Microsoft account (MSA) for individuals. Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features like access control, locks, and tags to secure and organize your resources after deployment.
>[!NOTE]
->Prior to ARM, there was another deployment model named Azure Service Manager (ASM) or "classic". To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md). Managing environments with the ASM model is out of scope of this content.
+>Prior to ARM, there was another deployment model named Azure Service Manager (ASM) or "classic". To learn more, see [Azure Resource Manager vs. classic deployment](/azure/azure-resource-manager/management/deployment-models). Managing environments with the ASM model is out of scope of this content.
Azure Resource Manager is the front-end service, which hosts the REST APIs used by PowerShell, the Azure portal, or other clients to manage resources. When a client makes a request to manage a specific resource, Resource Manager proxies the request to the resource provider to complete the request. For example, if a client makes a request to manage a virtual machine resource, Resource Manager proxies the request to the Microsoft.Compute resource provider. Resource Manager requires the client to specify an identifier for both the subscription and the resource group to manage the virtual machine resource.
Before any resource management request can be executed by Resource Manager, a se
* **Valid user check** - The user requesting to manage the resource must have an account in the Microsoft Entra tenant associated with the subscription of the managed resource.
-* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](../../role-based-access-control/overview.md). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
+* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](/azure/role-based-access-control/overview). An RBAC role specifies a set of actions a user may perform on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
-* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
+* **Azure policy check** - [Azure policies](/azure/governance/policy/overview) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
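The following hedged sketch shows the artifacts behind the last two checks, with all names, IDs, and the policy definition reference as placeholders:

```azurecli
# User permission check: grant a user Reader on one resource group (placeholders).
az role assignment create --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-app1-test"

# Azure Policy check: assign an existing policy definition that constrains
# VM deployments at the same scope. <policy-definition-name-or-id> is a
# placeholder for a built-in or custom definition.
az policy assignment create --name "restrict-vm-sizes" \
  --policy "<policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-app1-test"
```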
The following diagram summarizes the resource model we just described. ![Diagram that shows Azure resource management with ARM and Microsoft Entra ID.](media/secure-resource-management/resource-model.png)
-**Azure Lighthouse** - [Azure Lighthouse](../../lighthouse/overview.md) enables resource management across tenants. Organizations can delegate roles at the subscription or resource group level to identities in another tenant.
+**Azure Lighthouse** - [Azure Lighthouse](/azure/lighthouse/overview) enables resource management across tenants. Organizations can delegate roles at the subscription or resource group level to identities in another tenant.
-Subscriptions that enable [delegated resource management](../../lighthouse/concepts/architecture.md) with Azure Lighthouse have attributes that indicate the tenant IDs that can manage subscriptions or resource groups, and mapping between the built-in RBAC role in the resource tenant to identities in the service provider tenant. At runtime, Azure Resource Manager will consume these attributes to authorize tokens coming from the service provider tenant.
+Subscriptions that enable [delegated resource management](/azure/lighthouse/concepts/architecture) with Azure Lighthouse have attributes that indicate the tenant IDs that can manage subscriptions or resource groups, and mapping between the built-in RBAC role in the resource tenant to identities in the service provider tenant. At runtime, Azure Resource Manager will consume these attributes to authorize tokens coming from the service provider tenant.
It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies.
When an Account Owner creates an Azure subscription within an enterprise agreeme
* The Azure subscription is associated with the same Microsoft Entra tenant of the Account Owner.
-* The account owner who created the subscription will be assigned the Service Administrator and Account Administrator roles. (The Azure EA Portal assigns Azure Service Manager (ASM) or "classic" roles to manage subscriptions. To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md).)
+* The account owner who created the subscription will be assigned the Service Administrator and Account Administrator roles. (The Azure EA Portal assigns Azure Service Manager (ASM) or "classic" roles to manage subscriptions. To learn more, see [Azure Resource Manager vs. classic deployment](/azure/azure-resource-manager/management/deployment-models).)
An enterprise agreement can be configured to support multiple tenants by setting the authentication type of "Work or school account cross-tenant" in the Azure EA Portal. Given the above, organizations can set multiple accounts for each tenant, and multiple subscriptions for each account, as shown in the diagram below.
It's important to note that the default configuration described above grants the
To further decouple and prevent the account owner from regaining service administrator access to the subscription, the subscription's tenant can be [changed](../fundamentals/how-subscriptions-associated-directory.md) after creation. If the account owner doesn't have a user object in the Microsoft Entra tenant the subscription is moved to, they can't regain the Service Administrator role.
-To learn more, visit [Azure roles, Microsoft Entra roles, and classic subscription administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
+To learn more, visit [Azure roles, Microsoft Entra roles, and classic subscription administrator roles](/azure/role-based-access-control/rbac-and-directory-admin-roles).
### Microsoft Customer Agreement
-Customers enrolled with a [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) have a different billing management system with its own roles.
+Customers enrolled with a [Microsoft Customer Agreement](/azure/cost-management-billing/understand/mca-overview) (MCA) have a different billing management system with its own roles.
-A [billing account](../../cost-management-billing/manage/understand-mca-roles.md) for the Microsoft Customer Agreement contains one or more [billing profiles](../../cost-management-billing/manage/understand-mca-roles.md) that allow managing invoices and payment methods. Each billing profile contains one or more [invoice sections](../../cost-management-billing/manage/understand-mca-roles.md) to organize costs on the billing profile's invoice.
+A [billing account](/azure/cost-management-billing/manage/understand-mca-roles) for the Microsoft Customer Agreement contains one or more [billing profiles](/azure/cost-management-billing/manage/understand-mca-roles) that allow managing invoices and payment methods. Each billing profile contains one or more [invoice sections](/azure/cost-management-billing/manage/understand-mca-roles) to organize costs on the billing profile's invoice.
In a Microsoft Customer Agreement, billing roles come from a single Microsoft Entra tenant. To provision subscriptions for multiple tenants, the subscriptions must be initially created in the same Microsoft Entra tenant as the MCA, and then changed. In the diagram below, the subscriptions for the Corporate IT pre-production environment were moved to the ContosoSandbox tenant after creation.
In a Microsoft Customer Agreement, billing roles come from a single Microsoft En
## RBAC and role assignments in Azure
-In the Microsoft Entra Fundamentals section, you learned Azure RBAC is the authorization system that provides fine-grained access management to Azure resources, and includes many [built-in roles](../../role-based-access-control/built-in-roles.md). You can create [custom roles](../../role-based-access-control/custom-roles.md), and assign roles at different scopes. Permissions are enforced by assigning RBAC roles to objects requesting access to Azure resources.
+In the Microsoft Entra Fundamentals section, you learned that Azure RBAC is the authorization system that provides fine-grained access management to Azure resources, and includes many [built-in roles](/azure/role-based-access-control/built-in-roles). You can create [custom roles](/azure/role-based-access-control/custom-roles), and assign roles at different scopes. Permissions are enforced by assigning RBAC roles to objects requesting access to Azure resources.
-Microsoft Entra roles operate on concepts like [Azure role-based access control](../../role-based-access-control/overview.md). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC uses Azure Resource Management to control access to Azure resources such as virtual machines or storage, and Microsoft Entra roles control access to Microsoft Entra ID, applications, and Microsoft services such as Office 365.
+Microsoft Entra roles operate on concepts like [Azure role-based access control](/azure/role-based-access-control/overview). The [difference between these two role-based access control systems](/azure/role-based-access-control/rbac-and-directory-admin-roles) is that Azure RBAC uses Azure Resource Management to control access to Azure resources such as virtual machines or storage, and Microsoft Entra roles control access to Microsoft Entra ID, applications, and Microsoft services such as Office 365.
Both Microsoft Entra roles and Azure RBAC roles integrate with Microsoft Entra Privileged Identity Management to enable just-in-time activation policies such as approval workflow and MFA. ## ABAC and role assignments in Azure
-[Attribute-based access control (ABAC)](../../role-based-access-control/conditions-overview.md) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure.
+[Attribute-based access control (ABAC)](/azure/role-based-access-control/conditions-overview) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure.
Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A role assignment condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You can't explicitly deny access to specific resources using conditions.
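A hedged sketch of the tag-based read condition mentioned above; the account names, scope, and especially the condition grammar are illustrative and should be checked against the current ABAC condition format:

```azurecli
# Grant blob read access only when the blob carries the tag Project = Cascade.
# All names are placeholders; the condition string follows the Azure ABAC
# style for storage blob tag conditions (illustrative, not verified syntax).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-app1/providers/Microsoft.Storage/storageAccounts/stapp1" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))" \
  --condition-version "2.0"
```

Note that the condition filters permissions already granted by the role; it can't add a deny beyond what the role definition allows.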
When a requirement exists to deploy IaaS workloads to Azure that require identit
* Consider a location that is geographically close to the servers and applications that require Microsoft Entra Domain Services.
-* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../reliability/availability-zones-service-support.md).
+* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](/azure/reliability/availability-zones-service-support).
**Object provisioning** - Microsoft Entra Domain Services synchronizes identities from the Microsoft Entra tenant that is associated with the subscription that Microsoft Entra Domain Services is deployed into. It's also worth noting that if the associated tenant has synchronization set up with Microsoft Entra Connect (user forest scenario), then the life cycle of these identities can also be reflected in Microsoft Entra Domain Services. This service has two modes that can be used for provisioning user and group objects from Microsoft Entra ID.
When a requirement exists to deploy IaaS workloads to Azure that require identit
* **Scoped**: Only users in scope of a group(s) are synchronized from Microsoft Entra ID into Microsoft Entra Domain Services.
-When you first deploy Microsoft Entra Domain Services, an automatic one-way synchronization is configured to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Microsoft Entra Domain Services managed domain up to date with any changes from Microsoft Entra ID. No synchronization occurs from Microsoft Entra Domain Services back to Microsoft Entra ID. For more information, see [How objects and credentials are synchronized in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/synchronization.md).
+When you first deploy Microsoft Entra Domain Services, an automatic one-way synchronization is configured to replicate the objects from Microsoft Entra ID. This one-way synchronization continues to run in the background to keep the Microsoft Entra Domain Services managed domain up to date with any changes from Microsoft Entra ID. No synchronization occurs from Microsoft Entra Domain Services back to Microsoft Entra ID. For more information, see [How objects and credentials are synchronized in a Microsoft Entra Domain Services managed domain](/entra/identity/domain-services/synchronization).
It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), then the Microsoft Entra Domain Services managed domain will need to be deleted, recreated, and reconfigured. As a good practice, organizations should also consider "scoped" provisioning to limit the synchronized identities to only those that need access to Microsoft Entra Domain Services resources.
-**Group Policy Objects (GPO)** - To configure GPO in a Microsoft Entra Domain Services managed domain you must use Group Policy Management tools on a server that has been domain joined to the Microsoft Entra Domain Services managed domain. For more information, see [Administer Group Policy in a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md).
+**Group Policy Objects (GPO)** - To configure GPO in a Microsoft Entra Domain Services managed domain you must use Group Policy Management tools on a server that has been domain joined to the Microsoft Entra Domain Services managed domain. For more information, see [Administer Group Policy in a Microsoft Entra Domain Services managed domain](/entra/identity/domain-services/manage-group-policy).
-**Secure LDAP** - Microsoft Entra Domain Services provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default and to enable secure LDAP a certificate needs to be uploaded, in addition, the NSG that secures the VNet that Microsoft Entra Domain Services is deployed on to must allow port 636 connectivity to the Microsoft Entra Domain Services managed domains. For more information, see [Configure secure LDAP for a Microsoft Entra Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md).
+**Secure LDAP** - Microsoft Entra Domain Services provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default. To enable secure LDAP, a certificate needs to be uploaded; in addition, the NSG that secures the VNet that Microsoft Entra Domain Services is deployed onto must allow port 636 connectivity to the Microsoft Entra Domain Services managed domain. For more information, see [Configure secure LDAP for a Microsoft Entra Domain Services managed domain](/entra/identity/domain-services/tutorial-configure-ldaps).
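A minimal sketch of such an NSG rule (resource names and the source range are placeholders; scope the source to the clients that actually need secure LDAP):

```azurecli
# Allow LDAPS (TCP 636) into the Microsoft Entra Domain Services subnet.
# Resource group, NSG name, and source range are placeholders.
az network nsg rule create \
  --resource-group "rg-aadds" \
  --nsg-name "nsg-aadds" \
  --name "AllowLDAPS" \
  --priority 400 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes "10.1.0.0/24" \
  --destination-port-ranges 636
```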
-**Administration** - To perform administration duties on Microsoft Entra Domain Services (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Microsoft Entra DC Administrators group. Accounts that are members of this group can't directly sign-in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Microsoft Entra Domain Services managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Microsoft Entra Domain Services](../../active-directory-domain-services/administration-concepts.md).
+**Administration** - To perform administration duties on Microsoft Entra Domain Services (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Microsoft Entra DC Administrators group. Accounts that are members of this group can't sign in directly to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Microsoft Entra Domain Services managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Microsoft Entra Domain Services](/entra/identity/domain-services/administration-concepts).
**Password hashes** - For authentication with Microsoft Entra Domain Services to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. To ensure authentication with Microsoft Entra Domain Services works as expected, the following prerequisites must be met. * **Users synchronized with Microsoft Entra Connect (from AD DS)** - The legacy password hashes need to be synchronized from on-premises AD DS to Microsoft Entra ID.
-* **Users created in Microsoft Entra ID** - Need to reset their password for the correct hashes to be generated for usage with Microsoft Entra Domain Services. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md).
+* **Users created in Microsoft Entra ID** - Need to reset their password for the correct hashes to be generated for usage with Microsoft Entra Domain Services. For more information, see [Enable synchronization of password hashes](/entra/identity/domain-services/tutorial-configure-password-hash-sync).
-**Network** - Microsoft Entra Domain Services is deployed on to an Azure VNet so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Microsoft Entra Domain Services](../../active-directory-domain-services/network-considerations.md).
+**Network** - Microsoft Entra Domain Services is deployed onto an Azure VNet, so plan carefully to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Microsoft Entra Domain Services](/entra/identity/domain-services/network-considerations).
* Microsoft Entra Domain Services must be deployed in its own subnet: Don't use an existing subnet or a gateway subnet.
It's worth noting that if you need to change the type of synchronization from Al
* **Microsoft Entra Domain Services requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Microsoft Entra Domain Services from maintaining two domain controllers.
-* **VNet DNS Server** - As previously discussed about the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Microsoft Entra Domain Services managed domain have the correct DNS settings to resolve the Microsoft Entra Domain Services managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address and these DNS entries need to be the IP addresses of the Microsoft Entra Domain Services managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md).
+* **VNet DNS Server** - As previously discussed in the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Microsoft Entra Domain Services managed domain have the correct DNS settings to resolve the Microsoft Entra Domain Services managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address, and these DNS entries need to be the IP addresses of the Microsoft Entra Domain Services managed domain. For more information, see [Update DNS settings for the Azure virtual network](/entra/identity/domain-services/tutorial-create-instance).
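A minimal sketch of updating a VNet's DNS settings (names and IP addresses are placeholders; use the managed domain's actual DNS IP addresses):

```azurecli
# Point the VNet's DNS settings at the managed domain controllers (placeholders).
az network vnet update \
  --resource-group "rg-spoke" \
  --name "vnet-spoke" \
  --dns-servers 10.0.1.4 10.0.1.5

# Existing VMs pick up the new DNS servers after a restart or DHCP lease renewal.
```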
**Challenges** - The following list highlights key challenges with using this option for Identity Isolation.
Conditional Access: A key benefit of using Microsoft Entra ID for signing into A
**Challenges**: The list below highlights key challenges with using this option for identity isolation.
-* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](../../automation/update-management/overview.md) to manage patching and updates of these servers.
+* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](/azure/automation/update-management/overview) to manage patching and updates of these servers.
* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms such as Windows Integrated Authentication across these servers or services. If this is a requirement for the organization, then it's recommended that you explore the Standalone Active Directory Domain Services, or the Microsoft Entra Domain Services scenarios described in this section.
active-directory Secure Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/secure-single-tenant.md
By using Privileged Identity Management (PIM) you can define who in your organiz
>[!NOTE] >Using PIM requires a Microsoft Entra ID P2 license per user.
-If you must ensure that global administrators are unable to manage a specific resource, you must isolate that resource in a separate tenant with separate global administrators. This can be especially important for backups, see [multi-user authorization guidance](../../backup/multi-user-authorization.md) for examples of this.
+If you must ensure that global administrators are unable to manage a specific resource, you must isolate that resource in a separate tenant with separate global administrators. This can be especially important for backups; see [multi-user authorization guidance](/azure/backup/multi-user-authorization) for examples.
## Common usage
Another scenario for isolation within a single tenant could be separation betwee
Azure RBAC role assignments allow scoped administration of Azure resources. Similarly, Microsoft Entra ID allows granular management of Microsoft Entra ID trusting applications through multiple capabilities such as Conditional Access, user and group filtering, administrative unit assignments and application assignments.
-If you must ensure full isolation (including staging of organization-level configuration) of Microsoft 365 services, you need to choose a [multiple tenant isolation](../../backup/multi-user-authorization.md).
+If you must ensure full isolation (including staging of organization-level configuration) of Microsoft 365 services, you need to choose a [multiple tenant isolation](/azure/backup/multi-user-authorization) approach.
## Scoped management in a single tenant
Azure RBAC allows you to design an administration model with granular scopes and
* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Azure Cosmos DB database, without interfering with the test web app or any production resource.
-For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
+For more information, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles) and [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview).
This is a hierarchical structure, so the higher up in the hierarchy, the more scope, visibility, and impact there is on lower levels. Top-level scopes affect all Azure resources in the Microsoft Entra tenant boundary. This also means that permissions can be applied at multiple levels. The risk this introduces is that assigning roles higher up the hierarchy could provide more access lower down the scope than intended. [Microsoft Entra Permissions Management](https://www.microsoft.com/security/business/identity-access/microsoft-entra-permissions-management) (formerly CloudKnox) is a Microsoft product that provides visibility and remediation to help reduce the risk. A few details are as follows:
This is a hierarchical structure, so the higher up in the hierarchy, the more sc
* Global Administrators can [elevate access](https://aka.ms/AzureADSecuredAzure/12a) to all subscriptions and management groups.
-Both top-level scopes should be strictly monitored. It's important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](../../security/fundamentals/network-best-practices.md). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy.
+Both top-level scopes should be strictly monitored. It's important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](/azure/security/fundamentals/network-best-practices). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy.
Consider isolating sensitive or test resources according to [Azure landing zone conceptual architecture](/azure/cloud-adoption-framework/ready/landing-zone/). For example, the Identity subscription should be assigned to a separate management group, and all subscriptions for development purposes could be separated into a "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach).
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-applications.md
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
-* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
+* [Microsoft 365 Audit logs](/purview/audit-solutions-overview)
-* [Azure Key Vault logs](../../key-vault/general/logging.md)
+* [Azure Key Vault logs](/azure/key-vault/general/logging)
From the Azure portal, you can view the Microsoft Entra audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools, which allow more automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)** – enables intelligent security analytics at the enterprise level with security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where there are Sigma templates for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../../azure-monitor/overview.md)** – automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)** – automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) integrated with a SIEM**- [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about) integrated with a SIEM** - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/howto-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
-* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
+* **[Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps)** – discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
* **[Securing workload identities with Identity Protection Preview](../identity-protection/concept-workload-identity-risk.md)** - detects risk on workload identities across sign-in behavior and offline indicators of compromise.
Applications should follow the principle of least privilege. Investigate applica
| Application permissions (app roles) for other APIs are granted |Medium| Microsoft Entra audit logs| "Add app role assignment to service principal", <br>-where-<br>Target(s) identifies any other API.| Alert as in the preceding row. A query sketch follows this table.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Highly privileged delegated permissions are granted on behalf of all users |High| Microsoft Entra audit logs| "Add delegated permission grant", where Target(s) identifies an API with sensitive data (such as Microsoft Graph), <br> DelegatedPermissionGrant.Scope includes high-privilege permissions, <br>-and-<br>DelegatedPermissionGrant.ConsentType is "AllPrincipals".| Alert as in the preceding row.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ServicePrincipalAssignedAppRoleWithSensitiveAccess.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/AzureADRoleManagementPermissionGrant.yaml)<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/SuspiciousOAuthApp_OfflineAccess.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
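As a minimal sketch of the first detection in the preceding table (assuming Microsoft Entra audit logs are routed to a Log Analytics workspace, where they land in the `AuditLogs` table):

```kusto
// Sketch: surface application permission (app role) grants to service principals.
// Assumes Microsoft Entra audit logs stream to the AuditLogs table in Log Analytics.
AuditLogs
| where OperationName == "Add app role assignment to service principal"
| extend TargetApi = tostring(TargetResources[0].displayName)   // the API being granted
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)   // who performed the grant
| project TimeGenerated, OperationName, TargetApi, Actor, Result
```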
-For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/cloud-app-security/investigate-risky-oauth).
+For more information on monitoring app permissions, see this tutorial: [Investigate and remediate risky OAuth apps](/defender-cloud-apps/investigate-risky-oauth).
### Azure Key Vault
Use Azure Key Vault to store your tenant's secrets. We recommend you pay atten
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
|-|-|-|-|-|
-| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| How and when your Key Vaults are accessed and by whom| Medium| [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)| Resource type: Key Vaults| Look for: any access to Key Vault outside regular processes and hours, any changes to Key Vault ACL.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AzureDiagnostics/AzureKeyVaultAccessManipulation.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto-logging.md?tabs=azure-cli). See [how and when your Key Vaults are accessed](../../key-vault/general/logging.md?tabs=Vault), and [configure alerts](../../key-vault/general/alert.md) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](../../key-vault/general/event-grid-overview.md) notification, if health is affected. In addition, setting up [monitoring](../../key-vault/general/alert.md) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) also has some [example queries](../../azure-monitor/logs/queries.md) for Azure Key Vault that can be accessed after selecting your Key Vault and then under "Monitoring" selecting "Logs".
+After you set up Azure Key Vault, [enable logging](/azure/key-vault/general/howto-logging?tabs=azure-cli). See [how and when your Key Vaults are accessed](/azure/key-vault/general/logging?tabs=Vault), and [configure alerts](/azure/key-vault/general/alert) on Key Vault to notify assigned users or distribution lists via email, phone, text, or [Event Grid](/azure/key-vault/general/event-grid-overview) notification, if health is affected. In addition, setting up [monitoring](/azure/key-vault/general/alert) with Key Vault insights gives you a snapshot of Key Vault requests, performance, failures, and latency. [Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) also has some [example queries](/azure/azure-monitor/logs/queries) for Azure Key Vault that can be accessed after selecting your Key Vault and then under "Monitoring" selecting "Logs".
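A minimal sketch of such a query, assuming Key Vault diagnostic logs flow to the `AzureDiagnostics` table and using an illustrative 06:00-20:00 UTC baseline window (tune the hours, and note that column names vary by diagnostic schema):

```kusto
// Sketch: flag Key Vault operations outside an assumed business-hours window.
// Assumes Key Vault diagnostics are sent to the AzureDiagnostics table.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where hourofday(TimeGenerated) !between (6 .. 20)   // example window; tune to your baseline
| project TimeGenerated, Resource, OperationName, CallerIPAddress, ResultSignature
```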
### End-user consent
After you set up Azure Key Vault, [enable logging](../../key-vault/general/howto
|-|-|-|-|-|
| End-user consent to application| Low| Microsoft Entra audit logs| Activity: Consent to application / ConsentContext.IsAdminConsent = false| Look for: high profile or highly privileged accounts, app requests high-risk permissions, apps with suspicious names, for example generic, misspelled, etc. A query sketch follows this table.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/AuditLogs/ConsentToApplicationDiscovery.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
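A minimal query sketch for the row above, assuming audit logs in the `AuditLogs` table; matching the admin-consent flag by string over the serialized modified properties is a simplification:

```kusto
// Sketch: list end-user (non-admin) consent grants for review.
AuditLogs
| where OperationName == "Consent to application"
| extend Props = tostring(TargetResources[0].modifiedProperties)
| where Props contains "ConsentContext.IsAdminConsent" and Props contains "False"
| project TimeGenerated, App = tostring(TargetResources[0].displayName),
          Actor = tostring(InitiatedBy.user.userPrincipalName)
```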
-The act of consenting to an application isn't malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](../../security/fundamentals/steps-secure-identity.md).
+The act of consenting to an application isn't malicious. However, investigate new end-user consent grants looking for suspicious applications. You can [restrict user consent operations](/azure/security/fundamentals/steps-secure-identity).
For more information on consent operations, see the following resources:
For more information on consent operations, see the following resources:
* [Detect and Remediate Illicit Consent Grants - Office 365](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants)
-* [Incident response playbook - App consent grant investigation](/security/compass/incident-response-playbook-app-consent)
+* [Incident response playbook - App consent grant investigation](/security/operations/incident-response-playbook-app-consent)
### End user stopped due to risk-based consent
Alert when these changes are detected outside approved change management procedu
* GitHub Microsoft Entra toolkit - [https://github.com/microsoft/AzureADToolkit](https://github.com/microsoft/AzureADToolkit)
-* Azure Key Vault security overview and security guidance - [Azure Key Vault security overview](../../key-vault/general/security-features.md)
+* Azure Key Vault security overview and security guidance - [Azure Key Vault security overview](/azure/key-vault/general/security-features)
* Solorigate risk information and tools - [Microsoft Entra workbook to help you assess Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)
-* OAuth attack detection guidance - [Unusual addition of credentials to an OAuth app](/cloud-app-security/investigate-anomaly-alerts)
+* OAuth attack detection guidance - [Unusual addition of credentials to an OAuth app](/defender-cloud-apps/investigate-anomaly-alerts)
-* Microsoft Entra monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](../..//azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
+* Microsoft Entra monitoring configuration information for SIEMs - [Partner tools with Azure Monitor integration](/azure/azure-monitor/essentials/stream-monitoring-data-event-hubs)
## Next steps
active-directory Security Operations Consumer Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-consumer-accounts.md
Evaluate the following list:
Use log files to investigate and monitor. See the following articles for more:
* [Audit logs in Microsoft Entra ID](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs in Microsoft Entra ID (preview)](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs in Microsoft Entra ID (preview)](../reports-monitoring/concept-sign-ins.md)
* [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)
### Audit logs and automation tools
Use log files to investigate and monitor. See the following articles for more:
From the Azure portal, you can view Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. Use the Azure portal to integrate Microsoft Entra logs with other tools to automate monitoring and alerting:
* **Microsoft Sentinel** – security analytics with security information and event management (SIEM) capabilities
- * [What is Microsoft Sentinel?](../../sentinel/overview.md)
+ * [What is Microsoft Sentinel?](/azure/sentinel/overview)
* **Sigma rules** - an open standard for writing rules and templates that automated management tools can use to parse log files. If there are Sigma templates for our recommended search criteria, we added a link to the Sigma repo. Microsoft doesn't write, test, or manage Sigma templates. The repo and templates are created, and collected, by the IT security community.
  * [SigmaHQ/sigma](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)
* **Azure Monitor** – automated monitoring and alerting of various conditions. Create or use workbooks to combine data from different sources.
- * [Azure Monitor overview](../../azure-monitor/overview.md)
+ * [Azure Monitor overview](/azure/azure-monitor/overview)
* **Azure Event Hubs integrated with a SIEM** - integrate Microsoft Entra logs with SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic with Azure Event Hubs
- * [Azure Event Hubs-A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md)
- * [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+ * [Azure Event Hubs-A big data streaming platform and event ingestion service](/azure/event-hubs/event-hubs-about)
+ * [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/howto-stream-logs-to-event-hub.md)
* **Microsoft Defender for Cloud Apps** – discover and manage apps, govern across apps and resources, and check cloud app compliance
  * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps)
* **Identity Protection** - detect risk on workload identities across sign-in behavior and offline indicators of compromise
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-devices.md
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault logs](../..//key-vault/general/logging.md?tabs=Vault)
+* [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)
From the Azure portal, you can view the Microsoft Entra audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../..//azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md) -integrated with a SIEM**- [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about) integrated with a SIEM** - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/howto-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
You can also use [Microsoft Intune to set and monitor device compliance policies
It might not be possible to block access to all cloud and software-as-a-service applications with Conditional Access policies requiring compliant devices.
-[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Microsoft Entra ID can [integrate with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a device's compliance status.
+[Mobile device management](/windows/client-management/mdm/) (MDM) helps you keep Windows 10 devices compliant. With Windows version 1809, we released a [security baseline](/windows/client-management/mdm/) of policies. Microsoft Entra ID can [integrate with MDM](/windows/client-management/azure-active-directory-integration-with-mdm) to enforce device compliance with corporate policies, and can report a device's compliance status.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
Stale devices include devices that haven't signed in for a specified time period
## BitLocker key retrieval
-Attackers who have compromised a user's device may retrieve the [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-device-encryption-overview-windows-10) keys in Microsoft Entra ID. It's uncommon for users to retrieve keys, and should be monitored and investigated.
+Attackers who have compromised a user's device may retrieve the [BitLocker](/windows/security/operating-system-security/data-protection/bitlocker/bitlocker-device-encryption-overview-windows-10) keys in Microsoft Entra ID. It's uncommon for users to retrieve keys, so key retrieval should be monitored and investigated.
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
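A minimal query sketch for monitoring these retrievals, assuming Microsoft Entra audit logs in the `AuditLogs` table; the operation name shown is the commonly observed audit activity for key retrieval, so confirm the exact string in your tenant before alerting on it:

```kusto
// Sketch: alert on BitLocker key retrievals, which should be rare.
// Operation name is an assumption; verify it against your tenant's audit log.
AuditLogs
| where OperationName == "Read BitLocker key"
| project TimeGenerated,
          Actor = tostring(InitiatedBy.user.userPrincipalName),
          ActorIP = tostring(InitiatedBy.user.ipAddress),
          Target = tostring(TargetResources[0].displayName)
```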
active-directory Security Operations Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-infrastructure.md
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
+* [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)
From the Azure portal, you can view the Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)** – Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../../azure-monitor/overview.md)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)** – Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about)** integrated with a SIEM - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/howto-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
The following are links to specific articles that focus on monitoring and alerti
* [Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path) - Detection techniques to help identify when non-sensitive accounts are used to gain access to sensitive network accounts.
-* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/working-with-suspicious-activities) - This article describes how to review and manage alerts after they're logged.
+* [Working with security alerts in Microsoft Defender for Identity](/defender-for-identity/manage-security-alerts) - This article describes how to review and manage alerts after they're logged.
The following are specific things to look for:
To configure monitoring for Application Proxy, see [Troubleshoot Application Pro
| - | - | - | - | - |
| Kerberos errors| Medium | Various tools| Medium | Kerberos authentication error guidance under Kerberos errors on [Troubleshoot Application Proxy problems and error messages](../app-proxy/application-proxy-troubleshoot.md). |
| DC security issues| High| DC Security Audit logs| Event ID 4742(S): A computer account was changed<br>-and-<br>Flag – Trusted for Delegation<br>-or-<br>Flag – Trusted to Authenticate for Delegation| Investigate any flag change. |
-| Pass-the-ticket like attacks| High| | | Follow guidance in:<br>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-alerts)<br>[Tutorial: Compromised credential alerts](/defender-for-identity/compromised-credentials-alerts)<br>[Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<br>[Understanding entity profiles](/defender-for-identity/entity-profiles) |
+| Pass-the-ticket like attacks| High| | | Follow guidance in:<br>[Security principal reconnaissance (LDAP) (external ID 2038)](/defender-for-identity/reconnaissance-discovery-alerts)<br>[Tutorial: Compromised credential alerts](/defender-for-identity/credential-access-alerts)<br>[Understand and use Lateral Movement Paths with Microsoft Defender for Identity](/defender-for-identity/use-case-lateral-movement-path)<br>[Understanding entity profiles](/defender-for-identity/investigate-assets) |
### Legacy authentication settings
Monitoring single sign-on and Kerberos activity can help you detect general cred
| - | - | - | - | - |
| Errors associated with SSO and Kerberos validation failures|Medium | Microsoft Entra sign-in log| | Single sign-on list of error codes at [Single sign-on](../hybrid/connect/tshoot-connect-sso.md). |
| Query for troubleshooting errors|Medium | PowerShell| See the query following this table.| Check in each forest with SSO enabled. |
-| Kerberos-related events|High | Microsoft Defender for Identity monitoring| | Review guidance available at [Microsoft Defender for Identity Lateral Movement Paths (LMPs)](/defender-for-identity/use-case-lateral-movement-path) |
+| Kerberos-related events|High | Microsoft Defender for Identity monitoring| | Review guidance available at [Microsoft Defender for Identity Lateral Movement Paths (LMPs)](/defender-for-identity/understand-lateral-movement-paths) |
```kusto <QueryList>
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-introduction.md
Microsoft has many products and services that enable you to customize your IT en
* [Microsoft Defender for Identity architecture](/defender-for-identity/architecture)
* [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2)
- * [Azure security baseline for Microsoft Defender for Identity](/defender-for-identity/security-baseline)
+ * [Azure security baseline for Microsoft Defender for Identity](/security/benchmark/azure/baselines/defender-for-identity-security-baseline)
* [Monitoring Active Directory for Signs of Compromise](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise)
* Cloud-based Azure environments
- * [Monitor sign-ins with the Microsoft Entra sign-in log](../reports-monitoring/concept-all-sign-ins.md)
+ * [Monitor sign-ins with the Microsoft Entra sign-in log](../reports-monitoring/concept-sign-ins.md)
* [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md)
* [Investigate risk with Microsoft Entra ID Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
- * [Connect Microsoft Entra ID Protection data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md)
+ * [Connect Microsoft Entra ID Protection data to Microsoft Sentinel](/azure/sentinel/data-connectors/azure-active-directory-identity-protection)
* Active Directory Domain Services (AD DS)
Microsoft has many products and services that enable you to customize your IT en
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
+* [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)
From the Azure portal, you can view the Microsoft Entra audit logs. Download logs as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)** - Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../../azure-monitor/overview.md)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)** - Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Microsoft Entra logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about)** integrated with a SIEM. Microsoft Entra logs can be integrated to other SIEMs such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/howto-stream-logs-to-event-hub.md).
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** - Enables you to discover and manage apps, govern across apps and resources, and check the compliance of your cloud apps.
As part of an Azure cloud-based environment, the following items should be basel
* **Graph API** - The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-* **Domain Service** - Microsoft Entra Domain Services (AD DS) provides managed domain services such as domain join, group policy. For more information, see [What is Microsoft Entra Domain Services](../../active-directory-domain-services/overview.md).
+* **Domain Service** - Microsoft Entra Domain Services provides managed domain services such as domain join and Group Policy. For more information, see [What is Microsoft Entra Domain Services](/entra/identity/domain-services/overview).
-* **Azure Resource Manager** - Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](../../azure-resource-manager/management/overview.md).
+* **Azure Resource Manager** - Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. For more information, see [What is Azure Resource Manager](/azure/azure-resource-manager/management/overview).
* **Managed identity** - Managed identities eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Microsoft Entra authentication. For more information, see [What are managed identities for Azure resources](../managed-identities-azure-resources/overview.md).
As part of an Azure cloud-based environment, the following items should be basel
* **Entitlement management** - Microsoft Entra entitlement management is an [identity governance](../governance/identity-governance-overview.md) feature. Organizations can manage identity and access lifecycle at scale, by automating access request workflows, access assignments, reviews, and expiration. For more information, see [What is Microsoft Entra entitlement management](../governance/entitlement-management-overview.md).
-* **Activity logs** - The Activity log is an Azure [platform log](../../azure-monitor/essentials/platform-logs-overview.md) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md).
+* **Activity logs** - The Activity log is an Azure [platform log](/azure/azure-monitor/essentials/platform-logs-overview) that provides insight into subscription-level events. This log includes such information as when a resource is modified or when a virtual machine is started. For more information, see [Azure Activity log](/azure/azure-monitor/essentials/activity-log).
* **Self-service password reset service** - Microsoft Entra self-service password reset (SSPR) gives users the ability to change or reset their password. The administrator or help desk isn't required. For more information, see [How it works: Microsoft Entra self-service password reset](../authentication/concept-sspr-howitworks.md).
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-privileged-accounts.md
Microsoft Entra ID uses identity and access management (IAM) as the control plan
You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure services, prevention and response are the joint responsibilities of Microsoft as the cloud service provider and you as the customer.
-* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md).
+* For more information on the shared responsibility model, see [Shared responsibility in the cloud](/azure/security/fundamentals/shared-responsibility).
* For more information on securing access for privileged users, see [Securing privileged access for hybrid and cloud deployments in Microsoft Entra ID](../roles/security-planning.md).
* For a wide range of videos, how-to guides, and content of key concepts for privileged identity, see [Privileged Identity Management documentation](../privileged-identity-management/index.yml).
The log files you use for investigation and monitoring are:
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault insights](../../key-vault/key-vault-insights-overview.md)
+* [Azure Key Vault insights](/azure/key-vault/key-vault-insights-overview)
From the Azure portal, you can view the Microsoft Entra audit logs and download as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)**. Enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we have added a link to the Sigma repo. The Sigma templates are not written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../../azure-monitor/overview.md)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)**. Enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM. Enables Microsoft Entra logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md).
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about)** integrated with a SIEM. Enables Microsoft Entra logs to be pushed to other SIEMs such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration. For more information, see [Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/howto-stream-logs-to-event-hub.md).
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)**. Enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
You can monitor privileged account sign-in events in the Microsoft Entra sign-in
| - | - | - | - | - |
| Sign-in failure, bad password threshold | High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 50126 | Define a baseline threshold and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. A query sketch follows this table.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PrivilegedAccountsSigninFailureSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Failure because of Conditional Access requirement |High | Microsoft Entra sign-in log | Status = Failure<br>-and-<br>error code = 53003<br>-and-<br>Failure reason = Blocked by Conditional Access | This event can be an indication an attacker is trying to get into the account.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/UserAccounts-CABlockedSigninSpikes.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
-| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
+| Privileged accounts that don't follow naming policy| | Azure subscription | [List Azure role assignments using the Azure portal](/azure/role-based-access-control/role-assignments-list-portal)| List role assignments for subscriptions and alert where the sign-in name doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
| Interrupt | High, medium | Microsoft Entra Sign-ins | Status = Interrupted<br>-and-<br>error code = 50074<br>-and-<br>Failure reason = Strong auth required<br>Status = Interrupted<br>-and-<br>Error code = 500121<br>Failure reason = Authentication failed during strong authentication request | This event can be an indication an attacker has the password for the account but can't pass the multi-factor authentication challenge.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/AADPrivilegedAccountsFailedMFA.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Privileged accounts that don't follow naming policy| High | Microsoft Entra directory | [List Microsoft Entra role assignments](../roles/view-assignments.md)| List role assignments for Microsoft Entra roles and alert where the UPN doesn't match your organization's format. An example is the use of ADM_ as a prefix. |
| Discover privileged accounts not registered for multi-factor authentication | High | Microsoft Graph API| Query for IsMFARegistered eq false for admin accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&preserve-view=true&tabs=http) | Audit and investigate to determine if the event is intentional or an oversight. |
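A minimal query sketch for the bad-password-threshold row in the preceding table, assuming sign-in logs in the `SigninLogs` table; the account list and threshold are placeholders to tune for your environment:

```kusto
// Sketch: count bad-password failures (error 50126) for privileged accounts
// and surface spikes past a tunable threshold.
let privilegedAccounts = dynamic(["admin1@contoso.com", "admin2@contoso.com"]); // placeholders
SigninLogs
| where UserPrincipalName in~ (privilegedAccounts)
| where ResultType == "50126"
| summarize Failures = count() by UserPrincipalName, bin(TimeGenerated, 1h)
| where Failures >= 10   // example baseline; adjust to your organization
```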
Monitor all completed and attempted changes by a privileged account. This data e
Privileged accounts that have been assigned permissions in Microsoft Entra Domain Services can perform tasks for Microsoft Entra Domain Services that affect the security posture of your Azure-hosted virtual machines that use Microsoft Entra Domain Services. Enable security audits on virtual machines and monitor the logs. For more information on enabling Microsoft Entra Domain Services audits and for a list of sensitive privileges, see the following resources:
-* [Enable security audits for Microsoft Entra Domain Services](../../active-directory-domain-services/security-audit-events.md)
+* [Enable security audits for Microsoft Entra Domain Services](/entra/identity/domain-services/security-audit-events)
* [Audit Sensitive Privilege Use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use)
| What to monitor | Risk level | Where | Filter/subfilter | Notes |
| - | - | - | - | - |
| Attempted and completed changes | High | Microsoft Entra audit logs | Date and time<br>-and-<br>Service<br>-and-<br>Category and name of the activity (what)<br>-and-<br>Status = Success or failure<br>-and-<br>Target<br>-and-<br>Initiator or actor (who) | Any unplanned changes should be alerted on immediately. These logs should be retained to help with any investigation. Any tenant-level changes that would lower the security posture of your tenant should be investigated immediately (link out to Infra doc). An example is excluding accounts from multifactor authentication or Conditional Access. Alert on any additions or changes to applications. See [Microsoft Entra security operations guide for Applications](security-operations-applications.md). |
| **Example**<br>Attempted or completed change to high-value apps or services | High | Audit log | Service<br>-and-<br>Category and name of the activity | Date and time, Service, Category and name of the activity, Status = Success or failure, Target, Initiator or actor (who) |
-| Privileged changes in Microsoft Entra Domain Services | High | Microsoft Entra Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Microsoft Entra Domain Services](../../active-directory-domain-services/security-audit-events.md)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
+| Privileged changes in Microsoft Entra Domain Services | High | Microsoft Entra Domain Services | Look for event [4673](/windows/security/threat-protection/auditing/event-4673) | [Enable security audits for Microsoft Entra Domain Services](/entra/identity/domain-services/security-audit-events)<br>For a list of all privileged events, see [Audit Sensitive Privilege use](/windows/security/threat-protection/auditing/audit-sensitive-privilege-use). |
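A minimal query sketch for the Domain Services row, assuming its security audit events stream to the workspace's `SecurityEvent` table (an assumption; your diagnostic configuration may route them elsewhere):

```kusto
// Sketch: surface sensitive privilege use (event 4673) from
// Microsoft Entra Domain Services security audits.
SecurityEvent
| where EventID == 4673
| project TimeGenerated, Computer, Account, Process, Activity
```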
## Changes to privileged accounts
You can monitor privileged account changes by using Microsoft Entra audit logs a
| Elevation not occurring on SAW/PAW| High| Microsoft Entra sign-in logs| Device ID <br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| If this change is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately because it could indicate an attacker is trying to use the account.<br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log tab <br>Directory Activity tab <br> Operations Name = Assigns the caller to user access admin <br> -and- <br> Event category = Administrative <br> -and-<br>Status = Succeeded, start, fail<br>-and-<br>Event initiated by| This change should be investigated immediately if it isn't planned. This setting could allow an attacker access to Azure subscriptions in your environment. |
-For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations by using information available in the Microsoft Entra logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation.
+For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](/azure/role-based-access-control/elevate-access-global-admin). For information on monitoring elevations by using information available in the Microsoft Entra logs, see [Azure Activity log](/azure/azure-monitor/essentials/activity-log), which is part of the Azure Monitor documentation.
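A minimal query sketch for detecting elevation, assuming the Azure Activity log is connected to a Log Analytics workspace (`AzureActivity` table); the operation value shown is the standard elevate-access action, but verify it against your own logs:

```kusto
// Sketch: detect Global Administrator elevation to User Access Administrator
// at root scope via the Azure Activity log.
AzureActivity
| where OperationNameValue =~ "Microsoft.Authorization/elevateAccess/action"
| project TimeGenerated, Caller, CallerIpAddress, ActivityStatusValue
```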
For information about configuring alerts for Azure roles, see [Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md).
active-directory Security Operations Privileged Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-privileged-identity-management.md
Traditionally, organizational security has focused on the entry and exit points
You're entirely responsible for all layers of security for your on-premises IT environment. When you use Azure cloud services, prevention and response are joint responsibilities of Microsoft as the cloud service provider and you as the customer.
-* For more information on the shared responsibility model, see [Shared responsibility in the cloud](../../security/fundamentals/shared-responsibility.md).
+* For more information on the shared responsibility model, see [Shared responsibility in the cloud](/azure/security/fundamentals/shared-responsibility).
* For more information on securing access for privileged users, see [Securing Privileged access for hybrid and cloud deployments in Microsoft Entra ID](../roles/security-planning.md).
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
+* [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)
In the Azure portal, view the Microsoft Entra audit logs and download them as comma-separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools to automate monitoring and alerting:
-* [**Microsoft Sentinel**](../../sentinel/overview.md) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* [**Microsoft Sentinel**](/azure/sentinel/overview) – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* [**Azure Monitor**](../../azure-monitor/overview.md) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* [**Azure Monitor**](/azure/azure-monitor/overview) – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* [**Azure Event Hubs**](../../event-hubs/event-hubs-about.md) **integrated with a SIEM**- [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
+* [**Azure Event Hubs**](/azure/event-hubs/event-hubs-about) **integrated with a SIEM** - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/howto-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar, and Sumo Logic via the Azure Event Hubs integration.
* [**Microsoft Defender for Cloud Apps**](/cloud-app-security/what-is-cloud-app-security) – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
active-directory Security Operations User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/security-operations-user-accounts.md
The log files you use for investigation and monitoring are:
* [Microsoft Entra audit logs](../reports-monitoring/concept-audit-logs.md)
-* [Sign-in logs](../reports-monitoring/concept-all-sign-ins.md)
+* [Sign-in logs](../reports-monitoring/concept-sign-ins.md)
* [Microsoft 365 Audit logs](/microsoft-365/compliance/auditing-solutions-overview)
-* [Azure Key Vault logs](../../key-vault/general/logging.md?tabs=Vault)
+* [Azure Key Vault logs](/azure/key-vault/general/logging?tabs=Vault)
* [Risky Users log](../identity-protection/howto-identity-protection-investigate-risk.md)
The log files you use for investigation and monitoring are:
From the Azure portal, you can view the Microsoft Entra audit logs and download as comma separated value (CSV) or JavaScript Object Notation (JSON) files. The Azure portal has several ways to integrate Microsoft Entra logs with other tools that allow for greater automation of monitoring and alerting:
-* **[Microsoft Sentinel](../../sentinel/overview.md)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
+* **[Microsoft Sentinel](/azure/sentinel/overview)** – enables intelligent security analytics at the enterprise level by providing security information and event management (SIEM) capabilities.
* **[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure)** - Sigma is an evolving open standard for writing rules and templates that automated management tools can use to parse log files. Where Sigma templates exist for our recommended search criteria, we've added a link to the Sigma repo. The Sigma templates aren't written, tested, and managed by Microsoft. Rather, the repo and templates are created and collected by the worldwide IT security community.
-* **[Azure Monitor](../../azure-monitor/overview.md)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
+* **[Azure Monitor](/azure/azure-monitor/overview)** – enables automated monitoring and alerting of various conditions. Can create or use workbooks to combine data from different sources.
-* **[Azure Event Hubs](../../event-hubs/event-hubs-about.md)** integrated with a SIEM - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
+* **[Azure Event Hubs](/azure/event-hubs/event-hubs-about)** integrated with a SIEM - [Microsoft Entra logs can be integrated to other SIEMs](../reports-monitoring/howto-stream-logs-to-event-hub.md) such as Splunk, ArcSight, QRadar and Sumo Logic via the Azure Event Hubs integration.
* **[Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security)** – enables you to discover and manage apps, govern across apps and resources, and check your cloud apps' compliance.
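
Before any of these integrations are in place, you can also pull the same events ad hoc with Microsoft Graph PowerShell. A minimal sketch, assuming the Microsoft.Graph.Reports module is installed and the signed-in account has `AuditLog.Read.All` consent:

```powershell
# Pull "Add user" audit events from the last 7 days for offline review.
Connect-MgGraph -Scopes 'AuditLog.Read.All'
$since = (Get-Date).AddDays(-7).ToUniversalTime().ToString('yyyy-MM-ddTHH:mm:ssZ')
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Add user' and activityDateTime ge $since" -All |
    Select-Object ActivityDateTime, ActivityDisplayName, Result |
    Export-Csv -Path .\add-user-events.csv -NoTypeInformation
```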
Frequently, user accounts have an attribute that identifies a real user. For exa
| - | - | - | - | - |
| User accounts that don't have expected attributes defined.| Low| Microsoft Entra audit logs| Activity: Add user<br>Status = success| Look for accounts with your standard attributes either null or in the wrong format. For example, EmployeeID <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/Useraccountcreatedwithoutexpectedattributesdefined.yaml) |
| User accounts created using incorrect naming format.| Low| Microsoft Entra audit logs| Activity: Add user<br>Status = success| Look for accounts with a UPN that does not follow your naming policy. <br> [Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAccountCreatedUsingIncorrectNamingFormat.yaml) |
-| Privileged accounts that don't follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-list-portal.md)| List role assignments for subscriptions and alert where sign-in name does not match your organization's format. For example, ADM_ as a prefix. |
+| Privileged accounts that don't follow naming policy.| High| Azure Subscription| [List Azure role assignments using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-list-portal)| List role assignments for subscriptions and alert where sign-in name does not match your organization's format. For example, ADM_ as a prefix. |
| Privileged accounts that don't follow naming policy.| High| Microsoft Entra directory| [List Microsoft Entra role assignments](../roles/view-assignments.md)| List role assignments for Microsoft Entra roles and alert where the UPN doesn't match your organization's format. For example, ADM_ as a prefix. |

For more information on parsing, see:
-* Microsoft Entra audit logs - [Parse text data in Azure Monitor Logs](../../azure-monitor/logs/parse-text.md)
+* Microsoft Entra audit logs - [Parse text data in Azure Monitor Logs](/azure/azure-monitor/logs/parse-text)
-* Azure Subscriptions - [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md)
+* Azure Subscriptions - [List Azure role assignments using Azure PowerShell](/azure/role-based-access-control/role-assignments-list-powershell)
* Microsoft Entra ID - [List Microsoft Entra role assignments](../roles/view-assignments.md)
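
As an illustration of the subscription check in the table above, the following sketch lists Azure role assignments whose sign-in name lacks an expected admin prefix. `ADM_` is the example prefix from the table, and the filter is an assumption to adapt to your own naming policy:

```powershell
# Flag subscription role assignments whose sign-in name doesn't use the ADM_ prefix.
# Narrow by RoleDefinitionName if you only care about privileged roles.
Connect-AzAccount
Get-AzRoleAssignment |
    Where-Object { $_.SignInName -and $_.SignInName -notlike 'ADM_*' } |
    Select-Object SignInName, RoleDefinitionName, Scope
```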
For this risk area, we recommend you monitor standard user accounts and privileg
### How to detect
-You use Azure Identity Protection and the Microsoft Entra sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine:
+You use Microsoft Entra ID Protection and the Microsoft Entra sign-in logs to help discover threats indicated by unusual sign-in characteristics. Information about Identity Protection is available at [What is Identity Protection](../identity-protection/overview-identity-protection.md). You can also replicate the data to Azure Monitor or a SIEM for monitoring and alerting purposes. To define normal for your environment and to set a baseline, determine:
* the parameters you consider normal for your user base.
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-govern-on-premises.md
Consider the following restrictions, although some might not be relevant to your
* Learn more: [Set-ADAccountExpiration](/powershell/module/activedirectory/set-adaccountexpiration)
* See, [Set-ADUser (Active Directory)](/powershell/module/activedirectory/set-aduser)
* Password policy requirements
- * See, [Password and account lockout policies on Microsoft Entra Domain Services managed domains](../../active-directory-domain-services/password-policy.md)
+ * See, [Password and account lockout policies on Microsoft Entra Domain Services managed domains](/entra/identity/domain-services/password-policy)
* Create accounts in an organizational unit location that ensures only some users will manage it
  * See, [Delegating Administration of Account OUs and Resource OUs](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous)
* Set up and collect auditing that detects service account changes:
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md
After the target system accepts the token for authentication, it supports mechan
Azure control plane operations are managed by Azure Resource Manager and use Azure role-based access control (Azure RBAC). In the data plane, target systems have authorization mechanisms. Azure Storage supports Azure RBAC on the data plane. For example, applications using Azure App Services can read data from Azure Storage, and applications using Azure Kubernetes Service can read secrets stored in Azure Key Vault.

Learn more:
-* [What is Azure Resource Manager?](../../azure-resource-manager/management/overview.md)
-* [What is Azure role-based Azure RBAC?](../../role-based-access-control/overview.md)
-* [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md)
+* [What is Azure Resource Manager?](/azure/azure-resource-manager/management/overview)
+* [What is Azure role-based Azure RBAC?](/azure/role-based-access-control/overview)
+* [Azure control plane and data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane)
* [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md)

## System-assigned and user-assigned managed identities
To assess managed identity security:
`Get-AzureADGroupMember -ObjectId <String> [-All <Boolean>] [-Top <Int32>] [<CommonParameters>]`

* Confirm what resources the managed identity accesses
- * See, [List Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md).
+ * See, [List Azure role assignments using Azure PowerShell](/azure/role-based-access-control/role-assignments-list-powershell).
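
A minimal sketch of that check with the Az.Resources module; the object ID is a placeholder for the managed identity's service principal:

```powershell
# List the Azure role assignments held by a managed identity.
Connect-AzAccount
$objectId = '00000000-0000-0000-0000-000000000000'  # placeholder: the identity's service principal object ID
Get-AzRoleAssignment -ObjectId $objectId |
    Select-Object RoleDefinitionName, Scope
```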
## Move to managed identities
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md
Because certificates are more secure, it's recommended you use them, when possib
For more information on Azure Key Vault and how to use it for certificate and secret management, see:
-* [About Azure Key Vault](../../key-vault/general/overview.md)
-* [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md)
+* [About Azure Key Vault](/azure/key-vault/general/overview)
+* [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy)
### Challenges and mitigations
When using Microsoft Graph, check the API documentation. Ensure the permission t
Learn more:
-* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet)
+* [How to use managed identities for App Service and Azure Functions](/azure/app-service/overview-managed-identity?tabs=dotnet)
* [Create a Microsoft Entra application and service principal that can access resources](../develop/howto-create-service-principal-portal.md)
* [Use Azure PowerShell to create a service principal with a certificate](../develop/howto-authenticate-service-principal-powershell.md)
Conditional Access:
Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy).
active-directory Sync Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/sync-directory.md
Microsoft designed [Microsoft Entra Connect cloud sync](../hybrid/cloud-sync/wha
Explore the following resources to learn more about directory synchronization with Microsoft Entra ID.

* [What is identity provisioning with Microsoft Entra ID?](../hybrid/what-is-provisioning.md) Provisioning is the process of creating an object based on certain conditions, keeping the object up-to-date, and deleting the object when conditions are no longer met. On-premises provisioning involves provisioning from on-premises sources (like Active Directory) to Microsoft Entra ID.
-* [Hybrid Identity: Directory integration tools comparison](../hybrid/connect/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Microsoft Entra Connect Sync and Microsoft Entra Connect cloud provisioning.
+* [Hybrid Identity: Directory integration tools comparison](../hybrid/index.yml) describes differences between Microsoft Entra Connect Sync and Microsoft Entra Connect cloud provisioning.
* [Microsoft Entra Connect and Microsoft Entra Connect Health installation roadmap](../hybrid/connect/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps.

## Next steps
active-directory Sync Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/sync-ldap.md
Use LDAP synchronization when you need to synchronize identity data between your
Explore the following resources to learn more about LDAP synchronization with Microsoft Entra ID.
-* [Hybrid Identity: Directory integration tools comparison](../hybrid/connect/plan-hybrid-identity-design-considerations-tools-comparison.md) describes differences between Microsoft Entra Connect Sync and Microsoft Entra Connect cloud provisioning.
+* [Hybrid Identity: Directory integration tools comparison](../hybrid/index.yml) describes differences between Microsoft Entra Connect Sync and Microsoft Entra Connect cloud provisioning.
* [Microsoft Entra Connect and Microsoft Entra Connect Health installation roadmap](../hybrid/connect/how-to-connect-install-roadmap.md) provides detailed installation and configuration steps.
* The [Generic LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap) enables you to integrate the synchronization service with an LDAP v3 server.
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Previously updated : 09/14/2023 Last updated : 10/17/2023
To get started with passwordless sign-in, see [Enable passwordless sign-in with
The Authenticator app can help prevent unauthorized access to accounts and stop fraudulent transactions by pushing a notification to your smartphone or tablet. Users view the notification, and if it's legitimate, select **Verify**. Otherwise, they can select **Deny**.

> [!NOTE]
-> Starting in August, 2023, sign-ins from unfamiliar locations no longer generate notifications. Similar to how unfamiliar locations work in [Smart lockout](howto-password-smart-lockout.md), a location becomes "familiar" during the first 14 days of use, or the first 10 sign-ins. If the location is unfamiliar, or if the relevant Google or Apple service responsible for push notifications isn't available, users won't see their notification as usual. In that case, they should open Microsoft Authenticator, or Authenticator Lite in a relevant companion app like Outlook, refresh by either pulling down or hitting **Refresh**, and approve the request.
+> Starting in August 2023, anomalous sign-ins don't generate notifications, similar to how sign-ins from unfamiliar locations don't generate notifications. To approve an anomalous sign-in, users can open Microsoft Authenticator, or Authenticator Lite in a relevant companion app like Outlook. Then they can either pull down to refresh or tap **Refresh**, and approve the request.
![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png)
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Now we'll walk through each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled.":::
-1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us).
+1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us).
However, with the issue hints feature enabled (coming soon), the new certauth endpoint will change to `https://t{tenantid}.certauth.login.microsoftonline.com`.
As of now, there's no way for the administrator to manually force or re-trigger
## Understanding Sign-in logs
-Sign-in logs provide information about sign-ins and how your resources are used by your users. For more information about sign-in logs, see [Sign-in logs in Microsoft Entra ID](../reports-monitoring/concept-all-sign-ins.md).
+Sign-in logs provide information about sign-ins and how your resources are used by your users. For more information about sign-in logs, see [Sign-in logs in Microsoft Entra ID](../reports-monitoring/concept-sign-ins.md).
Let's walk through two scenarios, one where the certificate satisfies single-factor authentication and another where the certificate satisfies MFA.
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
The following Microsoft Entra password policy requirements apply for all passwor
## Password expiration policies
-Password expiration policies are unchanged but they're included in this topic for completeness. A *Global Administrator* or *User Administrator* can use the [Azure AD module for PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
+Password expiration policies are unchanged but they're included in this topic for completeness. A *Global Administrator* or *User Administrator* can use the [Azure AD module for PowerShell](/powershell/module/azuread/) to set user passwords not to expire.
> [!NOTE]
> By default, only passwords for user accounts that aren't synchronized through Microsoft Entra Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Microsoft Entra ID](../hybrid/connect/how-to-connect-password-hash-synchronization.md#password-expiration-policy).
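
As a sketch of that operation with the AzureAD module named above (`user1@contoso.com` is a placeholder UPN):

```powershell
Connect-AzureAD

# Set a single user's password to never expire
Set-AzureADUser -ObjectId 'user1@contoso.com' -PasswordPolicies 'DisablePasswordExpiration'

# Revert to the tenant's normal expiration policy
Set-AzureADUser -ObjectId 'user1@contoso.com' -PasswordPolicies 'None'
```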
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Previously updated : 09/13/2023 Last updated : 10/17/2023
Combined registration supports the authentication methods and actions in the fol
| Office phone* | Yes | Yes | Yes |
| Email | Yes | Yes | Yes |
| Security questions | Yes | No | Yes |
+| Passwords | No | Yes | No |
| App passwords* | Yes | No | Yes |
| FIDO2 security keys*| Yes | No | Yes |
If the SSPR policy requires users to review their security info at regular inter
### Manage mode
-Users can access manage mode by going to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) or by selecting **Security info** from My Account. From there, users can add methods, delete or change existing methods, change the default method, and more.
+Users can access manage mode by going to [Security info](https://aka.ms/mysecurityinfo) or by selecting **Security info** from My Account. From there, users can add methods, delete or change existing methods, change the default method, and more.
## Key usage scenarios
+### Update a password in MySignIns (preview)
+A user navigates to [Security info](https://aka.ms/mysecurityinfo). After signing in, the user can update their password. For more information about the authentication methods that you can require by using Conditional Access policies, see [How to secure the registration of security info](/azure/active-directory/conditional-access/howto-conditional-access-policy-registration). When finished, the new password appears as updated on the Security info page.
### Protect Security info registration with Conditional Access

To secure when and how users register for Microsoft Entra multifactor authentication and self-service password reset, you can use user actions in Conditional Access policy. This functionality may be enabled in organizations that want users to register for Microsoft Entra multifactor authentication and SSPR from a central location, such as a trusted network location during HR onboarding. Learn more on how to configure [common Conditional Access policies for securing security info registration](../conditional-access/howto-conditional-access-policy-registration.md).
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Mitigating an actual disruption must be an organization's primary focus in dea
### Administrator lockout contingency
-To unlock admin access to your tenant, you should create emergency access accounts. These emergency access accounts, also known as *break glass* accounts, allow access to manage Microsoft Entra configuration when normal privileged account access procedures aren't available. At least two emergency access accounts should be created following the [emergency access account recommendations](../users-groups-roles/directory-emergency-access.md).
+To unlock admin access to your tenant, you should create emergency access accounts. These emergency access accounts, also known as *break glass* accounts, allow access to manage Microsoft Entra configuration when normal privileged account access procedures aren't available. At least two emergency access accounts should be created following the [emergency access account recommendations](../roles/security-emergency-access.md).
### Mitigating user lockout
Incorporate the following access controls in your existing Conditional Access po
- Provision multiple authentication methods for each user that rely on different communication channels, for example the Microsoft Authenticator app (internet-based), OATH token (generated on-device), and SMS (telephonic). The following PowerShell script will help you identify in advance which additional methods your users should register: [Script for Microsoft Entra multifactor authentication method analysis](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/).
- Deploy Windows Hello for Business on Windows 10 devices to satisfy MFA requirements directly from device sign-in.
-- Use trusted devices via [Microsoft Entra hybrid join](../devices/overview.md) or [Microsoft Intune](/intune/planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
+- Use trusted devices via [Microsoft Entra hybrid join](../devices/overview.md) or [Microsoft Intune](/mem/intune/fundamentals/intune-planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
- Use Microsoft Entra ID Protection risk-based policies that prevent access when the user or sign-in is at risk in place of fixed MFA policies.
- If you are protecting VPN access using the Microsoft Entra multifactor authentication NPS extension, consider federating your VPN solution as a [SAML app](../manage-apps/view-applications-portal.md) and determine the app category as recommended below.
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-writeback.md
Passwords are written back in all the following situations:
* Any administrator self-service force change password operation, for example, password expiration.
* Any administrator self-service password reset that originates from the [password reset portal](https://passwordreset.microsoftonline.com).
* Any administrator-initiated end-user password reset from the Microsoft Entra admin center.
- * Any administrator-initiated end-user password reset from the [Microsoft Graph API](/graph/api/passwordauthenticationmethod-resetpassword).
+ * Any administrator-initiated end-user password reset from the [Microsoft Graph API](/graph/api/authenticationmethod-resetpassword).
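
A sketch of that Microsoft Graph call using `Invoke-MgGraphRequest`; the UPN and password are placeholders, and `28c10230-6103-485e-b985-444c60001490` is the well-known ID of the password authentication method (verify against the linked reference, since this operation shipped in beta first):

```powershell
Connect-MgGraph -Scopes 'UserAuthenticationMethod.ReadWrite.All'

# Admin-initiated password reset through the resetPassword action.
$body = @{ newPassword = 'Placeholder-Passw0rd!' }
Invoke-MgGraphRequest -Method POST `
    -Uri 'https://graph.microsoft.com/v1.0/users/user1@contoso.com/authentication/methods/28c10230-6103-485e-b985-444c60001490/resetPassword' `
    -Body $body
```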
## Unsupported writeback operations
active-directory How To Authentication Find Coverage Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-find-coverage-gaps.md
Requiring multifactor authentication (MFA) for the administrators in your tenant
## Detect current usage for Microsoft Entra Built-in administrator roles
-The [Microsoft Entra ID Secure Score](../fundamentals/identity-secure-score.md) provides a score for **Require MFA for administrative roles** in your tenant. This improvement action tracks the MFA usage of Global administrator, Security administrator, Exchange administrator, and SharePoint administrator.
+The [Microsoft Entra ID Secure Score](../reports-monitoring/concept-identity-secure-score.md) provides a score for **Require MFA for administrative roles** in your tenant. This improvement action tracks the MFA usage of Global administrator, Security administrator, Exchange administrator, and SharePoint administrator.
There are different ways to check if your admins are covered by an MFA policy.
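
One quick check is the registration details report. A sketch, assuming the Microsoft.Graph.Reports module; cross-reference the `IsAdmin` column against your admin role assignments:

```powershell
Connect-MgGraph -Scopes 'AuditLog.Read.All'

# Users not yet registered for MFA; IsAdmin marks directory role holders.
Get-MgReportAuthenticationMethodUserRegistrationDetail -Filter 'isMfaRegistered eq false' -All |
    Select-Object UserPrincipalName, IsAdmin, IsMfaRegistered
```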
After your admins are enforced for multifactor authentication and have been usin
- [Phone Sign-in (with Microsoft Authenticator)](concept-authentication-authenticator-app.md)
- [FIDO2](concept-authentication-passwordless.md#fido2-security-keys)
-- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)
+- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/)
You can read more about these authentication methods and their security considerations in [Microsoft Entra authentication methods](concept-authentication-methods.md).
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
The legacy MFA policy has separate controls for **SMS** and **Phone calls**. But
The Authentication methods policy has controls for **SMS** and **Voice calls**, matching the legacy MFA policy. If your tenant is using SSPR and **Mobile phone** is enabled, you'll want to enable both **SMS** and **Voice calls** in the Authentication methods policy. If your tenant is using SSPR and **Office phone** is enabled, you'll want to enable **Voice calls** in the Authentication methods policy, and ensure that the **Office phone** option is enabled.
+> [!NOTE]
+> The **Use for sign-in** option is enabled by default in the **SMS** settings. This option enables SMS sign-in. If SMS sign-in is enabled for users, they're excluded from cross-tenant synchronization. If you use cross-tenant synchronization, or you don't want to enable SMS sign-in, disable SMS sign-in for the target users.
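
As a sketch of turning that option off for all users while keeping SMS available for MFA (the property names follow the Graph `smsAuthenticationMethodConfiguration` resource; adjust `includeTargets` if you scope the method to a group):

```powershell
Connect-MgGraph -Scopes 'Policy.ReadWrite.AuthenticationMethod'

# Disable "Use for sign-in" on the SMS method; SMS remains usable for MFA.
$body = @{
    '@odata.type'  = '#microsoft.graph.smsAuthenticationMethodConfiguration'
    includeTargets = @(
        @{ id = 'all_users'; targetType = 'group'; isUsableForSignIn = $false }
    )
}
Invoke-MgGraphRequest -Method PATCH `
    -Uri 'https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/Sms' `
    -Body $body
```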
### OATH tokens

The OATH token controls in the legacy MFA and SSPR policies were single controls that enabled the use of three different types of OATH tokens: the Microsoft Authenticator app, third-party software OATH TOTP code generator apps, and hardware OATH tokens.
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
Previously updated : 09/13/2023 Last updated : 10/17/2023
If the sign-in was done by phone app notification, under **authenticationAppDevi
If a user has registered Authenticator Lite, the user's registered authentication methods include **Microsoft Authenticator (in Outlook)**.

## Push notifications in Authenticator Lite
-Push notifications sent by Authenticator Lite aren't configurable and don't depend on the Authenticator feature settings. The settings for features included in the Authenticator Lite experience are listed in the following table. Every authentication includes a number matching prompt and does not include app and location context, regardless of Microsoft Authenticator feature settings.
+Push notifications sent by Authenticator Lite aren't configurable and don't depend on the Authenticator feature settings. Authenticator Lite doesn't support passwordless authentication mode. The settings for features included in the Authenticator Lite experience are listed in the following table. Every authentication includes a number matching prompt and does not include app and location context, regardless of Microsoft Authenticator feature settings.
| Authenticator Feature | Authenticator Lite Experience |
|:-:|:-:|
Authenticator Lite enforces number matching in every authentication. If your ten
To learn more about verification notifications, see [Microsoft Authenticator authentication method](concept-authentication-authenticator-app.md).

## Common questions
-### Are users on the legacy policy eligible for Authenticator Lite?
-No, only those users configured for Authenticator app via the modern authentication methods policy are eligible for this experience. If your tenant is currently on the legacy policy and you are interested in this feature, please migrate your users to the modern auth policy.
### Does Authenticator Lite work as a broker app?
No, Authenticator Lite is only available for push notifications and TOTP.
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
A nudge won't appear if a user is in scope for a Conditional Access policy that
**Do users see a nudge when there is a terms of use (ToU) screen presented to the user during sign-in?**
-A nudge won't appear if a user is presented with the [terms of use (ToU)](/azure/active-directory/conditional-access/terms-of-use) screen during sign-in.
+A nudge won't appear if a user is presented with the [terms of use (ToU)](../conditional-access/terms-of-use.md) screen during sign-in.
**Do users see a nudge when Conditional Access custom controls are applicable to the sign-in?**
-A nudge won't appear if a user is redirected during sign-in due to [Conditional Access custom controls](/azure/active-directory/conditional-access/controls) settings.
+A nudge won't appear if a user is redirected during sign-in due to [Conditional Access custom controls](../conditional-access/controls.md) settings.
## Next steps
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Using the data points you collected in [Authentication services](#authentication
### Update domain federation settings

Once you've completed user migrations and moved all of your [Authentication services](#authentication-services) off of MFA Server, it's time to update your domain federation settings. After the update, Microsoft Entra ID no longer sends MFA requests to your on-premises federation server.
-To configure Microsoft Entra ID to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-&preserve-view=true) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example.
+To configure Microsoft Entra ID to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true&viewFallbackFrom=graph-powershell-) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example.
#### Request
active-directory How To Migrate Mfa Server To Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-user-authentication.md
We don't recommend that you reuse groups that are used for security. If you're u
## Monitoring
-Many [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and **Usage & Insights** reports are available to monitor your deployment.
+Many [Azure Monitor workbooks](../reports-monitoring/howto-use-workbooks.md) and **Usage & Insights** reports are available to monitor your deployment.
These reports can be found in Microsoft Entra ID in the navigation pane under **Monitoring**.

### Monitoring Staged Rollout
active-directory How To Migrate Mfa Server To Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md
For domains that set the **SupportsMfa** property, these rules determine how **f
- If the **federatedIdpMfaBehavior** property is never set, Microsoft Entra ID will continue to honor the **SupportsMfa** setting.
- If **federatedIdpMfaBehavior** or **SupportsMfa** isn't set, Microsoft Entra ID will default to `acceptIfMfaDoneByFederatedIdp` behavior.
-You can check the status of **federatedIdpMfaBehavior** by using [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-beta&preserve-view=true).
+You can check the status of **federatedIdpMfaBehavior** by using [Get-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/get-mgdomainfederationconfiguration?view=graph-powershell-1.0&preserve-view=true&viewFallbackFrom=graph-powershell-beta).
```powershell
Get-MgDomainFederationConfiguration -DomainID yourdomain.com
```
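
To set the property rather than read it, a sketch with the same SDK; the `Update-MgDomainFederationConfiguration` parameters are from the Microsoft.Graph.Identity.DirectoryManagement module, and `yourdomain.com` is a placeholder:

```powershell
Connect-MgGraph -Scopes 'Domain.ReadWrite.All'

# Look up the federation configuration ID, then set federatedIdpMfaBehavior.
$config = Get-MgDomainFederationConfiguration -DomainId 'yourdomain.com'
Update-MgDomainFederationConfiguration -DomainId 'yourdomain.com' `
    -InternalDomainFederationId $config.Id `
    -FederatedIdpMfaBehavior 'rejectMfaByFederatedIdp'
```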
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Microsoft Entra ID adds entries to the audit logs when:
* A user enables or disables their account on a security key or resets the second factor for the security key on their Win 10 machine. See event IDs: 4670 and 5382.
-**Microsoft Entra ID keeps most auditing data for 30 days** and makes the data available by using the [Microsoft Entra admin center](https://entra.microsoft.com) or API for you to download into your analysis systems. If you require longer retention, export and consume logs in a SIEM tool such as [Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md), Splunk, or Sumo Logic. We recommend longer retention for auditing, trend analysis, and other business needs as applicable
+**Microsoft Entra ID keeps most auditing data for 30 days** and makes the data available by using the [Microsoft Entra admin center](https://entra.microsoft.com) or API for you to download into your analysis systems. If you require longer retention, export and consume logs in a SIEM tool such as [Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory), Splunk, or Sumo Logic. We recommend longer retention for auditing, trend analysis, and other business needs as applicable.
There are two tabs in the Authentication methods activity dashboard - Registration and Usage.
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
# Enable passwordless security key sign-in to on-premises resources by using Microsoft Entra ID
-This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Microsoft Entra joined* and *Microsoft Entra hybrid joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys, or with [Windows Hello for Business Cloud trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust)
+This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Microsoft Entra joined* and *Microsoft Entra hybrid joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys, or with [Windows Hello for Business Cloud trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust)
## Use SSO to sign in to on-premises resources by using FIDO2 keys
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
This document focuses on enabling FIDO2 security key based passwordless authenti
| [Microsoft Entra hybrid joined devices](../devices/concept-hybrid-join.md) require Windows 10 version 2004 or higher | | X |
| Fully patched Windows Server 2016/2019 Domain Controllers. | | X |
| [Microsoft Entra Hybrid Authentication Management module](https://www.powershellgallery.com/packages/AzureADHybridAuthenticationManagement/2.1.1.0) | | X |
-| [Microsoft Intune](/intune/fundamentals/what-is-intune) (Optional) | X | X |
+| [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) (Optional) | X | X |
| Provisioning package (Optional) | X | X |
| Group Policy (Optional) | | X |
Organizations may choose to use one or more of the following methods to enable t
To enable the use of security keys using Intune, complete the following steps:
-1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com).
+1. Sign in to the [Microsoft Intune admin center](https://intune.microsoft.com/).
1. Browse to **Devices** > **Enroll Devices** > **Windows enrollment** > **Windows Hello for Business**.
1. Set **Use security keys for sign-in** to **Enabled**.
Configuration of security keys for sign-in isn't dependent on configuring Window
To target specific device groups to enable the credential provider, use the following custom settings via Intune:
-1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com).
+1. Sign in to the [Microsoft Intune admin center](https://intune.microsoft.com/).
1. Browse to **Devices** > **Windows** > **Configuration profiles** > **Create profile**.
1. Configure the new profile with the following settings:
   - Platform: Windows 10 and later
   - OMA-URI: ./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin
   - Data Type: Integer
   - Value: 1
-1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/intune/device-profile-assign).
+1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/mem/intune/configuration/device-profile-assign).
### Enable with a provisioning package
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True
```
-For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
+For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-1.0&preserve-view=true&viewFallbackFrom=graph-powershell-beta).
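
For reference, a sketch of creating the pass shown above: a one-time-use Temporary Access Pass valid for an hour (`user3@contoso.com` matches the placeholder account used in this article):

```powershell
Connect-MgGraph -Scopes 'UserAuthenticationMethod.ReadWrite.All'

# Create a one-time-use pass valid for 60 minutes; the response includes the pass value.
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId 'user3@contoso.com' `
    -LifetimeInMinutes 60 -IsUsableOnce
```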
## Use a Temporary Access Pass
You can also use PowerShell:
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -TemporaryAccessPassAuthenticationMethodId c5dbd20a-8b8f-4791-a23f-488fcbde3b38
```
-For more information, see [Remove-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/remove-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
+For more information, see [Remove-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/remove-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-1.0&preserve-view=true&viewFallbackFrom=graph-powershell-beta).
## Replace a Temporary Access Pass
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
In the current preview state, the following limitations apply to email as an alt
* [Microsoft Entra joined devices](../devices/concept-directory-join.md)
* [Microsoft Entra registered devices](../devices/concept-device-registration.md)
* [Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md)
+ * [Single Sign-On and App Protection Policies on Mobile Platform](../develop/mobile-sso-support-overview.md)
* Legacy authentication such as POP3 and SMTP
* Skype for Business
Email as an alternate login ID applies to [Microsoft Entra B2B collaboration](..
## Enable user sign-in with an email address

> [!NOTE]
-> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homeRealmDiscoveryPolicy).
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](/graph/api/resources/homerealmdiscoverypolicy).
Once users with the *ProxyAddresses* attribute applied are synchronized to Microsoft Entra ID using Microsoft Entra Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Microsoft Entra login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
You need *Global Administrator* privileges to complete the following steps:
Install-Module Microsoft.Graph
```
- For more information on installation, see [Install the Microsoft Graph PowerShell SDK](/graph/powershell/installation).
+ For more information on installation, see [Install the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation).
1. Sign in to your Microsoft Entra tenant using the `Connect-MgGraph` cmdlet:
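
Per the note above, the feature is enabled through a tenant-default HRD policy with `AlternateIdLogin` turned on. A minimal sketch of the sign-in and policy creation, assuming the Microsoft.Graph.Identity.SignIns module and that the definition string matches the linked HRD policy schema:

```powershell
Connect-MgGraph -Scopes 'Policy.ReadWrite.ApplicationConfiguration'

# Create a tenant-default home realm discovery policy that enables email as an alternate login ID.
New-MgPolicyHomeRealmDiscoveryPolicy `
    -DisplayName 'EnableAltIdLogin' `
    -Definition @('{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}') `
    -IsOrganizationDefault
```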
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
Risk policies include:
If your users were enabled using per-user enabled and enforced MFA, the following PowerShell can assist you in making the conversion to Conditional Access-based MFA.
-Run this PowerShell in an ISE window or save as a `.PS1` file to run locally. The operation can only be done by using the [MSOnline module](/powershell/module/msonline#msonline).
+Run this PowerShell in an ISE window or save as a `.PS1` file to run locally. The operation can only be done by using the [MSOnline module](/powershell/module/msonline/#msonline).
```PowerShell
# Sets the MFA requirement state
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The following requirements apply to the Microsoft Entra Password Protection DC a
* All machines where the Microsoft Entra Password Protection DC agent will be installed must have .NET 4.7.2 installed.
  * If .NET 4.7.2 is not already installed, download and run the installer found at [The .NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2).
* Any Active Directory domain that runs the Microsoft Entra Password Protection DC agent service must use Distributed File System Replication (DFSR) for sysvol replication.
- * If your domain isn't already using DFSR, you must migrate before installing Microsoft Entra Password Protection. For more information, see [SYSVOL Replication Migration Guide: FRS to DFS Replication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd640019(v=ws.10))
+ * If your domain isn't already using DFSR, you must migrate before installing Microsoft Entra Password Protection. For more information, see [SYSVOL Replication Migration Guide: FRS to DFS Replication](/windows-server/storage/dfs-replication/migrate-sysvol-to-dfsr)
> [!WARNING]
> The Microsoft Entra Password Protection DC agent software will currently install on domain controllers in domains that are still using FRS (the predecessor technology to DFSR) for sysvol replication, but the software will NOT work properly in this environment.
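
You can confirm the sysvol replication state before installing. A quick check, run as a domain administrator on a domain controller (`dfsrmig` ships with Windows Server):

```powershell
# 'Eliminated' as the global state means sysvol replication has fully migrated to DFSR.
dfsrmig /getglobalstate
```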
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
To secure when and how users register for Microsoft Entra multifactor authentica
> [!NOTE]
> This policy applies only when a user accesses a combined registration page. This policy doesn't enforce MFA enrollment when a user accesses other applications.
>
-> You can create an MFA registration policy by using [Azure Identity Protection - Configure MFA Policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md).
+> You can create an MFA registration policy by using [Microsoft Entra ID Protection - Configure MFA Policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md).
For more information about creating trusted locations in Conditional Access, see [What is the location condition in Microsoft Entra Conditional Access?](../conditional-access/location-condition.md#named-locations)
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
You can use pre-built reports on Microsoft Entra admin center to measure the SSP
> [!NOTE]
> You must be [a global administrator](../roles/permissions-reference.md), and you must opt in for this data to be gathered for your organization. To opt in, you must visit the Reporting tab or the audit logs on the Microsoft Entra admin center at least once. Until then, the data isn't collected for your organization.
-Audit logs for registration and password reset are available for 30 days. If security auditing within your corporation requires longer retention, the logs need to be exported and consumed into a SIEM tool such as [Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md), Splunk, or ArcSight.
+Audit logs for registration and password reset are available for 30 days. If security auditing within your corporation requires longer retention, the logs need to be exported and consumed into a SIEM tool such as [Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory), Splunk, or ArcSight.
![SSPR Reporting screenshot](./media/howto-sspr-deployment/sspr-reporting.png)
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
For Azure AD Connect version *1.1.443.0* and above, *outbound HTTPS* access is r
* *\*.passwordreset.microsoftonline.com*
* *\*.servicebus.windows.net*
-Azure [GOV endpoints](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers):
+Azure [GOV endpoints](/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers):
* *\*.passwordreset.microsoftonline.us*
* *\*.servicebus.usgovcloudapi.net*
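
A simple reachability sketch from the Microsoft Entra Connect server; the bare host names stand in for the wildcard entries above, so substitute your tenant's actual Service Bus namespace host if the generic name doesn't resolve:

```powershell
# Verify outbound HTTPS (443) reachability for the password writeback endpoints.
Test-NetConnection -ComputerName 'passwordreset.microsoftonline.com' -Port 443
Test-NetConnection -ComputerName 'servicebus.windows.net' -Port 443
```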
A best practice when you troubleshoot problems with password writeback is to ins
| 31017| AuthTokenSuccess| This event indicates that we successfully retrieved an authorization token for the Global Administrator specified during Microsoft Entra Connect setup to start the offboarding or onboarding process.|
| 31018| KeyPairCreationSuccess| This event indicates that we successfully created the password encryption key. This key is used to encrypt passwords from the cloud to be sent to your on-premises environment.|
| 31019| ServiceBusHeartBeat| This event indicates that we successfully sent a request to your tenant's Service Bus instance.|
-| 31034| ServiceBusListenerError| This event indicates that there was an error connecting to your tenant's Service Bus listener. If the error message includes "The remote certificate is invalid", check to make sure that your Microsoft Entra Connect server has all the required Root CAs as described in [Azure TLS certificate changes](../../security/fundamentals/tls-certificate-changes.md). |
+| 31034| ServiceBusListenerError| This event indicates that there was an error connecting to your tenant's Service Bus listener. If the error message includes "The remote certificate is invalid", check to make sure that your Microsoft Entra Connect server has all the required Root CAs as described in [Azure TLS certificate changes](/azure/security/fundamentals/tls-certificate-changes). |
| 31044| PasswordResetService| This event indicates that password writeback is not working. The Service Bus listens for requests on two separate relays for redundancy. Each relay connection is managed by a unique Service Host. The writeback client returns an error if either Service Host is not running.|
| 32000| UnknownError| This event indicates an unknown error occurred during a password management operation. Look at the exception text in the event for more details. If you're having problems, try disabling and then re-enabling password writeback. If this doesn't help, include a copy of your event log along with the tracking ID specified when you open a support request.|
| 32001| ServiceError| This event indicates there was an error connecting to the cloud password reset service. This error generally occurs when the on-premises service was unable to connect to the password-reset web service.|
## Microsoft Entra forums
-If you have general questions about Microsoft Entra ID and self-service password reset, you can ask the community for assistance on the [Microsoft Q&A question page for Microsoft Entra ID](/answers/topics/azure-active-directory.html). Members of the community include engineers, product managers, MVPs, and fellow IT professionals.
+If you have general questions about Microsoft Entra ID and self-service password reset, you can ask the community for assistance on the [Microsoft Q&A question page for Microsoft Entra ID](/answers/tags/455/entra-id). Members of the community include engineers, product managers, MVPs, and fellow IT professionals.
## Contact Microsoft support
active-directory Tutorial Risk Based Sspr Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-risk-based-sspr-mfa.md
Title: Risk-based user sign-in protection in Microsoft Entra ID
-description: In this tutorial, you learn how to enable Azure Identity Protection to protect users when risky sign-in behavior is detected on their account.
+description: In this tutorial, you learn how to enable Microsoft Entra ID Protection to protect users when risky sign-in behavior is detected on their account.
-# Customer intent: As a Microsoft Entra Administrator, I want to learn how to use Azure Identity Protection to protect users by automatically detecting risk sign-in behavior and prompting for additional forms of authentication or request a password change.
+# Customer intent: As a Microsoft Entra Administrator, I want to learn how to use Microsoft Entra ID Protection to protect users by automatically detecting risk sign-in behavior and prompting for additional forms of authentication or request a password change.
# Tutorial: Use risk detections for user sign-ins to trigger Microsoft Entra multifactor authentication or password changes
active-directory Active Directory Acs Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-acs-migration.md
Each Microsoft cloud service that accepts tokens that are issued by Access Contr
| Service | Guidance |
| - | -- |
-| Azure Service Bus | [Migrate to shared access signatures](../../service-bus-messaging/service-bus-sas.md) |
-| Azure Service Bus Relay | [Migrate to shared access signatures](../../azure-relay/relay-migrate-acs-sas.md) |
-| Azure Managed Cache | [Migrate to Azure Cache for Redis](../../azure-cache-for-redis/cache-faq.yml) |
+| Azure Service Bus | [Migrate to shared access signatures](/azure/service-bus-messaging/service-bus-sas) |
+| Azure Service Bus Relay | [Migrate to shared access signatures](/azure/azure-relay/relay-migrate-acs-sas) |
+| Azure Managed Cache | [Migrate to Azure Cache for Redis](/azure/azure-cache-for-redis/cache-faq) |
| Azure DataMarket | [Migrate to the Azure AI services APIs](https://azure.microsoft.com/services/cognitive-services/) |
| BizTalk Services | [Migrate to the Logic Apps feature of Azure App Service](https://azure.microsoft.com/services/cognitive-services/) |
| Azure Media Services | [Migrate to Azure AD authentication](https://azure.microsoft.com/blog/azure-media-service-aad-auth-and-acs-deprecation/) |
-| Azure Backup | [Upgrade the Azure Backup agent](../../backup/backup-azure-file-folder-backup-faq.yml) |
+| Azure Backup | [Upgrade the Azure Backup agent](/azure/backup/backup-azure-file-folder-backup-faq) |
<!-- Dynamics CRM: Migrate to new SDK, Dynamics team handling privately -->
<!-- Azure RemoteApp deprecated in favor of Citrix: https://www.zdnet.com/article/microsoft-to-drop-azure-remoteapp-in-favor-of-citrix-remoting-technologies/ -->
The following table compares the features of Access Control that are relevant to
If you decide that Azure AD B2C is the best migration path for your applications and services, begin with the following resources:

-- [Azure AD B2C documentation](../../active-directory-b2c/overview.md)
-- [Azure AD B2C custom policies](../../active-directory-b2c/custom-policy-overview.md)
+- [Azure AD B2C documentation](/azure/active-directory-b2c/overview)
+- [Azure AD B2C custom policies](/azure/active-directory-b2c/custom-policy-overview)
- [Azure AD B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/)

#### Migrate to Ping Identity or Auth0
active-directory Conditional Access Dev Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/conditional-access-dev-guide.md
Developers can take this challenge and append it onto a new request to Azure AD.
### Prerequisites
-Microsoft Entra Conditional Access is a feature included in [Microsoft Entra ID P1 or P2](../fundamentals/whatis.md). You can learn more about licensing requirements in the [unlicensed usage report](../reports-monitoring/overview-reports.md). Developers can join the [Microsoft Developer Network](/), which includes a free subscription to the Enterprise Mobility Suite, which includes Microsoft Entra ID P1 or P2.
+Microsoft Entra Conditional Access is a feature included in [Microsoft Entra ID P1 or P2](../fundamentals/whatis.md). You can learn more about licensing requirements in the [unlicensed usage report](../reports-monitoring/overview-monitoring-health.md). Developers can join the [Microsoft Developer Network](/), which includes a free subscription to the Enterprise Mobility Suite, which includes Microsoft Entra ID P1 or P2.
### Considerations for specific scenarios
active-directory Howto Get Appsource Certified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-get-appsource-certified.md
For more information about multi-tenancy, see [How to sign in any Azure Active D
A *single-tenant application* is an application that only accepts sign-ins from users of a defined Azure AD instance. External users (including work or school accounts from other organizations, or personal accounts) can sign in to a single-tenant application after adding each user as a guest account to the Azure AD instance that the application is registered.
-You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](../../active-directory-b2c/integrate-with-app-code-samples.md). When using B2B, users can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
+You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](/azure/active-directory-b2c/integrate-with-app-code-samples). When using B2B, users can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
Single-tenant applications can enable the *Contact Me* experience, but if you want to enable the single-click/free trial experience that AppSource recommends, enable multi-tenancy on your application instead.
Use the following comments section to provide feedback and help us refine and sh
[AAD-Auth-Scenarios]:v1-authentication-scenarios.md
[AAD-Auth-Scenarios-Browser-To-WebApp]:v1-authentication-scenarios.md#web-browser-to-web-application
[AAD-Dev-Guide]: v1-overview.md
-[AAD-Howto-Multitenant-Overview]: howto-convert-app-to-be-multi-tenant.md
[AAD-QuickStart-Web-Apps]: v1-overview.md#get-started

<!--Image references-->
active-directory V1 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-oauth2-implicit-grant-flow.md
If you are developing a Web application that includes a backend, and consuming a
<!--Image references-->
<!--Reference style links in use-->
-[ACOM-How-And-Why-Apps-Added-To-AAD]: active-directory-how-applications-are-added.md
[ACOM-How-To-Integrate]: ../develop/how-to-integrate.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json
[OAuth2-Spec-Implicit-Misuse]: https://tools.ietf.org/html/rfc6749#section-10.16
[OAuth2-Threat-Model-And-Security-Implications]: https://tools.ietf.org/html/rfc6819
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
The filter for devices condition in Conditional Access evaluates policy based on
- [Update device Graph API](/graph/api/device-update?tabs=http)
- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
- [Common Conditional Access policies](concept-conditional-access-policy-common.md)
-- [Securing devices as part of the privileged access story](/security/compass/privileged-access-devices)
+- [Securing devices as part of the privileged access story](/security/privileged-access-workstations/privileged-access-devices)
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Administrators can assign a Conditional Access policy to the following cloud app
- Microsoft Commerce Tools Authentication Service
- Microsoft Forms
- Microsoft Intune
-- [Microsoft Intune Enrollment](/intune/enrollment/multi-factor-authentication)
+- [Microsoft Intune Enrollment](/mem/intune/enrollment/multi-factor-authentication)
- Microsoft Planner
- Microsoft Power Apps
- Microsoft Power Automate
Because the policy is applied to the Azure management portal and API, services,
- Microsoft IoT Central

> [!NOTE]
-> The Microsoft Azure Management application applies to [Azure PowerShell](/powershell/azure/what-is-azure-powershell), which calls the [Azure Resource Manager API](../../azure-resource-manager/management/overview.md). It does not apply to [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview), which calls the [Microsoft Graph API](/graph/overview).
+> The Microsoft Azure Management application applies to [Azure PowerShell](/powershell/azure/what-is-azure-powershell), which calls the [Azure Resource Manager API](/azure/azure-resource-manager/management/overview). It does not apply to [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview), which calls the [Microsoft Graph API](/graph/overview).
For more information on how to set up a sample policy for Microsoft Azure Management, see [Conditional Access: Require MFA for Azure management](howto-conditional-access-policy-azure-management.md).
In some cases, an **All cloud apps** policy could inadvertently block user acces
- Calls to Azure AD Graph and MS Graph, to access user profile, group membership and relationship information that is commonly used by applications excluded from policy. The excluded scopes are listed below. Consent is still required for apps to use these permissions.
- For native clients:
- - Azure AD Graph: email, offline_access, openid, profile, User.read
- - MS Graph: User.read, People.read, and UserProfile.read
+ - Azure AD Graph: email, offline_access, openid, profile, User.Read
+ - MS Graph: email, offline_access, openid, profile, User.Read, People.Read
- For confidential / authenticated clients:
- - Azure AD Graph: email, offline_access, openid, profile, User.read, User.read.all, and User.readbasic.all
- - MS Graph: User.read,User.read.all, User.read.All People.read, People.read.all, GroupMember.Read.All, Member.Read.Hidden, and UserProfile.read
+ - Azure AD Graph: email, offline_access, openid, profile, User.Read, User.Read.All, and User.ReadBasic.All
+ - MS Graph: email, offline_access, openid, profile, User.Read, User.Read.All, User.ReadBasic.All, People.Read, People.Read.All, GroupMember.Read.All, Member.Read.Hidden
## User actions
User actions are tasks that can be performed by a user. Currently, Conditional A
## Traffic forwarding profiles
-Traffic forwarding profiles in Global Secure Access enable administrators to define and control how traffic is routed through Microsoft Entra Internet Access and Microsoft Entra Private Access. Traffic forwarding profiles can be assigned to devices and remote networks. For an example of how to apply a Conditional Access policy to these traffic profiles, see the article [How to apply Conditional Access policies to the Microsoft 365 traffic profile](../../global-secure-access/how-to-target-resource-microsoft-365-profile.md).
+Traffic forwarding profiles in Global Secure Access enable administrators to define and control how traffic is routed through Microsoft Entra Internet Access and Microsoft Entra Private Access. Traffic forwarding profiles can be assigned to devices and remote networks. For an example of how to apply a Conditional Access policy to these traffic profiles, see the article [How to apply Conditional Access policies to the Microsoft 365 traffic profile](/entra/global-secure-access/how-to-target-resource-microsoft-365-profile).
-For more information about these profiles, see the article [Global Secure Access traffic forwarding profiles](../../global-secure-access/concept-traffic-forwarding.md).
+For more information about these profiles, see the article [Global Secure Access traffic forwarding profiles](/entra/global-secure-access/concept-traffic-forwarding).
## Authentication context
To delete an authentication context, it must have no assigned Conditional Access
For more information about authentication context use in applications, see the following articles.

-- [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)
-- [Microsoft Defender for Cloud Apps](/cloud-app-security/session-policy-aad?branch=pr-en-us-2082#require-step-up-authentication-authentication-context)
+- [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 groups, and SharePoint sites](/purview/sensitivity-labels-teams-groups-sites)
+- [Microsoft Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad?branch=pr-en-us-2082#require-step-up-authentication-authentication-context)
- [Custom applications](../develop/developer-guide-conditional-access-authentication-context.md)

## Next steps
For more information about authentication context use in applications, see the f
- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
- [Client application dependencies](service-dependencies.md)
-
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
We don't support selecting macOS or Linux device platforms when selecting **Requ
## Locations
-When administrators configure location as a condition, they can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, unknown areas that don't map to specific countries or regions, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md).
+When administrators configure location as a condition, they can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, unknown areas that don't map to specific countries or regions, and [Global Secure Access' compliant network](/entra/global-secure-access/how-to-compliant-network).
When including **any location**, this option includes any IP address on the internet, not just configured named locations. When administrators select **any location**, they can choose to exclude **all trusted** or **selected locations**.
This setting has an effect on access attempts made from the following mobile app
| MFA and location policy for apps. Device-based policies aren't supported.| Any My Apps app service | Android and iOS |
| Microsoft Teams Services - this client app controls all services that support Microsoft Teams and all its Client Apps - Windows Desktop, iOS, Android, WP, and web client | Microsoft Teams | Windows 10, Windows 8.1, Windows 7, iOS, Android, and macOS |
| Office 2016 apps, Office 2013 (with modern authentication), [OneDrive sync client](/onedrive/enable-conditional-access) | SharePoint | Windows 8.1, Windows 7 |
-| Office 2016 apps, Universal Office apps, Office 2013 (with modern authentication), [OneDrive sync client](/onedrive/enable-conditional-access) | SharePoint Online | Windows 10 |
+| Office 2016 apps, Universal Office apps, Office 2013 (with modern authentication), [OneDrive sync client](/sharepoint/enable-conditional-access) | SharePoint Online | Windows 10 |
| Office 2016 (Word, Excel, PowerPoint, OneNote only). | SharePoint | macOS |
| Office 2019 | SharePoint | Windows 10, macOS |
| Office mobile apps | SharePoint | Android, iOS |
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Administrators can choose to require [specific authentication strengths](../auth
### Require device to be marked as compliant
-Organizations that have deployed Intune can use the information returned from their devices to identify devices that meet specific policy compliance requirements. Intune sends compliance information to Microsoft Entra ID so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see [Set rules on devices to allow access to resources in your organization by using Intune](/intune/protect/device-compliance-get-started).
+Organizations that have deployed Intune can use the information returned from their devices to identify devices that meet specific policy compliance requirements. Intune sends compliance information to Microsoft Entra ID so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see [Set rules on devices to allow access to resources in your organization by using Intune](/mem/intune/protect/device-compliance-get-started).
A device can be marked as compliant by Intune for any device operating system or by a third-party mobile device management system for Windows devices. You can find a list of supported third-party mobile device management systems in [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
See [Require approved client apps for cloud app access with Conditional Access](
### Require app protection policy
-In Conditional Access policy, you can require that an [Intune app protection policy](/intune/app-protection-policy) is present on the client app before access is available to the selected applications. These mobile application management (MAM) app protection policies allow you to manage and protect your organization's data within specific applications.
+In Conditional Access policy, you can require that an [Intune app protection policy](/mem/intune/apps/app-protection-policy) is present on the client app before access is available to the selected applications. These mobile application management (MAM) app protection policies allow you to manage and protect your organization's data within specific applications.
To apply this grant control, Conditional Access requires that the device is registered in Microsoft Entra ID, which requires using a broker app. The broker app can be either Microsoft Authenticator for iOS or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the app store to install the broker app. App protection policies are generally available for iOS and Android, and in public preview for Microsoft Edge on Windows. [Windows devices support no more than 3 Microsoft Entra user accounts in the same session](../devices/faq.yml#i-can-t-add-more-than-3-microsoft-entra-user-accounts-under-the-same-user-session-on-a-windows-10-11-device--why). For more information about how to apply policy to Windows devices, see the article [Require an app protection policy on Windows devices (preview)](how-to-app-protection-policy-windows.md).
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
The information used to calculate the device platform comes from unverified sour
#### Locations
-Locations connect IP addresses, geographies, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md) to Conditional Access policy decisions. Administrators can choose to define locations and mark some as trusted like those for their organization's primary network locations.
+Locations connect IP addresses, geographies, and [Global Secure Access' compliant network](/entra/global-secure-access/how-to-compliant-network) to Conditional Access policy decisions. Administrators can choose to define locations and mark some as trusted like those for their organization's primary network locations.
#### Client apps
The article [Common Conditional Access policies](concept-conditional-access-poli
[Planning a cloud-based Microsoft Entra multifactor authentication deployment](../authentication/howto-mfa-getstarted.md)
-[Managing device compliance with Intune](/intune/device-compliance-get-started)
+[Managing device compliance with Intune](/mem/intune/protect/device-compliance-get-started)
-[Microsoft Defender for Cloud Apps and Conditional Access](/cloud-app-security/proxy-intro-aad)
+[Microsoft Defender for Cloud Apps and Conditional Access](/defender-cloud-apps/proxy-intro-aad)
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
For more information on the use and configuration of app-enforced restrictions,
## Conditional Access application control
-Conditional Access App Control uses a reverse proxy architecture and is uniquely integrated with Microsoft Entra Conditional Access. Microsoft Entra Conditional Access allows you to enforce access controls on your organization's apps based on certain conditions. The conditions define what user or group of users, cloud apps, and locations and networks a Conditional Access policy applies to. After you've determined the conditions, you can route users to [Microsoft Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) where you can protect data with Conditional Access App Control by applying access and session controls.
+Conditional Access App Control uses a reverse proxy architecture and is uniquely integrated with Microsoft Entra Conditional Access. Microsoft Entra Conditional Access allows you to enforce access controls on your organization's apps based on certain conditions. The conditions define what user or group of users, cloud apps, and locations and networks a Conditional Access policy applies to. After you've determined the conditions, you can route users to [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) where you can protect data with Conditional Access App Control by applying access and session controls.
Conditional Access App Control enables user app access and sessions to be monitored and controlled in real time based on access and session policies. Access and session policies are used within the Defender for Cloud Apps portal to refine filters and set actions to take. With the access and session policies, you can:
Conditional Access App Control enables user app access and sessions to be monito
- Block access (Preview): You can granularly block access for specific apps and users depending on several risk factors. For example, you can block them if they're using client certificates as a form of device management.
- Block custom activities: Some apps have unique scenarios that carry risk, for example, sending messages with sensitive content in apps like Microsoft Teams or Slack. In these kinds of scenarios, you can scan messages for sensitive content and block them in real time.
-For more information, see the article [Deploy Conditional Access App Control for featured apps](/cloud-app-security/proxy-deployment-aad).
+For more information, see the article [Deploy Conditional Access App Control for featured apps](/defender-cloud-apps/proxy-deployment-aad).
## Sign-in frequency
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
When the sum of all IP ranges specified in location policies exceeds 5,000, user
| Semi-Annual Enterprise Channel | If set to enabled or 1, CAE won't be supported. | If set to enabled or 1, CAE won't be supported. |
| Current Channel <br> or <br> Monthly Enterprise Channel | CAE is supported whatever the setting | CAE is supported whatever the setting |
-For an explanation of the office update channels, see [Overview of update channels for Microsoft 365 Apps](/deployoffice/overview-update-channels). The recommendation is that organizations don't disable Web Account Manager (WAM).
+For an explanation of the Office update channels, see [Overview of update channels for Microsoft 365 Apps](/deployoffice/updates/overview-update-channels). The recommendation is that organizations don't disable Web Account Manager (WAM).
### Coauthoring in Office apps
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
Custom security attributes are security sensitive and can only be managed by del
1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope.
- For detailed steps, see [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md).
+ For detailed steps, see [Assign Azure roles](/azure/role-based-access-control/role-assignments-portal).
## Create custom security attributes
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
This process helps to assess your users' client and app compatibility for toke
### Create a Conditional Access policy
-Users who perform specialized roles like those described in [Privileged access security levels](/security/compass/privileged-access-security-levels#specialized) are possible targets for this functionality. We recommend piloting with a small subset to begin.
+Users who perform specialized roles like those described in [Privileged access security levels](/security/privileged-access-workstations/privileged-access-security-levels#specialized) are possible targets for this functionality. We recommend piloting with a small subset to begin.
:::image type="content" source="media/concept-token-protection/exposed-policy-attributes.png" alt-text="Screenshot of a configured Conditional Access policy and its components." lightbox="media/concept-token-protection/exposed-policy-attributes.png":::
Use Microsoft Entra sign-in log to verify the outcome of a token protection enfo
#### Log Analytics
-You can also use [Log Analytics](../reports-monitoring/tutorial-log-analytics-wizard.md) to query the sign-in logs (interactive and non-interactive) for blocked requests due to token protection enforcement failure.
+You can also use [Log Analytics](../reports-monitoring/tutorial-configure-log-analytics-workspace.md) to query the sign-in logs (interactive and non-interactive) for blocked requests due to token protection enforcement failure.
Here's a sample Log Analytics query searching the non-interactive sign-in logs for the last seven days, highlighting **Blocked** versus **Allowed** requests by **Application**. These queries are only samples and are subject to change.
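The article's own sample isn't reproduced in this digest, but the general shape can be sketched with the `azure-monitor-query` SDK. The workspace ID is a placeholder, and the KQL below is illustrative only; the real sample filters specifically on token protection enforcement failures.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Illustrative shape only: Blocked versus Allowed non-interactive sign-ins by
# application over the last seven days (ResultType "0" means success).
QUERY = """
AADNonInteractiveUserSignInLogs
| extend Outcome = iff(ResultType == "0", "Allowed", "Blocked")
| summarize Requests = count() by AppDisplayName, Outcome
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```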
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
Microsoft Graph provides a unified programmability model that organizations can
The following examples are provided as is with no support. You can use these examples as a basis for tooling in your organization.
-Many of the following examples use tools like [Managed Identities](../managed-identities-azure-resources/overview.md), [Logic Apps](../../logic-apps/logic-apps-overview.md), [OneDrive](https://www.microsoft.com/microsoft-365/onedrive/online-cloud-storage), [Teams](https://www.microsoft.com/microsoft-365/microsoft-teams/group-chat-software/), and [Azure Key Vault](../../key-vault/general/overview.md).
+Many of the following examples use tools like [Managed Identities](../managed-identities-azure-resources/overview.md), [Logic Apps](/azure/logic-apps/logic-apps-overview), [OneDrive](https://www.microsoft.com/microsoft-365/onedrive/online-cloud-storage), [Teams](https://www.microsoft.com/microsoft-365/microsoft-teams/group-chat-software/), and [Azure Key Vault](/azure/key-vault/general/overview).
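For instance, a minimal sketch (not one of the article's examples; it assumes an app registration granted `Policy.Read.All`, and the tenant, client ID, and secret are placeholders) that lists Conditional Access policies through Microsoft Graph:

```python
import msal
import requests

# Placeholder app registration values.
app = msal.ConfidentialClientApplication(
    "00000000-0000-0000-0000-000000000000",
    client_credential="placeholder-secret",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# List the tenant's Conditional Access policies.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
for policy in resp.json().get("value", []):
    print(policy["displayName"], policy["state"])
```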
## Configure
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
Users must have at least the Security Reader role assigned and Log Analytics wor
If you haven't integrated Microsoft Entra logs with Azure Monitor logs, you need to take the following steps before the workbook loads:
-1. [Create a Log Analytics workspace in Azure Monitor](../../azure-monitor/logs/quick-create-workspace.md).
-1. [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+1. [Create a Log Analytics workspace in Azure Monitor](/azure/azure-monitor/logs/quick-create-workspace).
+1. [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md).
## How it works
In order to access the workbook, you need the proper permissions in Microsoft En
![Screenshot showing how to troubleshoot failing queries.](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
-For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md).
### Why are the queries in the workbook failing?
You can edit and customize the workbook by going to **Identity** > **Monitoring
- [Conditional Access report-only mode](concept-conditional-access-report-only.md)

-- For more information about Microsoft Entra workbooks, see the article, [How to use Azure Monitor workbooks for Microsoft Entra reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
+- For more information about Microsoft Entra workbooks, see the article, [How to use Azure Monitor workbooks for Microsoft Entra reports](../reports-monitoring/howto-use-workbooks.md).
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Accounts that are assigned administrative rights are targeted by attackers. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.
-Microsoft recommends you require MFA on the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md):
+Microsoft recommends you require MFA on the following roles at a minimum, based on [identity score recommendations](../reports-monitoring/concept-identity-secure-score.md):
- Global Administrator
- Application Administrator
active-directory Howto Conditional Access Policy Compliant Device Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md
More information about device compliance policies can be found in the article, [
Requiring a Microsoft Entra hybrid joined device is dependent on your devices already being Microsoft Entra hybrid joined. For more information, see the article [Configure Microsoft Entra hybrid join](../devices/how-to-hybrid-join.md).
-Microsoft recommends you require enable this policy for the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md):
+Microsoft recommends you enable this policy for the following roles at a minimum, based on [identity score recommendations](../reports-monitoring/concept-identity-secure-score.md):
- Global administrator
- Application administrator
Organizations that use the [Subscription Activation](/windows/deployment/windows
[Use report-only mode for Conditional Access to determine the results of new policy decisions.](concept-conditional-access-report-only.md)
-[Device compliance policies work with Microsoft Entra ID](/intune/device-compliance-get-started#device-compliance-policies-work-with-azure-ad)
+[Device compliance policies work with Microsoft Entra ID](/mem/intune/protect/device-compliance-get-started#device-compliance-policies-work-with-azure-ad)
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
The continuous access evaluation insights workbook allows administrators to view
### Accessing the CAE workbook template
-Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Microsoft Entra sign-in logs to a Log Analytics workspace, see the article [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md).
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Identity** > **Monitoring & health** > **Workbooks**.
For more information about named locations, see the article [Using the location
## Next steps

-- [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+- [Integrate Microsoft Entra logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md)
- [Using the location condition](location-condition.md#named-locations)
- [Continuous access evaluation](concept-continuous-access-evaluation.md)
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
After administrators confirm the settings using [report-only mode](howto-conditi
## Next steps
-[App protection policies overview](/intune/apps/app-protection-policy)
+[App protection policies overview](/mem/intune/apps/app-protection-policy)
[Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
If you have these trusted IPs configured, they show up as **MFA Trusted IPs** in
### All Network Access locations of my tenant
-Organizations with access to Global Secure Access preview features have another location listed that is made up of users and devices that comply with your organization's security policies. For more information, see the section [Enable Global Secure Access signaling for Conditional Access](../../global-secure-access/how-to-compliant-network.md#enable-global-secure-access-signaling-for-conditional-access). It can be used with Conditional Access policies to perform a compliant network check for access to resources.
+Organizations with access to Global Secure Access preview features have another location listed that is made up of users and devices that comply with your organization's security policies. For more information, see the section [Enable Global Secure Access signaling for Conditional Access](/entra/global-secure-access/how-to-compliant-network#enable-global-secure-access-signaling-for-conditional-access). It can be used with Conditional Access policies to perform a compliant network check for access to resources.
### Selected locations
When you use a cloud hosted proxy or VPN solution, the IP address Microsoft Entr
When a cloud proxy is in place, a policy that requires a [Microsoft Entra hybrid joined or compliant device](howto-conditional-access-policy-compliant-device.md#create-a-conditional-access-policy) can be easier to manage. Keeping a list of IP addresses used by your cloud hosted proxy or VPN solution up to date can be nearly impossible.
-We recommend organizations utilize Global Secure Access to enable [source IP restoration](../../global-secure-access/how-to-source-ip-restoration.md) to avoid this change in address and simplify management.
+We recommend organizations utilize Global Secure Access to enable [source IP restoration](/entra/global-secure-access/how-to-source-ip-restoration) to avoid this change in address and simplify management.
### When is a location evaluated?
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
By default, each policy created from template is created in report-only mode. We
[Enable policies in report-only mode](howto-conditional-access-insights-reporting.md). Once you save a policy in report-only mode, you can see the effect on real-time sign-ins in the sign-in logs. From the sign-in logs, select an event and navigate to the **Report-only** tab to see the result of each report-only policy.
-You can view the aggregate affects of your Conditional Access policies in the **Insights and Reporting workbook**. To access the workbook, you need an Azure Monitor subscription and you'll need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+You can view the aggregate effects of your Conditional Access policies in the **Insights and Reporting workbook**. To access the workbook, you need an Azure Monitor subscription and you'll need to [stream your sign-in logs to a log analytics workspace](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md).
### Plan for disruption
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
A: The user is blocked from getting access to the application. The user would ha
A: You can [review previously accepted terms of use policies](#how-users-can-review-their-terms-of-use), but currently there isn't a way to unaccept.

**Q: What happens if I'm also using Intune terms and conditions?**<br />
-A: If you've configured both Microsoft Entra terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user is required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
+A: If you've configured both Microsoft Entra terms of use and [Intune terms and conditions](/mem/intune/enrollment/terms-and-conditions-create), the user is required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
**Q: What endpoints does the terms of use service use for authentication?**<br />
A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com, https://myaccount.microsoft.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you need to add these endpoints to your allowlist, along with the Microsoft Entra endpoints for sign-in.
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Audit log data is only kept for 30 days by default, which may not be long enough
- Stream data to Event Hubs
- Send data to a partner solution
-Find these options under **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+Find these options under **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](/azure/azure-monitor/essentials/diagnostic-settings) to create one.
## Use the audit log
Find these options under **Identity** > **Monitoring & health** > **Diagnostic s
## Use Log Analytics
-Log Analytics allows organizations to query data using built in queries or custom created Kusto queries, for more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
+Log Analytics allows organizations to query data using built-in queries or custom Kusto queries. For more information, see [Get started with log queries in Azure Monitor](/azure/azure-monitor/logs/get-started-queries).
:::image type="content" source="media/troubleshoot-policy-changes-audit-log/log-analytics-new-old-value.png" alt-text="Log Analytics query for updates to Conditional Access policies showing new and old value location" lightbox="media/troubleshoot-policy-changes-audit-log/log-analytics-new-old-value.png":::
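As a hedged illustration (the field names follow the published `AuditLogs` schema, the workspace ID is a placeholder, and the operation-name filter may need adjusting for your tenant), a query like the following surfaces who changed which Conditional Access policy, including the modified properties that carry the new and old values:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
AuditLogs
| where OperationName has "conditional access policy"
| mv-expand TargetResources
| project TimeGenerated, OperationName,
          Actor = tostring(InitiatedBy.user.userPrincipalName),
          Changes = TargetResources.modifiedProperties
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    "00000000-0000-0000-0000-000000000000",  # placeholder workspace ID
    QUERY,
    timespan=timedelta(days=30),
)
```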
For more information about programmatically updating your Conditional Access pol
## Next steps

-- [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring.md)
-- [Install and use the log analytics views for Microsoft Entra ID](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md)
+- [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring-health.md)
+- [Install and use the log analytics views for Microsoft Entra ID](/azure/azure-monitor/visualize/workbooks-view-designer-conversion-overview)
- [Conditional Access: Programmatic access](howto-conditional-access-apis.md)
active-directory Access Token Claims Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-token-claims-reference.md
The v1.0 tokens include the following claims if applicable, but not v2.0 tokens
| Claim | Format | Description |
|-|--|-|
| `ipaddr`| String | The IP address the user authenticated from. |
-| `onprem_sid`| String, in [SID format](/windows/desktop/SecAuthZ/sid-components) | In cases where the user has an on-premises authentication, this claim provides their SID. Use this claim for authorization in legacy applications. |
+| `onprem_sid`| String, in [SID format](/windows/win32/secauthz/sid-components) | In cases where the user has an on-premises authentication, this claim provides their SID. Use this claim for authorization in legacy applications. |
| `pwd_exp`| int, a Unix timestamp | Indicates when the user's password expires. |
| `pwd_url`| String | A URL where users can reset their password. |
| `in_corp`| boolean | Signals if the client is signing in from the corporate network. |
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
The following examples suppose that your application is validating a v2.0 access
### Validate the issuer
-[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the iss (issuer) Claim." For applications which use a tenant-specific metadata endpoint (like [https://login.microsoftonline.com/8eaef023-2b34-4da1-9baa-8bc8c9d6a490/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/8eaef023-2b34-4da1-9baa-8bc8c9d6a490/v2.0/.well-known/openid-configuration) or [https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration)), this is all that is needed.
+[OpenID Connect Core](https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation) says "The Issuer Identifier \[...\] MUST exactly match the value of the iss (issuer) Claim." For applications which use a tenant-specific metadata endpoint (like `https://login.microsoftonline.com/8eaef023-2b34-4da1-9baa-8bc8c9d6a490/v2.0/.well-known/openid-configuration` or `https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration`), this is all that is needed.
Microsoft Entra ID has a tenant-independent version of the document available at [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration). This endpoint returns an issuer value `https://login.microsoftonline.com/{tenantid}/v2.0`. Applications may use this tenant-independent endpoint to validate tokens from every tenant with the following modifications:
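The modification list is abbreviated in this digest, but the core tid-substitution step can be sketched with PyJWT as follows. A minimal sketch only: production code should read `jwks_uri` and the issuer value from the metadata document rather than hard-coding them.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

JWKS_URL = "https://login.microsoftonline.com/common/discovery/v2.0/keys"
ISSUER_TEMPLATE = "https://login.microsoftonline.com/{tenantid}/v2.0"

def validate_v2_token(token: str, audience: str) -> dict:
    # Peek at the unverified claims to learn which tenant issued the token,
    # then substitute its tid into the tenant-independent issuer value.
    unverified = jwt.decode(token, options={"verify_signature": False})
    expected_issuer = ISSUER_TEMPLATE.replace("{tenantid}", unverified["tid"])

    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=audience,
        issuer=expected_issuer,  # must exactly match the iss claim
    )
```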
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Learn how to create a service principal:
- [Using the Microsoft Entra admin center](howto-create-service-principal-portal.md)
- [Using Azure PowerShell](howto-authenticate-service-principal-powershell.md)
-- [Using Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli)
+- [Using Azure CLI](/cli/azure/azure-cli-sp-tutorial-1)
- [Using Microsoft Graph](/graph/api/serviceprincipal-post-serviceprincipals) and then use [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to query both the application and service principal objects.

<!--Reference style links -->
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Use the following configuration to enable Just in Time Registration for iOS/iPad
Learn more about Just in Time Registration [here](https://techcommunity.microsoft.com/t5/intune-customer-success/just-in-time-registration-for-ios-ipados-with-microsoft-intune/ba-p/3660843).

#### Conditional Access policies and password changes
-Microsoft Enterprise SSO plug-in for Apple devices is compatible with various [Microsoft Entra Conditional Access policies](/azure/active-directory/conditional-access/overview) and password change events. `browser_sso_interaction_enabled` is required to be enabled to achieve compatibility.
+Microsoft Enterprise SSO plug-in for Apple devices is compatible with various [Microsoft Entra Conditional Access policies](../conditional-access/overview.md) and password change events. `browser_sso_interaction_enabled` is required to be enabled to achieve compatibility.
Compatible events and policies are documented in the following sections:
When a user resets their password, all tokens that were issued before that will
<a name='azure-ad-multi-factor-authentication'></a>

##### Microsoft Entra multifactor authentication
-[Multifactor authentication](/azure/active-directory/authentication/concept-mfa-howitworks) is a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan. Multifactor authentication can be enabled for specific resources. When the Microsoft Enterprise SSO plug-in is enabled, user will be asked to perform multifactor authentication in the first application that requires it. Microsoft Enterprise SSO plug-in will show its own user interface on top of the application that is currently active.
+[Multifactor authentication](../authentication/concept-mfa-howitworks.md) is a process in which users are prompted during the sign-in process for an additional form of identification, such as a code on their cellphone or a fingerprint scan. Multifactor authentication can be enabled for specific resources. When the Microsoft Enterprise SSO plug-in is enabled, the user will be asked to perform multifactor authentication in the first application that requires it. Microsoft Enterprise SSO plug-in will show its own user interface on top of the application that is currently active.
##### User sign-in frequency
-[Sign-in frequency](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime#user-sign-in-frequency) defines the time period before a user is asked to sign in again when attempting to access a resource. If a user is trying to access a resource after the time period has passed in various apps, a user would normally need to sign in again in each of those apps. When the Microsoft Enterprise SSO plug-in is enabled, a user will be asked to sign in to the first application that participates in SSO. Microsoft Enterprise SSO plug-in will show its own user interface on top of the application that is currently active.
+[Sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency) defines the time period before a user is asked to sign in again when attempting to access a resource. If a user is trying to access a resource after the time period has passed in various apps, a user would normally need to sign in again in each of those apps. When the Microsoft Enterprise SSO plug-in is enabled, a user will be asked to sign in to the first application that participates in SSO. Microsoft Enterprise SSO plug-in will show its own user interface on top of the application that is currently active.
### Required network configuration

The Microsoft Enterprise SSO plug-in relies on Apple's [enterprise SSO](https://developer.apple.com/documentation/authenticationservices) framework. Apple's enterprise SSO framework ensures that only an approved SSO plug-in can work for each identity provider by utilizing a technology called [associated domains](https://developer.apple.com/documentation/xcode/supporting-associated-domains). To verify the identity of the SSO plug-in, each Apple device will send a network request to an endpoint owned by the identity provider and read information about approved SSO plug-ins. In addition to reaching out directly to the identity provider, Apple has also implemented additional caching for this information.
Other Apple URLs that may need to be allowed are documented in their support art
You can use Intune as your MDM service to ease configuration of the Microsoft Enterprise SSO plug-in. For example, you can use Intune to enable the plug-in and add old apps to an allowlist so they get SSO.
-For more information, see the [Intune configuration documentation](/intune/configuration/ios-device-features-settings).
+For more information, see the [Intune configuration documentation](/mem/intune/configuration/ios-device-features-settings).
## Use the SSO plug-in in your application
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
For more information about the application model, see the following articles:
* For more information on application objects and service principals in the Microsoft identity platform, see [How and why applications are added to Microsoft Entra ID](how-applications-are-added.md).
* For more information on single-tenant apps and multi-tenant apps, see [Tenancy in Microsoft Entra ID](single-and-multi-tenant-apps.md).
-* For more information on how Microsoft Entra ID also provides Azure Active Directory B2C so that organizations can sign in users, typically customers, by using social identities like a Google account, see [Azure Active Directory B2C documentation](../../active-directory-b2c/index.yml).
+* For more information on how Microsoft Entra ID also provides Azure Active Directory B2C so that organizations can sign in users, typically customers, by using social identities like a Google account, see [Azure Active Directory B2C documentation](/azure/active-directory-b2c/).
active-directory Authentication National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-national-cloud.md
Learn how to use the [Microsoft Authentication Library (MSAL) in a national clou
National cloud documentation:

-- [Azure Government](../../azure-government/index.yml)
+- [Azure Government](/azure/azure-government/)
- [Microsoft Azure operated by 21Vianet](/azure/china/)
-- [Azure Germany (Closed on October 29, 2021)](../../germany/index.yml)
+- [Azure Germany (Closed on October 29, 2021)](/azure/germany/)
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
Role-based access control (RBAC) is possibly the most common approach to enforci
In advanced RBAC implementations, roles may be mapped to collections of permissions, where a permission describes a granular action or activity that can be performed. Roles are then configured as combinations of permissions. Compute the overall permission set for an entity by combining the permissions granted to the various roles the entity is assigned. A good example of this approach is the RBAC implementation that governs access to resources in Azure subscriptions.

> [!NOTE]
-> [Application RBAC](./custom-rbac-for-developers.md) differs from [Azure RBAC](../../role-based-access-control/overview.md) and [Microsoft Entra RBAC](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps manage Azure resources. Microsoft Entra RBAC allows management of Microsoft Entra resources.
+> [Application RBAC](./custom-rbac-for-developers.md) differs from [Azure RBAC](/azure/role-based-access-control/overview) and [Microsoft Entra RBAC](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps manage Azure resources. Microsoft Entra RBAC allows management of Microsoft Entra resources.
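A minimal sketch of the role-to-permission mapping described before the note (role and permission names are illustrative): the effective permission set is the union of the permissions of every assigned role.

```python
# Roles map to collections of granular permissions.
ROLE_PERMISSIONS = {
    "Reader":      {"tasks.read"},
    "Contributor": {"tasks.read", "tasks.write"},
    "Admin":       {"tasks.read", "tasks.write", "tasks.delete"},
}

def effective_permissions(assigned_roles):
    # Combine the permissions granted by every role the entity holds.
    perms = set()
    for role in assigned_roles:
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

assert effective_permissions(["Reader", "Contributor"]) == {"tasks.read", "tasks.write"}
```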
### Attribute-based access control
One advantage of ABAC is that more granular and dynamic access control can be ac
One method for achieving ABAC with Microsoft Entra ID is using [dynamic groups](../enterprise-users/groups-create-rule.md). Dynamic groups allow administrators to dynamically assign users to groups based on specific user attributes with desired values. For example, an Authors group could be created where all users with the job title Author are dynamically assigned to the Authors group. Dynamic groups can be used in combination with RBAC for authorization where you map roles to groups and dynamically assign users to groups.
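For the Authors example, a dynamic group can be created through Microsoft Graph roughly as follows. A sketch only: it assumes a token with `Group.ReadWrite.All` acquired through any standard flow, and the group names are illustrative.

```python
import requests

token = "<access token with Group.ReadWrite.All>"  # placeholder

# Users whose jobTitle is "Author" are assigned to the group automatically.
group = {
    "displayName": "Authors",
    "mailEnabled": False,
    "mailNickname": "authors",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": 'user.jobTitle -eq "Author"',
    "membershipRuleProcessingState": "On",
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {token}"},
    json=group,
)
resp.raise_for_status()
```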
-[Azure ABAC](../../role-based-access-control/conditions-overview.md) is an example of an ABAC solution that is available today. Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions.
+[Azure ABAC](/azure/role-based-access-control/conditions-overview) is an example of an ABAC solution that is available today. Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions.
## Implementing authorization
It's not strictly necessary for developers to embed authorization logic entirely
- To learn about custom role-based access control implementation in applications, see [Role-based access control for application developers](./custom-rbac-for-developers.md).
- To learn about the process of registering your application so it can integrate with the Microsoft identity platform, see [Application model](./application-model.md).
-- For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Microsoft Entra login](../../app-service/configure-authentication-provider-aad.md).
+- For an example of configuring simple authentication-based authorization, see [Configure your App Service or Azure Functions app to use Microsoft Entra login](/azure/app-service/configure-authentication-provider-aad).
- To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
In this step, you create an HTTP trigger function API in the Azure portal. The f
| Setting | Suggested value | Description |
| --- | --- | --- |
| **Subscription** | Your subscription | The subscription under which the new function app will be created. |
- | **[Resource Group](../../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Select and existing resource group, or name for the new one in which you'll create your function app. |
+ | **[Resource Group](/azure/azure-resource-manager/management/overview)** | *myResourceGroup* | Select an existing resource group, or enter a name for the new one in which you'll create your function app. |
| **Function App name** | Globally unique name | A name that identifies the new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
| **Publish** | Code | Option to publish code files or a Docker container. For this tutorial, select **Code**. |
| **Runtime stack** | .NET | Your preferred programming language. For this tutorial, select **.NET**. |
active-directory Custom Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-troubleshoot.md
In order to troubleshoot issues with your custom claims provider REST API endpoi
You can also use [Microsoft Entra sign-in logs](../reports-monitoring/concept-sign-ins.md) in addition to your REST API logs, and hosting environment diagnostics solutions. Using Microsoft Entra sign-in logs, you can find errors, which may affect the users' sign-ins. The Microsoft Entra sign-in logs provide information about the HTTP status, error code, execution duration, and number of retries that occurred when the API was called by Microsoft Entra ID.
-Microsoft Entra sign-in logs also integrate with [Azure Monitor](../../azure-monitor/index.yml). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceed a certain threshold that you choose.
+Microsoft Entra sign-in logs also integrate with [Azure Monitor](/azure/azure-monitor/). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceed a certain threshold that you choose.
To access the Microsoft Entra sign-in logs:
One of the most common issues is that your custom claims provider API doesn't re
1. If your API accesses any downstream APIs, cache the access token used to call these APIs, so a new token doesn't have to be acquired on every execution.
1. Performance issues are often related to downstream services. Add logging that records the processing time of calls to any downstream services.
-1. If you use a cloud provider to host your API, use a hosting plan that keeps the API always "warm". For Azure Functions, it can be either [the Premium plan or Dedicated plan](../../azure-functions/functions-scale.md).
+1. If you use a cloud provider to host your API, use a hosting plan that keeps the API always "warm". For Azure Functions, it can be either [the Premium plan or Dedicated plan](/azure/azure-functions/functions-scale).
1. [Run automated integration tests](test-automate-integration-testing.md) for your authentications. You can also use Postman or other tools to test just your API performance.

## Next steps
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
# Role-based access control for application developers
-Role-based access control (RBAC) allows certain users or groups to have specific permissions to access and manage resources. Application RBAC differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Microsoft Entra role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which is used to help manage Azure resources. Microsoft Entra RBAC is used to manage Microsoft Entra resources. This article explains application-specific RBAC. For information about implementing application-specific RBAC, see [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-apps.md).
+Role-based access control (RBAC) allows certain users or groups to have specific permissions to access and manage resources. Application RBAC differs from [Azure role-based access control](/azure/role-based-access-control/overview) and [Microsoft Entra role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which is used to help manage Azure resources. Microsoft Entra RBAC is used to manage Microsoft Entra resources. This article explains application-specific RBAC. For information about implementing application-specific RBAC, see [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-apps.md).
## Role definitions
App roles and groups both store information about user assignments in the Micros
Using custom storage gives developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Microsoft Entra ID. If role information is maintained in a custom data store, applications must retrieve the roles themselves, typically through extensibility points defined in the middleware available to the platform used to develop the application. Developers are responsible for properly securing the custom data store.
-Using [Azure AD B2C Custom policies](../../active-directory-b2c/custom-policy-overview.md) it's possible to interact with custom data stores and to include custom claims within a token.
+Using [Azure AD B2C custom policies](/azure/active-directory-b2c/custom-policy-overview), it's possible to interact with custom data stores and to include custom claims within a token.
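As a rough sketch of the extensibility-point approach described above, assuming ASP.NET Core's OpenID Connect middleware and a hypothetical `IRoleStore` abstraction that you implement over your custom data store:

```csharp
// Sketch only: enrich the signed-in principal with roles read from a
// custom data store, using the middleware's OnTokenValidated event.
using System.Collections.Generic;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

// Hypothetical abstraction over your custom role store.
public interface IRoleStore
{
    Task<IReadOnlyList<string>> GetRolesAsync(string userObjectId);
}

public static class RoleEnrichment
{
    public static void Attach(OpenIdConnectOptions options, IRoleStore roleStore)
    {
        options.Events.OnTokenValidated = async context =>
        {
            // The user's object ID; depending on your claim-mapping settings,
            // "oid" may be surfaced under its long mapped claim type instead.
            string? userId = context.Principal?.FindFirstValue("oid")
                ?? context.Principal?.FindFirstValue(
                    "http://schemas.microsoft.com/identity/claims/objectidentifier");
            if (userId is null) return;

            var roles = await roleStore.GetRolesAsync(userId);
            var identity = (ClaimsIdentity)context.Principal!.Identity!;
            foreach (var role in roles)
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }
        };
    }
}
```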
## Choose an approach
Although either app roles or groups can be used for authorization, key differenc
## Next steps
-- [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md)
+- [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices)
- To learn about proper authorization using token claims, see [Secure applications and APIs by validating claims](./claims-validation.md)
active-directory Deploy Web App Authentication Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/deploy-web-app-authentication-pipeline.md
You'll learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- An Azure DevOps organization. [Create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up).
- To use Microsoft-hosted agents, your Azure DevOps organization must have access to Microsoft-hosted parallel jobs. [Check your parallel jobs and request a free grant](/azure/devops/pipelines/troubleshooting/troubleshooting#check-for-available-parallel-jobs).
-- A Microsoft Entra [tenant](/azure/active-directory/develop/quickstart-create-new-tenant).
+- A Microsoft Entra [tenant](./quickstart-create-new-tenant.md).
- A [GitHub account](https://github.com) and Git [set up locally](https://docs.github.com/en/get-started/quickstart/set-up-git).
- .NET 6.0 SDK or later.
Save your changes and run the pipeline.
Next, add a stage to the pipeline that deploys Azure resources. The pipeline uses an [inline script](/azure/devops/pipelines/scripts/powershell) to create the App Service instance. In a later step, the inline script creates a Microsoft Entra app registration for App Service authentication. An Azure CLI bash script is used because Azure Resource Manager (and Azure Pipelines tasks) can't create an app registration.
-The inline script runs in the context of the pipeline, assign the [Application.Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) role to the app so the script can create app registrations:
+The inline script runs in the context of the pipeline. Assign the [Application Administrator](../roles/permissions-reference.md#application-administrator) role to the app so that the script can create app registrations:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
1. Browse to **Identity** > **Roles & admins** > **Roles & admins**.
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-glossary.md
Many of the terms in this glossary are related to the OAuth 2.0 and OpenID Conne
[AAD-App-Manifest]:reference-app-manifest.md [AAD-App-SP-Objects]:app-objects-and-service-principals.md [AAD-Auth-Scenarios]:./authentication-vs-authorization.md
-[AAD-Dev-Guide]:../develop.md
[Graph-Perm-Scopes]: /graph/permissions-reference [Graph-App-Resource]: /graph/api/resources/application [Graph-Sp-Resource]: /graph/api/resources/serviceprincipal
Many of the terms in this glossary are related to the OAuth 2.0 and OpenID Conne
[AAD-Multi-Tenant-Overview]:howto-convert-app-to-be-multi-tenant.md [AAD-Security-Token-Claims]: ./authentication-vs-authorization.md#claims-in-azure-ad-security-tokens [AAD-Tokens-Claims]:access-tokens.md
-[AAD-RBAC]: ../../role-based-access-control/role-assignments-portal.md
+[AAD-RBAC]: /azure/role-based-access-control/role-assignments-portal
[JWT]: https://tools.ietf.org/html/rfc7519 [Microsoft-Graph]: https://developer.microsoft.com/graph [O365-Perm-Ref]: /graph/permissions-reference
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
The table below will show all corner cases where ACRS is added to the token's cl
- [Granular Conditional Access for sensitive data and actions (Blog)](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775) - [Zero trust with the Microsoft identity platform](/security/zero-trust/identity-developer)-- [Building Zero Trust ready apps with the Microsoft identity platform](/security/zero-trust/identity-developer)
+- [Building Zero Trust ready apps with the Microsoft identity platform](/security/zero-trust/develop/identity)
- [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context) - [authenticationContextClassReference resource type - MS Graph](/graph/api/conditionalaccessroot-list-authenticationcontextclassreferences) - [Claims challenge, claims request, and client capabilities in the Microsoft identity platform](claims-challenge.md)-- [Using authentication context with Microsoft Purview Information Protection and SharePoint](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
+- [Using authentication context with Microsoft Purview Information Protection and SharePoint](/purview/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)
active-directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md
Like application objects, service principals can also be created through multipl
- When users sign in to a third-party application integrated with Microsoft Entra ID - During sign-in, users are asked to give permission to the application to access their profile and other permissions. The first person to give consent causes a service principal that represents the application to be added to the directory.-- When users sign in to Microsoft online services like [Microsoft 365](https://products.office.com/)
+- When users sign in to Microsoft online services like Microsoft 365.
- When you subscribe to Microsoft 365 or begin a trial, one or more service principals are created in the directory representing the various services that are used to deliver all of the functionality associated with Microsoft 365. - Some Microsoft 365 services like SharePoint create service principals on an ongoing basis to allow secure communication between components including workflows. - When an admin adds an application from the app gallery (this will also create an underlying app object)
active-directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-to-integrate.md
There are several ways for your application to integrate with the Microsoft iden
**Reduce sign-in friction and reduce support costs.** By using the Microsoft identity platform to sign in to your application, your users won't have one more name and password to remember. As a developer, you'll have one less password to store and protect. Not having to handle forgotten password resets may be a significant savings by itself. The Microsoft identity platform powers sign-in for some of the world's most popular cloud applications, including Microsoft 365 and Microsoft Azure. With hundreds of millions of users from millions of organizations, chances are your user is already signed in to the Microsoft identity platform. Learn more about [adding support for the Microsoft identity platform sign-in](./authentication-vs-authorization.md).
-**Simplify sign up for your application.** During sign up for your application, the Microsoft identity platform can send essential information about a user so that you can pre-fill your sign up form or eliminate it completely. Users can sign up for your application using their Microsoft Entra account via a familiar consent experience similar to those found in social media and mobile applications. Any user can sign up and sign in to an application that is integrated with the Microsoft identity platform without requiring IT involvement. Learn more about [signing-up your application for Microsoft Entra account login](../../app-service/configure-authentication-provider-aad.md).
+**Simplify sign-up for your application.** During sign-up for your application, the Microsoft identity platform can send essential information about a user so that you can pre-fill your sign-up form or eliminate it completely. Users can sign up for your application using their Microsoft Entra account via a familiar consent experience similar to those found in social media and mobile applications. Any user can sign up and sign in to an application that is integrated with the Microsoft identity platform without requiring IT involvement. Learn more about [signing up your application for Microsoft Entra account login](/azure/app-service/configure-authentication-provider-aad).
### Browse for users, manage user provisioning, and control access to your application
Integration with the Microsoft identity platform comes with benefits that do not
### Advanced security features
-**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](/azure/multi-factor-authentication/).
+**Multi-factor authentication.** The Microsoft identity platform provides native multi-factor authentication. IT administrators can require multi-factor authentication to access your application, so that you do not have to code this support yourself. Learn more about [Multi-Factor Authentication](../authentication/index.yml).
**Anomalous sign-in detection.** The Microsoft identity platform processes more than a billion sign-ins a day, while using machine learning algorithms to detect suspicious activity and notify IT administrators of possible problems. By supporting the Microsoft identity platform sign-in, your application gets the benefit of this protection. Learn more about [viewing Microsoft Entra reports](../reports-monitoring/overview-monitoring-health.md).
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
To complete this article, you must have sufficient permissions in both your Micr
The easiest way to check whether your account has adequate permissions is through the portal. See [Check required permission](howto-create-service-principal-portal.md#permissions-required-for-registering-an-app).
## Assign the application to a role
-To access resources in your subscription, you must assign the application to a role. Decide which role offers the right permissions for the application. To learn about the available roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+To access resources in your subscription, you must assign the application to a role. Decide which role offers the right permissions for the application. To learn about the available roles, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
You can set the scope at the level of the subscription, resource group, or resource. Permissions are inherited to lower levels of scope. For example, adding an application to the *Reader* role for a resource group means it can read the resource group and any resources it contains. To allow the application to execute actions like rebooting, starting, and stopping instances, select the *Contributor* role.
## Create service principal with self-signed certificate
-The following example covers a simple scenario. It uses [New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal) to create a service principal with a self-signed certificate, and uses [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to assign the [Reader](../../role-based-access-control/built-in-roles.md#reader) role to the service principal. The role assignment is scoped to your currently selected Azure subscription. To select a different subscription, use [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext).
+The following example covers a simple scenario. It uses [New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal) to create a service principal with a self-signed certificate, and uses [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to assign the [Reader](/azure/role-based-access-control/built-in-roles#reader) role to the service principal. The role assignment is scoped to your currently selected Azure subscription. To select a different subscription, use [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
> [!NOTE] > The New-SelfSignedCertificate cmdlet and the PKI module are currently not supported in PowerShell Core.
Connect-AzAccount -ServicePrincipal `
## Create service principal with certificate from Certificate Authority
-The following example uses a certificate issued from a Certificate Authority to create service principal. The assignment is scoped to the specified Azure subscription. It adds the service principal to the [Reader](../../role-based-access-control/built-in-roles.md#reader) role. If an error occurs during the role assignment, it retries the assignment.
+The following example uses a certificate issued from a Certificate Authority to create a service principal. The assignment is scoped to the specified Azure subscription. It adds the service principal to the [Reader](/azure/role-based-access-control/built-in-roles#reader) role. If an error occurs during the role assignment, it retries the assignment.
```powershell Param (
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
You've created your Microsoft Entra application and service principal.
## Assign a role to the application
-To access resources in your subscription, you must assign a role to the application. Decide which role offers the right permissions for the application. To learn about the available roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+To access resources in your subscription, you must assign a role to the application. Decide which role offers the right permissions for the application. To learn about the available roles, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
You can set the scope at the level of the subscription, resource group, or resource. Permissions are inherited to lower levels of scope.
Once you've saved the client secret, the value of the client secret is displayed
## Configure access policies on resources
-You might need to configure extra permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](../../key-vault/general/security-features.md#privileged-access) to give your application access to keys, secrets, or certificates.
+You might need to configure extra permissions on resources that your application needs to access. For example, you must also [update a key vault's access policies](/azure/key-vault/general/security-features#privileged-access) to give your application access to keys, secrets, or certificates.
To configure access policies:
To configure access policies:
## Next steps
- Learn how to use [Azure PowerShell](howto-authenticate-service-principal-powershell.md) or [Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli) to create a service principal.
-- To learn about specifying security policies, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
-- For a list of available actions that can be granted or denied to users, see [Azure Resource Manager Resource Provider operations](../../role-based-access-control/resource-provider-operations.md)
+- To learn about specifying security policies, see [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-portal).
+- For a list of available actions that can be granted or denied to users, see [Azure Resource Manager Resource Provider operations](/azure/role-based-access-control/resource-provider-operations).
- For information about working with app registrations by using **Microsoft Graph**, see the [Applications](/graph/api/resources/application) API reference.
active-directory Howto Get List Of All Auth Library Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-auth-library-apps.md
Workbooks are a set of queries that collect and visualize information that is av
By default, Microsoft Entra ID doesn't send sign-in events to Azure Monitor, but the Sign-ins workbook in Azure Monitor requires them.
-Configure AD to send sign-in events to Azure Monitor by following the steps in [Integrate your Microsoft Entra sign-in and audit logs with Azure Monitor](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). In the **Diagnostic settings** configuration step, select the **SignInLogs** check box.
+Configure Microsoft Entra ID to send sign-in events to Azure Monitor by following the steps in [Integrate your Microsoft Entra sign-in and audit logs with Azure Monitor](../reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md). In the **Diagnostic settings** configuration step, select the **SignInLogs** check box.
Sign-in events that occurred *before* you configured Microsoft Entra ID to send events to Azure Monitor won't appear in the Sign-ins workbook.
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
Use the following checklist to ensure that your application is effectively integ
![checkbox](./medi).
-![checkbox](./medi) to store and regularly rotate your credentials.
+![checkbox](./medi) or [Azure Key Vault](/azure/key-vault/general/basic-concepts) to store and regularly rotate your credentials.
![checkbox](./medi#permission-types). Only use application permissions if necessary; use delegated permissions where possible. For a full list of Microsoft Graph permissions, see this [permissions reference](/graph/permissions-reference).
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
When an app registration has a verified publisher, it means that the publisher o
## Quickstart If you are already enrolled in the [Cloud Partner Program (CPP)](/partner-center/intro-to-cloud-partner-program-membership) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away:
-1. Sign into the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md)
+1. Sign into the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../authentication/concept-mfa-licensing.md)
1. Choose an app and click **Branding & properties**.
For more details on specific benefits, requirements, and frequently asked questi
## Mark your app as publisher verified
Make sure you meet the [prerequisites](publisher-verification-overview.md#requirements), and then follow these steps to mark your app(s) as Publisher Verified.
-1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Microsoft Entra) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center.
+1. Sign in using [multi-factor authentication](../authentication/concept-mfa-licensing.md) to an organizational (Microsoft Entra) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center.
- The Microsoft Entra user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
active-directory Msal Android B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-b2c.md
# Use MSAL for Android with B2C
-The Microsoft Authentication Library (MSAL) enables application developers to authenticate users with social and local identities by using [Azure Active Directory B2C (Azure AD B2C)](../../active-directory-b2c/index.yml). Azure AD B2C is an identity management service. Use it to customize and control how customers sign up, sign in, and manage their profiles when they use your applications.
+The Microsoft Authentication Library (MSAL) enables application developers to authenticate users with social and local identities by using [Azure Active Directory B2C (Azure AD B2C)](/azure/active-directory-b2c/). Azure AD B2C is an identity management service. Use it to customize and control how customers sign up, sign in, and manage their profiles when they use your applications.
## Choosing a compatible authorization_user_agent
The B2C identity management system supports authentication with a number of social account providers such as Google, Facebook, Twitter, and Amazon. If you plan to support such account types in your app, it is recommended that you configure your MSAL public client application to use either the `DEFAULT` or `BROWSER` value when specifying your manifest's [`authorization_user_agent`](msal-configuration.md#authorization_user_agent) due to restrictions prohibiting use of WebView-based authentication with some external identity providers.
The configuration file for the app would declare two `authorities`. One for each
} ```
-The `redirect_uri` must be registered in the app configuration, and also in `AndroidManifest.xml` to support redirection during the [authorization code grant flow](../../active-directory-b2c/authorization-code-flow.md).
+The `redirect_uri` must be registered in the app configuration, and also in `AndroidManifest.xml` to support redirection during the [authorization code grant flow](/azure/active-directory-b2c/authorization-code-flow).
## Initialize IPublicClientApplication
String tenantId = account.getTenantId();
### IdToken claims
-Claims returned in the IdToken are populated by the Security Token Service (STS), not by MSAL. Depending on the identity provider (IdP) used, some claims may be absent. Some IdPs don't currently provide the `preferred_username` claim. Because this claim is used by MSAL for caching, a placeholder value, `MISSING FROM THE TOKEN RESPONSE`, is used in its place. For more information on B2C IdToken claims, see [Overview of tokens in Azure Active Directory B2C](../../active-directory-b2c/tokens-overview.md#claims).
+Claims returned in the IdToken are populated by the Security Token Service (STS), not by MSAL. Depending on the identity provider (IdP) used, some claims may be absent. Some IdPs don't currently provide the `preferred_username` claim. Because this claim is used by MSAL for caching, a placeholder value, `MISSING FROM THE TOKEN RESPONSE`, is used in its place. For more information on B2C IdToken claims, see [Overview of tokens in Azure Active Directory B2C](/azure/active-directory-b2c/tokens-overview#claims).
## Managing accounts and policies
When you renew tokens for a policy with `acquireTokenSilent`, provide the same `
## Next steps
-Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](../../active-directory-b2c/overview.md)
+Learn more about Azure Active Directory B2C (Azure AD B2C) at [What is Azure Active Directory B2C?](/azure/active-directory-b2c/overview)
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
These Microsoft applications support Microsoft Entra shared device mode:
- [Microsoft Teams](/microsoftteams/platform/) - [Microsoft Managed Home Screen](/mem/intune/apps/app-configuration-managed-home-screen-app) app for Android Enterprise-- [Microsoft Edge](/microsoft-edge)
+- [Microsoft Edge](/microsoft-edge/)
- [Outlook](/mem/intune/apps/app-configuration-policies-outlook)-- [Microsoft Power Apps](/power-apps)
+- [Microsoft Power Apps](/power-apps/)
- [Microsoft Power BI Mobile](/power-bi/consumer/mobile/mobile-app-shared-device-mode) (preview)-- [Microsoft Viva Engage](/viva/engage/overview) (previously [Yammer](/yammer))
+- [Microsoft Viva Engage](/viva/engage/overview) (previously [Yammer](/viva/engage/overview))
## Third-party MDMs that support shared device mode
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
The following constraints apply to the applications using the ROPC flow:
- ROPC is **supported** in .NET desktop and .NET Core applications.
- ROPC is **unsupported** in Universal Windows Platform (UWP) applications.
- ROPC in Azure AD B2C is supported _only_ for local accounts.
- - For information about ROPC in MSAL.NET and Azure AD B2C, see [Using ROPC with Azure AD B2C](./msal-net-b2c-considerations.md#resource-owner-password-credentials-ropc).
+ - For information about ROPC in MSAL.NET and Azure AD B2C, see [Using ROPC with Azure AD B2C](/entra/msal/dotnet/acquiring-tokens/desktop-mobile/social-identities#resource-owner-password-credentials-ropc).
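For illustration only, a minimal MSAL.NET sketch of the ROPC flow; the client ID, tenant, and credentials below are placeholders, and an interactive flow should be preferred whenever possible:

```csharp
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000") // placeholder client ID
    .WithAuthority("https://login.microsoftonline.com/contoso.onmicrosoft.com")
    .Build();

// ROPC sends the user's credentials straight to the token endpoint;
// no UI is shown, so anything that needs interaction (such as MFA) fails.
AuthenticationResult result = await app
    .AcquireTokenByUsernamePassword(
        new[] { "User.Read" },
        "user@contoso.com",            // placeholder username
        "password-from-secure-input")  // never hard-code a real password
    .ExecuteAsync();

Console.WriteLine($"Token acquired; expires {result.ExpiresOn}.");
```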
## Integrated Windows authentication (IWA)
active-directory Msal B2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-b2c-overview.md
# Use the Microsoft Authentication Library for JavaScript to work with Azure AD B2C
-The [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) enables JavaScript developers to authenticate users with social and local identities using [Azure Active Directory B2C](../../active-directory-b2c/overview.md) (Azure AD B2C).
+The [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) enables JavaScript developers to authenticate users with social and local identities using [Azure Active Directory B2C](/azure/active-directory-b2c/overview) (Azure AD B2C).
By using Azure AD B2C as an identity management service, you can customize and control how your customers sign up, sign in, and manage their profiles when they use your applications.
Azure AD B2C also enables you to brand and customize the UI that your applicatio
## Supported app types and scenarios
-MSAL.js enables [single-page applications](../../active-directory-b2c/application-types.md#single-page-applications) to sign-in users with Azure AD B2C using the [authorization code flow with PKCE](../../active-directory-b2c/authorization-code-flow.md) grant. With MSAL.js and Azure AD B2C:
+MSAL.js enables [single-page applications](/azure/active-directory-b2c/application-types#single-page-applications) to sign in users with Azure AD B2C using the [authorization code flow with PKCE](/azure/active-directory-b2c/authorization-code-flow) grant. With MSAL.js and Azure AD B2C:
- Users **can** authenticate with their social and local identities. - Users **can** be authorized to access Azure AD B2C protected resources (but not Microsoft Entra protected resources).
For more information, see: [Working with Azure AD B2C](https://github.com/AzureA
Follow the tutorial on how to: -- [Sign in users with Azure AD B2C in a single-page application](../../active-directory-b2c/configure-authentication-sample-spa-app.md)-- [Call an Azure AD B2C protected web API](../../active-directory-b2c/enable-authentication-web-api.md)
+- [Sign in users with Azure AD B2C in a single-page application](/azure/active-directory-b2c/configure-authentication-sample-spa-app)
+- [Call an Azure AD B2C protected web API](/azure/active-directory-b2c/enable-authentication-web-api)
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-ios-shared-devices.md
To take advantage of shared device mode feature, app developers and cloud device
Your device needs to be configured to support shared device mode. It must have iOS 14+ installed and be MDM-enrolled. MDM configuration also needs to enable [Microsoft Enterprise SSO plug-in for Apple devices](apple-sso-plugin.md).
-Microsoft Intune supports zero-touch provisioning for devices in Microsoft Entra shared device mode, which means that the device can be set up and enrolled in Intune with minimal interaction from the frontline worker. To set up device in shared device mode when using Microsoft Intune as the MDM, see [Set up enrollment for devices in Microsoft Entra shared device mode](/mem/intune/enrollment/automated-device-enrollment-shared-device-mode/).
+Microsoft Intune supports zero-touch provisioning for devices in Microsoft Entra shared device mode, which means that the device can be set up and enrolled in Intune with minimal interaction from the frontline worker. To set up a device in shared device mode when using Microsoft Intune as the MDM, see [Set up enrollment for devices in Microsoft Entra shared device mode](/mem/intune/enrollment/automated-device-enrollment-shared-device-mode).
> [!IMPORTANT] > We are working with third-party MDMs to support shared device mode. We will update the list of third-party MDMs as they start supporting the shared device mode.
active-directory Msal National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-national-cloud.md
Before you start, make sure that you meet these prerequisites.
### Choose the appropriate identities
-[Azure Government](../../azure-government/index.yml) applications can use Microsoft Entra Government identities and Microsoft Entra Public identities to authenticate users. Because you can use any of these identities, decide which authority endpoint you should choose for your scenario:
+[Azure Government](/azure/azure-government/) applications can use Microsoft Entra Government identities and Microsoft Entra Public identities to authenticate users. Because you can use any of these identities, decide which authority endpoint you should choose for your scenario:
- Microsoft Entra Public: Commonly used if your organization already has a Microsoft Entra Public tenant to support Microsoft 365 (Public or GCC) or another application.
- Microsoft Entra Government: Commonly used if your organization already has a Microsoft Entra Government tenant to support Office 365 (GCC High or DoD) or is creating a new tenant in Microsoft Entra Government.
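Whichever identities you choose, the authority is ultimately a configuration detail in your application. As a minimal MSAL.NET sketch targeting the Microsoft Entra Government authority (the client ID is a placeholder):

```csharp
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000") // placeholder client ID
    // AzureUsGovernment resolves to https://login.microsoftonline.us;
    // use AzureCloudInstance.AzurePublic for Microsoft Entra Public identities.
    .WithAuthority(AzureCloudInstance.AzureUsGovernment,
                   AadAuthorityAudience.AzureAdMultipleOrgs)
    .Build();
```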
After you decide, a special consideration is where you perform your app registra
### Get an Azure Government subscription
-To get an Azure Government subscription, see [Managing and connecting to your subscription in Azure Government](../../azure-government/compare-azure-government-global-azure.md).
+To get an Azure Government subscription, see [Managing and connecting to your subscription in Azure Government](/azure/azure-government/compare-azure-government-global-azure).
If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
See [National cloud authentication endpoints](authentication-national-cloud.md)
National cloud documentation: -- [Azure Government](../../azure-government/index.yml)
+- [Azure Government](/azure/azure-government/)
- [Microsoft Azure operated by 21Vianet](/azure/china/)-- [Azure Germany (closes on October 29, 2021)](../../germany/index.yml)
+- [Azure Germany (closes on October 29, 2021)](/azure/germany/)
active-directory Msal Net Migration Ios Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-ios-broker.md
They enable:
- Single sign-on. - Device identification, which is required by some [Conditional Access policies](../conditional-access/overview.md). For more information, see [Device management](../conditional-access/concept-conditional-access-conditions.md#device-platforms).-- Application identification verification, which is also required in some enterprise scenarios. For more information, see [Intune mobile application management (MAM)](/intune/mam-faq).
+- Application identification verification, which is also required in some enterprise scenarios. For more information, see [Intune mobile application management (MAM)](/mem/intune/apps/mam-faq).
## Migrate from ADAL to MSAL
For more information about enabling keychain access, see [Enable keychain access
## Next steps
-Learn about [Xamarin iOS-specific considerations with MSAL.NET](msal-net-xamarin-ios-considerations.md).
+Learn about [Xamarin iOS-specific considerations with MSAL.NET](msal-net-xamarin-ios-considerations.md).
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
const msalTokenCache = publicClientApplication.getTokenCache();
Importantly, your previous token cache with ADAL Node won't be transferable to MSAL Node, since cache schemas are incompatible. However, you may use the valid refresh tokens your app obtained previously with ADAL Node in MSAL Node. See the section on [refresh tokens](#remove-logic-around-refresh-tokens) for more.
-You can also write your cache to disk by providing your own **cache plugin**. The cache plugin must implement the interface [ICachePlugin](/javascript/api/@azure/msal-node/icacheplugin). Like logging, caching is part of the configuration options and is created with the initialization of the MSAL Node instance:
+You can also write your cache to disk by providing your own **cache plugin**. The cache plugin must implement the interface [ICachePlugin](/javascript/api/%40azure/msal-node/icacheplugin). Like logging, caching is part of the configuration options and is created with the initialization of the MSAL Node instance:
```javascript const msal = require('@azure/msal-node');
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
- Title: Tutorial - Web app accesses Microsoft Graph as the user
-description: In this tutorial, you learn how to access data in Microsoft Graph from a web app for a signed-in user.
------- Previously updated : 09/15/2023---
-#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph from a web app for a signed-in user.
---
-# Tutorial: Access Microsoft Graph from a secured app as the user
-
-Learn how to access Microsoft Graph from a web app running on Azure App Service.
--
-You want to add access to Microsoft Graph from your web app and perform some action as the signed-in user. This section describes how to grant delegated permissions to the web app and get the signed-in user's profile information from Microsoft Entra ID.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Grant delegated permissions to a web app.
-> * Call Microsoft Graph from a web app for a signed-in user.
--
-## Prerequisites
-
-* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](multi-service-web-app-authentication-app-service.md).
-
-## Grant front-end access to call Microsoft Graph
-
-Now that you've enabled authentication and authorization on your web app, the web app is registered with the Microsoft identity platform and is backed by a Microsoft Entra application. In this step, you give the web app permissions to access Microsoft Graph for the user. (Technically, you give the web app's Microsoft Entra application the permissions to access the Microsoft Graph Microsoft Entra application for the user.)
-
-In the [Microsoft Entra admin center](https://entra.microsoft.com) menu, select **Applications**.
-
-Select **App registrations** > **Owned applications** > **View all applications in this directory**. Select your web app name, and then select **API permissions**.
-
-Select **Add a permission**, and then select Microsoft APIs and Microsoft Graph.
-
-Select **Delegated permissions**, and then select **User.Read** from the list. Select **Add permissions**.
-
-## Configure App Service to return a usable access token
-
-The web app now has the required permissions to access Microsoft Graph as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing Microsoft Graph. For this step, you need to add the User.Read scope for the downstream service (Microsoft Graph): `https://graph.microsoft.com/User.Read`.
-
-> [!IMPORTANT]
-> If you don't configure App Service to return a usable access token, you receive a ```CompactToken parsing failed with error code: 80049217``` error when you call Microsoft Graph APIs in your code.
-
-# [Azure Resource Explorer](#tab/azure-resource-explorer)
-Go to [Azure Resource Explorer](https://resources.azure.com/) and using the resource tree, locate your web app. The resource URL should be similar to `https://resources.azure.com/subscriptions/subscriptionId/resourceGroups/SecureWebApp/providers/Microsoft.Web/sites/SecureWebApp20200915115914`.
-
-The Azure Resource Explorer is now opened with your web app selected in the resource tree. At the top of the page, select **Read/Write** to enable editing of your Azure resources.
-
-In the left browser, drill down to **config** > **authsettingsV2**.
-
-In the **authsettingsV2** view, select **Edit**. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]` .
-
-```json
-"identityProviders": {
- "azureActiveDirectory": {
- "enabled": true,
- "login": {
- "loginParameters":[
- "response_type=code id_token",
- "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
- ]
- }
- }
- }
-},
-```
-
-Save your settings by selecting **PUT**. This setting can take several minutes to take effect. Your web app is now configured to access Microsoft Graph with a proper access token. If you don't, Microsoft Graph returns an error saying that the format of the compact token is incorrect.
-
-# [Azure CLI](#tab/azure-cli)
-
-Use the Azure CLI to call the App Service Web App REST APIs to [get](/rest/api/appservice/web-apps/get-auth-settings) and [update](/rest/api/appservice/web-apps/update-auth-settings) the auth configuration settings so your web app can call Microsoft Graph. Open a command window and login to Azure CLI:
-
-```azurecli
-az login
-```
-
-Get your existing 'config/authsettingsv2' settings and save to a local *authsettings.json* file.
-
-```azurecli
-az rest --method GET --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2/list?api-version=2020-06-01' > authsettings.json
-```
-
-Open the authsettings.json file using your preferred text editor. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]` .
-
-```json
-"identityProviders": {
- "azureActiveDirectory": {
- "enabled": true,
- "login": {
- "loginParameters":[
- "response_type=code id_token",
- "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
- ]
- }
- }
- }
-},
-```
-
-Save your changes to the *authsettings.json* file and upload the local settings to your web app:
-
-```azurecli
-az rest --method PUT --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2?api-version=2020-06-01' --body @./authsettings.json
-```
--
-## Call Microsoft Graph
-
-Your web app now has the required permissions and also adds Microsoft Graph's client ID to the login parameters.
-
-# [C#](#tab/programming-language-csharp)
-Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-identity-web/), the web app gets an access token for authentication with Microsoft Graph. In version 1.2.0 and later, the Microsoft.Identity.Web library integrates with and can run alongside the App Service authentication/authorization module. Microsoft.Identity.Web detects that the web app is hosted in App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed along to authenticated requests with the Microsoft Graph API.
-
-To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
-
-> [!NOTE]
-> The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](../../app-service/tutorial-auth-aad.md#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
->
-> However, the App Service authentication/authorization is designed for more basic authentication scenarios. For more complex scenarios (handling custom claims, for example), you need the Microsoft.Identity.Web library or [Microsoft Authentication Library](msal-overview.md). There's a little more setup and configuration work in the beginning, but the Microsoft.Identity.Web library can run alongside the App Service authentication/authorization module. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and Microsoft.Identity.Web will already be a part of your app.
-
-### Install client library packages
-
-Install the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) and [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet packages in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
-
-#### .NET Core command line
-
-Open a command line, and switch to the directory that contains your project file.
-
-Run the install commands.
-
-```dotnetcli
-dotnet add package Microsoft.Identity.Web.GraphServiceClient
-
-dotnet add package Microsoft.Identity.Web
-```
-
-#### Package Manager Console
-
-Open the project/solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
-
-Run the install commands.
-```powershell
-Install-Package Microsoft.Identity.Web.GraphServiceClient
-
-Install-Package Microsoft.Identity.Web
-```
-
-### Startup.cs
-
-In the *Startup.cs* file, the ```AddMicrosoftIdentityWebApp``` method adds Microsoft.Identity.Web to your web app. The ```AddMicrosoftGraph``` method adds Microsoft Graph support.
-
-```csharp
-using Microsoft.AspNetCore.Builder;
-using Microsoft.AspNetCore.Hosting;
-using Microsoft.Extensions.Configuration;
-using Microsoft.Extensions.DependencyInjection;
-using Microsoft.Extensions.Hosting;
-using Microsoft.Identity.Web;
-using Microsoft.AspNetCore.Authentication.OpenIdConnect;
-
-// Some code omitted for brevity.
-public class Startup
-{
- // This method gets called by the runtime. Use this method to add services to the container.
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddOptions();
- string[] initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
-
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
- .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
- .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
- .AddInMemoryTokenCaches();
-
- services.AddAuthorization(options =>
- {
- // By default, all incoming requests will be authorized according to the default policy
- options.FallbackPolicy = options.DefaultPolicy;
- });
- services.AddRazorPages()
- .AddMvcOptions(options => {})
- .AddMicrosoftIdentityUI();
-
- services.AddControllersWithViews()
- .AddMicrosoftIdentityUI();
- }
-}
-
-```
-
-### appsettings.json
-
-*Microsoft Entra ID* specifies the configuration for the Microsoft.Identity.Web library. In the [Microsoft Entra admin center](https://entra.microsoft.com), select **Applications** from the portal menu and then select **App registrations**. Select the app registration created when you enabled the App Service authentication/authorization module. (The app registration should have the same name as your web app.) You can find the tenant ID and client ID in the app registration overview page. The domain name can be found in the Microsoft Entra overview page for your tenant.
-
-*Graph* specifies the Microsoft Graph endpoint and the initial scopes needed by the app.
-
-```json
-{
- "AzureAd": {
- "Instance": "https://login.microsoftonline.com/",
- "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
- "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Entra admin center. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]",
- "ClientId": "[Enter the Client Id (Application ID obtained from the Microsoft Entra admin center), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]",
- "ClientSecret": "[Copy the client secret added to the app from the Microsoft Entra admin center]",
- "ClientCertificates": [
- ],
- // the following is required to handle Continuous Access Evaluation challenges
- "ClientCapabilities": [ "cp1" ],
- "CallbackPath": "/signin-oidc"
- },
- "DownstreamApis": {
- "MicrosoftGraph": {
- // Specify BaseUrl if you want to use Microsoft graph in a national cloud.
- // See https://learn.microsoft.com/graph/deployments#microsoft-graph-and-graph-explorer-service-root-endpoints
- // "BaseUrl": "https://graph.microsoft.com/v1.0",
-
- // Set RequestAppToken this to "true" if you want to request an application token (to call graph on
- // behalf of the application). The scopes will then automatically
- // be ['https://graph.microsoft.com/.default'].
- // "RequestAppToken": false
-
- // Set Scopes to request (unless you request an app token).
- "Scopes": [ "User.Read" ]
-
- // See https://aka.ms/ms-id-web/downstreamApiOptions for all the properties you can set.
- }
- },
- "Logging": {
- "LogLevel": {
- "Default": "Information",
- "Microsoft": "Warning",
- "Microsoft.Hosting.Lifetime": "Information"
- }
- },
- "AllowedHosts": "*"
-}
-```
-
-### Index.cshtml.cs
-
-The following example shows how to call Microsoft Graph as the signed-in user and get some user information. The ```GraphServiceClient``` object is injected into the controller, and authentication has been configured for you by the Microsoft.Identity.Web library.
-
-```csharp
-using System.Threading.Tasks;
-using Microsoft.AspNetCore.Mvc.RazorPages;
-using Microsoft.Graph;
-using System.IO;
-using Microsoft.Identity.Web;
-using Microsoft.Extensions.Logging;
-
-// Some code omitted for brevity.
-
-[AuthorizeForScopes(Scopes = new[] { "User.Read" })]
-public class IndexModel : PageModel
-{
- private readonly ILogger<IndexModel> _logger;
- private readonly GraphServiceClient _graphServiceClient;
-
- public IndexModel(ILogger<IndexModel> logger, GraphServiceClient graphServiceClient)
- {
- _logger = logger;
- _graphServiceClient = graphServiceClient;
- }
-
- public async Task OnGetAsync()
- {
- try
- {
- var user = await _graphServiceClient.Me.GetAsync();
- ViewData["Me"] = user;
- ViewData["name"] = user.DisplayName;
-
- using (var photoStream = await _graphServiceClient.Me.Photo.Content.GetAsync())
- {
- byte[] photoByte = ((MemoryStream)photoStream).ToArray();
- ViewData["photo"] = Convert.ToBase64String(photoByte);
- }
- }
- catch (Exception ex)
- {
- ViewData["photo"] = null;
- }
- }
-}
-```
-
-# [Node.js](#tab/programming-language-nodejs)
-
-Using a custom **AuthProvider** class that encapsulates authentication logic, the web app gets the user's access token from the incoming requests header. The **AuthProvider** instance detects that the web app is hosted on App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed down to the Microsoft Graph SDK client to make an authenticated request to the `/me` endpoint.
-
-To see this code as part of a sample application, see *graphController.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
-
-> [!NOTE]
-> The App Service authentication/authorization is designed for more basic authentication scenarios. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and the **AuthProvider** instance in the sample will fallback to use [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node), which is the recommended library for adding authentication/authorization to Node.js applications.
-
-```nodejs
-const graphHelper = require('../utils/graphHelper');
-
-// Some code omitted for brevity.
-
-exports.getProfilePage = async(req, res, next) => {
-
- try {
- const graphClient = graphHelper.getAuthenticatedClient(req.session.protectedResources["graphAPI"].accessToken);
-
- const profile = await graphClient
- .api('/me')
- .get();
-
- res.render('profile', { isAuthenticated: req.session.isAuthenticated, profile: profile, appServiceName: appServiceName });
- } catch (error) {
- next(error);
- }
-}
-```
-
-To query Microsoft Graph, use the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/2-WebApp-graphapi-on-behalf/utils/graphHelper.js):
-
-```nodejs
-const graph = require('@microsoft/microsoft-graph-client');
-
-// Some code omitted for brevity.
-
-getAuthenticatedClient = (accessToken) => {
- // Initialize Graph client
- const client = graph.Client.init({
- // Use the provided access token to authenticate requests
- authProvider: (done) => {
- done(null, accessToken);
- }
- });
-
- return client;
-}
-```
--
-## Clean up resources
-
-If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
+
+ Title: Tutorial - Web app accesses Microsoft Graph as the user
+description: In this tutorial, you learn how to access data in Microsoft Graph from a web app for a signed-in user.
+++++++ Last updated : 09/15/2023++
+ms.devlang: csharp, javascript
+
+#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph from a web app for a signed-in user.
+++
+# Tutorial: Access Microsoft Graph from a secured app as the user
+
+Learn how to access Microsoft Graph from a web app running on Azure App Service.
++
+You want to add access to Microsoft Graph from your web app and perform some action as the signed-in user. This section describes how to grant delegated permissions to the web app and get the signed-in user's profile information from Microsoft Entra ID.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Grant delegated permissions to a web app.
+> * Call Microsoft Graph from a web app for a signed-in user.
++
+## Prerequisites
+
+* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](multi-service-web-app-authentication-app-service.md).
+
+## Grant front-end access to call Microsoft Graph
+
+Now that you've enabled authentication and authorization on your web app, the web app is registered with the Microsoft identity platform and is backed by a Microsoft Entra application. In this step, you give the web app permissions to access Microsoft Graph for the user. (Technically, you give the web app's Microsoft Entra application the permissions to access the Microsoft Graph Microsoft Entra application for the user.)
+
+In the [Microsoft Entra admin center](https://entra.microsoft.com) menu, select **Applications**.
+
+Select **App registrations** > **Owned applications** > **View all applications in this directory**. Select your web app name, and then select **API permissions**.
+
+Select **Add a permission**, and then select Microsoft APIs and Microsoft Graph.
+
+Select **Delegated permissions**, and then select **User.Read** from the list. Select **Add permissions**.
+
+## Configure App Service to return a usable access token
+
+The web app now has the required permissions to access Microsoft Graph as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing Microsoft Graph. For this step, you need to add the User.Read scope for the downstream service (Microsoft Graph): `https://graph.microsoft.com/User.Read`.
+
+> [!IMPORTANT]
+> If you don't configure App Service to return a usable access token, you receive a ```CompactToken parsing failed with error code: 80049217``` error when you call Microsoft Graph APIs in your code.
+
+# [Azure Resource Explorer](#tab/azure-resource-explorer)
+Go to [Azure Resource Explorer](https://resources.azure.com/) and use the resource tree to locate your web app. The resource URL should be similar to `https://resources.azure.com/subscriptions/subscriptionId/resourceGroups/SecureWebApp/providers/Microsoft.Web/sites/SecureWebApp20200915115914`.
+[//]: # (BROKEN LINK HttpLinkUnauthorized ABOVE: https://resources.azure.com/)
+
+The Azure Resource Explorer is now opened with your web app selected in the resource tree. At the top of the page, select **Read/Write** to enable editing of your Azure resources.
+
+In the left browser, drill down to **config** > **authsettingsV2**.
+
+In the **authsettingsV2** view, select **Edit**. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]`.
+
+```json
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
+ ]
+ }
+ }
+ }
+},
+```
+
+Save your settings by selecting **PUT**. The setting can take several minutes to take effect. Your web app is now configured to access Microsoft Graph with a proper access token. Without this setting, Microsoft Graph returns an error saying that the format of the compact token is incorrect.
+
+# [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI to call the App Service Web App REST APIs to [get](/rest/api/appservice/web-apps/get-auth-settings) and [update](/rest/api/appservice/web-apps/update-auth-settings) the auth configuration settings so your web app can call Microsoft Graph. Open a command window and sign in to the Azure CLI:
+
+```azurecli
+az login
+```
+
+Get your existing `config/authsettingsv2` settings and save them to a local *authsettings.json* file.
+
+```azurecli
+az rest --method GET --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2/list?api-version=2020-06-01' > authsettings.json
+```
+
+Open the *authsettings.json* file in your preferred text editor. Find the **login** section of **identityProviders** -> **azureActiveDirectory**, and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]`.
+
+```json
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
+ ]
+ }
+ }
+ }
+},
+```
+
+Save your changes to the *authsettings.json* file and upload the local settings to your web app:
+
+```azurecli
+az rest --method PUT --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2?api-version=2020-06-01' --body @./authsettings.json
+```
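+
+To verify that the update took effect, you can read the login parameters back with the same REST endpoint, shown here with a JMESPath `--query` filter (the placeholders are the same as above):
+
+```azurecli
+az rest --method GET --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2/list?api-version=2020-06-01' --query 'properties.identityProviders.azureActiveDirectory.login.loginParameters'
+```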
++
+## Call Microsoft Graph
+
+Your web app now has the required permissions, and its sign-in request includes the Microsoft Graph *User.Read* scope in the login parameters.
+
+# [C#](#tab/programming-language-csharp)
+Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-identity-web/), the web app gets an access token for authentication with Microsoft Graph. In version 1.2.0 and later, the Microsoft.Identity.Web library integrates with and can run alongside the App Service authentication/authorization module. Microsoft.Identity.Web detects that the web app is hosted in App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed along with authenticated requests to the Microsoft Graph API.
+
+To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
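+
+If you want to work with the raw token instead of the Graph SDK, Microsoft.Identity.Web also exposes the `ITokenAcquisition` service. The following page model is a minimal sketch, not part of the sample (the `TokenDemoModel` name is hypothetical):
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc.RazorPages;
+using Microsoft.Identity.Web;
+
+// Hypothetical page model: acquires the Graph access token directly
+// instead of using the injected GraphServiceClient.
+[AuthorizeForScopes(Scopes = new[] { "User.Read" })]
+public class TokenDemoModel : PageModel
+{
+    private readonly ITokenAcquisition _tokenAcquisition;
+
+    public TokenDemoModel(ITokenAcquisition tokenAcquisition)
+    {
+        _tokenAcquisition = tokenAcquisition;
+    }
+
+    public async Task OnGetAsync()
+    {
+        // On App Service, Microsoft.Identity.Web sources this token from the
+        // authentication/authorization module instead of signing the user in itself.
+        string accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { "User.Read" });
+        ViewData["tokenLength"] = accessToken.Length;
+    }
+}
+```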
+
+> [!NOTE]
+> The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](/azure/app-service/tutorial-auth-aad#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
+>
+> However, the App Service authentication/authorization is designed for more basic authentication scenarios. For more complex scenarios (handling custom claims, for example), you need the Microsoft.Identity.Web library or [Microsoft Authentication Library](msal-overview.md). There's a little more setup and configuration work in the beginning, but the Microsoft.Identity.Web library can run alongside the App Service authentication/authorization module. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and Microsoft.Identity.Web will already be a part of your app.
+
+### Install client library packages
+
+Install the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) and [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) NuGet packages in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+
+#### .NET Core command line
+
+Open a command line, and switch to the directory that contains your project file.
+
+Run the install commands.
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web.GraphServiceClient
+
+dotnet add package Microsoft.Identity.Web
+```
+
+#### Package Manager Console
+
+Open the project/solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
+
+Run the install commands.
+```powershell
+Install-Package Microsoft.Identity.Web.GraphServiceClient
+
+Install-Package Microsoft.Identity.Web
+```
+
+### Startup.cs
+
+In the *Startup.cs* file, the ```AddMicrosoftIdentityWebApp``` method adds Microsoft.Identity.Web to your web app. The ```AddMicrosoftGraph``` method adds Microsoft Graph support.
+
+```csharp
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Hosting;
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Identity.Web;
+using Microsoft.AspNetCore.Authentication.OpenIdConnect;
+
+// Some code omitted for brevity.
+public class Startup
+{
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddOptions();
+        // Read the initial scopes from the same configuration section that's
+        // defined in appsettings.json ("DownstreamApis:MicrosoftGraph").
+        string[] initialScopes = Configuration.GetSection("DownstreamApis:MicrosoftGraph:Scopes").Get<string[]>();
+
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
+ .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
+            .AddMicrosoftGraph(Configuration.GetSection("DownstreamApis:MicrosoftGraph"))
+ .AddInMemoryTokenCaches();
+
+ services.AddAuthorization(options =>
+ {
+ // By default, all incoming requests will be authorized according to the default policy
+ options.FallbackPolicy = options.DefaultPolicy;
+ });
+ services.AddRazorPages()
+ .AddMvcOptions(options => {})
+ .AddMicrosoftIdentityUI();
+
+ services.AddControllersWithViews()
+ .AddMicrosoftIdentityUI();
+ }
+}
+
+```
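+
+If your project uses the minimal hosting model (a *Program.cs* file with no *Startup.cs*), the registration looks like the following sketch. It assumes .NET 6 or later with implicit usings, and the same `AzureAd` and `DownstreamApis:MicrosoftGraph` configuration sections shown in the next step:
+
+```csharp
+using Microsoft.AspNetCore.Authentication.OpenIdConnect;
+using Microsoft.Identity.Web;
+using Microsoft.Identity.Web.UI;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Same registration as Startup.ConfigureServices, expressed in minimal hosting style.
+builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"))
+    .EnableTokenAcquisitionToCallDownstreamApi(new[] { "User.Read" })
+    .AddMicrosoftGraph(builder.Configuration.GetSection("DownstreamApis:MicrosoftGraph"))
+    .AddInMemoryTokenCaches();
+
+builder.Services.AddRazorPages().AddMicrosoftIdentityUI();
+
+var app = builder.Build();
+
+app.UseAuthentication();
+app.UseAuthorization();
+app.MapRazorPages();
+
+app.Run();
+```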
+
+### appsettings.json
+
+The *AzureAd* section specifies the configuration for the Microsoft.Identity.Web library. In the [Microsoft Entra admin center](https://entra.microsoft.com), select **Applications** from the portal menu, and then select **App registrations**. Select the app registration created when you enabled the App Service authentication/authorization module. (The app registration should have the same name as your web app.) You can find the tenant ID and client ID on the app registration overview page. The domain name is on the Microsoft Entra overview page for your tenant.
+
+The *DownstreamApis:MicrosoftGraph* section specifies the Microsoft Graph endpoint and the initial scopes needed by the app.
+
+```json
+{
+ "AzureAd": {
+ "Instance": "https://login.microsoftonline.com/",
+ "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
+ "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Entra admin center. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]",
+ "ClientId": "[Enter the Client Id (Application ID obtained from the Microsoft Entra admin center), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]",
+ "ClientSecret": "[Copy the client secret added to the app from the Microsoft Entra admin center]",
+ "ClientCertificates": [
+ ],
+ // the following is required to handle Continuous Access Evaluation challenges
+ "ClientCapabilities": [ "cp1" ],
+ "CallbackPath": "/signin-oidc"
+ },
+ "DownstreamApis": {
+ "MicrosoftGraph": {
+ // Specify BaseUrl if you want to use Microsoft graph in a national cloud.
+ // See https://learn.microsoft.com/graph/deployments#microsoft-graph-and-graph-explorer-service-root-endpoints
+ // "BaseUrl": "https://graph.microsoft.com/v1.0",
+
+      // Set RequestAppToken to "true" if you want to request an application token (to call graph on
+ // behalf of the application). The scopes will then automatically
+ // be ['https://graph.microsoft.com/.default'].
+ // "RequestAppToken": false
+
+ // Set Scopes to request (unless you request an app token).
+ "Scopes": [ "User.Read" ]
+
+ // See https://aka.ms/ms-id-web/downstreamApiOptions for all the properties you can set.
+ }
+ },
+ "Logging": {
+ "LogLevel": {
+ "Default": "Information",
+ "Microsoft": "Warning",
+ "Microsoft.Hosting.Lifetime": "Information"
+ }
+ },
+ "AllowedHosts": "*"
+}
+```
+
+### Index.cshtml.cs
+
+The following example shows how to call Microsoft Graph as the signed-in user and get some user information. The ```GraphServiceClient``` object is injected into the page model, and authentication is configured for you by the Microsoft.Identity.Web library.
+
+```csharp
+using System;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc.RazorPages;
+using Microsoft.Extensions.Logging;
+using Microsoft.Graph;
+using Microsoft.Identity.Web;
+
+// Some code omitted for brevity.
+
+[AuthorizeForScopes(Scopes = new[] { "User.Read" })]
+public class IndexModel : PageModel
+{
+ private readonly ILogger<IndexModel> _logger;
+ private readonly GraphServiceClient _graphServiceClient;
+
+ public IndexModel(ILogger<IndexModel> logger, GraphServiceClient graphServiceClient)
+ {
+ _logger = logger;
+ _graphServiceClient = graphServiceClient;
+ }
+
+ public async Task OnGetAsync()
+ {
+ try
+ {
+ var user = await _graphServiceClient.Me.GetAsync();
+ ViewData["Me"] = user;
+ ViewData["name"] = user.DisplayName;
+
+            // The Graph SDK returns a plain Stream; copy it into a MemoryStream
+            // instead of casting, because a direct cast can fail at run time.
+            using (var photoStream = await _graphServiceClient.Me.Photo.Content.GetAsync())
+            using (var memoryStream = new MemoryStream())
+            {
+                await photoStream.CopyToAsync(memoryStream);
+                ViewData["photo"] = Convert.ToBase64String(memoryStream.ToArray());
+            }
+ }
+        catch (Exception ex)
+        {
+            _logger.LogWarning(ex, "Unable to get the user's profile or photo from Microsoft Graph.");
+            ViewData["photo"] = null;
+        }
+ }
+}
+```
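+
+The corresponding *Index.cshtml* markup might render these values as follows. This is a sketch, not part of the page model above; it assumes the photo is JPEG-encoded:
+
+```cshtml
+@page
+@model IndexModel
+
+<p>Hello, @(ViewData["name"])!</p>
+
+@if (ViewData["photo"] != null)
+{
+    <!-- OnGetAsync stored the photo as a Base64 string. -->
+    <img src="data:image/jpeg;base64,@(ViewData["photo"])" alt="Profile photo" />
+}
+```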
+
+# [Node.js](#tab/programming-language-nodejs)
+
+Using a custom **AuthProvider** class that encapsulates authentication logic, the web app gets the user's access token from the incoming request headers. The **AuthProvider** instance detects that the web app is hosted on App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed down to the Microsoft Graph SDK client to make an authenticated request to the `/me` endpoint.
+
+To see this code as part of a sample application, see *graphController.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+
+> [!NOTE]
+> The App Service authentication/authorization module is designed for more basic authentication scenarios. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module, and the **AuthProvider** instance in the sample falls back to using [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node), which is the recommended library for adding authentication/authorization to Node.js applications.
+
+```nodejs
+const graphHelper = require('../utils/graphHelper');
+
+// Some code omitted for brevity.
+
+exports.getProfilePage = async(req, res, next) => {
+
+ try {
+ const graphClient = graphHelper.getAuthenticatedClient(req.session.protectedResources["graphAPI"].accessToken);
+
+ const profile = await graphClient
+ .api('/me')
+ .get();
+
+ res.render('profile', { isAuthenticated: req.session.isAuthenticated, profile: profile, appServiceName: appServiceName });
+ } catch (error) {
+ next(error);
+ }
+}
+```
+
+To query Microsoft Graph, use the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/2-WebApp-graphapi-on-behalf/utils/graphHelper.js):
+
+```nodejs
+const graph = require('@microsoft/microsoft-graph-client');
+
+// Some code omitted for brevity.
+
+getAuthenticatedClient = (accessToken) => {
+ // Initialize Graph client
+ const client = graph.Client.init({
+ // Use the provided access token to authenticate requests
+ authProvider: (done) => {
+ done(null, accessToken);
+ }
+ });
+
+ return client;
+}
+```
++
+## Clean up resources
+
+If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
active-directory Multi Service Web App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md
In the [Azure portal](https://portal.azure.com), go into your storage account to
1. Select **Review and assign** and then select **Review and assign** once more.
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
Your web app now has access to your storage account.
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
Learn how to enable authentication for your web app running on Azure App Service
App Service provides built-in authentication and authorization support, so you can sign in users and access data by writing minimal or no code in your web app. Using the App Service authentication/authorization module isn't required, but helps simplify authentication and authorization for your app. This article shows how to secure your web app with the App Service authentication/authorization module by using Microsoft Entra ID as the identity provider.
-The authentication/authorization module is enabled and configured through the Azure portal and app settings. No SDKs, specific languages, or changes to application code are required. A variety of identity providers are supported, which includes Microsoft Entra ID, Microsoft Account, Facebook, Google, and Twitter. When the authentication/authorization module is enabled, every incoming HTTP request passes through it before being handled by app code. To learn more, see [Authentication and authorization in Azure App Service](../../app-service/overview-authentication-authorization.md).
+The authentication/authorization module is enabled and configured through the Azure portal and app settings. No SDKs, specific languages, or changes to application code are required. A variety of identity providers are supported, which includes Microsoft Entra ID, Microsoft Account, Facebook, Google, and Twitter. When the authentication/authorization module is enabled, every incoming HTTP request passes through it before being handled by app code. To learn more, see [Authentication and authorization in Azure App Service](/azure/app-service/overview-authentication-authorization).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Create and publish a web app on App Service
-For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow one of the [ASP.NET Core](../../app-service/quickstart-dotnetcore.md), [Node.js](../../app-service/quickstart-nodejs.md), [Python](../../app-service/quickstart-python.md), or [Java](../../app-service/quickstart-java.md) quickstarts to create and publish a new web app to App Service.
+For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow one of the [ASP.NET Core](/azure/app-service/quickstart-dotnetcore), [Node.js](/azure/app-service/quickstart-nodejs), [Python](/azure/app-service/quickstart-python), or [Java](/azure/app-service/quickstart-java) quickstarts to create and publish a new web app to App Service.
Whether you use an existing web app or create a new one, take note of the following:
You need these names throughout this tutorial.
## Configure authentication and authorization
-You now have a web app running on App Service. Next, you enable authentication and authorization for the web app. You use Microsoft Entra ID as the identity provider. For more information, see [Configure Microsoft Entra authentication for your App Service application](../../app-service/configure-authentication-provider-aad.md).
+You now have a web app running on App Service. Next, you enable authentication and authorization for the web app. You use Microsoft Entra ID as the identity provider. For more information, see [Configure Microsoft Entra authentication for your App Service application](/azure/app-service/configure-authentication-provider-aad).
In the [Azure portal](https://portal.azure.com) menu, select **Resource groups**, or search for and select **Resource groups** from any page.
active-directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md
You can configure optional claims for your application through the Azure portal
1. Browse to **Identity** > **Applications** > **App registrations**. 1. Choose the application for which you want to configure optional claims based on your scenario and desired outcome. 1. Under **Manage**, select **Token configuration**.
- - The UI option **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant, which can be configured by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md)
+ - The UI option **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant, which can be configured by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](/azure/active-directory-b2c/configure-user-input)
Configure claims using the manifest:
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
Preauthorization allows a resource application owner to grant permissions withou
- [User and admin consent overview](../manage-apps/user-admin-consent-overview.md) - [OpenID connect scopes](scopes-oidc.md) -- [Making your application multi-tenant](./howto-convert-app-to-be-multi-tenant.md)-- [Microsoft Entra Microsoft Q&A](/answers/topics/azure-active-directory.html)
+- [Microsoft Entra Microsoft Q&A](/answers/tags/455/entra-id)
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
You'll provide the following information to create your new tenant:
## Social and local accounts
-To begin building external facing applications that sign in social and local accounts, create an Azure AD B2C tenant. To begin, see [Create an Azure AD B2C tenant](../../active-directory-b2c/tutorial-create-tenant.md).
+To begin building external facing applications that sign in social and local accounts, create an Azure AD B2C tenant. To begin, see [Create an Azure AD B2C tenant](/azure/active-directory-b2c/tutorial-create-tenant).
## Next steps
active-directory Quickstart Desktop App Uwp Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-uwp-sign-in.md
When the app's window appears, you can select the **Call Microsoft Graph API** b
### MSAL.NET
-MSAL ([Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
+MSAL ([Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
```powershell Install-Package Microsoft.Identity.Client
The value of `ClientId` is the **Application (client) ID** of the app you regist
### Requesting tokens
-MSAL has two methods for acquiring tokens in a UWP app: [`AcquireTokenInteractive`](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder?) and [`AcquireTokenSilent`](/dotnet/api/microsoft.identity.client.acquiretokensilentparameterbuilder).
+MSAL has two methods for acquiring tokens in a UWP app: [`AcquireTokenInteractive`](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder) and [`AcquireTokenSilent`](/dotnet/api/microsoft.identity.client.acquiretokensilentparameterbuilder).
#### Get a user token interactively
active-directory Quickstart Desktop App Wpf Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-wpf-sign-in.md
See [How the sample works](#how-the-sample-works) for an illustration.
## Prerequisites
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) with the [Universal Windows Platform development](/windows/uwp/get-started/get-set-up) workload installed
+* [Visual Studio](https://visualstudio.microsoft.com/vs/) with the [Universal Windows Platform development](/windows/apps/windows-app-sdk/set-up-your-development-environment) workload installed
## Register and download your quickstart app You have two options to start your quickstart application:
active-directory Quickstart Single Page App Angular Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-angular-sign-in.md
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular
-This quickstart uses a sample Angular single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
+This quickstart uses a sample Angular single-page app (SPA) to show you how to sign in users by using the [authorization code flow](./v2-oauth2-auth-code-flow.md) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/%40azure/msal-react/) to handle authentication.
## Prerequisites
A message appears indicating that you have signed out. You can now close the bro
- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md) -- Learn more by building this Angular SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-angular-auth-code.md)
+- Learn more by building this Angular SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-angular-auth-code.md)
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript
-This quickstart uses a sample JavaScript (JS) single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
+This quickstart uses a sample JavaScript (JS) single-page app (SPA) to show you how to sign in users by using the [authorization code flow](./v2-oauth2-auth-code-flow.md) with Proof Key for Code Exchange (PKCE) and call the Microsoft Graph API. The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
## Prerequisites
active-directory Quickstart Single Page App React Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md
# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React
-This quickstart uses a sample React single-page app (SPA) to show you how to sign in users by using the [authorization code flow](/azure/active-directory/develop/v2-oauth2-auth-code-flow) with Proof Key for Code Exchange (PKCE). The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
+This quickstart uses a sample React single-page app (SPA) to show you how to sign in users by using the [authorization code flow](./v2-oauth2-auth-code-flow.md) with Proof Key for Code Exchange (PKCE). The sample uses the [Microsoft Authentication Library for JavaScript](/javascript/api/@azure/msal-react) to handle authentication.
## Prerequisites
A message appears indicating that you have signed out. You can now close the bro
- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](./quickstart-web-api-aspnet-core-protect-api.md) -- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-single-page-app-react-register-app.md)
+- Learn more by building this React SPA from scratch with the following series - [Tutorial: Sign in users and call Microsoft Graph](./tutorial-single-page-app-react-register-app.md)
active-directory Quickstart Web App Python Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-python-sign-in.md
The following diagram displays how the sample app works:
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Microsoft Entra tenant. For more information on how to get a Microsoft Entra tenant, see [how to get a Microsoft Entra tenant.](/azure/active-directory/develop/quickstart-create-new-tenant)
+- A Microsoft Entra tenant. For more information on how to get a Microsoft Entra tenant, see [how to get a Microsoft Entra tenant.](./quickstart-create-new-tenant.md)
- [Python 3.7+](https://www.python.org/downloads/) ## Step 1: Register your application
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. | | AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. | | AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. |
-| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](/azure/active-directory/b2b/add-users-administrator). |
+| AADSTS51004 | UserAccountNotInDirectory - The user account doesn't exist in the directory. An application likely chose the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. For further information, please visit [add B2B users](../external-identities/add-users-administrator.md). |
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. | | AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. | | AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
active-directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-claims-customization.md
Any constant (static) value can be assigned to any claim. Use the following step
1. On the **Attributes & Claims** blade, select the required claim that you want to modify. 1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**. The constant value is displayed.
-### Directory Schema extensions (Preview)
+### Directory Schema extensions
You can also configure directory schema extension attributes as non-conditional/conditional attributes. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim:
To apply a transformation to a user attribute:
1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page. 1. Select the function from the transformation dropdown. Depending on the function selected, provide parameters and a constant value to evaluate in the transformation.
-1. Select the source of the attribute by clicking on the appropriate radio button. Directory schema extension source is in preview currently.
+1. Select the source of the attribute by clicking on the appropriate radio button.
1. Select the attribute name from the dropdown. 1. **Treat source as multivalued** is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multi-value claim; checking this box ensures that it's applied to all values. This checkbox is only enabled for multi-valued attributes, for example `user.proxyaddresses`. 1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
To add a claim condition:
1. In **Manage claim**, expand the Claim conditions. 1. Select the user type. 1. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application.
-1. Select the **Source** where the claim is going to retrieve its value. You can either select a user attribute from the dropdown for the source attribute or apply a transformation to the user attribute. You can also select a directory schema extension (preview) before emitting it as a claim.
+1. Select the **Source** where the claim is going to retrieve its value. You can either select a user attribute from the dropdown for the source attribute or apply a transformation to the user attribute. You can also select a directory schema extension before emitting it as a claim.
The order in which you add the conditions is important. Microsoft Entra first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value that matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like restrictions.
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
Previously updated : 02/17/2023 Last updated : 10/17/2023
Building great multi-tenant apps can be challenging because of the number of dif
For more information about tenancy in Microsoft Entra ID, see: - [How to convert an app to be multi-tenant](howto-convert-app-to-be-multi-tenant.md)-- [Enable multi-tenant log-ins](howto-convert-app-to-be-multi-tenant.md)
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
By adding Microsoft Entra roles to the local administrators group, you can updat
## Manage the Global Administrator role
-To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see:
+To view and update the membership of the [Global Administrator](../roles/permissions-reference.md#global-administrator) role, see:
- [View all members of an administrator role in Microsoft Entra ID](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Microsoft Entra ID](../fundamentals/how-subscriptions-associated-directory.md) ## Manage the Azure AD Joined Device Local Administrator role
-You can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**.
+You can manage the [Azure AD Joined Device Local Administrator](../roles/permissions-reference.md#azure-ad-joined-device-local-administrator) role from **Device settings**.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator). 1. Browse to **Identity** > **Devices** > **All devices** > **Device settings**.
Organizations can use Intune to manage these policies using [Custom OMA-URI Sett
By default, Microsoft Entra ID adds the user performing the Microsoft Entra join to the administrator group on the device. If you want to prevent regular users from becoming local administrators, you have the following options: -- [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) -
-Windows Autopilot provides you with an option to prevent primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).
-- [Bulk enrollment](/intune/windows-bulk-enroll) - a Microsoft Entra join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group.
+- [Windows Autopilot](/autopilot/windows-autopilot) -
+Windows Autopilot provides you with an option to prevent the primary user who performs the join from becoming a local administrator by [creating an Autopilot profile](/autopilot/enrollment-autopilot#create-an-autopilot-deployment-profile).
+- [Bulk enrollment](/mem/intune/enrollment/windows-bulk-enroll) - a Microsoft Entra join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group.
## Manually elevate a user on a device
active-directory Concept Directory Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-directory-join.md
Any organization can deploy Microsoft Entra joined devices no matter the size or
| | Windows Autopilot | | **Device sign in options** | Organizational accounts using: | | | Password |
-| | [Passwordless](/azure/active-directory/authentication/concept-authentication-passwordless) options like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-planning-guide) and FIDO2.0 security keys. |
+| | [Passwordless](../authentication/concept-authentication-passwordless.md) options like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-planning-guide) and FIDO2.0 security keys. |
| **Device management** | Mobile Device Management (example: Microsoft Intune) | | | [Configuration Manager standalone or co-management with Microsoft Intune](/mem/configmgr/comanage/overview) | | **Key capabilities** | SSO to both cloud and on-premises resources |
Administrators can secure and further control Microsoft Entra joined devices usi
- Software installation - Software updates
-Administrators can make organization applications available to Microsoft Entra joined devices using Configuration Manager to [Manage apps from the Microsoft Store for Business and Education](/configmgr/apps/deploy-use/manage-apps-from-the-windows-store-for-business).
+Administrators can make organization applications available to Microsoft Entra joined devices using Configuration Manager to [Manage apps from the Microsoft Store for Business and Education](/mem/configmgr/apps/deploy-use/manage-apps-from-the-windows-store-for-business).
-Microsoft Entra join can be accomplished using self-service options like the Out of Box Experience (OOBE), bulk enrollment, or [Windows Autopilot](/intune/enrollment-autopilot).
+Microsoft Entra join can be accomplished using self-service options like the Out of Box Experience (OOBE), bulk enrollment, or [Windows Autopilot](/autopilot/enrollment-autopilot).
Microsoft Entra joined devices can still maintain single sign-on access to on-premises resources when they are on the organization's network. Devices that are Microsoft Entra joined can still authenticate to on-premises servers like file, print, and other applications.
active-directory Concept Hybrid Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-hybrid-join.md
Microsoft Entra hybrid joined devices require network line of sight to your on-p
| | Windows 8.1, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 - Require MSI | | **Device sign in options** | Organizational accounts using: | | | Password |
-| | [Passwordless](/azure/active-directory/authentication/concept-authentication-passwordless) options like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-planning-guide) and FIDO2.0 security keys. |
+| | [Passwordless](../authentication/concept-authentication-passwordless.md) options like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-planning-guide) and FIDO2.0 security keys. |
| **Device management** | [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) | | | [Configuration Manager standalone or co-management with Microsoft Intune](/mem/configmgr/comanage/overview) | | **Key capabilities** | SSO to both cloud and on-premises resources |
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
The following Windows components play a key role in requesting and using a PRT:
* **Microsoft Entra CloudAP plugin**: a Microsoft Entra specific plugin built on the CloudAP framework that verifies user credentials with Microsoft Entra ID during Windows sign in. * **Microsoft Entra WAM plugin**: a Microsoft Entra specific plugin built on the WAM framework that enables SSO to applications that rely on Microsoft Entra ID for authentication. * **Dsreg**: a Microsoft Entra specific component on Windows 10 or newer, that handles the device registration process for all device states.
-* **Trusted Platform Module** (TPM): A TPM is a hardware component built into a device that provides hardware-based security functions for user and device secrets. More details can be found in the article [Trusted Platform Module Technology Overview](/windows/security/information-protection/tpm/trusted-platform-module-overview).
+* **Trusted Platform Module** (TPM): A TPM is a hardware component built into a device that provides hardware-based security functions for user and device secrets. More details can be found in the article [Trusted Platform Module Technology Overview](/windows/security/hardware-security/tpm/trusted-platform-module-overview).
## What does the PRT contain?
A PRT is an opaque blob sent from Microsoft Entra whose contents aren't known to
## How is a PRT issued?
-Device registration is a prerequisite for device based authentication in Microsoft Entra ID. A PRT is issued to users only on registered devices. For more in-depth details on device registration, see the article [Windows Hello for Business and Device Registration](/windows/security/identity-protection/hello-for-business/hello-how-it-works-device-registration). During device registration, the dsreg component generates two sets of cryptographic key pairs:
+Device registration is a prerequisite for device based authentication in Microsoft Entra ID. A PRT is issued to users only on registered devices. For more in-depth details on device registration, see the article [Windows Hello for Business and Device Registration](./device-registration-how-it-works.md). During device registration, the dsreg component generates two sets of cryptographic key pairs:
* Device key (dkpub/dkpriv) * Transport key (tkpub/tkpriv)
A PRT can get a multifactor authentication claim in specific scenarios. When an
* As Windows Hello for Business is considered multifactor authentication, the MFA claim is updated when the PRT itself is refreshed, so the MFA duration will continually extend when users sign in with Windows Hello for Business. * **MFA during WAM interactive sign in**: During a token request through WAM, if a user is required to do MFA to access the app, the PRT that is renewed during this interaction is imprinted with an MFA claim. * In this case, the MFA claim isn't updated continuously, so the MFA duration is based on the lifetime set on the directory.
- * When a previous existing PRT and RT are used for access to an app, the PRT and RT are regarded as the first proof of authentication. A new AT is required with a second proof and an imprinted MFA claim. This process also issues a new PRT and RT.
+ * When a previous existing PRT and RT are used for access to an app, the PRT and RT are regarded as the first proof of authentication. A new RT is required with a second proof and an imprinted MFA claim. This process also issues a new PRT and RT.
Windows 10 or newer maintains a partitioned list of PRTs for each credential. So, there's a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.
active-directory Device Join Out Of Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-out-of-box.md
To verify whether a device is joined to your Microsoft Entra ID, review the **Ac
- For more information about managing devices, see [managing device identities](manage-device-identities.md). - [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)-- [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot)
+- [Overview of Windows Autopilot](/autopilot/windows-autopilot)
- [Passwordless authentication options for Microsoft Entra ID](../authentication/concept-authentication-passwordless.md)
active-directory Device Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-plan.md
Review supported and unsupported policies to determine whether you can use an MD
- Are the unsupported policies applicable in a cloud-driven deployment? If your MDM solution isn't available through the Microsoft Entra app gallery, you can add it following the process
-outlined in [Microsoft Entra integration with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm).
+outlined in [Microsoft Entra integration with MDM](/windows/client-management/azure-active-directory-integration-with-mdm).
-Through co-management, you can use Microsoft Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios.
+Through co-management, you can use Microsoft Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/mem/configmgr/comanage/overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios.
**Recommendation:** Consider MDM only management for Microsoft Entra joined devices.
Microsoft Entra joined devices don't support on-premises applications relying on
### Remote Desktop Services
-Remote desktop connection to a Microsoft Entra joined devices requires the host machine to be either Microsoft Entra joined or Microsoft Entra hybrid joined. Remote desktop from an unjoined or non-Windows device isn't supported. For more information, see [Connect to remote Microsoft Entra joined pc](/windows/client-management/connect-to-remote-aadj-pc)
+Remote desktop connection to a Microsoft Entra joined device requires the host machine to be either Microsoft Entra joined or Microsoft Entra hybrid joined. Remote desktop from an unjoined or non-Windows device isn't supported. For more information, see [Connect to remote Microsoft Entra joined PC](/windows/client-management/client-tools/connect-to-remote-aadj-pc).
Starting with the Windows 10 2004 update, users can also use remote desktop from a Microsoft Entra registered Windows 10 or newer device to another Microsoft Entra joined device.
active-directory Device Sso To On Premises Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-sso-to-on-premises-resources.md
Microsoft Entra Connect or Microsoft Entra Connect cloud sync synchronize your o
> > For Windows Hello for Business Cloud Kerberos Trust, see [Configure and provision Windows Hello for Business - cloud Kerberos trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust-provision). >
-> For Windows Hello for Business Hybrid Key Trust, see [Configure Microsoft Entra joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base).
+> For Windows Hello for Business Hybrid Key Trust, see [Configure Microsoft Entra joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso).
> > For Windows Hello for Business Hybrid Certificate Trust, see [Using Certificates for AADJ On-premises Single-sign On](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-cert). During an access attempt to an on-premises resource requesting Kerberos or NTLM, the device: 1. Sends the on-premises domain information and user credentials to the located DC to get the user authenticated.
-1. Receives a Kerberos [Ticket-Granting Ticket (TGT)](/windows/desktop/secauthn/ticket-granting-tickets) or NTLM token based on the protocol the on-premises resource or application supports. If the attempt to get the Kerberos TGT or NTLM token for the domain fails, Credential Manager entries are tried, or the user may receive an authentication pop-up requesting credentials for the target resource. This failure can be related to a delay caused by a DCLocator timeout.
+1. Receives a Kerberos [Ticket-Granting Ticket (TGT)](/windows/win32/secauthn/ticket-granting-tickets) or NTLM token based on the protocol the on-premises resource or application supports. If the attempt to get the Kerberos TGT or NTLM token for the domain fails, Credential Manager entries are tried, or the user may receive an authentication pop-up requesting credentials for the target resource. This failure can be related to a delay caused by a DCLocator timeout.
All apps that are configured for **Windows-Integrated authentication** seamlessly get SSO when a user tries to access them.
active-directory How To Hybrid Join Downlevel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-downlevel.md
You also must enable **Allow updates to status bar via script** in the user's
To register Windows downlevel devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center.
-You can deploy the package by using a software distribution system like [Microsoft Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations.
+You can deploy the package by using a software distribution system like [Microsoft Configuration Manager](/mem/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations.
The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Microsoft Entra ID by using the user credentials after it authenticates with Microsoft Entra ID.
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
LAPS is available to all customers with Microsoft Entra ID Free or higher licens
### Required roles or permission
-Other than the built-in Microsoft Entra roles of Cloud Device Administrator, Intune Administrator, and Global Administrator that are granted *device.LocalCredentials.Read.All*, you can use [Microsoft Entra custom roles](/azure/active-directory/roles/custom-create) or administrative units to authorize local administrator password recovery. For example,
+Other than the built-in Microsoft Entra roles of Cloud Device Administrator, Intune Administrator, and Global Administrator that are granted *device.LocalCredentials.Read.All*, you can use [Microsoft Entra custom roles](../roles/custom-create.md) or administrative units to authorize local administrator password recovery. For example,
-- Custom roles must be assigned the *microsoft.directory/deviceLocalCredentials/password/read* permission to authorize local administrator password recovery. During the preview, you must create a custom role and grant permissions using the [Microsoft Graph API](/azure/active-directory/roles/custom-create#create-a-role-with-the-microsoft-graph-api) or [PowerShell](/azure/active-directory/roles/custom-create#create-a-role-using-powershell). Once you have created the custom role, you can assign it to users.
+- Custom roles must be assigned the *microsoft.directory/deviceLocalCredentials/password/read* permission to authorize local administrator password recovery. During the preview, you must create a custom role and grant permissions using the [Microsoft Graph API](../roles/custom-create.md#create-a-role-with-the-microsoft-graph-api) or [PowerShell](../roles/custom-create.md#create-a-role-using-powershell). Once you have created the custom role, you can assign it to users.
-- You can also create a Microsoft Entra ID [administrative unit](/azure/active-directory/roles/administrative-units), add devices, and assign the Cloud Device Administrator role scoped to the administrative unit to authorize local administrator password recovery.
+- You can also create a Microsoft Entra ID [administrative unit](../roles/administrative-units.md), add devices, and assign the Cloud Device Administrator role scoped to the administrative unit to authorize local administrator password recovery.
<a name='enabling-windows-laps-with-azure-ad'></a>
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
The following Azure regions are currently supported for this feature:
- Azure Government - Microsoft Azure operated by 21Vianet
-Use of the SSH extension for Azure CLI on Azure Kubernetes Service (AKS) clusters is not supported. For more information, see [Support policies for AKS](../../aks/support-policies.md).
+Use of the SSH extension for Azure CLI on Azure Kubernetes Service (AKS) clusters is not supported. For more information, see [Support policies for AKS](/azure/aks/support-policies).
If you choose to install and use the Azure CLI locally, it must be version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). > [!NOTE]
-> This functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
+> This functionality is also available for [Azure Arc-enabled servers](/azure/azure-arc/servers/ssh-arc-overview).
<a name='meet-requirements-for-login-with-azure-ad-using-openssh-certificate-based-authentication'></a>
There are a few ways to open Cloud Shell:
If you choose to install and use the Azure CLI locally, this article requires you to use version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). 1. Create a resource group by running [az group create](/cli/azure/group#az-group-create).
-1. Create a VM by running [az vm create](/cli/azure/vm#az-vm-create&preserve-view=true). Use a supported distribution in a supported region.
+1. Create a VM by running [az vm create](/cli/azure/vm?preserve-view=true#az-vm-create). Use a supported distribution in a supported region.
1. Install the Microsoft Entra login VM extension by using [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set). The following example deploys a VM and then installs the extension to enable Microsoft Entra login for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. Customize the example as needed to support your testing requirements.
It takes a few minutes to create the VM and supporting resources.
The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Microsoft Entra authentication. If you're deploying this extension to a previously created VM, the VM must have at least 1 GB of memory allocated or the installation will fail.
-The `provisioningState` value of `Succeeded` appears when the extension is successfully installed on the VM. The VM must have a running [VM agent](../../virtual-machines/extensions/agent-linux.md) to install the extension.
+The `provisioningState` value of `Succeeded` appears when the extension is successfully installed on the VM. The VM must have a running [VM agent](/azure/virtual-machines/extensions/agent-linux) to install the extension.
## Configure role assignments for the VM
There are two ways to configure role assignments for a VM:
- Azure portal experience
- Azure Cloud Shell experience

> [!NOTE]
-> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource group level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshoot-limits.md) per subscription.
+> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource group level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](/azure/role-based-access-control/troubleshoot-limits) per subscription.
<a name='azure-ad-portal'></a>
To configure role assignments for your Microsoft Entra ID-enabled Linux VMs:
1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
| Setting | Value |
| --- | --- |
az role assignment create \
> [!NOTE]
> If your Microsoft Entra domain and login username domain don't match, you must specify the object ID of your user account by using `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account by using [az ad user list](/cli/azure/ad/user#az-ad-user-list).
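A completed form of the `az role assignment create` command above might look like the following sketch; the role, assignee, and scope values are placeholders:

```azurecli
# Sketch: assign the login role scoped to a single VM (placeholder names).
username=$(az ad signed-in-user show --query userPrincipalName --output tsv)
vmId=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)

az role assignment create \
  --role "Virtual Machine Administrator Login" \
  --assignee "$username" \
  --scope "$vmId"
```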
-For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
+For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
## Install the SSH extension for Azure CLI
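As a sketch of this step (the resource group and VM names are placeholders), you can add the SSH extension and then connect with your Microsoft Entra credentials:

```azurecli
# Add the SSH extension for Azure CLI.
az extension add --name ssh

# Sign in to the VM with Microsoft Entra credentials (placeholder names).
az ssh vm --resource-group myResourceGroup --name myVM
```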
Use Azure Policy to:
With this capability, you can use many levels of enforcement. You can flag new and existing Linux VMs within your environment that don't have Microsoft Entra login enabled. You can also use Azure Policy to deploy the Microsoft Entra extension on new Linux VMs that don't have Microsoft Entra login enabled, as well as remediate existing Linux VMs to the same standard.
-In addition to these capabilities, you can use Azure Policy to detect and flag Linux VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+In addition to these capabilities, you can use Azure Policy to detect and flag Linux VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](/azure/governance/policy/overview).
## Troubleshoot sign-in issues
If you get a message that says the token couldn't be retrieved from the local ca
### Access denied: Azure role not assigned
-If you see an "Azure role not assigned" error on your SSH prompt, verify that you've configured Azure RBAC policies for the VM that grants the user either the Virtual Machine Administrator Login role or the Virtual Machine User Login role. If you're having problems with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md).
+If you see an "Azure role not assigned" error on your SSH prompt, verify that you've configured Azure RBAC policies for the VM that grant the user either the Virtual Machine Administrator Login role or the Virtual Machine User Login role. If you're having problems with Azure role assignments, see the article [Troubleshoot Azure RBAC](/azure/role-based-access-control/troubleshooting).
### Problems deleting the old (AADLoginForLinux) extension
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Microsoft Azure operated by 21Vianet:
### Authentication requirements
-[Microsoft Entra Guest accounts](/azure/active-directory/external-identities/what-is-b2b) can't connect to Azure VMs or Azure Bastion enabled VMs via Microsoft Entra authentication.
+[Microsoft Entra Guest accounts](../external-identities/what-is-b2b.md) can't connect to Azure VMs or Azure Bastion enabled VMs via Microsoft Entra authentication.
<a name='enable-azure-ad-login-for-a-windows-vm-in-azure'></a>
There are two ways to enable Microsoft Entra login for your Windows VM:
- The Azure portal, when you create a Windows VM.
- Azure Cloud Shell, when you're creating a Windows VM or using an existing Windows VM.

> [!NOTE]
-> If a device object with the same displayName as the hostname of a VM where an extension is installed exists, the VM fails to join Microsoft Entra ID with a hostname duplication error. Avoid duplication by [modifying the hostname](../../virtual-network/virtual-networks-viewing-and-modifying-hostnames.md#modify-a-hostname).
+> If a device object with the same displayName as the hostname of a VM where an extension is installed exists, the VM fails to join Microsoft Entra ID with a hostname duplication error. Avoid duplication by [modifying the hostname](/azure/virtual-network/virtual-networks-viewing-and-modifying-hostnames#modify-a-hostname).
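For the Cloud Shell path, enabling Microsoft Entra login on an existing Windows VM might look like the following sketch; the resource group and VM name are placeholders, and the VM is assumed to have a system-assigned managed identity:

```azurecli
# Sketch: install the extension that enables Microsoft Entra login for Windows.
az vm extension set \
  --publisher Microsoft.Azure.ActiveDirectory \
  --name AADLoginForWindows \
  --resource-group myResourceGroup \
  --vm-name myVM
```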
### Azure portal
To configure role assignments for your Microsoft Entra ID-enabled Windows Server
1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
| Setting | Value |
| --- | --- |
az role assignment create \
For more information about how to use Azure RBAC to manage access to your Azure subscription resources, see the following articles:

-- [Assign Azure roles by using the Azure CLI](../../role-based-access-control/role-assignments-cli.md)
-- [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
-- [Assign Azure roles by using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md)
+- [Assign Azure roles by using the Azure CLI](/azure/role-based-access-control/role-assignments-cli)
+- [Assign Azure roles by using the Azure portal](/azure/role-based-access-control/role-assignments-portal)
+- [Assign Azure roles by using Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell)
<a name='log-in-by-using-azure-ad-credentials-to-a-windows-vm'></a>
To connect to the remote computer:
- You're then prompted to allow the remote desktop connection when connecting to a new PC. Microsoft Entra remembers up to 15 hosts for 30 days before prompting again. If you see this dialog, select **Yes** to connect.

> [!IMPORTANT]
-> If your organization has configured and is using [Microsoft Entra Conditional Access](/azure/active-directory/conditional-access/overview), your device must satisfy the Conditional Access requirements to allow connection to the remote computer. Conditional Access policies may be applied to the application **Microsoft Remote Desktop (a4a365df-50f1-4397-bc59-1a1564b8bb9c)** for controlled access.
+> If your organization has configured and is using [Microsoft Entra Conditional Access](../conditional-access/overview.md), your device must satisfy the Conditional Access requirements to allow connection to the remote computer. Conditional Access policies may be applied to the application **Microsoft Remote Desktop (a4a365df-50f1-4397-bc59-1a1564b8bb9c)** for controlled access.
> [!NOTE]
> The Windows lock screen in the remote session doesn't support Microsoft Entra authentication tokens or passwordless authentication methods like FIDO keys. The lack of support for these authentication methods means that users can't unlock their screens in a remote session. When you try to lock a remote session, either through user action or system policy, the session is instead disconnected and the service sends a message to the user explaining they've been disconnected. Disconnecting the session also ensures that when the connection is relaunched after a period of inactivity, Microsoft Entra ID reevaluates the applicable Conditional Access policies.
To connect to the remote computer:
> [!IMPORTANT]
> Remote connection to VMs that are joined to Microsoft Entra ID is allowed only from Windows 10 or later PCs that are either Microsoft Entra registered (minimum required build is 20H1) or Microsoft Entra joined or Microsoft Entra hybrid joined to the *same* directory as the VM. Additionally, to RDP by using Microsoft Entra credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login.
>
-> If you're using a Microsoft Entra registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Microsoft Entra authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/native-client.md).
+> If you're using a Microsoft Entra registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Microsoft Entra authentication [via the Azure CLI and the native RDP client mstsc](/azure/bastion/native-client).
To log in to your Windows Server 2019 virtual machine by using Microsoft Entra ID:
Use Azure Policy to:
With this capability, you can use many levels of enforcement. You can flag new and existing Windows VMs within your environment that don't have Microsoft Entra login enabled. You can also use Azure Policy to deploy the Microsoft Entra extension on new Windows VMs that don't have Microsoft Entra login enabled, and remediate existing Windows VMs to the same standard.
-In addition to these capabilities, you can use Azure Policy to detect and flag Windows VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+In addition to these capabilities, you can use Azure Policy to detect and flag Windows VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](/azure/governance/policy/overview).
## Troubleshoot deployment problems
You might get the following error message when you initiate a remote desktop con
Verify that you've [configured Azure RBAC policies](#configure-role-assignments-for-the-vm) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.

> [!NOTE]
-> If you're having problems with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md).
+> If you're having problems with Azure role assignments, see [Troubleshoot Azure RBAC](/azure/role-based-access-control/troubleshooting).
### Unauthorized client or password change required
active-directory Hybrid Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-plan.md
Microsoft Entra hybrid join supports a broad range of Windows devices. Because t
- **Note**: Azure National cloud customers require version 1803
- Windows Server 2019
-For devices running the Windows desktop operating system, supported versions are listed in this article [Windows 10 release information](/windows/release-information/). As a best practice, Microsoft recommends you upgrade to the latest version of Windows.
+For devices running the Windows desktop operating system, supported versions are listed in this article [Windows 10 release information](/windows/release-health/). As a best practice, Microsoft recommends you upgrade to the latest version of Windows.
### Windows down-level devices
As a first planning step, you should review your environment and determine wheth
- If you're relying on a Virtual Machine (VM) snapshot to create more VMs, make sure that snapshot isn't from a VM that is already registered with Microsoft Entra ID as Microsoft Entra hybrid joined.

-- If you're using [Unified Write Filter](/windows-hardware/customize/enterprise/unified-write-filter) and similar technologies that clear changes to the disk at reboot, they must be applied after the device is Microsoft Entra hybrid joined. Enabling such technologies before completion of Microsoft Entra hybrid join will result in the device getting unjoined on every reboot.
+- If you're using [Unified Write Filter](/windows/iot/iot-enterprise/customize/unified-write-filter) and similar technologies that clear changes to the disk at reboot, they must be applied after the device is Microsoft Entra hybrid joined. Enabling such technologies before completion of Microsoft Entra hybrid join will result in the device getting unjoined on every reboot.
<a name='handling-devices-with-azure-ad-registered-state'></a>
active-directory Manage Device Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md
From there, you can go to **All devices** to:
- Devices deployed via [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot).
- Printers that use [Universal Print](/universal-print/fundamentals/universal-print-getting-started).
- Complete device identity management tasks like enable, disable, delete, and manage.
- - The management options for [Printers](/universal-print/fundamentals/) and [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot) are limited in Microsoft Entra ID. These devices must be managed from their respective admin interfaces.
+ - The management options for [Printers](/universal-print/fundamentals/) and [Windows Autopilot](/autopilot/windows-autopilot) are limited in Microsoft Entra ID. These devices must be managed from their respective admin interfaces.
- Configure your device identity settings.
- Enable or disable enterprise state roaming.
- Review device-related audit logs.
You must be assigned one of the following roles to manage device settings:
- This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Microsoft Entra ID. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Microsoft Entra multifactor authentication services, see [getting started with Microsoft Entra multifactor authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.

> [!NOTE]
- > The **Require multifactor authentication to register or join devices with Microsoft Entra ID** setting applies to devices that are either Microsoft Entra joined (with some exceptions) or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices, [Microsoft Entra joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Microsoft Entra joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+ > The **Require multifactor authentication to register or join devices with Microsoft Entra ID** setting applies to devices that are either Microsoft Entra joined (with some exceptions) or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices, [Microsoft Entra joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Microsoft Entra joined devices that use [Windows Autopilot self-deployment mode](/autopilot/self-deploying).
- **Maximum number of devices**: This setting enables you to select the maximum number of Microsoft Entra joined or Microsoft Entra registered devices that a user can have in Microsoft Entra ID. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Microsoft Entra ID sets it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits.
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
You have two options to retrieve the value of the activity timestamp:
:::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot listing the name, owner, and other information of devices. One column lists the activity time stamp." border="false":::

-- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet.
+- The [Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice) cmdlet.
:::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command-line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value." border="false":::
A typical routine consists of the following steps:
1. Connect to Microsoft Entra ID using the [Connect-AzureAD](/powershell/module/azuread/connect-azuread) cmdlet.
1. Get the list of devices.
-1. Disable the device using the [Set-AzureADDevice](/powershell/module/azuread/Set-AzureADDevice) cmdlet (disable by using -AccountEnabled option).
+1. Disable the device using the [Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) cmdlet (disable by using -AccountEnabled option).
1. Wait for the grace period of however many days you choose before deleting the device.
-1. Remove the device using the [Remove-AzureADDevice](/powershell/module/azuread/Remove-AzureADDevice) cmdlet.
+1. Remove the device using the [Remove-AzureADDevice](/powershell/module/azuread/remove-azureaddevice) cmdlet.
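The routine above uses the AzureAD PowerShell cmdlets. As an alternative sketch only, the same disable-then-delete flow can be expressed against Microsoft Graph with `az rest`; the cutoff date and device object ID below are placeholders:

```azurecli
# Sketch: list devices whose approximate last sign-in is older than a cutoff date.
# Filtering on approximateLastSignInDateTime is an advanced Graph query, so the
# ConsistencyLevel header and $count parameter are required.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/devices?\$filter=approximateLastSignInDateTime le 2023-04-01T00:00:00Z&\$count=true" \
  --headers "ConsistencyLevel=eventual"

# Disable a stale device (placeholder object ID), then delete it after the grace period.
az rest --method patch \
  --url "https://graph.microsoft.com/v1.0/devices/<device-object-id>" \
  --body '{"accountEnabled": false}'

az rest --method delete \
  --url "https://graph.microsoft.com/v1.0/devices/<device-object-id>"
```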
### Get the list of devices
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
Microsoft Entra registered devices provide support for Bring Your Own Devices (B
### Registering devices
-Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in several ways, depending on the operating system.
+Registered devices are often managed with [Microsoft Intune](/mem/intune/fundamentals/deployment-guide-enrollment). Devices are enrolled in Intune in several ways, depending on the operating system.
BYOD and corporate-owned mobile devices are registered by users installing the Company Portal app.
-* [iOS](/mem/intune/user-help/install-and-sign-in-to-the-intune-company-portal-app-ios)
+* [iOS](/mem/intune/user-help/sign-in-to-the-company-portal)
* [Android](/mem/intune/user-help/enroll-device-android-company-portal)
* [Windows 10 or newer](/mem/intune/user-help/enroll-windows-10-device)
* [macOS](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)
active-directory Troubleshoot Device Windows Joined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-windows-joined.md
The troubleshooter will review the contents of the file you uploaded and provide
- [Troubleshoot Microsoft Entra hybrid joined devices](troubleshoot-hybrid-join-windows-current.md)
- [Troubleshooting Microsoft Entra hybrid joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md)
- [Troubleshoot pending device state](/troubleshoot/azure/active-directory/pending-devices)
-- [MDM enrollment of Windows 10-based devices](/windows/client-management/mdm/mdm-enrollment-of-windows-devices)
-- [Troubleshooting Windows device enrollment errors in Intune](/troubleshoot/mem/intune/troubleshoot-windows-enrollment-errors)
+- [MDM enrollment of Windows 10-based devices](/windows/client-management/mdm-enrollment-of-windows-devices)
+- [Troubleshooting Windows device enrollment errors in Intune](/troubleshoot/mem/intune/device-enrollment/troubleshoot-windows-enrollment-errors)
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer logs to locate the phase and error code for the join failures.
| **NTE_BAD_KEYSET** (0x80090016/-2146893802) | The Trusted Platform Module (TPM) operation failed or was invalid. | The failure likely results from a bad sysprep image. Ensure that the machine from which the sysprep image was created isn't Microsoft Entra joined, Microsoft Entra hybrid joined, or Microsoft Entra registered. |
| **TPM_E_PCP_INTERNAL_ERROR** (0x80290407/-2144795641) | Generic TPM error. | Disable TPM on devices with this error. Windows 10 versions 1809 and later automatically detect TPM failures and complete Microsoft Entra hybrid join without using the TPM. |
| **TPM_E_NOTFIPS** (0x80280036/-2144862154) | TPM in FIPS mode isn't currently supported. | Disable TPM on devices with this error. Windows 10 version 1809 automatically detects TPM failures and completes the Microsoft Entra hybrid join without using the TPM. |
-| **NTE_AUTHENTICATION_IGNORED** (0x80090031/-2146893775) | TPM is locked out. | Transient error. Wait for the cool-down period. The join attempt should succeed after a while. For more information, see [TPM fundamentals](/windows/security/information-protection/tpm/tpm-fundamentals#anti-hammering). |
+| **NTE_AUTHENTICATION_IGNORED** (0x80090031/-2146893775) | TPM is locked out. | Transient error. Wait for the cool-down period. The join attempt should succeed after a while. For more information, see [TPM fundamentals](/windows/security/hardware-security/tpm/tpm-fundamentals#anti-hammering). |
| | |
active-directory Troubleshoot Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md
# Troubleshoot primary refresh token issues on Windows devices
-This article discusses how to troubleshoot issues that involve the [primary refresh token](/azure/active-directory/devices/concept-primary-refresh-token) (PRT) when you authenticate on a Microsoft Entra joined Windows device by using your Microsoft Entra credentials.
+This article discusses how to troubleshoot issues that involve the [primary refresh token](./concept-primary-refresh-token.md) (PRT) when you authenticate on a Microsoft Entra joined Windows device by using your Microsoft Entra credentials.
<!-- docutune:ignore AAD -->
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:

-- [Azure Government](../../azure-government/documentation-government-welcome.md)
+- [Azure Government](/azure/azure-government/documentation-government-welcome)
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Sovereign Clouds](whats-new-archive.md).
Azure Active Directory Identity Protection "Leaked Credentials" detection is now
You can now create trusts on both user and resource forests. On-premises Active Directory DS users can't authenticate to resources in the Azure Active Directory DS resource forest until you create an outbound trust to your on-premises Active Directory DS. An outbound trust requires network connectivity to your on-premises virtual network to which you have installed Azure AD Domain Service. On a user forest, trusts can be created for on-premises Active Directory forests that aren't synchronized to Azure Active Directory DS.
-For more information, see: [How trust relationships work for forests in Active Directory](/azure/active-directory-domain-services/concepts-forest-trust).
+For more information, see: [How trust relationships work for forests in Active Directory](/entra/identity/domain-services/concepts-forest-trust).
Azure AD supports provisioning users into applications hosted on-premises or in
**Service category:** Azure AD Domain Services **Product capability:** Azure AD Domain Services
-Now within the Azure portal you have access to view key data for your Azure AD-DS Domain Controllers such as: LDAP Searches/sec, Total Query Received/sec, DNS Total Response Sent/sec, LDAP Successful Binds/sec, memory usage, processor time, Kerberos Authentications, and NTLM Authentications. For more information, see: [Check fleet metrics of Azure Active Directory Domain Services](../../active-directory-domain-services/fleet-metrics.md).
+Now within the Azure portal you have access to view key data for your Azure AD-DS Domain Controllers such as: LDAP Searches/sec, Total Query Received/sec, DNS Total Response Sent/sec, LDAP Successful Binds/sec, memory usage, processor time, Kerberos Authentications, and NTLM Authentications. For more information, see: [Check fleet metrics of Azure Active Directory Domain Services](/entra/identity/domain-services/fleet-metrics).
You can now use administrative units to delegate management of specified devices
**Service category:** Conditional Access **Product capability:** Identity Security & Protection
-Represents a tenant's customizable terms of use agreement that is created, and managed, with Azure Active Directory (Azure AD). You can use the following methods to create and manage the [Azure Active Directory Terms of Use feature](/graph/api/resources/agreement?#json-representation) according to your scenario. For more information, see: [agreement resource type](/graph/api/resources/agreement).
+Represents a tenant's customizable terms of use agreement that is created, and managed, with Azure Active Directory (Azure AD). You can use the following methods to create and manage the [Azure Active Directory Terms of Use feature](/graph/api/resources/agreement#json-representation) according to your scenario. For more information, see: [agreement resource type](/graph/api/resources/agreement).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
SAML/Ws-Fed based identity providers for authentication in Azure AD B2B are gene
**Service category:** Azure Active Directory Domain Services **Product capability:** Azure Active Directory Domain Services
-Azure Active Directory Domain Services will now support synchronizing custom attributes from Azure AD for on-premises accounts. For more information, see: [Custom attributes for Azure Active Directory Domain Services](/azure/active-directory-domain-services/concepts-custom-attributes).
+Azure Active Directory Domain Services will now support synchronizing custom attributes from Azure AD for on-premises accounts. For more information, see: [Custom attributes for Azure Active Directory Domain Services](/entra/identity/domain-services/concepts-custom-attributes).
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 06/14/2023 Last updated : 10/02/2023
Risk detections in Microsoft Entra ID Protection include any identified suspicious actions related to user accounts in the directory. Risk detections (both user and sign-in linked) contribute to the overall user risk score that is found in the Risky Users report.
-Identity Protection provides organizations access to powerful resources to see and respond quickly to these suspicious actions.
+ID Protection provides organizations access to powerful resources to see and respond quickly to these suspicious actions.
![Security overview showing risky users and sign-ins](./media/concept-identity-protection-risks/identity-protection-security-overview.png)

> [!NOTE]
-> Identity Protection generates risk detections only when the correct credentials are used. If incorrect credentials are used on a sign-in, it does not represent risk of credential compromise.
+> ID Protection generates risk detections only when the correct credentials are used. If incorrect credentials are used on a sign-in, it does not represent risk of credential compromise.
## Risk types and detection
The following premium detections are visible only to Microsoft Entra ID P2 custo
The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior.
+##### Investigating atypical travel detections
+
+1. If you're able to confirm the activity wasn't performed by a legitimate user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if the attacker has access to reset the password or perform MFA, then reset the password.
+1. If a user is known to use the IP address in the scope of their duties:
+ 1. **Recommended action**: Dismiss the alert.
+1. If you're able to confirm that the user recently traveled to the destination detailed in the alert:
+ 1. **Recommended action**: Dismiss the alert.
+1. If you're able to confirm that the IP address range is from a sanctioned VPN:
+ 1. **Recommended action**: Mark sign-in as safe and add the VPN IP address range to named locations in Azure AD and Microsoft Defender for Cloud Apps.
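Named locations can be managed in the portal or through Microsoft Graph. As an illustrative sketch (the display name and CIDR range below are placeholders), a trusted VPN range could be added like this:

```azurecli
# Sketch: register a corporate VPN range as a trusted named location (placeholder CIDR).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations" \
  --body '{
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate VPN",
    "isTrusted": true,
    "ipRanges": [
      { "@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24" }
    ]
  }'
```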
+
#### Anomalous token

**Calculated offline**. This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens.
The algorithm ignores obvious "false positives" contributing to the impossible t
> [!NOTE]
> Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this risk as an indicator of potential token replay.
+##### Investigating anomalous token detections
+
+1. If you're able to confirm that the activity wasn't performed by a legitimate user using a combination of risk alert, location, application, IP address, User Agent, or other characteristics that are unexpected for the user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+1. If you're able to confirm location, application, IP address, User Agent, or other characteristics are expected for the user and there aren't other indications of compromise:
+ 1. **Recommended action**: Allow the user to self-remediate with a Conditional Access risk policy or have an admin confirm sign-in as safe.
+
+For further investigation of token based detections, see the article [Token tactics: How to prevent, detect, and respond to cloud token theft](https://www.microsoft.com/security/blog/2022/11/16/token-tactics-how-to-prevent-detect-and-respond-to-cloud-token-theft/) and the [Token theft investigation playbook](/security/operations/token-theft-playbook).
+
#### Token issuer anomaly

**Calculated offline**. This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns.
+##### Investigating token issuer anomaly detections
+
+1. If you're able to confirm that the activity wasn't performed by a legitimate user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+1. If the user confirmed this action was performed by them and there are no other indicators of compromise:
+ 1. **Recommended action**: Allow the user to self-remediate with a Conditional Access risk policy or have an admin confirm sign-in as safe.
+
+For further investigation of token based detections, see the article [Token tactics: How to prevent, detect, and respond to cloud token theft](https://www.microsoft.com/security/blog/2022/11/16/token-tactics-how-to-prevent-detect-and-respond-to-cloud-token-theft/).
+ #### Malware linked IP address (deprecated)
-**Calculated offline**. This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection matches the IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. **This detection has been deprecated**. Identity Protection no longer generates new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.
+**Calculated offline**. This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection matches the IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. **This detection has been deprecated**. ID Protection no longer generates new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.
#### Suspicious browser

**Calculated offline**. Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser.
+##### Investigating suspicious browser detections
+
+1. If the browser isn't commonly used by the user, or activity within the browser doesn't match the user's normal behavior:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+
+
#### Unfamiliar sign-in properties

**Calculated in real-time**. This risk detection type considers past sign-in history to look for anomalous sign-ins. The system stores information about previous sign-ins, and triggers a risk detection when a sign-in occurs with properties that are unfamiliar to the user. These properties can include IP, ASN, location, device, browser, and tenant IP subnet. Newly created users are in a "learning mode" period where the unfamiliar sign-in properties risk detection is turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity.
Selecting an unfamiliar sign-in properties risk allows you to see **Additional I
**Calculated offline**. This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources.
+##### Investigating malicious IP address detections
+
+1. If you're able to confirm that the activity wasn't performed by a legitimate user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+1. If a user is known to use the IP address in the scope of their duties:
+ 1. **Recommended action**: Dismiss the alert.
+ #### Suspicious inbox manipulation rules
-**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection looks at your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate: a user's account is compromised, messages are being intentionally hidden, and the mailbox is being used to distribute spam or malware in your organization.
+**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection looks at your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate: a user's account is compromised, messages are being intentionally hidden, and the mailbox is being used to distribute spam or malware in your organization.
#### Password spray

**Calculated offline**. A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been successfully performed. For example, in the detected instance, the attacker successfully authenticated.
+##### Investigating password spray detections
+
+1. If you're able to confirm that the activity wasn't performed by a legitimate user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+1. If a user is known to use the IP address in the scope of their duties:
+ 1. **Recommended action**: Dismiss the alert.
+1. If you're able to confirm that the account hasn't been compromised and you see no brute force or password spray indicators against the account:
+ 1. **Recommended action**: Allow the user to self-remediate with a Conditional Access risk policy or have an admin confirm sign-in as safe.
+
+For further investigation of password spray risk detections, see the article [Guidance for identifying and investigating password spray attacks](/security/operations/incident-response-playbook-password-spray).
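To scope the investigation across the tenant, the risk detections API can be filtered to this detection type. The following is a sketch using `az rest`; filter support on `riskEventType` is assumed here:

```azurecli
# Sketch: list password spray risk detections across the tenant.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?\$filter=riskEventType eq 'passwordSpray'"
```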
+ #### Impossible travel
-**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies user activities (is a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it takes to travel from the first location to the second. This risk may indicate that a different user is using the same credentials.
+**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/anomaly-detection-policy#impossible-travel). This detection identifies user activities (is a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it takes to travel from the first location to the second. This risk may indicate that a different user is using the same credentials.
#### New country
-**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.
+**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.
#### Activity from anonymous IP address
-**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address.
+**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address.
#### Suspicious inbox forwarding
-**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address.
+**Calculated offline**. This detection is discovered using information provided by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address.
#### Mass access to sensitive files
Customers without Microsoft Entra ID P2 licenses receive detections titled "addi
**Calculated offline**. This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share these gathered credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they're checked against Microsoft Entra users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions).
+##### Investigating leaked credentials detections
+
+1. If this detection signal has alerted for a leaked credential for a user:
+ 1. **Recommended action**: Mark the sign-in as compromised, and invoke a password reset if not already performed by self-remediation. Block the user if an attacker has access to reset the password or perform MFA, then reset the password and revoke all tokens.
+
<a name='azure-ad-threat-intelligence-user'></a>

#### Microsoft Entra threat intelligence (user)
Customers without Microsoft Entra ID P2 licenses receive detections titled "addi
### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [Identity protection policies](./concept-identity-protection-policies.md), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
+ID Protection categorizes risk into three tiers: low, medium, and high. When configuring [ID Protection policies](./concept-identity-protection-policies.md), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
Microsoft doesn't provide specific details about how risk is calculated. Each level of risk brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
Risk detections like leaked credentials require the presence of password hashes
### Why are there risk detections generated for disabled user accounts?
-Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and wont be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
+Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. ID Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and won't be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
### Where does Microsoft find leaked credentials?
active-directory Concept Identity Protection Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md
The security overview page is being replaced by the [Microsoft Entra ID Protecti
- [What is risk](concept-identity-protection-risks.md)
- [Policies available to mitigate risks](concept-identity-protection-policies.md)
-- [Identity Secure Score](../fundamentals/identity-secure-score.md)
+- [Identity Secure Score](../reports-monitoring/concept-identity-secure-score.md)
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Organizations can find workload identities that have been flagged for risk in on
### Microsoft Graph APIs
-You can also query risky workload identities [using the Microsoft Graph API](/graph/use-the-api). There are two new collections in the [Identity Protection APIs](/graph/api/resources/identityprotection-root).
+You can also query risky workload identities [using the Microsoft Graph API](/graph/use-the-api). There are two new collections in the [Identity Protection APIs](/graph/api/resources/identityprotection-overview).
- `riskyServicePrincipals`
- `servicePrincipalRiskDetections`
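As a sketch, both collections can be queried with `az rest`; the v1.0 endpoint is assumed here, and your tenant may require the beta endpoint depending on rollout and licensing:

```azurecli
# Sketch: query risky workload identities and their risk detections.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/identityProtection/riskyServicePrincipals"

az rest --method get \
  --url "https://graph.microsoft.com/v1.0/identityProtection/servicePrincipalRiskDetections"
```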
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md
Microsoft Entra ID stores reports and security signals for a defined period of t
| Microsoft Entra multifactor authentication usage | 30 days | 30 days | 30 days |
| Risky sign-ins | 7 days | 30 days | 30 days |
-Organizations can choose to store data for longer periods by changing diagnostic settings in Microsoft Entra ID to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
+Organizations can choose to store data for longer periods by changing diagnostic settings in Microsoft Entra ID to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](/azure/azure-monitor/essentials/diagnostic-settings) to create one.
[ ![Diagnostic settings screen in Microsoft Entra ID showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox)

## Log Analytics
-Log Analytics allows organizations to query data using built in queries or custom created Kusto queries, for more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
+Log Analytics allows organizations to query data using built-in queries or custom Kusto queries. For more information, see [Get started with log queries in Azure Monitor](/azure/azure-monitor/logs/get-started-queries).
Once enabled, you'll find access to Log Analytics in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators:
AADRiskyUsers
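As a sketch, a query against this table can also be run from the CLI; the workspace GUID below is a placeholder, and the `log-analytics` CLI extension is assumed:

```azurecli
# Requires the Log Analytics CLI extension: az extension add --name log-analytics
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "AADRiskyUsers | where RiskLevel == 'high' | take 10"
```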
## Storage account
-By routing logs to an Azure storage account, you can keep it for longer than the default retention period. For more information, see the article [Tutorial: Archive Microsoft Entra logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+By routing logs to an Azure storage account, you can keep it for longer than the default retention period. For more information, see the article [Tutorial: Archive Microsoft Entra logs to an Azure storage account](../reports-monitoring/howto-archive-logs-to-storage-account.md).
## Azure Event Hubs
-Azure Event Hubs can look at incoming data from sources like Microsoft Entra ID Protection and provide real-time analysis and correlation. For more information, see the article [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+Azure Event Hubs can look at incoming data from sources like Microsoft Entra ID Protection and provide real-time analysis and correlation. For more information, see the article [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/howto-stream-logs-to-event-hub.md).
## Other options
-Organizations can choose to [connect Microsoft Entra data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) as well for further processing.
+Organizations can choose to [connect Microsoft Entra data to Microsoft Sentinel](/azure/sentinel/data-connectors/azure-active-directory-identity-protection) as well for further processing.
Organizations can use the [Microsoft Graph API to programmatically interact with risk events](howto-identity-protection-graph-api.md).

## Next steps

- [What is Microsoft Entra monitoring?](../reports-monitoring/overview-monitoring-health.md)
-- [Install and use the log analytics views for Microsoft Entra ID](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md)
-- [Connect data from Microsoft Entra ID Protection](../../sentinel/data-connectors/azure-active-directory-identity-protection.md)
+- [Install and use the log analytics views for Microsoft Entra ID](/azure/azure-monitor/visualize/workbooks-view-designer-conversion-overview)
+- [Connect data from Microsoft Entra ID Protection](/azure/sentinel/data-connectors/azure-active-directory-identity-protection)
- [Microsoft Entra ID Protection and the Microsoft Graph PowerShell SDK](howto-identity-protection-graph-api.md)
-- [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+- [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../reports-monitoring/howto-stream-logs-to-event-hub.md)
active-directory Howto Identity Protection Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md
Invoke-MgDismissRiskyUser -UserIds $riskyUsers.Id
- [Get started with the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started)
- [Tutorial: Identify and remediate risks using Microsoft Graph APIs](/graph/tutorial-riskdetection-api)
-- [Overview of Microsoft Graph](https://developer.microsoft.com/graph/docs)
+- [Overview of Microsoft Graph](/graph/overview)
- [Microsoft Entra ID Protection](./overview-identity-protection.md)
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Previously updated : 11/11/2022 Last updated : 10/02/2023
# How To: Investigate risk
-Identity Protection provides organizations with three reports they can use to investigate identity risks in their environment. These reports are the **risky users**, **risky sign-ins**, and **risk detections**. Investigation of events is key to better understanding and identifying any weak points in your security strategy.
-
-All three reports allow for downloading of events in .CSV format for further analysis. The risky users and risky sign-ins reports allow for downloading the most recent 2500 entries, while the risk detections report allows for downloading the most recent 5000 records.
+Identity Protection provides organizations with reporting they can use to investigate identity risks in their environment. These reports include **risky users**, **risky sign-ins**, **risky workload identities**, and **risk detections**. Investigation of events is key to better understanding and identifying any weak points in your security strategy. All of these reports allow for downloading of events in .CSV format or integration with other security solutions like a dedicated SIEM tool for further analysis.
Organizations can take advantage of the Microsoft Graph API integrations to aggregate data with other sources they may have access to as an organization.
Administrators can then choose to take action on these events. Administrators ca
- Block user from signing in
- [Investigate further using Microsoft Defender for Identity](#investigate-risk-with-microsoft-365-defender)
+#### Understand the scope
+
+1. Consider creating a known traveler database for updated organizational travel reporting and use it to cross-reference travel activity.
+1. Add corporate VPNs and IP address ranges to named locations to reduce false positives (see the sketch after this list).
+1. Review the logs to identify similar activities with the same characteristics. This could be an indication of more compromised accounts.
+ 1. If there are common characteristics, like IP address, geography, or success/failure, consider blocking them with a Conditional Access policy.
+ 1. Review which resource may have been compromised, such as potential data downloads or administrative modifications.
+ 1. Enable self-remediation policies through Conditional Access.
+1. If you see that the user performed other risky activities, such as downloading a large volume of files from a new location, this is a strong indication of a possible compromise.
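As a minimal, hedged sketch of the named-location step above (the display name and CIDR range are hypothetical), a corporate VPN egress range can be registered as a trusted named location with Microsoft Graph PowerShell:

```powershell
# Register a corporate VPN egress range as a trusted named location
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$params = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Corporate VPN"
    isTrusted     = $true
    ipRanges      = @(
        @{
            "@odata.type" = "#microsoft.graph.iPv4CidrRange"
            cidrAddress   = "203.0.113.0/24"
        }
    )
}

New-MgIdentityConditionalAccessNamedLocation -BodyParameter $params
```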
+ ## Risky sign-ins :::image type="content" source="media/howto-identity-protection-investigate-risk/risky-sign-ins-without-details.png" alt-text="Screenshot of the Risky sign-ins report." lightbox="media/howto-identity-protection-investigate-risk/risky-sign-ins-with-details.png":::
Organizations may use the following frameworks to begin their investigation into
1. Location - Is the user traveling to a different location or accessing devices from multiple locations? 1. IP address 1. User agent string
- 1. If you have access to other security tools like [Microsoft Sentinel](../../sentinel/overview.md), check for corresponding alerts that might indicate a larger issue.
+ 1. If you have access to other security tools like [Microsoft Sentinel](/azure/sentinel/overview), check for corresponding alerts that might indicate a larger issue.
1. Organizations with access to [Microsoft 365 Defender](/defender-for-identity/understanding-security-alerts) can follow a user risk event through other related alerts and incidents and the MITRE ATT&CK chain. 1. Select the user in the Risky users report. 1. Select the **ellipsis (...)** in the toolbar then choose **Investigate with Microsoft 365 Defender**.
Organizations may use the following frameworks to begin their investigation into
1. Location 1. IP address
-<a name='investigate-azure-ad-threat-intelligence-detections'></a>
+> [!IMPORTANT]
+> If you suspect an attacker can impersonate the user, reset the user's password and require MFA. You should also block the user and revoke all refresh and access tokens.
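A hedged sketch of those containment steps (the user ID is hypothetical; the cmdlets come from the Microsoft.Graph.Users and Microsoft.Graph.Users.Actions modules):

```powershell
Connect-MgGraph -Scopes 'User.ReadWrite.All', 'User.RevokeSessions.All'

# Block interactive sign-in for the account
Update-MgUser -UserId 'alice@contoso.com' -AccountEnabled:$false

# Invalidate refresh tokens so existing sessions can't be silently renewed
Revoke-MgUserSignInSession -UserId 'alice@contoso.com'
```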
### Investigate Microsoft Entra threat intelligence detections
If more information is shown for the detection:
1. Does the IP generate a high number of failures for a user or set of users in your directory? 1. Is the traffic of the IP coming from an unexpected protocol or application, for example Exchange legacy protocols? 1. If the IP address corresponds to a cloud service provider, rule out that there are no legitimate enterprise applications running from the same IP.
-1. This account was attacked by a Password spray:
+1. This account was the victim of a password spray attack:
1. Validate that no other users in your directory are targets of the same attack. 1. Do other users have sign-ins with similar atypical patterns seen in the detected sign-in within the same time frame? Password spray attacks may display unusual patterns in: 1. User agent string
If more information is shown for the detection:
1. Protocol 1. Ranges of IPs/ASNs 1. Time and frequency of sign-ins
- 1. This detection was triggered by a real-time rule
- 1. Validate that no other users in your directory are targets of the same attack. This can be found by the TI_RI_#### number assigned to the rule.
- 1. Real-time rules protect against novel attacks identified by Microsoft's threat intelligence. If multiple users in your directory were targets of the same attack, investigate unusual patterns in other attributes of the sign in.
+1. This detection was triggered by a real-time rule:
+ 1. Validate that no other users in your directory are targets of the same attack. This can be found by the TI_RI_#### number assigned to the rule.
+ 1. Real-time rules protect against novel attacks identified by Microsoft's threat intelligence. If multiple users in your directory were targets of the same attack, investigate unusual patterns in other attributes of the sign in.
## Investigate risk with Microsoft 365 Defender
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/overview-identity-protection.md
When user remediation isn't enabled, an administrator must manually review them
Data from Identity Protection can be exported to other tools for archive, further investigation, and correlation. The Microsoft Graph based APIs allow organizations to collect this data for further processing in a tool such as their SIEM. Information about how to access the Identity Protection API can be found in the article, [Get started with Microsoft Entra ID Protection and Microsoft Graph](howto-identity-protection-graph-api.md)
-Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Microsoft Entra ID Protection](../../sentinel/data-connectors-reference.md#microsoft).
+Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Microsoft Entra ID Protection](/azure/sentinel/data-connectors-reference#microsoft).
Organizations may store data for longer periods by changing the diagnostic settings in Microsoft Entra ID. They can choose to send data to a Log Analytics workspace, archive data to a storage account, stream data to Event Hubs, or send data to another solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
active-directory Migrate Okta Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md
Learn more: [Configure your company branding](../fundamentals/how-to-customize-b
## Defederate Office 365 domains
-When your organization is comfortable with the managed authentication experience, you can defederate your domain from Okta. To begin, use the following commands to connect to Microsoft Graph PowerShell. If you don't have the Microsoft Graph PowerShell module, download it by entering `install-module MSOnline`.
-
-```PowerShell
-
-import-module MSOnline
-Connect-MgGraph
-New-MgDomainFederationConfiguration
--domainname yourdomain.com -authentication managed-
-```
+When your organization is comfortable with the managed authentication experience, you can defederate your domain from Okta. To begin, use the following commands to connect to Microsoft Graph PowerShell. If you don't have the Microsoft Graph PowerShell module, download it by entering `Install-Module Microsoft.Graph`.
+
+1. In PowerShell, sign in to Microsoft Entra ID by using a Global Administrator account.
+ ```powershell
+ Connect-MgGraph -Scopes "Domain.ReadWrite.All", "Directory.AccessAsUser.All"
+ ```
+
+2. To convert the domain, run the following command:
+ ```powershell
+ Update-MgDomain -DomainId yourdomain.com -AuthenticationType "Managed"
+ ```
+
+3. Verify that the domain has been converted to managed by running the following command. The authentication type should be set to `Managed`.
+ ```powershell
+ Get-MgDomain -DomainId yourdomain.com
+ ```
After you set the domain to managed authentication, you've defederated your Office 365 tenant from Okta while maintaining user access to the Okta home page.
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
In the source tenant, to enable provisioning, create a provisioning job.
# [Microsoft Graph](#tab/ms-graph)
-1. In the source tenant, use the [Add synchronization secrets](/graph/api/synchronization-synchronization-secrets) API to save your credentials.
+1. In the source tenant, use the [Add synchronization secrets](/graph/api/synchronization-serviceprincipal-put-synchronization) API to save your credentials.
**Request**
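The body of this request is elided in this digest; as a rough, hedged sketch (the service principal ID and values are hypothetical; the key names follow the cross-tenant synchronization documentation), the call can be made with `Invoke-MgGraphRequest`:

```powershell
# Save the target tenant ID and sync policy as synchronization secrets
$body = @{
    value = @(
        @{ key = 'CompanyId';          value = '<target-tenant-id>' }
        @{ key = 'AuthenticationType'; value = 'SyncPolicy' }
    )
} | ConvertTo-Json -Depth 4

Invoke-MgGraphRequest -Method PUT `
    -Uri 'https://graph.microsoft.com/beta/servicePrincipals/<servicePrincipalId>/synchronization/secrets' `
    -Body $body -ContentType 'application/json'
```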
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
To configure this setting using Microsoft Graph, see the [Update crossTenantAcce
#### How do users know what tenants they belong to?
-For cross-tenant synchronization, users don't receive an email or have to accept a consent prompt. If users want to see what tenants they belong to, they can open their [My Account](https://support.microsoft.com/account-billing/my-account-portal-for-work-or-school-accounts-eab41bfe-3b9e-441e-82be-1f6e568d65fd) page and select **Organizations**. In the Microsoft Entra admin center, users can open their [Portal settings](../../azure-portal/set-preferences.md), view their **Directories + subscriptions**, and switch directories.
+For cross-tenant synchronization, users don't receive an email or have to accept a consent prompt. If users want to see what tenants they belong to, they can open their [My Account](https://support.microsoft.com/account-billing/my-account-portal-for-work-or-school-accounts-eab41bfe-3b9e-441e-82be-1f6e568d65fd) page and select **Organizations**. In the Microsoft Entra admin center, users can open their [Portal settings](/azure/azure-portal/set-preferences), view their **Directories + subscriptions**, and switch directories.
For more information, including privacy information, see [Leave an organization as an external user](../external-identities/leave-the-organization.md).
Does cross-tenant synchronization support deprovisioning users?
Does cross-tenant synchronization support restoring users? - If the user in the source tenant is restored, reassigned to the app, or meets the scoping condition again within 30 days of soft deletion, the user is restored in the target tenant.-- IT admins can also manually [restore](/azure/active-directory/fundamentals/active-directory-users-restore) the user directly in the target tenant.
+- IT admins can also manually [restore](../fundamentals/users-restore.md) the user directly in the target tenant.
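A minimal sketch of that manual restore via Microsoft Graph PowerShell (the object ID is hypothetical; assumes the Microsoft.Graph.Identity.DirectoryManagement module):

```powershell
# Restore a soft-deleted user in the target tenant
Connect-MgGraph -Scopes 'User.ReadWrite.All'
Restore-MgDirectoryDeletedItem -DirectoryObjectId '11111111-1111-1111-1111-111111111111'
```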
How can I deprovision all the users that are currently in scope of cross-tenant synchronization?
active-directory Multi Tenant Organization Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-known-issues.md
The experiences and issues described in this article have the following scope.
- Cross-tenant synchronization deprovisioning: By default, when provisioning scope is reduced while a synchronization job is running, users fall out of scope and are soft deleted, unless Target Object Actions for Delete is disabled. For more information, see [Deprovisioning](cross-tenant-synchronization-overview.md#deprovisioning) and [Define who is in scope for provisioning](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters). -- Cross-tenant synchronization deprovisioning: Currently, [SkipOutOfScopeDeletions](../app-provisioning/skip-out-of-scope-deletions.md?toc=%2Fazure%2Factive-directory%2Fmulti-tenant-organizations%2Ftoc.json&pivots=cross-tenant-synchronization) works for application provisioning jobs, but not for Microsoft Entra cross-tenant synchronization. To avoid soft deletion of users taken out of scope of cross-tenant synchronization, set [Target Object Actions for Delete](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters) to disabled.
+- Cross-tenant synchronization deprovisioning: Currently, [SkipOutOfScopeDeletions](../app-provisioning/skip-out-of-scope-deletions.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization) works for application provisioning jobs, but not for Microsoft Entra cross-tenant synchronization. To avoid soft deletion of users taken out of scope of cross-tenant synchronization, set [Target Object Actions for Delete](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters) to disabled.
## Next steps
active-directory Workbook Mfa Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-mfa-gaps.md
The **MFA gaps** workbook is currently not available as a template, but you can
1. Select the **Apply** button. The workbook may take a few moments to populate. 1. Select the **Save As** button and provide the required information. - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
- - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
+ - Optionally choose to save your workbook content to an [Azure Storage Account](/azure/azure-monitor/visualize/workbooks-bring-your-own-storage).
1. Select the **Apply** button. ## Summary
The summary widget provides a detailed look at sign-ins related to multifactor a
* **Percent of sign-ins not protected by multi-factor authentication requirement by operating system:** This widget provides a time-based bar graph of sign-in percentages that aren't protected by MFA by operating system of the devices. ### Sign-ins not protected by MFA requirement by locations
-* **Number of sign-ins not protected by multi-factor authentication requirement by location:** This widget shows the sign-ins counts that aren't protected by MFA requirement in map bubble chart on the world map.
+* **Number of sign-ins not protected by multi-factor authentication requirement by location:** This widget shows the sign-in counts that aren't protected by the MFA requirement in a map bubble chart on the world map.
active-directory Zenya Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-provisioning-tutorial.md
For more information (in dutch) also read: [`Implementatie SCIM koppeling`](http
|userName|String| |phoneNumbers[type eq "work"].value|String| |externalId|String|-
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |title|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|
1. Under the **Mappings** section, select **Synchronize Microsoft Entra groups to Zenya**.
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
The response contains the following properties.
#### didModel type
-We support two different didModels. One is `ion` and the other supported method is `web`
-
-#### ION
-
-| Property | Type | Description |
-| -- | -- | -- |
-| `did` | string | The DID for this verifiable credential service instance |
-| `signingKeys` | string array | URL to the signing key |
-| `recoveryKeys` | string array | URL to the recovery key |
-| `encryptionKeys` | string array | URL to the encryption key |
-| `linkedDomainUrls` | string array | Domains linked to this DID |
-| `didDocumentStatus` | string | status of the DID, `published` when it's written to ION otherwise it is `submitted`|
- #### Web | Property | Type | Description |
Content-type: application/json
### Create authority
-This call creates a new **private key**, recovery key and update key, stores these keys in the specified Azure Key Vault and sets the permissions to this Key Vault for the verifiable credential service and a create new **DID** with corresponding DID Document and commits that to the ION network.
+This call creates a new **private key**, recovery key, and update key, stores these keys in the specified Azure Key Vault, sets the permissions on this Key Vault for the verifiable credential service, and creates a new **DID** with a corresponding DID document.
#### HTTP request
Example message
} ```
-### Linked domains
-
-It's possible to update the domain related to the DID. This functionality needs to write an update operation to ION to get this update distributed around the world. The update can take some time, currently up to an hour before it's processed and available for other users.
-
-#### HTTP request
-
-`POST /v1.0/verifiableCredentials/authorities/:authorityId/updateLinkedDomains`
-
-replace the value of `:authorityId` with the value of the authority ID you want to update.
-
-#### Request headers
-
-| Header | Value |
-| -- | -- |
-| Authorization | Bearer (token). Required |
-| Content-Type | application/json |
-
-#### Request body
-
-You need to specify the domain you want to publish to the DID Document. Although the value of domains is an array, you should only specify a **single domain**.
-
-In the request body, supply a JSON representation of the following:
-
-| Property | Type | Description |
-| -- | -- | -- |
-| `domainUrls` | string array | link to domain(s), need to start with https and not contain a path |
-
-Example message:
-
-```
-{
- "domainUrls" : ["https://www.mydomain.com"]
-}
-```
-
-#### Response message
-
-```
-HTTP/1.1 202 Accepted
-Content-type: application/json
-
-Accepted
-```
-
-The didDocumentStatus switches to `submitted` it will take a while before the change is committed to the ION network.
-
-If you try to submit a change before the operation is completed, you'll get the following error message:
-
-```
-HTTP/1.1 409 Conflict
-Content-type: application/json
-
-{
- "requestId":"83047b1c5811284ce56520b63b9ba83a","date":"Mon, 07 Feb 2022 18:36:24 GMT",
- "mscv":"tf5p8EaXIY1iWgYM.1",
- "error":
- {
- "code": "conflict",
- "innererror": {
- "code":"ionOperationNotYetPublished",
- "message":"There is already an operation in queue for this organization's DID (decentralized identifier), please wait until the operation is published to submit a new one."
- }
- }
-}
-```
-
-You need to wait until the didDocumentstatus is back to `published` before you can submit another change.
-
-The domain URLs must start with https and not contain any path values.
-
-Possible error messages:
-
-```
-HTTP/1.1 400 Bad Request
-Content-type: application/json
-
-{
- "requestId":"57c5ac78abb86bbfbc6f9e96d9ae6b18",
- "date":"Mon, 07 Feb 2022 18:47:14 GMT",
- "mscv":"+QfihZZk87z0nky2.0",
- "error": "BadRequest",
- "innererror": {
- "code":"parameterUrlSchemeMustBeHttps",
- "message":"URLs must begin with HTTPS: domains"
- }
-}
-```
-
-```
-HTTP/1.1 400 Bad Request
-Content-type: application/json
-
-{
- "requestId":"e65753b03f28f159feaf434eaf140547",
- "date":"Mon, 07 Feb 2022 18:48:36 GMT",
- "mscv":"QWB4uvgYzCKuMeKg.0",
- "error": "BadRequest",
- "innererror": {
- "code":"parameterUrlPathMustBeEmpty",
- "message":"The URL can only include a domain. Please remove any characters after the domain name and try again. linkedDomainUrl"
- }
-}
-```
--
-#### Remarks
-
-Although it is technically possible to publish multiple domains, we currently only support a single domain per authority.
- ### Well-known DID configuration The `generateWellknownDidConfiguration` method generates the signed did-configuration.json file. The file must be uploaded to the `.well-known` folder in the root of the website hosted for the domain in the linked domain of this verifiable credential instance. Instructions can be found [here](how-to-dnsbind.md#verify-domain-ownership-and-distribute-did-configurationjson-file).
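A hedged sketch of publishing the generated file to the `.well-known` path of an Azure Storage static website (the account name is hypothetical; `$web` is the standard static-website container, quoted so PowerShell doesn't expand it as a variable):

```powershell
# Upload did-configuration.json to the .well-known folder of the static website
$ctx = New-AzStorageContext -StorageAccountName 'vcdemostatic' -UseConnectedAccount
Set-AzStorageBlobContent -Container '$web' -File '.\did-configuration.json' `
    -Blob '.well-known/did-configuration.json' -Context $ctx `
    -Properties @{ ContentType = 'application/json' }
```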
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
Microsoft is actively collaborating with members of the Decentralized Identity F
Before we can understand DIDs, it helps to compare them with current identity systems. Email addresses and social network IDs are human-friendly aliases for collaboration but are now overloaded to serve as the control points for data access across many scenarios beyond collaboration. This creates a potential problem, because access to these IDs can be removed at any time by external parties.
-Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system intended to provide self-ownership and user control.
+Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized trust systems. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system intended to provide self-ownership and user control.
Microsoft's verifiable credential solution uses decentralized identifiers (DIDs) to cryptographically sign as proof that a relying party (verifier) is attesting to information proving they are the owners of a verifiable credential. A basic understanding of DIDs is recommended for anyone creating a verifiable credential solution based on the Microsoft offering.
To deliver on these promises, we need a technical foundation made up of seven ke
IDs users create, own, and control independently of any organization or government. DIDs are globally unique identifiers linked to Decentralized Public Key Infrastructure (DPKI) metadata composed of JSON documents that contain public key material, authentication descriptors, and service endpoints. **2. Trust System**.
-In order to be able to resolve DID documents, DIDs are typically recorded on an underlying network of some kind that represents a trust system. Microsoft currently supports two trust systems, which are:
--- DID:Web is a permission based model that allows trust using a web domain's existing reputation. DID:Web is in support status General Available.--- ION (Identity Overlay Network) ION is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. DID:ION is in preview.
+In order to be able to resolve DID documents, DIDs are typically recorded on an underlying network of some kind that represents a trust system. Microsoft currently supports the DID:Web trust system, a permission-based model that establishes trust using a web domain's existing reputation. DID:Web is generally available.
**3. DID User Agent/Wallet: Microsoft Authenticator App**. Enables real people to use decentralized identities and Verifiable Credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials and manages the backup of your DID's seed through an encrypted wallet file.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
It is of high importance that you link your DID to a domain recognizable to the
## How do you update the linked domain on your DID?
-If your trust system is Web, then updating your linked domain isn't supported. You have to opt-out and re-onboard. If your trust system is ION, you can update the linked domain via redoing the **Verify domain ownership** step. It might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published.
-
-### How do I know when the linked domain update has successfully completed?
-
-If the trust system is ION, once the domain changes are published to ION, the domain section inside the Microsoft Entra Verified ID service displays Published as the status and you should be able to make new changes to the domain. If the trust system is Web, the changes are public as soon as you replace the did-configuration.json file on your web server.
-
->[!IMPORTANT]
-> No changes to your domain are possible while publishing is in progress.
+With the Web trust system, updating your linked domain isn't supported. You have to opt out and re-onboard.
## Linked Domain made easy for developers The easiest way for a developer to get a domain to use for linked domain is to use Azure Storage's static website feature. You can't control what the domain name is, other than it contains your storage account name as part of its hostname.
-Follow these steps to quickly setup a domain to use for Linked Domain:
+Follow these steps to quickly set up a domain to use for Linked Domain:
1. Create an **Azure Storage account**. During storage account creation, choose StorageV2 (general-purpose v2 account) and Locally redundant storage (LRS). 1. Go to that Storage Account and select **Static website** in the left hand menu and enable static website. If you can't see the **Static website** menu item, you didn't create a **V2** storage account.
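As a rough Azure PowerShell equivalent of the preceding portal steps (the resource names and region are hypothetical; assumes the Az.Storage module):

```powershell
# Create a general-purpose v2 storage account with locally redundant storage
$account = New-AzStorageAccount -ResourceGroupName 'vc-demo-rg' -Name 'vcdemostatic' `
    -Location 'westeurope' -SkuName 'Standard_LRS' -Kind 'StorageV2'

# Enable the static website endpoint that will host the linked domain content
Enable-AzStorageStaticWebsite -Context $account.Context -IndexDocument 'index.html'
```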
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
## Why do I need to register my decentralized ID?
-If your trust system for the tenant is Web, you need register your decentralized ID to be able to issue and verify your credentials. When the trust system is Web, you have to make this information available on your website and complete this registration. Otherwise your public key isn't made public. When you use the ION based trust system, information like your issuers' public keys are published to blockchain and you don't need to complete this step.
+For the Web trust system, you need to register your decentralized ID to be able to issue and verify your credentials. You have to make this information available on your website and complete this registration. Otherwise, your public key isn't made public.
## How do I register my decentralized ID?
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
However, there are scenarios where using a decentralized architecture with verif
In decentralized identity systems, control of the lifecycle and usage of the credentials is shared between the issuer, the holder, and relying party consuming the credential.
-Consider the scenario in the diagram below where Proseware, an e-commerce website, wants to offer Woodgrove employees corporate discounts.
+Consider the scenario in the diagram where Proseware, an e-commerce website, wants to offer Woodgrove employees corporate discounts.
![Example of a decentralized identity system](media/introduction-to-verifiable-credentials-architecture/decentralized-architecture.png)
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
* In the preceding diagram, Woodgrove is the issuer of verifiable credentials to its employees.
- "A ***holder*** is a role an entity might perform by possessing one or more verifiable credentials and generating presentations from them. A holder is usually, but not always, a subject of the verifiable credentials they are holding. Holders store their credentials in credential repositories."
+ "A ***holder*** is a role an entity might perform by possessing one or more verifiable credentials and generating presentations from them. A holder is usually, but not always, a subject of the verifiable credentials they're holding. Holders store their credentials in credential repositories."
* In the preceding diagram, Alice is a Woodgrove employee. They obtained a verifiable credential from the Woodgrove issuer, and is the holder of that credential.
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
"A ***credential*** is a set of one or more claims made by an issuer. A verifiable credential is a tamper-evident credential that has authorship that can be cryptographically verified. Verifiable credentials can be used to build verifiable presentations, which can also be cryptographically verified. The claims in a credential can be about different subjects."
- "A ***decentralized identifier*** is a portable URI-based identifier, also known as a DID, associated with an entity. These identifiers are often used in a verifiable credential and are associated with subjects, issuers, and verifiers.".
-
-* In the preceding diagram, the public keys of the actor's DIDs are made available via trust system (Web or ION).
+ "A ***decentralized identifier*** is a portable URI-based identifier, also known as a DID, associated with an entity. These identifiers are often used in a verifiable credential and are associated with subjects, issuers, and verifiers."
"A ***decentralized identifier document***, also referred to as a ***DID document***, is a document that is accessible using a verifiable data registry and contains information related to a specific decentralized identifier, such as the associated repository and public key information."
-* In the scenario above, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with the DID (also known as linked domains).
+* In the scenario, both the issuer and verifier have a DID, and a DID document. The DID document contains the public key, and the list of DNS web domains associated with the DID (also known as linked domains).
* Woodgrove (issuer) signs their employees' VCs with its private key; similarly, Proseware (verifier) signs requests to present a VC using its key, which is also associated with its DID. A ***trust system*** is the foundation in establishing trust between decentralized systems. It can be a distributed ledger or it can be something centralized, such as [DID Web](https://w3c-ccg.github.io/did-method-web/).
- "A ***distributed ledger*** is a non-centralized system for recording events. These systems establish sufficient confidence for participants to rely upon the data recorded by others to make operational decisions. They typically use distributed databases where different nodes use a consensus protocol to confirm the ordering of cryptographically signed transactions. The linking of digitally signed transactions over time often makes the history of the ledger effectively immutable."
-
-* The Microsoft solution uses the ***Identity Overlay Network (ION)*** to provide decentralized public key infrastructure (PKI) capability. As an alternative to ION, Microsoft also offers DID Web as the trust system.
+ "A ***distributed ledger*** is a noncentralized system for recording events. These systems establish sufficient confidence for participants to rely upon the data recorded by others to make operational decisions. They typically use distributed databases where different nodes use a consensus protocol to confirm the ordering of cryptographically signed transactions. The linking of digitally signed transactions over time often makes the history of the ledger effectively immutable."
### Combining centralized and decentralized identity architectures
These use cases demonstrate how centralized identities and decentralized identit
### Distributing initial credentials
-Alice accepts employment with Woodgrove. As part of the onboarding process, a Microsoft Entra account is created for Alice to use inside of the Woodgrove trust boundary. Alice's manager must figure out how to enable Alice, who works remotely, to receive initial sign-in information in a secure way. In the past, the IT department might have provided those credentials to their manager, who would print them and hand them to Alice. This doesn't work with remote employees.
+Alice accepts employment with Woodgrove. As part of the onboarding process, a Microsoft Entra account is created for Alice to use inside of the Woodgrove trust boundary. Alice's manager must figure out how to enable Alice, who works remotely, to receive initial sign-in information in a secure way. In the past, the IT department might have provided those credentials to their manager, who would print them and hand them to Alice. Printing the credentials doesn't work with remote employees.
VCs can add value to centralized systems by augmenting the credential distribution process. Instead of needing the manager to provide credentials, Alice can use their VC as proof of identity to receive their initial username and credentials for centralized systems access. Alice presents the proof of identity they added to their wallet as part of the onboarding process.
By combining centralized and decentralized identity architectures for onboarding
![Accessing resources inside of the trust boundary](media/introduction-to-verifiable-credentials-architecture/inside-trust-boundary.png)
-As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Microsoft Entra ID trust boundary, Alice provides potentially multiple forms of proof of identification to sign in Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. This is a typical scenario that is well served using a centralized identity architecture.
+As an employee, Alice is operating inside of the trust boundary of Woodgrove. Woodgrove acts as the identity provider (IDP) and maintains complete control of the identity and the configuration of the apps Alice uses to interact within the Woodgrove trust boundary. To use resources in the Microsoft Entra ID trust boundary, Alice provides potentially multiple forms of proof of identification to sign in to Woodgrove's trust boundary and access the resources inside of Woodgrove's technology environment. Providing multiple proofs is a typical scenario that is well served using a centralized identity architecture.
* Woodgrove manages the trust boundary and, using good security practices, provides the least-privileged level of access to Alice based on the job performed. To maintain a strong security posture, and potentially for compliance reasons, Woodgrove must also be able to track employees' permissions and access to resources and must be able to revoke permissions when the employment is terminated.
-* Alice only uses the credential that Woodgrove maintains to access Woodgrove resources. Alice has no need to track when the credential is used since the credential is managed by Woodgrove and only used with Woodgrove resources. The identity is only valid inside of the Woodgrove trust boundary when access to Woodgrove resources is necessary, so Alice has no need to possess the credential.
+* Alice only uses the credential that Woodgrove maintains to access Woodgrove resources. Alice has no need to track when the credential is used since Woodgrove manages the credential, which is only used with Woodgrove resources. The identity is only valid inside of the Woodgrove trust boundary when access to Woodgrove resources is necessary, so Alice has no need to possess the credential.
### Using VCs inside the trust boundary
Individual employees have changing identity needs, and VCs can augment centraliz
* While employed by Woodgrove, Alice might need to gain access to resources based on meeting specific requirements. For example, when Alice completes privacy training, she can be issued a new employee VC with that claim, and that VC can be used to access restricted resources.
-* VCs can be used inside of the trust boundary for account recovery. For example, if the employee has lost their phone and computer, they can regain access by getting a new VC from the identity verification service trusted by Woodgrove, and then use that VC to get new credentials.
+* VCs can be used inside of the trust boundary for account recovery. For example, if the employee has lost their phone and computer, they can regain access by getting a new VC from the identity verification service that Woodgrove trusts, and then using that VC to get new credentials.
## User journey: Accessing external resources
By providing Alice the VC, Woodgrove is attesting that Alice is an employee. Woo
* Proseware doesn't need to expand their trust boundary to validate Alice is an employee of Woodgrove. Proseware can use the VC that Woodgrove provides instead. Because the trust boundary isn't expanded, managing the trust relationship is easier, and Proseware can easily end the relationship by not accepting the VCs anymore.
-* Alice doesn't need to provide Proseware personal information, such as an email. Alice maintains the VC in a wallet application on a personal device. The only person that can use the VC is Alice, and Alice must initiate usage of the credential. Each usage of the VC is recorded by the wallet application, so Alice has a record of when and where the VC is used.
+* Alice doesn't need to provide Proseware personal information, such as an email. Alice maintains the VC in a wallet application on a personal device. The only person that can use the VC is Alice, and Alice must initiate usage of the credential. Each usage of the VC is recorded by the wallet application, so Alice has a record of when and where the VC is used.
-By combining centralized and decentralized identity architectures for operating inside and outside of trust boundaries, complexity and risk can be reduced and limited relationships become easier to manage.
+By combining centralized and decentralized identity architectures for operating inside and outside of trust boundaries at Woodgrove, complexity and risk can be reduced and limited relationships become easier to manage.
### Changes over time
-Woodgrove will add and end business relationships with other organizations and will need to determine when centralized and decentralized identity architectures are used.
+Woodgrove adds new business relationships with other organizations, ends current ones, and needs to determine when centralized and decentralized identity architectures are used.
-By combining centralized and decentralized identity architectures, the responsibility and effort associated with identity and proof of identity is distributed, risk is reduced, and the user doesn't risk releasing their private information as often or to as many unknown verifiers. Specifically:
+By combining centralized and decentralized identity architectures, the responsibility and effort associated with identity and proof of identity is distributed, and risk is reduced. The user doesn't risk releasing their private information as often or to as many unknown verifiers. Specifically:
-* In centralized identity architectures, the IDP issues credentials and performs verification of those issued credentials. Information about all identities is processed by the IDP, either storing them in or retrieving them from a directory. IDPs may also dynamically accept security tokens from other IDP systems, such as social sign-ins or business partners. For a relying party to use identities in the IDP trust boundary, they must be configured to accept the tokens issued by the IDP.
+* In centralized identity architectures, the IDP issues credentials and performs verification of those issued credentials. The IDP processes information about all identities. It either stores them in a directory or retrieves them from a directory. Optionally, IDPs can accept security tokens from other IDP systems, such as social sign-ins or business partners. For a relying party to use identities in the IDP trust boundary, they must be configured to accept the tokens issued by the IDP.
## How decentralized identity systems work In decentralized identity architectures, the issuer, user, and relying party (RP) each have a role in establishing and ensuring ongoing trusted exchange of each other's credentials. The public keys of the actors' DIDs are resolvable via the trust system, which allows signature validation and therefore trust of any artifact, including a verifiable credential. Relying parties can consume verifiable credentials without establishing trust relationships with the issuer. Instead, the issuer provides the subject a credential to present as proof to relying parties. All messages between actors are signed with the actor's DID; DIDs from issuers and verifiers also need to own the DNS domains that generated the requests.
-For example: When VC holders need to access a resource, they must present the VC to that relying party. They do so by using a wallet application to read the RP's request to present a VC. As a part of reading the request, the wallet application uses the RP's DID to find the RPs public keys using the trust system, validating that the request to present the VC hasn't been tampered with. The wallet also checks that the DID is referenced in a metadata document hosted in the DNS domain of the RP, to prove domain ownership.
+For example: When VC holders need to access a resource, they must present the VC to that relying party. They do so by using a wallet application to read the RP's request to present a VC. As a part of reading the request, the wallet application uses the RP's DID to find the RP's public keys using the trust system, validating that the request to present the VC hasn't been tampered with. To prove domain ownership, the wallet also checks that the DID is referenced in a metadata document hosted in the DNS domain of the RP.
![How a decentralized identity system works](media/introduction-to-verifiable-credentials-architecture/how-decentralized-works.png)
In this flow, the credential holder interacts with the issuer to request a verif
1. The wallet downloads the request from the link. The request includes:
- * DID of the issuer. This is used by the wallet app to resolve via the trust system to find the public keys and linked domains.
+ * DID of the issuer. The issuer's DID is used by the wallet app to resolve via the trust system to find the public keys and linked domains.
- * URL with the VC manifest, which specifies the contract requirements to issue the VC. This can include id_token, self-attested attributes that must be provided, or the presentation of another VC.
+ * URL with the VC manifest, which specifies the contract requirements to issue the VC. The contract requirement can include id_token, self-attested attributes that must be provided, or the presentation of another VC.
* Look and feel of the VC (URL of the logo file, colors, etc.). 1. The wallet validates the issuance requests and processes the contract requirements:
- 1. Validates that the issuance request message is signed by the issuer' keys found in the DID document resolved via the trust system. This ensures that the message hasn't been tampered with.
+ 1. Validates that the issuance request message is signed by the issuer's keys found in the DID document resolved via the trust system. Validating the signature ensures that the message hasn't been tampered with.
- 1. Validates that the DNS domain referenced in the issuer's DID document is owned by the issuer.
+ 1. Validates that the issuer owns the DNS domain referenced in the issuer's DID document.
1. Depending on the VC contract requirements, the wallet might require the holder to collect additional information, for example asking for self-issued attributes, or navigating through an OIDC flow to obtain an id_token.
In this flow, a holder interacts with a relying party (RP) to present a VC as pa
* The RP DID as the "audience" of the payload.
-1. The Microsoft Entra Verified ID service validates the response sent by the wallet. Depending on how the original presentation request was created in step 2, this validation can include checking the status of the presented VC with the VC issuer for cases such as revocation.
+1. The Microsoft Entra Verified ID service validates the response sent by the wallet. In some cases, the VC issuer can revoke the VC. To make sure the VC is still valid, the verifier needs to check with the VC issuer; whether this check happens depends on how the verifier requested the VC in step 2.
1. Upon validation, the Microsoft Entra Verified ID service calls back the RP with the result.
For detailed information on how to build a validation solution and architectural
Decentralized architectures can be used to enhance existing solutions and provide new capabilities.
-To deliver on the aspirations of the [Decentralized Identity Foundation](https://identity.foundation/) (DIF) and W3C [Design goals](https://www.w3.org/TR/did-core/), the following should be considered when creating a verifiable credential solution:
+To deliver on the aspirations of the [Decentralized Identity Foundation](https://identity.foundation/) (DIF) and W3C [Design goals](https://www.w3.org/TR/did-core/), the following items should be considered when creating a verifiable credential solution:
* There are no central points of trust establishment between actors in the system. That is, trust boundaries aren't expanded through federation because actors trust specific VCs.
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
Out of scope for this content is articles covering supporting technologies that
## Components of the solution
-As part of your plan for an issuance solution, you must design a solution that enables the interactions between the issuer, the user, and the verifier. You may issue more than one verifiable credential. The following diagram shows the components of your issuance architecture.
+As part of your plan for an issuance solution, you must design a solution that enables the interactions between the issuer, the user, and the verifier. The following diagram shows the components of your issuance architecture.
### Microsoft VC issuance solution architecture
As part of your plan for an issuance solution, you must design a solution that e
A prerequisite for running the Microsoft Entra Verified ID service is that it's hosted in a Microsoft Entra tenant. The Microsoft Entra tenant provides an Identity and Access Management (IAM) control plane for the Azure resources that are part of the solution.
-Each tenant uses the multi-tenant Microsoft Entra Verified ID service, and has a decentralized identifier (DID). The DID provides proof that the issuer owns the domain incorporated into the DID. The DID is used by the subject and the verifier to validate the issuer.
+Each tenant uses the multitenant Microsoft Entra Verified ID service, and has a decentralized identifier (DID). The DID provides proof that the issuer owns the domain incorporated into the DID. The DID is used by the subject and the verifier to validate the issuer.
### Microsoft Azure services
Each issuer has a single key set used for signing, updating, and recovery. This
* Display definitions determine how claims are displayed in the holder's wallet and also include branding and other elements. The Display definition can be localized into multiple languages. See [How to customize your verifiable credentials](../verifiable-credentials/credential-design.md).
-* Rules are an issuer-defined model that describes the required inputs of a verifiable credential. Rules also defined trusted input sources, and the mapping of input claims to output claims stored in the VC. Depending on the type of attestation defined in the rules definition, the input claims can come from different providers. Input claims may come from an OIDC Identity Provider, from an id_token_hint or they may be self asserted during issuance via user input in the wallet.
+* Rules are an issuer-defined model that describes the required inputs of a verifiable credential. Rules also define trusted input sources and the mapping of input claims to output claims stored in the VC. Depending on the type of attestation defined in the rules definition, the input claims can come from different providers. Input claims may come from an OIDC Identity Provider, from an id_token_hint, or from self-asserted claims during issuance via user input in the wallet.
* **Input** – A subset of the model in the rules file for client consumption. The subset must describe the set of inputs, where to obtain the inputs, and the endpoint to call to obtain a verifiable credential.
The Microsoft Entra Verified ID service enables you to issue and revoke VCs base
### Trust System
-![ION](media/plan-issuance-solution/plan-for-issuance-solution-ion.png)
+![Screenshot highlighting the trust system in the architecture.](media/plan-issuance-solution/plan-for-issuance-solution-ion.png)
-Microsoft Entra Verified ID currently supports two trust system. One is the [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses BitcoinΓÇÖs blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction. The other alternative for trust system is [DID Web](https://w3c-ccg.github.io/did-method-web/), where the DID document is hosted on the issuers webserver.
+Microsoft Entra Verified ID currently supports [DID Web](https://w3c-ccg.github.io/did-method-web/) as the trust system, where the DID document is hosted on the issuer's web server.
### Microsoft Authenticator application ![Microsoft Authenticator application](media/plan-issuance-solution/plan-for-issuance-solution-authenticator.png)
-Microsoft Authenticator is the mobile application that orchestrates the interactions between the user, the Microsoft Entra Verified ID service, and dependencies that are described in the contract used to issue VCs. It acts as a digital wallet in which the holder of the VC stores the VC, including the private key of the subject of the VC. Authenticator is also the mechanism used to present VCs for verification.
+Microsoft Authenticator is the mobile application. The Authenticator orchestrates the interactions between the user, the Microsoft Entra Verified ID service, and the contract used to issue VCs. It acts as a digital wallet in which the holder of the VC stores the VC, including the private key of the subject of the VC. Authenticator is also the mechanism used to present VCs for verification.
### Issuance business logic
Your issuance solution includes a web front end where users request a VC, an ide
A web front end serves issuance requests to the subject's wallet by generating deep links or QR codes. Based on the configuration of the contract, other components might be required to satisfy the requirements to create a VC.
-These services provide supporting roles that don't necessarily need to integrate with ION or Microsoft Entra Verified ID issuance service. This layer typically includes:
+These services provide supporting roles that don't necessarily need to integrate with the Microsoft Entra Verified ID issuance service. This layer typically includes:
* **OpenID Connect (OIDC)-compliant service or services** are used to obtain id_tokens needed to issue the VC. Existing identity systems such as Microsoft Entra ID or Azure AD B2C can provide the OIDC-compliant service, as can custom solutions such as Identity Server.
-* **Attribute stores** – These might be outside of directory services and provide attributes needed to issue a VC. For example, a student information system might provide claims about degrees earned.
+* **Attribute stores** – Attribute stores might be outside of directory services and provide attributes needed to issue a VC. For example, a student information system might provide claims about degrees earned.
* **Additional middle-tier services** that contain business rules for lookups, validating, billing, and any other runtime checks and workflows needed to issue credentials.
For more information on setting up your web front end, see the tutorial [Configu
## Credential Design Considerations
-Your specific use cases determine your credential design. The use case will determine:
+Your specific use cases determine your credential design. The use case determines:
* the interoperability requirements
-* the way users will need to prove their identity to get their VC
+* the way users need to prove their identity to get their VC
* the claims that are needed in the credentials
-* if credentials will ever need to be revoked
+* if credentials need to be revoked
### Credential Use Cases With Microsoft Entra Verified ID, the most common credential use cases are:
-**Identity Verification**: a credential is issued based on multiple criteria. This may include verifying the authenticity of government-issued documents like a passport or driver's license and corelating the information in that document with other information such as:
+**Identity Verification**: a credential is issued based on multiple criteria. Multiple criteria may include verifying the authenticity of government-issued documents like a passport or driver's license and correlating the information in that document with other information such as:
* a user's selfie
Common schemas are an area where standards are still emerging. One example of su
After establishing the use case for a credential, you need to decide the credential type and what attributes to include in the credential. Verifiers can read the claims in the VC presented by the users.
-All verifiable credentials must declare their *type* in their [rules definition](rules-and-display-definitions-model.md#rulesmodel-type). The credential type distinguishes a verifiable credentials schema from other credentials and it ensures interoperability between issuers and verifiers. To indicate a credential type, provide one or more credential types that the credential satisfies. Each type is represented by a unique string. Often, a URI is used to ensure global uniqueness. The URI doesn't need to be addressable. It's treated as a string. As an example, a diploma credential issued by Contoso University might declare the following types:
+All verifiable credentials must declare their *type* in their [rules definition](rules-and-display-definitions-model.md#rulesmodel-type). The credential type distinguishes a verifiable credentials schema from other credentials and it ensures interoperability between issuers and verifiers. To indicate a credential type, provide one or more credential types that the credential satisfies. Each type is a unique string. Often, a URI is used to ensure global uniqueness. The URI doesn't need to be addressable. It's treated as a string. As an example, a diploma credential issued by Contoso University might declare the following types:
| Type | Purpose |
| - | - |
In addition to the industry-specific standards and schemas that might be applica
* **Minimize private information**: Meet the use cases with the minimal amount of private information necessary. For example, a VC used for e-commerce websites that offers discounts to employees and alumni can be fulfilled by presenting the credential with just the first and last name claims. Additional information, such as hiring date, title, and department, isn't needed.
-* **Favor abstract claims**: Each claim should meet the need while minimizing the detail. For example, a claim named "ageOver" with discrete values such as "13", "21", "60", is more abstract than a date of birth claim.
+* **Favor abstract claims**: Each claim should meet the need while minimizing the detail. For example, a claim named "ageOver" with discrete values such as 13, 21, or 60 is more abstract than a date of birth claim.
-* **Plan for revocability**: We recommend you define an index claim to enable mechanisms to find and revoke credentials. You are limited to defining one index claim per contract. It is important to note that values for indexed claims aren't stored in the backend, only a hash of the claim value. For more information, see [Revoke a previously issued verifiable credential](../verifiable-credentials/how-to-issuer-revoke.md).
+* **Plan for revocability**: We recommend you define an index claim to enable mechanisms to find and revoke credentials. You're limited to defining one index claim per contract. It's important to note that values for indexed claims aren't stored in the backend, only a hash of the claim value. For more information, see [Revoke a previously issued verifiable credential](../verifiable-credentials/how-to-issuer-revoke.md).
For other considerations on credential attributes, refer to the [Verifiable Credentials Data Model 1.0 (w3.org)](https://www.w3.org/TR/vc-data-model/) specification.
As with any solution, you must plan for performance. The key areas to focus on a
The following provides areas to consider when planning for performance:
-* The Microsoft Entra Verified ID issuance service is deployed in West Europe, North Europe, West US 2, and West Central US Azure regions. If your Microsoft Entra tenant resides within EU, the Microsoft Entra Verified ID service will be in EU too.
+* The Microsoft Entra Verified ID issuance service is deployed in the West Europe, North Europe, West US 2, West Central US, Australia, and Japan Azure regions. If your Microsoft Entra tenant resides within the EU, the Microsoft Entra Verified ID service is in the EU too.
-* To limit latency, deploy your issuance frontend website and key vault in the region listed above that is closest to where requests are expected to originate.
+* To limit latency, deploy your issuance frontend website and key vault in one of the regions listed above.
Model based on throughput:
* The Issuer service is subject to [Azure Key Vault service limits](../../key-vault/general/service-limits.md).
Model based on throughput:
* You can't control throttling; however, we recommend you read [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md).
-* If you are planning a large rollout and onboarding of VCs, consider batching VC creation to ensure you don't exceed limits.
+* If you're planning a large rollout and onboarding of VCs, consider batching VC creation to ensure you don't exceed limits.
-As part of your plan for performance, determine what you will monitor to better understand the performance of the solution. In addition to application-level website monitoring, consider the following as you define your VC issuance monitoring strategy:
+As part of your plan for performance, determine what you monitor to better understand the performance of the solution. In addition to application-level website monitoring, consider the following as you define your VC issuance monitoring strategy:
-For scalability, consider implementing metrics for the following:
+For scalability, consider implementing metrics for the following items:
* Define the logical phases of your issuance process. For example:
For scalability, consider implementing metrics for the following:
* Time spent (latency)
-* Monitor Azure Key Vault using the following:
+* Monitor Azure Key Vault using the following link:
* [Azure Key Vault monitoring and alerting](../../key-vault/general/alert.md)
To plan for reliability, we recommend:
* For frontend and business layer, your solution can manifest in an unlimited number of ways. As with any solution, for the dependencies you identify, ensure that the dependencies are resilient and monitored.
-If the rare event that the Microsoft Entra Verified ID issuance service or Azure Key Vault services become unavailable, the entire solution will become unavailable.
+In the rare event that the Microsoft Entra Verified ID issuance service or Azure Key Vault services become unavailable, the entire solution becomes unavailable.
### Plan for compliance

Your organization may have specific compliance needs related to your industry, type of transactions, or country/region of operation.
-**Data residency**: The Microsoft Entra Verified ID issuance service is deployed in a subset of Azure regions. The service is used for compute functions only. We don't store values of verifiable credentials in Microsoft systems. However, as part of the issuance process, personal data is sent and used when issuing VCs. Using the VC service shouldn't impact data residency requirements. If, as a part of identity verification you store any personal information, that should be stored in a manner and region that meets your compliance requirements. For Azure-related guidance, visit the Microsoft Trust Center website.
+**Data residency**: The Microsoft Entra Verified ID issuance service is deployed in a subset of Azure regions. The service is used for compute functions only. We don't store values of verifiable credentials in Microsoft systems. However, as part of the issuance process, personal data is sent and used when issuing VCs. Using the VC service shouldn't impact data residency requirements. If you store any personal information as a part of identity verification, that should be stored in a manner and region that meets your compliance requirements. For Azure-related guidance, visit the Microsoft Trust Center website.
-**Revoking credentials**: Determine if your organization will need to revoke credentials. For example, an admin may need to revoke credentials when an employee leaves the company. Or if a credential is issued for a driver's license, and the holder is caught doing something that would cause the driver's license to be suspended, the VC might need to be revoked. For more information, see [Revoke a previously issued verifiable credential](how-to-issuer-revoke.md).
+**Revoking credentials**: Determine if your organization needs to revoke credentials. For example, an admin may need to revoke credentials when an employee leaves the company. For more information, see [Revoke a previously issued verifiable credential](how-to-issuer-revoke.md).
-**Expiring credentials**: Determine if you will expire credentials, and if so under what circumstances. For example, if you issue a VC as proof of having a driver's license, it might expire after a few years. If you issue a VC as a verification of an association with a user, you may want to expire it annually to ensure users come back annually to get the most updated version of the VC.
+**Expiring credentials**: Determine how your credentials expire. For example, if you issue a VC as proof of having a driver's license, it might expire after a few years. Other VCs can have a shorter validity to ensure users come back periodically to update their VC.
## Plan for operations
-When planning for operations, it is critical you develop a schema to use for troubleshooting, reporting and distinguishing various customers you support. Additionally, if the operations team is responsible for executing VC revocation, that process must be defined. Each step in the process should be correlated so that you can determine which log entries can be associated with each unique issuance request. For auditing, we recommend you capture each attempt of credential issuing individually. Specifically:
+When planning for operations, it's critical you develop a schema to use for troubleshooting, reporting, and distinguishing the various customers you support. Additionally, if the operations team is responsible for executing VC revocation, that process must be defined. Each step in the process should be correlated so that you can determine which log entries can be associated with each unique issuance request. For auditing, we recommend you capture each attempt of credential issuing individually. Specifically:
* Generate unique transaction IDs that customers and support engineers can refer to as needed.
* Devise a mechanism to correlate the logs of Azure Key Vault transactions to the transaction IDs of the issuance portion of the solution.
-* If you are an identity verification service issuing VCs on behalf of multiple customers, monitor and mitigate by customer or contract ID for customer-facing reporting and billing.
+* If you're an identity verification service issuing VCs on behalf of multiple customers, monitor and mitigate by customer or contract ID for customer-facing reporting and billing.
-* If you are an identity verification service issuing VCs on behalf of multiple customers, use the customer or contract ID for customer-facing reporting and billing, monitoring, and mitigating.
+* If you're an identity verification service issuing VCs on behalf of multiple customers, use the customer or contract ID for customer-facing reporting and billing, monitoring, and mitigating.
## Plan for security
-As part of your design considerations focused on security, we recommend the following:
+As part of your design considerations focused on security, we recommend the following items:
* For key management:
As part of your design considerations focused on security, we recommend the foll
* Define a dedicated service principal to authorize access to Azure Key Vault. If your website is on Azure, we recommend that you use an [Azure Managed Identity](../managed-identities-azure-resources/overview.md).
- * Treat the service principal that represents the website and the user as a single trust boundary. While it is possible to create multiple websites, there is only one key set for the issuance solution.
+ * Treat the service principal that represents the website and the user as a single trust boundary. While it's possible to create multiple websites, there's only one key set for the issuance solution.
-For security logging and monitoring, we recommend the following:
+For security logging and monitoring, we recommend the following items:
-* Enable logging and alerting of Azure Key Vault to track credential issuance operations, key extraction attempts, permission changes, and to monitor and send alert for configuration changes. More information can be found at [How to enable Key Vault logging](../../key-vault/general/howto-logging.md).
+* Enable logging and alerting of Azure Key Vault. Track credential issuance operations, key extraction attempts, and permission changes. Monitor and send alerts for configuration changes. More information can be found at [How to enable Key Vault logging](../../key-vault/general/howto-logging.md).
* Archive logs in a security information and event management (SIEM) system, such as [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel) for long-term retention.
For guidance on managing your Azure environment, we recommend you review the [Mi
## Additional considerations
-When you complete your POC, gather all the information and documentation generated, and consider tearing down the issuer configuration. This will help avoid issuing verifiable credentials after your POC timeframe expires.
+When you complete your POC, gather all the information and documentation generated, and consider tearing down the issuer configuration.
-For more information on Key Vault implementation and operation, refer to [Best practices to use Key Vault](../../key-vault/general/best-practices.md). For more information on Securing Azure environments with Active Directory, refer to [Securing Azure environments with Microsoft Entra ID](https://aka.ms/AzureADSecuredAzure).
+For more information on Key Vault implementation and operation, see [Best practices to use Key Vault](../../key-vault/general/best-practices.md). For more information on Securing Azure environments with Active Directory, see [Securing Azure environments with Microsoft Entra ID](https://aka.ms/AzureADSecuredAzure).
## Next steps
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Microsoft's Microsoft Entra Verified ID (Microsoft Entra VC) service enables you to trust proofs of user identity without expanding your trust boundary. With Microsoft Entra VC, you create accounts or federate with another identity provider. When a solution implements a verification exchange using verifiable credentials, it enables applications to request credentials that aren't bound to a specific domain. This approach makes it easier to request and verify credentials at scale.
-If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You may also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
+If you haven't already, we suggest you review the [Microsoft Entra Verified ID architecture overview](introduction-to-verifiable-credentials-architecture.md). You might also want to review [Plan your Microsoft Entra Verified ID issuance solution](plan-issuance-solution.md).
## Scope of guidance
-This content covers the technical aspects of planning for a verifiable credential (VC) verification solution using Microsoft products and services. The solution interfaces with a trust system, where currently supported trust systems are Identity Overlay Network (ION) or DID Web. ION acts as the decentralized public key infrastructure (DPKI) while DID Web is a centralized public key infrastructure.
+This content covers the technical aspects of planning for a verifiable credential verification solution using Microsoft products and services. The solution interfaces with a trust system; currently, DID Web is the supported trust system. DID Web is a centralized public key infrastructure.
Supporting technologies that aren't specific to verification solutions are out of scope. For example, websites are used in a verifiable credential verification solution but planning a website deployment isn't covered in detail.
As part of your plan for a verification solution, you must enable the interactio
### Microsoft Entra Verified ID service
-In the context of a verifier solution, the Microsoft Entra Verified ID service is the interface between the Microsoft components of the solution and the trust system. The service provisions the key set to Key Vault, provisions the decentralized identifier (DID). In the case of ION, the service writes the DID document to the distributed ledger, where it can be used by subjects and issuers.
+In the context of a verifier solution, the Microsoft Entra Verified ID service is the interface between the Microsoft components of the solution and the trust system. The service provisions the key set to Key Vault and provisions the decentralized identifier (DID).
<a name='azure-active-directory-tenant-'></a>

### Microsoft Entra tenant
-The service requires a Microsoft Entra tenant that provides an Identity and Access Management (IAM) control plane for the Azure resources that are part of the solution. Each Microsoft Entra tenant uses the multi-tenant Microsoft Entra Verified ID service, and it issues a single DID document representing the verifier. If you have multiple relying parties using your verification service, they all use the same verifier DID. The verifier DID provides pointers to the public key that allows subjects and issuers to validate messages that come from the relying party.
+The service requires a Microsoft Entra tenant that provides an Identity and Access Management (IAM) control plane for the Azure resources that are part of the solution. Each Microsoft Entra tenant uses the multitenant Microsoft Entra Verified ID service, and it issues a single DID document representing the verifier. If you have multiple relying parties using your verification service, they all use the same verifier DID. The verifier DID provides pointers to the public key that allows subjects and issuers to validate messages that come from the relying party.
### Azure Key Vault

![Diagram of the components of a verification solution with Azure Key Vault highlighted.](./media/plan-verification-solution/plan-verification-solution-key-vault.png)
-The Azure Key Vault service stores your verifier keys, which are generated when you enable the Microsoft Entra Verified ID issuance service. The keys are used to provide message security. Each verifier has a single key set used for signing, updating, and recovering VCs. This key set is used each time you service a verification request. Microsoft key set currently uses Elliptic Curve Cryptography (ECC) [SECP256k1](https://en.bitcoin.it/wiki/Secp256k1). We're exploring other cryptographic signature schemas that will be adopted by the broader DID community.
+The Azure Key Vault service stores your verifier keys, which are generated when you enable the Microsoft Entra Verified ID issuance service. The keys are used to provide message security. Each verifier has a single key set used for signing, updating, and recovering VCs. This key set is used each time you service a verification request. The Microsoft key set currently uses Elliptic Curve Cryptography (ECC) [SECP256k1](https://en.bitcoin.it/wiki/Secp256k1). We're exploring other cryptographic signature schemas that are adopted by the broader DID community.
### Request Service API
Application programming interfaces (APIs) provide developers a method to abstrac
![Diagram of the components of a verification solution with the trust system highlighted.](./media/plan-verification-solution/plan-verification-solution-ion.png)
-Microsoft Entra Verified ID currently supports two trust systems. One is [Identity Overlay Network (ION)](https://identity.foundation/ion/), [a Sidetree-based network](https://identity.foundation/sidetree/spec/) that uses Bitcoin's blockchain for decentralized identifier (DID) implementation. The DID document of the issuer is stored in ION and is used to perform cryptographic signature checks by parties to the transaction. The other alternative for trust system is [DID Web](https://w3c-ccg.github.io/did-method-web/), where the DID document is hosted on the issuers webserver.
+Microsoft Entra Verified ID currently supports [DID Web](https://w3c-ccg.github.io/did-method-web/) as a trust system, where the DID document is hosted on the issuer's web server.
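As an illustration, assuming a hypothetical verifier DID of `did:web:verifiedid.contoso.com`, parties resolve the DID document from the well-known location defined by the did:web method:

```http
GET https://verifiedid.contoso.com/.well-known/did.json
```

The returned DID document carries the public keys used to validate signatures from that DID.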
### Microsoft Authenticator application

![Diagram of the components of a verification solution with Microsoft Authenticator application highlighted.](media/plan-verification-solution/plan-verification-solution-authenticator.png)
-Microsoft Authenticator is the mobile application that orchestrates the interactions between the relying party, the user, the Microsoft Entra Verified ID issuance service, and dependencies described in the contract used to issue VCs. Microsoft Authenticator acts as a digital wallet in which the holder of the VC stores the VC. It's also the mechanism used to present VCs for verification.
+Microsoft Authenticator is the mobile application. The Authenticator orchestrates the interactions between the user, the Microsoft Entra Verified ID service, and the contract used to issue VCs. It acts as a digital wallet in which the holder of the VC stores the VC, including the private key of the subject of the VC. Authenticator is also the mechanism used to present VCs for verification.
+ ### Relying party (RP)
Microsoft Authenticator is the mobile application that orchestrates the interact
#### Web front end
-The relying party web front end uses the Request Service API to verify VCs by generating deep links or QR codes that are consumed by the subjectΓÇÖs wallet. Depending on the scenario, the front end can be a publicly accessible or internal website to enable end-user experiences that require verification. However, the endpoints that the wallet accesses must be publicly accessible. Specifically, it controls redirection to the wallet with specific request parameters. This is accomplished using the Microsoft-provided APIs.
+The relying party web front end uses the Request Service API to verify VCs by generating deep links or QR codes that the subjectΓÇÖs wallet consumes. Depending on the scenario, the front end can be a publicly accessible or internal website to enable end-user experiences that require verification. However, the endpoints that the wallet accesses must be publicly accessible. Specifically, it controls redirection to the wallet with specific request parameters.
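A hedged sketch of such a presentation request against the Request Service API; the `VerifiedEmployee` type, callback URL, and API key are hypothetical, and the authoritative request shape is in the Request Service API reference:

```http
POST https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createPresentationRequest
Authorization: Bearer <access token>
Content-Type: application/json

{
  "authority": "did:web:verifiedid.contoso.com",
  "registration": {
    "clientName": "Contoso Verification Portal"
  },
  "callback": {
    "url": "https://contoso.com/api/verifier/callback",
    "state": "<unique state echoed back to your callback>",
    "headers": {
      "api-key": "<key your callback endpoint expects>"
    }
  },
  "requestedCredentials": [
    {
      "type": "VerifiedEmployee",
      "purpose": "Verify current employment"
    }
  ]
}
```

The response includes a `url` value that the front end renders as a QR code or deep link for the wallet to open.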
#### Business logic
Verifiable credentials can be used to enable faster onboarding by replacing some
**Target identity systems**: Organization-specific identity repositories that the onboarding portal needs to interact with while onboarding subjects. The systems to integrate are determined based on the kinds of identities you want to onboard with VC validation. Common scenarios of identity verification for onboarding include:
-* External Identities such as vendors, partners, suppliers, and customers, which in centralized identity systems onboard to Microsoft Entra ID using APIs to issue business-to-business (B2B) invitations, or entitlement management assignment to packages.
+* External Identities that are onboarded to Microsoft Entra ID using APIs to issue business-to-business (B2B) invitations, or through entitlement management assignments to access packages.
* Employee identities, which in centralized identity systems are already onboarded through human resources (HR) systems. In this case, the identity verification might be integrated as part of existing stages of HR workflows.
Verifiable credentials can be used to enable faster onboarding by replacing some
* **Issuer**: Account onboarding is a good fit for an external identity-proofing service as the issuer of the VCs. Examples of checks for onboarding include: liveness check, government-issued document validation, address or phone number confirmation, and so on.
-* **Storing VC Attributes**: Where possible don't store attributes from VCs in your app-specific store. Be especially careful with personal data. If this information is required by specific flows within your applications, consider asking for the VC to retrieve the claims on demand.
+* **Storing VC Attributes**: Where possible don't store attributes from VCs in your app-specific store. Be especially careful with personal data. If specific flows within your applications require this information, consider asking for the VC to retrieve the claims on demand.
* **VC Attribute correlation with back-end systems**: When defining the attributes of the VC with the issuer, establish a mechanism to correlate information in the back-end system after the user presents the VC. The mechanism typically uses a time-bound, unique identifier in the context of your RP in combination with the claims you receive. Some examples:
 * **New employee**: When the HR workflow reaches the point where identity proofing is required, the RP can generate a link with a time-bound unique identifier. The RP then sends it to the candidate's email address on the HR system. This unique identifier should be sufficient to correlate information such as firstName, lastName from the VC verification request to the HR record or underlying data. The attributes in the VC can be used to complete user attributes in the HR system, or to validate accuracy of user attributes about the employee.
- * **External identities** - invitation: When an existing user in your organization invites an external user to be onboarded in the target system, the RP can generate a link with a unique identifier that represents the invitation transaction and sends it to the external userΓÇÖs email address. This unique identifier should be sufficient to correlate the VC verification request to the invitation record or underlying data and continue the provisioning workflow. The attributes in the VC can be used to validate or complete the external user attributes.
+ * **External identities** - invitation: When an external user is invited to the target system, the RP can generate a link with a unique identifier that represents the invitation transaction. This link can be sent to the external user's email address. This unique identifier should be sufficient to correlate the VC verification request to the invitation record or underlying data and continue the provisioning workflow. The attributes in the VC can be used to validate or complete the external user attributes.
* **External identities** - self-service: When external identities sign up to the target system through self-service (for example, a B2C application) the attributes in the VC can be used to populate the initial attributes of the user account. The VC attributes can also be used to find out if a profile already exists.
Verifiable credentials can be used as other proof to access to sensitive applica
* **Goal**: The goal of the scenario determines what kind of credential and issuer is needed. Typical scenarios include:
- * **Authorization**: In this scenario, the user presents the VC to make an authorization decision. VCs designed for proof of completion of a training or holding a specific certification, are a good fit for this scenario. The VC attributes should contain fine-grained information conducive to authorization decisions and auditing. For example, if the VC is used to certify the individual is trained and can access sensitive financial apps, the app logic can check the department claim for fine-grained authorization, and use the employee ID for audit purposes.
+ * **Authorization**: In this scenario, the user presents the VC to make an authorization decision. VCs designed for proof of completion of a training or holding a specific certification are a good fit for this scenario. The VC attributes should contain fine-grained information conducive to authorization decisions and auditing. For example, suppose the VC is used to certify that the individual is trained and can access sensitive financial apps. The app logic can check the department claim for fine-grained authorization, and use the employee ID for audit purposes.
- * **Confirmation of identity verification**: In this scenario, the goal is to confirm that the same person who initially onboarded is indeed the one attempting to access the high-value application. A credential from an identity verification issuer would be a good fit and the application logic should validate that the attributes from the VC align with the user who logged in the application.
+ * **Confirmation of identity verification**: In this scenario, the goal is to confirm that the same person who initially onboarded is indeed the one attempting to access the high-value application. A credential from an identity verification issuer would be a good fit. The application logic should validate that the attributes from the VC align with the user who logged in the application.
-* **Check Revocation**: When using VCs to access sensitive resources, it is common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
+* **Check Revocation**: When using VCs to access sensitive resources, it's common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
* **User Experience**: When using VCs to access sensitive resources, there are two patterns you can consider.
 * **Step-up authentication**: users start the session with the application with existing authentication mechanisms. Users must present a VC for specific high-value operations within the application such as approvals of business workflows. This is a good fit for scenarios where such high-value operations are easy to identify and update within the application flows.
- * **Session establishment**: Users must present a VC as part of initiating the session with the application. This is a good fit when the nature of the entire application is high-value.
+ * **Session establishment**: Users must present a VC as part of initiating the session with the application. Presenting a VC is a good fit when the nature of the entire application is high-value.
### Accessing applications outside organization boundaries
The decentralized nature of verifiable credentials enables this scenario without
* **Authentication**: In this scenario, a user must have possession of a VC to prove employment or a relationship to one or more particular organizations. In this case, the RP should be configured to accept VCs issued by the target organizations (see the sketch after this list).
- * **Authorization**: Based on the application requirements, the applications might consume the VC attributes for fine-grained authorization decisions and auditing. For example, if an e-commerce website offers discounts to employees of the organizations in a particular location, they can validate this based on the country/region claim in the VC (if present).
+ * **Authorization**: Based on the application requirements, the applications might consume the VC attributes for fine-grained authorization decisions and auditing. For example, if an e-commerce website offers discounts to employees of the organizations in a particular location, they can validate discount eligibility based on the country/region claim in the VC (if present).
-* **Check Revocation**: When using VCs to access sensitive resources, it is common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
+* **Check Revocation**: When using VCs to access sensitive resources, it's common to check the status of the VC with the original issuer and deny access for revoked VCs. When working with the issuers, ensure that revocation is explicitly discussed as part of the design of your scenario.
* **User Experience**: Users can present a VC as part of initiating the session with the application. Typically, applications also provide an alternative method to start the session to accommodate cases where users don't have VCs.
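As referenced in the authentication bullet above, a hedged sketch of scoping a presentation request to VCs issued by specific partner organizations; the issuer DIDs are hypothetical, and `acceptedIssuers` restricts which issuers the presented VC may come from:

```json
{
  "requestedCredentials": [
    {
      "type": "VerifiedEmployee",
      "purpose": "Prove employment with a partner organization",
      "acceptedIssuers": [
        "did:web:verifiedid.fabrikam.com",
        "did:web:verifiedid.woodgrove.com"
      ]
    }
  ]
}
```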
Note: While the scenario we describe in this section is specific to recover Micr
#### Other Elements
-**Account portal**: This is a web front end that orchestrates the API calls for VC presentation and validation. This orchestration can include Microsoft Graph calls to recover accounts in Microsoft Entra ID.
+**Account portal**: Web front end that orchestrates the API calls for VC presentation and validation. This orchestration can include Microsoft Graph calls to recover accounts in Microsoft Entra ID.
-**Custom logic or workflows**: Logic with organization-specific steps before and after updating the user account. This might include approval workflows, other validations, logging, notifications, etc.
+**Custom logic or workflows**: Logic with organization-specific steps before and after updating the user account. Custom logic might include approval workflows, other validations, logging, notifications, etc.
**Microsoft Graph**: Exposes representational state transfer (REST) APIs and client libraries to access Microsoft Entra data that is used to perform account recovery.
-**Microsoft Entra enterprise directory**: This is the Microsoft Entra tenant that contains the accounts that are being created or updated through the account portal.
+**Microsoft Entra enterprise directory**: The Microsoft Entra tenant that contains the accounts that are being created or updated through the account portal.
#### Design considerations
-**VC Attribute correlation with Microsoft Entra ID**: When defining the attributes of the VC in collaboration with the issuer, establish a mechanism to correlate information with internal systems based on the claims in the VC and user input. For example, if you have an identity verification provider (IDV) verify identity prior to onboarding employees, ensure that the issued VC includes claims that would also be present in an internal system such as a human resources system for correlation. This might be a phone number, address, or date of birth. In addition to claims in the VC, the RP can ask for some information such as the last four digits of their social security number (SSN) as part of this process.
+**VC Attribute correlation with Microsoft Entra ID**: When defining the attributes of the VC in collaboration with the issuer, make sure you agree on claims that identify the user. For example, if an identity verification provider (IDV) verifies the identity prior to onboarding employees, ensure that the issued VC includes claims that can be matched against internal systems. Such claims might be a phone number, address, or date of birth. The RP can ask for information not found in the VC as part of this process, such as the last four digits of their social security number (SSN).
-**Role of VCs with Existing Microsoft Entra Credential Reset Capabilities**: Microsoft Entra ID has a built-in self-service password reset (SSPR) capability. Verifiable Credentials can be used to provide another way to recover, particularly in cases where users do not have access to or lost control of the SSPR method, for example theyΓÇÖve lost both computer and mobile device. In this scenario, the user can reobtain a VC from an identity proof issuer and present it to recover their account.
+**Role of VCs with Existing Microsoft Entra Credential Reset Capabilities**: Microsoft Entra ID has a built-in self-service password reset (SSPR) capability. Verifiable Credentials can be used to provide another way to recover in cases where users don't have access to or lost control of the SSPR method. In scenarios where the user has lost both their computer and mobile device, the user can reobtain a VC from an identity proof issuer and present it to recover their account remotely.
-Similarly, you can use a VC to generate a temporary access pass that will allow users to reset their MFA authentication methods without a password.
+Similarly, you can use a VC to generate a temporary access pass that allows users to reset their MFA authentication methods without a password.
**Authorization**: Create an authorization mechanism such as a security group that the RP checks before proceeding with the credential recovery. For example, only users in specific groups might be eligible to recover an account with a VC.
Similarly, you can use a VC to generate a temporary access pass that will allow
* Grant the RP website the ability to use a service principal granted the MS Graph scope `UserAuthenticationMethod.ReadWrite.All` to reset authentication methods (a sketch of a related Graph call follows this list). Don't grant `User.ReadWrite.All`, which enables the ability to create and delete users.
-* If your RP is running in Azure, use Managed Identities to call Microsoft Graph. This removes the risks around managing service principal credentials in code or configuration files. For more information, see [Managed identities for Azure resources.](../managed-identities-azure-resources/overview.md)
+* If your RP is running in Azure, use Managed Identities to call Microsoft Graph. Using Managed Identities removes the risks around managing service principal credentials in code or configuration files. For more information, see [Managed identities for Azure resources.](../managed-identities-azure-resources/overview.md)
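As a hedged sketch of the temporary access pass flow mentioned earlier (flagged in the scope bullet above), after a successful VC presentation the RP could call Microsoft Graph to create a temporary access pass; the user object ID and lifetime values are illustrative:

```http
POST https://graph.microsoft.com/v1.0/users/<user-object-id>/authentication/temporaryAccessPassMethods
Content-Type: application/json

{
  "lifetimeInMinutes": 60,
  "isUsableOnce": true
}
```

This call requires the `UserAuthenticationMethod.ReadWrite.All` scope discussed above.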
## Plan for identity management
-Below are some IAM considerations when incorporating VCs to relying parties. Relying parties are typically applications.
+The following are IAM considerations when incorporating VCs into relying parties. Relying parties are typically applications.
### Authentication

* The subject of a VC must be a human.
-* Presentation of VCs must be interactively performed by a human VC holder, who holds the VC in their wallet. Non-interactive flows such as on-behalf-of are not supported.
+* A human has the VC in their wallet and must interactively present the VC. Non-interactive flows such as on-behalf-of aren't supported.
### Authorization

* A successful presentation of the VC can be considered a coarse-grained authorization gate by itself. The VC attributes can also be consumed for fine-grained authorization decisions.
-* Determine if an expired VC has meaning in your application; if so check the value of the `exp` claim (the expiration time) of the VC as part of the authorization checks. One example where expiration is not relevant is requiring a government-issued document such as a driver's license to validate if the subject is older than 18. The date of birth claim is valid, even if the VC is expired.
+* Determine if an expired VC has meaning in your application; if so check the value of the `exp` claim (the expiration time) of the VC as part of the authorization checks (see the sketch after this list). One example where expiration isn't relevant is requiring a government-issued document such as a driver's license to validate if the subject is older than 18. The date of birth claim is valid, even if the VC is expired.
* Determine if a revoked VC has meaning to your authorization decision.
- * If it is not relevant, then skip the call to check status API (which is on by default).
+ * If it isn't relevant, then skip the call to check status API (which is on by default).
- * If it is relevant, add the proper handling of exceptions in your application.
+ * If it's relevant, add the proper handling of exceptions in your application.
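As referenced above, an illustrative (not authoritative) decoded VC token payload; all values are hypothetical, but JWT-encoded VCs carry the expiration in the standard `exp` claim:

```json
{
  "iss": "did:web:verifiedid.contoso.com",
  "sub": "<subject DID>",
  "iat": 1672531200,
  "exp": 1704067200,
  "vc": {
    "type": [ "VerifiableCredential", "VerifiedEmployee" ],
    "credentialSubject": {
      "givenName": "Megan",
      "surname": "Bowen"
    }
  }
}
```

If expiration matters for your scenario, compare `exp` with the current time during authorization; if revocation matters, keep the default status check enabled.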
### User Profiles
-You can use information in presented VCs to build a user profile. If you want to consume attributes to build a profile, consider the following.
+You can use information in presented VCs to build a user profile. If you want to consume attributes to build a profile, consider the following items.
-* When the VC is issued, it contains a snapshot of attributes as of issuance. VCs might have long validity periods, and you must determine the age of attributes that you will accept as sufficiently fresh to use as a part of the profile.
+* When the VC is issued, it contains a snapshot of attributes as of issuance. VCs might have long validity periods, and you must determine the age of attributes that you'll accept as sufficiently fresh to use as a part of the profile.
-* If a VC needs to be presented every time the subject starts a session with the RP, consider using the output of the VC presentation to build a non-persistent user profile with the attributes. This helps to reduce privacy risks associated with storing user properties at rest. If the subjectΓÇÖs attributes need to be persisted locally by the application, only store the minimal set of claims required by your application (as opposed to store the entire content of the VC).
+* If a VC needs to be presented every time the subject starts a session with the RP, consider using the output of the VC presentation to build a non-persistent user profile with the attributes. A non-persistent user profile helps to reduce privacy risks associated with storing user properties at rest. Your application may need to save the subject's attributes locally. If so, only save the claims that your application needs. Don't save the whole VC.
* If the application requires a persistent user profile store:
- * Consider using the `sub` claim as an immutable identifier of the user. This is an opaque unique attribute that will be constant for a given subject/RP pair.
+ * Consider using the `sub` claim as an immutable identifier of the user. This is an opaque unique attribute that is constant for a given subject/RP pair.
- * Define a mechanism to deprovision the user profile from the application. Due to the decentralized nature of the Microsoft Entra Verified ID system, there is no application user provisioning lifecycle.
+ * Define a mechanism to deprovision the user profile from the application. Due to the decentralized nature of the Microsoft Entra Verified ID system, there's no application user provisioning lifecycle.
- * Do not store personal data claims returned in the VC token.
+ * Don't store personal data claims returned in the VC token.
* Only store claims needed for the logic of the relying party.
You can use information in presented VCs to build a user profile. If you want to
As with any solution, you must plan for performance. Focus areas include latency, throughput, and scalability. During initial phases of a release cycle, performance shouldn't be a concern. However, when adoption of your solution results in many verifiable credentials being verified, performance planning might become a critical part of your solution.
-The following provides areas to consider when planning for performance:
+The following items provide areas to consider when planning for performance:
-* The Microsoft Entra Verified ID issuance service is deployed in West Europe, North Europe, West US 2, and West Central US Azure regions. To limit latency, deploy your verification front end (website) and key vault in the region listed above that is closest to where requests are expected to originate from.
+* The Microsoft Entra Verified ID issuance service is deployed in the West Europe, North Europe, West US 2, and West Central US Azure regions. To limit latency, deploy your verification front end (website) and key vault in the closest of those regions.
* Model based on throughput:
The following provides areas to consider when planning for performance:
## Plan for reliability
-To best plan for high availability and disaster recovery, we suggest the following:
+To best plan for high availability and disaster recovery, we suggest the following items:
-* Microsoft Entra Verified ID service is deployed in the West Europe, North Europe, West US 2, and West Central US Azure regions. Consider deploying your supporting web servers and supporting applications in one of those regions, specifically in the ones from which you expect most of your validation traffic to originate.
+* Microsoft Entra Verified ID service is deployed in the West Europe, North Europe, West US 2, West Central US, Australia, and Japan Azure regions. Consider deploying your supporting web servers and supporting applications in one of those regions, specifically in the ones from which you expect most of your validation traffic to originate.
* Review and incorporate best practices from [Azure Key Vault availability and redundancy](../../key-vault/general/disaster-recovery-guidance.md) as you design for your availability and redundancy goals.

## Plan for security
-As you are designing for security, consider the following:
+As you're designing for security, consider the following items:
* All relying parties (RPs) in a single tenant have the same trust boundary since they share the same DID.
As part of your operational planning, consider monitoring the following:
* **For security**:
- * Enable logging for Key Vault to track signing operations, and to monitor and alert on configuration changes. Refer to [How to enable Key Vault logging](../../key-vault/general/howto-logging.md) for more information.
+ * Enable logging for Key Vault to track signing operations, and to monitor and alert on configuration changes. See [How to enable Key Vault logging](../../key-vault/general/howto-logging.md) for more information.
* Archive logs in a security information and event management (SIEM) system, such as [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) for long-term retention.
active-directory Verifiable Credentials Configure Tenant Quick https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant-quick.md
+
+ Title: Tutorial - Quick setup of your tenant for Microsoft Entra Verified ID
+description: In this tutorial, you learn how to quickly configure your tenant to support the Verified ID service.
++++++ Last updated : 10/06/2023
+# Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
+++
+# Quick Microsoft Entra Verified ID setup
++
+Quick Verified ID setup, available in preview, removes several configuration steps an admin needs to complete, replacing them with a single selection of the `Get started` button. The quick setup takes care of signing keys, registering your decentralized ID, and verifying your domain ownership. It also creates a Verified Workplace Credential for you.
+
+In this tutorial, you learn how to use the quick setup to configure your Microsoft Entra tenant to use the verifiable credentials service.
+
+Specifically, you learn how to:
+
+> [!div class="checklist"]
+> - Configure the Verified ID service using the quick setup.
+> - Control how Verified Workplace Credentials are issued in MyAccount.
+
+## Prerequisites
+
+- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) permission for the directory you want to configure. If you're not the global administrator, you need the [application administrator](../../active-directory/roles/permissions-reference.md#application-administrator) permission to complete the app registration, including granting admin consent.
+- Ensure that you have a custom domain registered for the Microsoft Entra tenant. If you don't have one registered, the setup defaults to the manual setup experience.
+
+## Set up Verified ID
+
+If you have a custom domain registered for your Microsoft Entra tenant, you see this `Get started` option. If you don't have a custom domain registered, either register it before setting up Verified ID or continue using the [manual setup](verifiable-credentials-configure-tenant.md).
++
+To set up Verified ID, follow these steps:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](../roles/permissions-reference.md#global-administrator).
+
+1. Select **Verified ID**.
+
+1. From the left menu, select **Setup**.
+
+1. Select the **Get started** button.
+
+1. If you have multiple domains registered for your Microsoft Entra tenant, select the one you would like to use for Verified ID.
+
+ :::image type="content" source="media/verifiable-credentials-configure-tenant-quick/verifiable-credentials-select-domain.png" alt-text="Screenshot that shows how to select domain.":::
+
+When the setup process is complete, you see a default workplace credential available to edit and offer to employees of your tenant on their MyAccount page.
++
+## MyAccount available now to simplify issuance of Workplace Credentials
+Issuing Verified Workplace Credentials is now available via [myaccount.microsoft.com](https://myaccount.microsoft.com/). Users can sign in to myaccount using their Microsoft Entra ID credentials and issue themselves a Verified Workplace Credential via the `Get my Verified ID` option.
++
+As an admin, you can remove the option in MyAccount and create your own custom application for issuing Verified Workplace Credentials. Alternatively, you can select specific groups of users who are allowed to be issued credentials from MyAccount.
++
+## How Quick Verified ID setup works
+
+- A shared signing key is used across multiple tenants within a given region. It's no longer required to deploy Azure Key Vault. Since it's a shared key, the validityInterval of issued credentials is limited to six months.
+- The custom domain registered for your Microsoft Entra tenant is used for domain verification. It's no longer required to upload your DID configuration JSON to verify your domain.
+- The decentralized identifier (DID) gets a name like `did:web:verifiedid.entra.microsoft.com:tenantid:authority-id`.
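Per the did:web method specification, the colons after the host name map to URL path segments, so a DID of that form would resolve to a DID document hosted by the service (`tenantid` and `authority-id` are the placeholders from the line above):

```http
GET https://verifiedid.entra.microsoft.com/tenantid/authority-id/did.json
```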
+
+## Register an application in Microsoft Entra ID
+
+If you're planning to use custom credentials or set up your own application for issuing or verifying Verified ID credentials, you need to register an application and grant the appropriate permissions for it. Follow this section in the manual setup to [register an application](verifiable-credentials-configure-tenant.md#register-an-application-in-microsoft-entra-id).
+
+## Next steps
+
+- [Learn how to issue Microsoft Entra Verified ID credentials from a web application](verifiable-credentials-configure-issuer.md).
+- [Learn how to verify Microsoft Entra Verified ID credentials](verifiable-credentials-configure-verifier.md).
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Title: Tutorial - Configure your tenant for Microsoft Entra Verified ID
-description: In this tutorial, you learn how to configure your tenant to support the Verified ID service.
+ Title: Tutorial - Manual Microsoft Entra Verified ID setup
+description: In this tutorial, you learn how to manually configure your tenant to support the Verified ID service.
Last updated 09/15/2023
-# Configure your tenant for Microsoft Entra Verified ID
+# Manual Microsoft Entra Verified ID setup
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in apps and services. In both cases, your Microsoft Entra tenant needs to be configured to either issue your own verifiable credentials, or verify the presentation of a user's verifiable credentials issued by a third party. In the event that you are both an issuer and a verifier, you can use a single Microsoft Entra tenant to both issue your own verifiable credentials and verify those of others.
+Manual Verified ID setup is the classic way of setting up Verified ID, where you as an admin configure Azure Key Vault, register your decentralized ID, and verify your domain.
-In this tutorial, you learn how to configure your Microsoft Entra tenant to use the verifiable credentials service.
+In this tutorial, you learn how to use the manual setup to configure your Microsoft Entra tenant to use the verifiable credentials service.
Specifically, you learn how to:

> [!div class="checklist"]
> - Create an Azure Key Vault instance.
-> - Set up the Verified ID service.
+> - Configure the Verified ID service using the manual setup.
> - Register an application in Microsoft Entra ID.

The following diagram illustrates the Verified ID architecture and the component you configure.
To set up Verified ID, follow these steps:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Administrator](../roles/permissions-reference.md#global-administrator).
-1. Select **Verifiable Credentials**.
+1. Select **Verified ID**.
1. From the left menu, select **Setup**.
-1. From the middle menu, select **Define organization settings**
+1. From the middle menu, select **Configure organization settings**.
1. Set up your organization by providing the following information:
To set up Verified ID, follow these steps:
1. **Key vault**: Select the key vault that you created earlier.
- 1. Under **Advanced**, you may choose the **trust system** that you want to use for your tenant. You can choose from either **Web** or **ION**. Web means your tenant uses [did:web](https://w3c-ccg.github.io/did-method-web/) as the did method and ION means it uses [did:ion](https://identity.foundation/ion/).
-
- >[!IMPORTANT]
- > The only way to change the trust system is to opt-out of the Verified ID service and redo the onboarding.
- 1. Select **Save**. :::image type="content" source="media/verifiable-credentials-configure-tenant/verifiable-credentials-getting-started-save.png" alt-text="Screenshot that shows how to set up Verifiable Credentials first step.":::
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
This page contains commonly asked questions about Verifiable Credentials and Dec
### What is a DID?
-Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains DIDs in further detail.
+Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, entities themselves own and control their DIDs (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains DIDs in further detail.
### Why do we need a DID?
Individuals owning and controlling their identities are able to exchange verifia
### What is a Verifiable Credential?
-Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries/regions. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) explains verifiable credentials in further detail.
+Credentials are a part of our daily lives. Driver's licenses are used to assert that we're capable of operating a motor vehicle. University degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries and regions. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) explains verifiable credentials in further detail.
## Conceptual questions

### What happens when a user loses their phone? Can they recover their identity?
-There are multiple ways of offering a recovery mechanism to users, each with their own tradeoffs. We're currently evaluating options and designing approaches to recovery that offer convenience and security while respecting a user's privacy and self-sovereignty.
+There are multiple ways of offering a recovery mechanism to users, each with their own tradeoffs. Microsoft is currently evaluating options and designing approaches to recovery that offer convenience and security while respecting a user's privacy and self-sovereignty.
### How can a user trust a request from an issuer or verifier? How do they know a DID is the real DID for an organization?
-We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a highly known existing system, domain names. Each DID created using the Microsoft Entra Verified ID has the option of including a root domain name that will be encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
-
-<a name='why-does-the-entra-verified-id-support-ion-as-its-did-method-and-therefore-bitcoin-to-provide-decentralized-public-key-infrastructure'></a>
+We implement [the Decentralized Identity Foundation's Well Known DID Configuration spec](https://identity.foundation/.well-known/resources/did-configuration/) in order to connect a DID to a highly known existing system, domain names. Each DID created using the Microsoft Entra Verified ID has the option of including a root domain name that is encoded in the DID Document. Follow the article titled [Link your Domain to your Distributed Identifier](how-to-dnsbind.md) to learn more.
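As a sketch, assuming a hypothetical root domain of `contoso.com`, wallets fetch the domain-linkage proof from the well-known location defined by that spec and check that it's signed with the DID's keys:

```http
GET https://contoso.com/.well-known/did-configuration.json
```

The returned `linked_dids` entries are domain-linkage credentials (JWTs) that bind the DID to the domain.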
### Why does the Microsoft Entra Verified ID support ION as its DID method, and therefore Bitcoin to provide decentralized public key infrastructure?
-Microsoft now offers two different trust systems, Web and ION. You may choose to use either one of them during tenant onboarding. ION is a decentralized, permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
+Microsoft now offers two different trust systems, Web and ION. You can choose to use either one of them during tenant onboarding. ION is a decentralized, permissionless, scalable decentralized identifier Layer 2 network that runs atop Bitcoin. It achieves scalability without including a special crypto asset token, trusted validators, or centralized consensus mechanisms. We use Bitcoin for the base Layer 1 substrate because of the strength of the decentralized network to provide a high degree of immutability for a chronological event record system.
## Using the preview
There are no special licensing requirements to issue Verifiable credentials. All
### How do I reset the Microsoft Entra Verified ID service?
-Resetting requires that you opt out and opt back into the Microsoft Entra Verified ID service, your existing verifiable credentials configurations will reset and your tenant will obtain a new DID to use during issuance and presentation.
+Resetting requires that you opt out and opt back into the Microsoft Entra Verified ID service. Your existing verifiable credentials configuration is reset, and your tenant obtains a new DID to use during issuance and presentation.
1. Follow the [opt-out](how-to-opt-out.md) instructions. 1. Go over the Microsoft Entra Verified ID [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
- 1. If you are in the European region, it's recommended that your Azure Key Vault, and container are in the same European region otherwise you may experience some performance and latency issues. Create new instances of these services in the same EU region as needed.
+ 1. If you're in the European region, it's recommended that your Azure Key Vault and container are in the same European region to avoid performance and latency issues. Create new instances of these services in the same EU region as needed.
1. Finish [setting up](verifiable-credentials-configure-tenant.md#set-up-verified-id) your verifiable credentials service. You need to recreate your credentials. 1. If your tenant needs to be configured as an issuer, it's recommended that your storage account is in the same European region as your Verifiable Credentials service. 2. You also need to issue new credentials because your tenant now holds a new DID.
Resetting requires that you opt out and opt back into the Microsoft Entra Verifi
1. In the [Azure portal](https://portal.azure.com), go to Microsoft Entra ID for the subscription you use for your Microsoft Entra Verified ID deployment. 1. Under Manage, select Properties :::image type="content" source="media/verifiable-credentials-faq/region.png" alt-text="settings delete and opt out":::
-1. See the value for Country or Region. If the value is a country or a region in Europe, your Microsoft Entra Verified ID service will be set up in Europe.
+1. See the value for Country or Region. If the value is a country or a region in Europe, your Microsoft Entra Verified ID service is set up in Europe.
### How can I check if my tenant has the new Hub endpoint?
No, at this point it isn't possible to keep your tenant's DID after you have opt
### I cannot use ngrok, what do I do?
-The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describes the use of the `ngrok` tool as an application proxy. This tool is sometimes blocked by IT admins from being used in corporate networks. An alternative is to deploy the sample to [Azure App Service](../../app-service/overview.md) and run it in the cloud. The following links help you deploy the respective sample to Azure App Service. The Free pricing tier will be sufficient for hosting the sample. For each tutorial, you need to start by first creating the Azure App Service instance, then skip creating the app since you already have an app and then continue the tutorial with deploying it.
+The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describe the use of the `ngrok` tool as an application proxy. IT admins sometimes block this tool on corporate networks. An alternative is to deploy the sample to [Azure App Service](../../app-service/overview.md) and run it in the cloud. The following links help you deploy the respective sample to Azure App Service. The Free pricing tier is sufficient for hosting the sample. For each tutorial, start by creating the Azure App Service instance, then skip creating the app since you already have one, and continue the tutorial with deploying it.
- Dotnet - [Publish to App Service](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#2-publish-your-web-app) - Node - [Deploy to App Service](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode#deploy-to-azure) - Java - [Deploy to App Service](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure App Service to the sample. - Python - [Deploy using Visual Studio Code](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
-Regardless of which language of the sample you are using, they will pickup the Azure AppService hostname `https://something.azurewebsites.net` and use it as the public endpoint. You don't need to configure something extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure AppServices. Troubleshooting/debugging will not be as easy as running the sample on your local machine, where traces to the console window shows you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
+Regardless of which language sample you're using, it picks up the Azure App Service hostname `https://something.azurewebsites.net` and uses it as the public endpoint. You don't need to configure anything extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure App Service. Troubleshooting and debugging is easier when you run the sample on your local machine, where traces to the console window show you errors, but you can achieve almost the same result by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
## Next steps
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
This article lists the latest features, improvements, and changes in the Microsoft Entra Verified ID service.
+## October 2023
+
+- [Quick Verified ID setup](verifiable-credentials-configure-tenant-quick.md) introduced as a preview, which enables an admin to onboard a Microsoft Entra tenant with a single click.
+- [MyAccount available now to simplify issuance of Workplace Credentials](verifiable-credentials-configure-tenant-quick.md#myaccount-available-now-to-simplify-issuance-of-workplace-credentials)
+- [Manual Verified ID setup](verifiable-credentials-configure-tenant.md) still available as an alternative to `Quick Verified ID setup`.
 ## September 2023 Verified ID is retiring old Request Service API endpoints that were available before Verified ID became generally available. These APIs should not have been used since GA in August 2022, but if they are used in your app, you need to migrate. The API endpoints being retired are:
POST https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/is
The first API was for creating an issuance or presentation request. The second API was for retrieving a request, and the last two APIs were for a wallet completing issuance or presentation. The API endpoints to use since preview are the following. ```http
-POST https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/createPresentationRequest
-POST https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/createIssuanceRequest
-GET https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/presentationRequests/:requestId
-POST https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/completeIssuance
-POST https://verifiedid.did.msidentity.com/v1.0/:tenant/verifiablecredentials/verifyPresentation
+POST https://verifiedid.did.msidentity.com/v1.0/verifiablecredentials/createPresentationRequest
+POST https://verifiedid.did.msidentity.com/v1.0/verifiablecredentials/createIssuanceRequest
+GET https://verifiedid.did.msidentity.com/v1.0/verifiablecredentials/presentationRequests/:requestId
+POST https://verifiedid.did.msidentity.com/v1.0/verifiablecredentials/completeIssuance
+POST https://verifiedid.did.msidentity.com/v1.0/verifiablecredentials/verifyPresentation
``` Please note that the `/request` API is split into two depending on whether you're creating an issuance or a presentation request.
The retired API endpoints will not work after October 2023.
## August 2023
-The `presentation_verified` callback from the Request Service API now returns when a Verified ID credential was issued and when it expires. Business rules can use these values to see the time windoww of when the presented Verified ID credential is valid. An example of this is that it expires in an hour while the business required in needs to be valid until the end of the day.
+The `presentation_verified` callback from the Request Service API now returns when a Verified ID credential was issued and when it expires. Business rules can use these values to see the time window in which the presented Verified ID credential is valid. For example, the credential might expire in an hour while the business requires it to be valid until the end of the day.
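+
+As an illustration, a hedged sketch of the callback payload follows. The property names used here (`requestStatus`, `verifiedCredentialsData`, `issuanceDate`, `expirationDate`) are assumptions for illustration; check the Request Service API reference for the authoritative schema.
+
+```json
+{
+  "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
+  "requestStatus": "presentation_verified",
+  "verifiedCredentialsData": [
+    {
+      "type": [ "VerifiedEmployee" ],
+      "issuanceDate": "2023-08-15T08:00:00.000Z",
+      "expirationDate": "2023-08-15T09:00:00.000Z"
+    }
+  ]
+}
+```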
## June 2023
Instructions for setting up place of work verification on LinkedIn available [he
- Admin API now supports [application access tokens](admin-api.md#authentication) in addition to user bearer tokens. - Introducing the Microsoft Entra Verified ID [Services partner gallery](services-partners.md) listing trusted partners that can help accelerate your Microsoft Entra Verified ID implementation. - Improvements to our Administrator onboarding experience in the [Admin portal](verifiable-credentials-configure-tenant.md#register-decentralized-id-and-verify-domain-ownership) based on customer feedback.-- Updates to our samples in [github](https://github.com/Azure-Samples/active-directory-verifiable-credentials) showcasing how to dynamically display VC claims.
+- Updates to our samples in [GitHub](https://github.com/Azure-Samples/active-directory-verifiable-credentials) showcasing how to dynamically display VC claims.
## February 2023
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md
[Microsoft Entra Workload ID](workload-identities-overview.md) is now available in two editions: **Free** and **Workload Identities Premium**. The free edition of workload identities is included with a subscription of a commercial online service such as [Azure](https://azure.microsoft.com/) and [Power Platform](https://powerplatform.microsoft.com/). The Workload Identities Premium offering is available through a Microsoft representative, the [Open Volume License
-Program](https://www.microsoft.com/licensing/how-to-buy/how-to-buy), and the [Cloud Solution Providers program](../../lighthouse/concepts/cloud-solution-provider.md). Azure and Microsoft 365 subscribers can also purchase Workload
+Program](https://www.microsoft.com/licensing/how-to-buy/how-to-buy), and the [Cloud Solution Providers program](/azure/lighthouse/concepts/cloud-solution-provider). Azure and Microsoft 365 subscribers can also purchase Workload
Identities Premium online. For more information, see [what are workload identities?](workload-identities-overview.md)
suspicious changes to accounts.
Enables delegation of reviews to the right people, focused on the most important privileged roles. -- [App health recommendations](/azure/active-directory/reports-monitoring/howto-use-recommendations): Provides recommendations for addressing identity hygiene gaps in your application portfolio so you can improve the security and resilience posture of a tenant.
+- [App health recommendations](../reports-monitoring/howto-use-recommendations.md): Provides recommendations for addressing identity hygiene gaps in your application portfolio so you can improve the security and resilience posture of a tenant.
## What do the numbers in each category on the [Workload identities - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) mean?
active-directory Workload Identity Federation Block Using Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-block-using-azure-policy.md
# Block workload identity federation on managed identities using a policy
-This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Microsoft Entra protected resources. [Azure Policy](../../governance/policy/overview.md) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
+This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Microsoft Entra protected resources. [Azure Policy](/azure/governance/policy/overview) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
The Not allowed resource types built-in policy can be used to block the creation of federated identity credentials on user-assigned managed identities.
active-directory Workload Identity Federation Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-considerations.md
Creating multiple federated identity credentials under the same user-assigned ma
When you use automation or Azure Resource Manager templates (ARM templates) to create federated identity credentials under the same parent identity, create the federated credentials sequentially. Federated identity credentials under different managed identities can be created in parallel without any restrictions.
-If federated identity credentials are provisioned in a loop, you can [provision them serially](../../azure-resource-manager/templates/copy-resources.md#serial-or-parallel) by setting *"mode": "serial"*.
+If federated identity credentials are provisioned in a loop, you can [provision them serially](/azure/azure-resource-manager/templates/copy-resources#serial-or-parallel) by setting *"mode": "serial"*.
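+
+A minimal sketch of the relevant fragment (the loop name and count are illustrative): adding a `copy` object with `"mode": "serial"` to the federated identity credential resource makes Resource Manager deploy the iterations one at a time.
+
+```json
+"copy": {
+  "name": "ficLoop",
+  "count": 3,
+  "mode": "serial",
+  "batchSize": 1
+}
+```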
You can also provision multiple new federated identity credentials sequentially using the *dependsOn* property. The following Azure Resource Manager template (ARM template) example creates three new federated identity credentials sequentially on a user-assigned managed identity by using the *dependsOn* property:
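
The fragment below is a reconstruction sketch of that pattern, not the article's verbatim template; the identity and credential names are illustrative. Each credential declares a `dependsOn` on the previous one, so Resource Manager creates them one at a time:

```json
{
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
  "apiVersion": "2023-01-31",
  "name": "myIdentity/fic01",
  "properties": { "issuer": "<ISSUER_URL>", "subject": "<SUBJECT_1>", "audiences": [ "api://AzureADTokenExchange" ] }
},
{
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
  "apiVersion": "2023-01-31",
  "name": "myIdentity/fic02",
  "dependsOn": [ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials', 'myIdentity', 'fic01')]" ],
  "properties": { "issuer": "<ISSUER_URL>", "subject": "<SUBJECT_2>", "audiences": [ "api://AzureADTokenExchange" ] }
},
{
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials",
  "apiVersion": "2023-01-31",
  "name": "myIdentity/fic03",
  "dependsOn": [ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials', 'myIdentity', 'fic02')]" ],
  "properties": { "issuer": "<ISSUER_URL>", "subject": "<SUBJECT_3>", "audiences": [ "api://AzureADTokenExchange" ] }
}
```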
You can also provision multiple new federated identity credentials sequentially
*Applies to: applications and user-assigned managed identities*
-It's possible to use a deny [Azure Policy](../../governance/policy/overview.md) as in the following ARM template example:
+It's possible to use a deny [Azure Policy](/azure/governance/policy/overview) as in the following ARM template example:
```json {
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md
To learn more about supported regions, time to propagate federated credential up
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role assignment.
- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) - Find the name of the user-assigned managed identity, which you need in the following steps.
For a workflow triggered by a pull request event, specify an **Entity type** of
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](/azure/aks/use-oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
To delete a specific federated identity credential, select the **Delete** icon f
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role assignment.
- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azcli#create-a-user-assigned-managed-identity-1) - Find the name of the user-assigned managed identity, which you need in the following steps.
az identity federated-credential delete --name $ficId --identity-name $uaId --re
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role assignment.
- To run the example scripts, you have two options:
- - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
+ - Use [Azure Cloud Shell](/azure/cloud-shell/overview), which you can open by using the **Try It** button in the upper-right corner of code blocks.
 - Run scripts locally with Azure PowerShell, as described in the next section. - [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2) - Find the name of the user-assigned managed identity, which you need in the following steps.
Remove-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -Identity
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role assignment.
- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3) - Find the name of the user-assigned managed identity, which you need in the following steps.
Remove-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -Identity
Resource Manager templates help you deploy new or modified resources defined by an Azure resource group. Several options are available for template editing and deployment, both local and portal-based. You can: -- Use a [custom template from Azure Marketplace](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template) to create a template from scratch or base it on an existing common or [quickstart template](https://azure.microsoft.com/resources/templates/).-- Derive from an existing resource group by exporting a template. You can export them from either [the original deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates) or from the [current state of the deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates).-- Use a local [JSON editor (such as VS Code)](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), and then upload and deploy by using PowerShell or the Azure CLI.-- Use the Visual Studio [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md) to create and deploy a template.
+- Use a [custom template from Azure Marketplace](/azure/azure-resource-manager/templates/deploy-portal#deploy-resources-from-custom-template) to create a template from scratch or base it on an existing common or [quickstart template](https://azure.microsoft.com/resources/templates/).
+- Derive from an existing resource group by exporting a template. You can export them from either [the original deployment](/azure/azure-resource-manager/management/manage-resource-groups-portal#export-resource-groups-to-templates) or from the [current state of the deployment](/azure/azure-resource-manager/management/manage-resource-groups-portal#export-resource-groups-to-templates).
+- Use a local [JSON editor (such as VS Code)](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal), and then upload and deploy by using PowerShell or the Azure CLI.
+- Use the Visual Studio [Azure Resource Group project](/azure/azure-resource-manager/templates/create-visual-studio-deployment-project) to create and deploy a template.
## Configure a federated identity credential on a user-assigned managed identity
-Federated identity credential and parent user assigned identity can be created or updated be means of template below. You can [deploy ARM templates](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) from the [Azure portal](https://portal.azure.com).
+A federated identity credential and its parent user-assigned identity can be created or updated by means of the template below. You can [deploy ARM templates](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal) from the [Azure portal](https://portal.azure.com).
All of the template parameters are mandatory.
Make sure that any kind of automation creates federated identity credentials und
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../managed-identities-azure-resources/overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](../managed-identities-azure-resources/overview.md#managed-identity-types). - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - Get the information for your external IdP and software workload, which you need in the following steps.-- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role assignment.
- You can run all the commands in this article either in the cloud or locally:
- - To run in the cloud, use [Azure Cloud Shell](../../cloud-shell/overview.md).
- - To run locally, install [curl](https://curl.haxx.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli).
+ - To run in the cloud, use [Azure Cloud Shell](/azure/cloud-shell/overview).
+ - To run locally, install [curl](https://curl.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli).
 - [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-rest#create-a-user-assigned-managed-identity-4) - Find the name of the user-assigned managed identity, which you need in the following steps.
https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RES
## Next steps -- For information about the required format of JWTs created by external identity providers, read about the [assertion format](/azure/active-directory/develop/active-directory-certificate-credentials#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](../develop/certificate-credentials.md#assertion-format).
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md
To learn more about supported regions, time to propagate federated credential up
::: zone pivot="identity-wif-apps-methods-azp" ## Prerequisites
-[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
+[Create an app registration](../develop/quickstart-register-app.md) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the [Microsoft Entra admin center](https://entra.microsoft.com). Go to the list of app registrations and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
Select the **Kubernetes accessing Azure resources** scenario from the dropdown m
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](/azure/aks/use-oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
To delete a federated identity credential, select the **Delete** icon for the cr
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- [Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
+- [Create an app registration](../develop/quickstart-register-app.md) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
- Find the object ID, app (client) ID, or identifier URI of the app, which you need in the following steps. You can find these values in the [Microsoft Entra admin center](https://entra.microsoft.com). Go to the list of registered applications and select your app registration. In **Overview**->**Essentials**, get the **Object ID**, **Application (client) ID**, or **Application ID URI** value, which you need in the following steps. - Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c
### Kubernetes example
-*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+*issuer* is your service account issuer URL (the [OIDC issuer URL](/azure/aks/use-oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
*subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
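
Putting these values together, a `credential.json` parameters file for `az ad app federated-credential create` could look like the following sketch (the issuer URL, namespace, and service account name are placeholders):

```json
{
  "name": "Kubernetes-federated-credential",
  "issuer": "https://<CLUSTER_ISSUER_URL>/",
  "subject": "system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>",
  "description": "Federated credential for a Kubernetes service account",
  "audiences": [ "api://AzureADTokenExchange" ]
}
```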
az ad app federated-credential delete --id f6475511-fd81-4965-a00e-41e7792b7b9c
## Prerequisites - To run the example scripts, you have two options:
- - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
+ - Use [Azure Cloud Shell](/azure/cloud-shell/overview), which you can open by using the **Try It** button in the upper-right corner of code blocks.
- Run scripts locally with Azure PowerShell, as described in the next section.-- [Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
+- [Create an app registration](../develop/quickstart-register-app.md) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
- Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the [Microsoft Entra admin center](https://entra.microsoft.com). Go to the list of registered applications and select your app registration. In **Overview**->**Essentials**, find the **Object ID**. - Get the *subject* and *issuer* information for your external IdP and software workload, which you need in the following steps.
New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api:/
### Kubernetes example - *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Microsoft Entra ID.-- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *Issuer* is your service account issuer URL (the [OIDC issuer URL](/azure/aks/use-oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *Subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *Name* is the name of the federated credential, which can't be changed later. - *Audience* lists the audiences that can appear in the `aud` claim of the external token.
Remove-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -FederatedCr
::: zone pivot="identity-wif-apps-methods-rest" ## Prerequisites
-[Create an app registration](/azure/active-directory/develop/quickstart-register-app) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
+[Create an app registration](../develop/quickstart-register-app.md) in Microsoft Entra ID. Grant your app access to the Azure resources targeted by your external software workload.
Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the [Microsoft Entra admin center](https://entra.microsoft.com). Go to the list of registered applications and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
And you get the response:
Run the following method to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters: -- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/use-oidc-issuer.md) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](/azure/aks/use-oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *name* is the name of the federated credential, which can't be changed later. - *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
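
For reference, the JSON body that such a request carries is shaped like the sketch below (all values are placeholders); it's posted to the application's `federatedIdentityCredentials` collection in Microsoft Graph:

```json
{
  "name": "Kubernetes-federated-credential",
  "issuer": "https://<CLUSTER_ISSUER_URL>/",
  "subject": "system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>",
  "audiences": [ "api://AzureADTokenExchange" ]
}
```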
az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-49
- To learn how to use workload identity federation for Kubernetes, see [Microsoft Entra Workload ID for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project. - To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure). - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.-- For more information, read about how Microsoft Entra ID uses the [OAuth 2.0 client credentials grant](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
+- For more information, read about how Microsoft Entra ID uses the [OAuth 2.0 client credentials grant](../develop/v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
- For information about the required format of JWTs created by external identity providers, read about the [assertion format](/azure/active-directory/develop/active-directory-certificate-credentials#assertion-format).
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation.md
You use workload identity federation to configure a [user-assigned managed ident
The following scenarios are supported for accessing Microsoft Entra protected resources using workload identity federation: -- Workloads running on any Kubernetes cluster (Azure Kubernetes Service (AKS), Amazon Web Services EKS, Google Kubernetes Engine (GKE), or on-premises). Establish a trust relationship between your user-assigned managed identity or app in Microsoft Entra ID and a Kubernetes workload (described in the [workload identity overview](../../aks/workload-identity-overview.md)).
+- Workloads running on any Kubernetes cluster (Azure Kubernetes Service (AKS), Amazon Web Services EKS, Google Kubernetes Engine (GKE), or on-premises). Establish a trust relationship between your user-assigned managed identity or app in Microsoft Entra ID and a Kubernetes workload (described in the [workload identity overview](/azure/aks/workload-identity-overview)).
- GitHub Actions. First, configure a trust relationship between your [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Microsoft Entra ID and a GitHub repo in the [Microsoft Entra admin center](https://entra.microsoft.com) or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources. - Google Cloud. First, configure a trust relationship between your user-assigned managed identity or app in Microsoft Entra ID and an identity in Google Cloud. Then configure your software workload running in Google Cloud to get an access token from Microsoft identity provider and access Microsoft Entra protected resources. See [Access Microsoft Entra protected resources from an app in Google Cloud](https://blog.identitydigest.com/azuread-federate-gcp/). - Workloads running in Amazon Web Services (AWS). First, configure a trust relationship between your user-assigned managed identity or app in Microsoft Entra ID and an identity in Amazon Cognito. Then configure your software workload running in AWS to get an access token from Microsoft identity provider and access Microsoft Entra protected resources. See [Workload identity federation with AWS](https://blog.identitydigest.com/azuread-federate-aws/).-- Other workloads running in compute platforms outside of Azure. Configure a trust relationship between your [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Microsoft Entra ID and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate.
+- Other workloads running in compute platforms outside of Azure. Configure a trust relationship between your [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [application](workload-identity-federation-create-trust.md) in Microsoft Entra ID and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](../develop/v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate.
- SPIFFE and SPIRE are a set of platform-agnostic, open-source standards for providing identities to your software workloads deployed across platforms and cloud vendors. First, configure a trust relationship between your user-assigned managed identity or app in Microsoft Entra ID and a SPIFFE ID for an external workload. Then configure your external software workload to get an access token from Microsoft identity provider and access Microsoft Entra protected resources. See [Workload identity federation with SPIFFE and SPIRE](https://blog.identitydigest.com/azuread-federate-spiffe/). > [!NOTE]
The workflow for exchanging an external token for an access token is the same, h
1. The external workload (such as a GitHub Actions workflow) requests a token from the external IdP (such as GitHub). 1. The external IdP issues a token to the external workload.
-1. The external workload (the login action in a GitHub workflow, for example) [sends the token to Microsoft identity platform](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and requests an access token.
+1. The external workload (the login action in a GitHub workflow, for example) [sends the token to Microsoft identity platform](../develop/v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and requests an access token, as sketched after this list.
1. Microsoft identity platform checks the trust relationship on the [user-assigned managed identity](workload-identity-federation-create-trust-user-assigned-managed-identity.md) or [app registration](workload-identity-federation-create-trust.md) and validates the external token against the OpenID Connect (OIDC) issuer URL on the external IdP. 1. When the checks are satisfied, Microsoft identity platform issues an access token to the external workload. 1. The external workload accesses Microsoft Entra protected resources using the access token from Microsoft identity platform. A GitHub Actions workflow, for example, uses the access token to publish a web app to Azure App Service.
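
The token request in step 3 is a standard client credentials call in which the external token rides in the `client_assertion` parameter. A minimal sketch (tenant ID, client ID, scope, and token values are placeholders):

```http
POST https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=<APP_CLIENT_ID>
&scope=https://graph.microsoft.com/.default
&client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
&client_assertion=<TOKEN_FROM_EXTERNAL_IDP>
```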
Learn more about how workload identity federation works:
- How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust-user-assigned-managed-identity.md) on a user-assigned managed identity. - How to create, delete, get, or update [federated identity credentials](workload-identity-federation-create-trust.md) on an app registration.-- Read the [workload identity overview](../../aks/workload-identity-overview.md) to learn how to configure a Kubernetes workload to get an access token from Microsoft identity provider and access Microsoft Entra protected resources.
+- Read the [workload identity overview](/azure/aks/workload-identity-overview) to learn how to configure a Kubernetes workload to get an access token from Microsoft identity provider and access Microsoft Entra protected resources.
- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Microsoft Entra protected resources.-- How Microsoft Entra ID uses the [OAuth 2.0 client credentials grant](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
+- How Microsoft Entra ID uses the [OAuth 2.0 client credentials grant](../develop/v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
- For information about the required format of JWTs created by external identity providers, read about the [assertion format](/azure/active-directory/develop/active-directory-certificate-credentials#assertion-format).
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
Title: Image Retrieval concepts - Image Analysis 4.0
+ Title: Multi-modal embeddings concepts - Image Analysis 4.0
description: Concepts related to image vectorization using the Image Analysis 4.0 API.
Last updated 03/06/2023
-# Image retrieval (version 4.0 preview)
+# Multi-modal embeddings (version 4.0 preview)
-Image retrieval is the process of searching a large collection of images to find those that are most similar to a given query image. Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
+Multi-modal embedding is the process of generating a numerical representation of an image that captures its features and characteristics in a vector format. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
+
+Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
## What's the difference between vector search and keyword-based search?
Keyword search is the most basic and traditional method of information retrieval
Vector search, on the other hand, searches large collections of vectors in high-dimensional space to find vectors that are similar to a given query. Vector search looks for semantic similarities by capturing the context and meaning of the search query. This approach is often more efficient than traditional image retrieval techniques, as it can reduce search space and improve the accuracy of the results.
-## Business Applications
+## Business applications
-Image retrieval has a variety of applications in different fields, including:
+Multi-modal embedding has a variety of applications in different fields, including:
-- Digital asset management: Image retrieval can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.-- Security and surveillance: Image retrieval can be used in security and surveillance systems to search for images based on specific features or patterns, such as in, people & object tracking, or threat detection. -- Forensic image retrieval: Image retrieval can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime.-- E-commerce: Image retrieval can be used in online shopping applications to search for similar products based on their features or descriptions or provide recommendations based on previous purchases.-- Fashion and design: Image retrieval can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers to identify similar products or trends.
+- Digital asset management: Multi-modal embedding can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.
+- Security and surveillance: Vectorization can be used in security and surveillance systems to search for images based on specific features or patterns, such as people and object tracking or threat detection.
+- Forensic image retrieval: Vectorization can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime.
+- E-commerce: Vectorization can be used in online shopping applications to search for similar products based on their features or descriptions or provide recommendations based on previous purchases.
+- Fashion and design: Vectorization can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers identify similar products or trends.
> [!CAUTION]
-> Image Retrieval is not designed analyze medical images for diagnostic features or disease patterns. Please do not use Image Retrieval for medical purposes.
+> Multi-modal embedding is not designed to analyze medical images for diagnostic features or disease patterns. Please do not use Multi-modal embedding for medical purposes.
## What are vector embeddings?
Vector embeddings are a way of representing content&mdash;text or images&mdash;a
:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of image retrieval process.":::
-1. Vectorize Images and Text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
+1. Vectorize Images and Text: the Multi-modal embeddings APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input; a request sketch follows this list.
> [!NOTE]
- > Image Retrieval does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
+ > Multi-modal embedding does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity. 1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
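
As a sketch of the vectorization step (the endpoint path and API version shown are assumptions based on the preview and may differ for your resource), a text vectorization request looks like the following; the response carries a `modelVersion` value and a single `vector` array of floats:

```http
POST {endpoint}/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview
Content-Type: application/json
Ocp-Apim-Subscription-Key: {key}

{ "text": "a photo of a mountain lake at sunrise" }
```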
+### Relevance score
+
+The image and video retrieval services return a field called "relevance." The term "relevance" denotes the similarity score between a query and an image or video frame embedding. The relevance score is composed of two components:
+1. The cosine similarity (in the range [0,1]) between the query and the image or video frame embedding.
+1. A metadata score, which reflects the similarity between the query and the metadata associated with the image or video frame.
+
+> [!IMPORTANT]
+> The relevance score is a good measure to rank results such as images or video frames with respect to a single query. However, the relevance score cannot be accurately compared across queries. Therefore, it's not possible to easily map the relevance score to a confidence level. It's also not possible to trivially create a threshold algorithm to eliminate irrelevant results based solely on the relevance score.
+ ## Next steps
-Enable image retrieval for your search service and follow the steps to generate vector embeddings for text and images.
-* [Call the Image retrieval APIs](./how-to/image-retrieval.md)
+Enable Multi-modal embeddings for your search service and follow the steps to generate vector embeddings for text and images.
+* [Call the Multi-modal embeddings APIs](./how-to/image-retrieval.md)
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
Title: Do image retrieval using vectorization - Image Analysis 4.0
+ Title: Do image retrieval using multi-modal embeddings - Image Analysis 4.0
description: Learn how to call the image retrieval API to vectorize image and search terms.
-# Do image retrieval using vectorization (version 4.0 preview)
+# Do image retrieval using multi-modal embeddings (version 4.0 preview)
-The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
+The Multi-modal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
> [!IMPORTANT] > These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
The Image Retrieval APIs enable the _vectorization_ of images and text queries.
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Vision resource" target="_blank">create a Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. * After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
-## Try out Image Retrieval
+## Try out Multi-modal embeddings
-You can try out the Image Retrieval feature quickly and easily in your browser using Vision Studio.
+You can try out the Multi-modal embeddings feature quickly and easily in your browser using Vision Studio.
> [!IMPORTANT] > The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
The API call returns a **vector** JSON object, which defines the text string's c
## Calculate vector similarity
-Cosine similarity is a method for measuring the similarity of two vectors. In an Image Retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
+Cosine similarity is a method for measuring the similarity of two vectors. In an image retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
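
If you work in Python instead of C#, an equivalent minimal sketch of the cosine similarity calculation is:

```python
import math

def cosine_similarity(vec_a: list[float], vec_b: list[float]) -> float:
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return dot / (norm_a * norm_b)
```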
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
+
+ Title: Do video retrieval using vectorization - Image Analysis 4.0
+
+description: Learn how to call the Spatial Analysis Video Retrieval APIs to vectorize video frames and search terms.
+ Last updated : 10/16/2023
+# Do video retrieval using vectorization (version 4.0 preview)
+
+Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and enable developers to create an index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
+
+## Prerequisites
+
+- Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+- Once you have your Azure subscription, [create a Vision resource using the portal](/azure/cognitive-services/cognitive-services-apis-create-account). For this preview, you must create your resource in the East US region.
+- An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)
+
+## Input requirements
+
+### Supported file formats
+| File format | Description |
+| -- | -- |
+| `asf` | ASF (Advanced / Active Streaming Format) |
+| `flv` | FLV (Flash Video) |
+| `matroskamm`, `webm` | Matroska / WebM |
+| `mov`, `mp4`, `m4a`, `3gp`, `3g2`, `mj2` | QuickTime / MOV |
+| `mpegts` | MPEG-TS (MPEG-2 Transport Stream) |
+| `rawvideo` | raw video |
+| `rm` | RealMedia |
+| `rtsp` | RTSP input |
+
+### Supported codecs
+| Codec | Format |
+| -- | -- |
+| `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |
+| `rawvideo` | raw video |
+| `h265` | HEVC |
+| `libvpx-vp9` | libvpx VP9 (codec vp9) |
+
+## Call the Video Retrieval APIs
+
+A typical pattern for using the Spatial Analysis Video Retrieval APIs involves the following steps:
+
+1. Create an index using **PUT - Create an index**.
+2. Add video documents to the index using **PUT - CreateIngestion**.
+3. Wait for the ingestion to complete, checking with **GET - ListIngestions**.
+4. Search for a keyword or phrase using **POST - SearchByText**.
++
+### Use Video Retrieval APIs for metadata-based search
+
+The Spatial Analysis Video Retrieval APIs allow you to add metadata to video files. Metadata is additional information associated with video files, such as "Camera ID," "Timestamp," or "Location," that can be used to organize, filter, and search for specific videos. This example demonstrates how to create an index, add video files with associated metadata, and perform searches using different features.
+
+### Step 1: Create an Index
+
+To begin, you need to create an index to store and organize the video files and their metadata. The example below demonstrates how to create an index named "my-video-index."
+
+```bash
+curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+{
+ 'metadataSchema': {
+ 'fields': [
+ {
+ 'name': 'cameraId',
+ 'searchable': false,
+ 'filterable': true,
+ 'type': 'string'
+ },
+ {
+ 'name': 'timestamp',
+ 'searchable': false,
+ 'filterable': true,
+ 'type': 'datetime'
+ }
+ ]
+ },
+ 'features': [
+ {
+ 'name': 'vision',
+ 'domain': 'surveillance'
+ },
+ {
+ 'name': 'speech'
+ }
+ ]
+}"
+```
+
+**Response:**
+```
+HTTP/1.1 201 Created
+Content-Length: 530
+Content-Type: application/json; charset=utf-8
+request-id: cb036529-d1cf-4b44-a1ef-0a4e9fc62885
+api-supported-versions: 2023-01-15-preview,2023-05-01-preview
+x-envoy-upstream-service-time: 202
+Date: Thu, 06 Jul 2023 18:05:05 GMT
+Connection: close
+
+{
+ "name": "my-video-index",
+ "metadataSchema": {
+ "language": "en",
+ "fields": [
+ {
+ "name": "cameraid",
+ "searchable": false,
+ "filterable": true,
+ "type": "string"
+ },
+ {
+ "name": "timestamp",
+ "searchable": false,
+ "filterable": true,
+ "type": "datetime"
+ }
+ ]
+ },
+ "userData": {},
+ "features": [
+ {
+ "name": "vision",
+ "modelVersion": "2023-05-31",
+ "domain": "surveillance"
+ },
+ {
+ "name": "speech",
+ "modelVersion": "2023-06-30",
+ "domain": "generic"
+ }
+ ],
+ "eTag": "\"7966244a79384cca9880d67a4daa9eb1\"",
+ "createdDateTime": "2023-07-06T18:05:06.7582534Z",
+ "lastModifiedDateTime": "2023-07-06T18:05:06.7582534Z"
+}
+```
+
+### Step 2: Add video files to the index
+
+Next, you can add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs to provide access.
++
+```bash
+curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions/my-ingestion?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+{
+ 'videos': [
+ {
+ 'mode': 'add',
+ 'documentId': '02a504c9cd28296a8b74394ed7488045',
+ 'documentUrl': 'https://example.blob.core.windows.net/videos/02a504c9cd28296a8b74394ed7488045.mp4?sas_token_here',
+ 'metadata': {
+ 'cameraId': 'camera1',
+ 'timestamp': '2023-06-30 17:40:33'
+ }
+ },
+ {
+ 'mode': 'add',
+ 'documentId': '043ad56daad86cdaa6e493aa11ebdab3',
+ 'documentUrl': 'https://example.blob.core.windows.net/videos/043ad56daad86cdaa6e493aa11ebdab3.mp4?sas_token_here',
+ 'metadata': {
+ 'cameraId': 'camera2'
+ }
+ }
+ ]
+}"
+```
+
+**Response:**
+```
+HTTP/1.1 202 Accepted
+Content-Length: 152
+Content-Type: application/json; charset=utf-8
+request-id: ee5e48df-13f8-4a87-a337-026947144321
+operation-location: http://api.example.com.trafficmanager.net/retrieval/indexes/my-test-index/ingestions/my-ingestion
+api-supported-versions: 2023-01-15-preview,2023-05-01-preview
+x-envoy-upstream-service-time: 709
+Date: Thu, 06 Jul 2023 18:15:34 GMT
+Connection: close
+
+{
+ "name": "my-ingestion",
+ "state": "Running",
+ "createdDateTime": "2023-07-06T18:15:33.8105687Z",
+ "lastModifiedDateTime": "2023-07-06T18:15:34.3418564Z"
+}
+```
+
+### Step 3: Wait for ingestion to complete
+
+After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. To ensure the ingestion is complete before performing searches, you can use the **Get Ingestion** call to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step.
+
+```bash
+curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>"
+```
+
+**Response:**
+```
+HTTP/1.1 200 OK
+Content-Length: 164
+Content-Type: application/json; charset=utf-8
+request-id: 4907feaf-88f1-4009-a1a5-ad366f04ee31
+api-supported-versions: 2023-01-15-preview,2023-05-01-preview
+x-envoy-upstream-service-time: 12
+Date: Thu, 06 Jul 2023 18:17:47 GMT
+Connection: close
+
+{
+ "value": [
+ {
+ "name": "my-ingestion",
+ "state": "Completed",
+ "createdDateTime": "2023-07-06T18:15:33.8105687Z",
+ "lastModifiedDateTime": "2023-07-06T18:15:34.3418564Z"
+ }
+ ]
+}
+```
+
+### Step 4: Perform searches with metadata
+
+After you add video files to the index, you can search for specific videos using metadata. This example demonstrates two types of searches: one using the "vision" feature and another using the "speech" feature.
+
+#### Search with "vision" feature
+
+To perform a search using the "vision" feature, specify the query text and any desired filters.
+
+```bash
+curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+{
+ 'queryText': 'a man with black hoodie',
+ 'filters': {
+ 'stringFilters': [
+ {
+ 'fieldName': 'cameraId',
+ 'values': [
+ 'camera1'
+ ]
+ }
+ ],
+ 'featureFilters': ['vision']
+ }
+}"
+```
+
+**Response:**
+```
+HTTP/1.1 200 OK
+Content-Length: 3289
+Content-Type: application/json; charset=utf-8
+request-id: 4c2477df-d89d-4a98-b433-611083324a3f
+api-supported-versions: 2023-05-01-preview
+x-envoy-upstream-service-time: 233
+Date: Thu, 06 Jul 2023 18:42:08 GMT
+Connection: close
+
+{
+ "value": [
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "VideoFrame",
+ "start": "00:01:58",
+ "end": "00:02:09",
+ "best": "00:02:03",
+ "relevance": 0.23974405229091644
+ },
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "VideoFrame",
+ "start": "00:02:27",
+ "end": "00:02:29",
+ "best": "00:02:27",
+ "relevance": 0.23762696981430054
+ },
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "VideoFrame",
+ "start": "00:00:26",
+ "end": "00:00:27",
+ "best": "00:00:26",
+ "relevance": 0.23250913619995117
+ },
+ ]
+}
+```
+
+#### Search with "speech" feature
+
+To perform a search using the "speech" feature, provide the query text and any desired filters.
+
+```bash
+curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+{
+ 'queryText': 'leave the area',
+ 'dedup': false,
+ 'filters': {
+ 'stringFilters': [
+ {
+ 'fieldName': 'cameraId',
+ 'values': [
+ 'camera1'
+ ]
+ }
+ ],
+ 'featureFilters': ['speech']
+ }
+}"
+```
+
+**Response:**
+```
+HTTP/1.1 200 OK
+Content-Length: 49001
+Content-Type: application/json; charset=utf-8
+request-id: b54577bb-1f46-44d8-9a91-c9326df3ac23
+api-supported-versions: 2023-05-01-preview
+x-envoy-upstream-service-time: 148
+Date: Thu, 06 Jul 2023 18:43:07 GMT
+Connection: close
+
+{
+ "value": [
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "SpeechTextSegment",
+ "start": "00:07:07.8400000",
+ "end": "00:07:08.4400000",
+ "best": "00:07:07.8400000",
+ "relevance": 0.8597901463508606
+ },
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "SpeechTextSegment",
+ "start": "00:07:02.0400000",
+ "end": "00:07:03.0400000",
+ "best": "00:07:02.0400000",
+ "relevance": 0.8506758213043213
+ },
+ {
+ "documentId": "02a504c9cd28296a8b74394ed7488045",
+ "documentKind": "SpeechTextSegment",
+ "start": "00:07:10.4400000",
+ "end": "00:07:11.5200000",
+ "best": "00:07:10.4400000",
+ "relevance": 0.8474636673927307
+ }
+ ]
+}
+```
+
+## Next steps
+
+[Multi-modal embeddings concepts](../concept-image-retrieval.md)
ai-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/intro-to-spatial-analysis-public-preview.md
Spatial Analysis can also be configured to detect if a person is wearing a prote
![Spatial Analysis classifies whether people have facemasks in an elevator](https://user-images.githubusercontent.com/11428131/137015842-ce524f52-3ac4-4e42-9067-25d19b395803.png)
+## Video Retrieval
+
+Spatial Analysis Video Retrieval is a service that lets you create a search index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
+
+[Call the Video Retrieval APIs](./how-to/video-retrieval.md)
+ ## Input requirements Spatial Analysis works on videos that meet the following requirements:
ai-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md
The Product Recognition APIs let you analyze photos of shelves in a retail store
[Product Recognition](./concept-shelf-analysis.md)
-## Image Retrieval (v4.0 preview only)
+## Multi-modal embeddings (v4.0 preview only)
-The Image Retrieval APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
+The multi-modal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without needing to use image tags or other metadata. Semantic closeness often produces better results in search.
These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-[Image Retrieval](./concept-image-retrieval.md)
+[Multi-modal embeddings](./concept-image-retrieval.md)
## Background removal (v4.0 preview only)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/whats-new.md
# What's new in Azure AI Vision
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
## September 2023
Search and interact with video content in the same intuitive way you think and w
You can now create and train your own [custom image classification and object detection models](./concept-model-customization.md), using Vision Studio or the v4.0 REST APIs.
-### Image Retrieval APIs (public preview)
+### Multi-modal embeddings APIs (public preview)
-The [Image Retrieval APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
+The [Multi-modal embeddings APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
### Background removal APIs (public preview)
ai-services Harm Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/harm-categories.md
Content Safety recognizes four distinct categories of objectionable content.
| Category | Description |
| -------- | ----------- |
-| Hate | The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group on the basis of certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
-| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
-| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
-| Self-harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself. |
+| Hate and Fairness | Hate and fairness-related harms refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. </br></br> Fairness is concerned with ensuring that AI systems treat all groups of people equitably without contributing to existing societal inequities. Similar to hate speech, fairness-related harms hinge upon disparate treatment of identity groups. |
+| Sexual | Sexual describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
+| Violence | Violence describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns and related entities, such as manufactures, associations, legislation, and so on. |
+| Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body or kill oneself. |
Classification can be multi-labeled. For example, when a text sample goes through the text moderation model, it could be classified as both Sexual content and Violence.
Classification can be multi-labeled. For example, when a text sample goes throug
Every harm category the service applies also comes with a severity level rating. The severity level is meant to indicate the severity of the consequences of showing the flagged content.
-| 4 Severity Levels | 8 Severity Levels | Label |
-| -- | -- | -- |
-|Severity Level 0 – Safe | Severity Level 0 and 1 – Safe |Content might be related to violence, self-harm, sexual or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts which are appropriate for most audiences. |
-|Severity Level 2 – Low | Severity Level 2 and 3 – Low |Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (e.g., gaming, literature) and depictions at low intensity. |
-|Severity Level 4 – Medium| Severity Level 4 and 5 – Medium |Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
-|Severity Level 6 – High | Severity Level 6-7 – High |Content that displays explicit and severe harmful instructions, actions, damage, or abuse, includes endorsement, glorification, promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, and non-consensual power exchange or abuse. |
+**Text**: The current version of the text model supports the full 0-7 severity scale. The classifier can return any severity along this scale.
+
+**Image**: The current version of the image model supports a trimmed version of the full 0-7 severity scale. The classifier only returns severities 0, 2, 4, and 6; each pair of adjacent levels is mapped to a single level.
+
+| **Severity Level** | **Description** |
+| | |
+| Level 0 – Safe | Content that might be related to violence, self-harm, sexual or hate & fairness categories, but the terms are used in general, journalistic, scientific, medical, or similar professional contexts that are **appropriate for most audiences**. This level doesn't include content unrelated to the above categories. |
+| Level 1 | Content that might be related to violence, self-harm, sexual or hate & fairness categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts that **may not be appropriate for all audiences**. This level might contain content that, in other contexts, might acquire a different meaning and higher severity level. Content can express **negative or positive sentiments towards identity groups or representations without endorsement of action.** |
+| Level 2 – Low | Content that expresses **general hate speech that does not target identity groups**, expressions **targeting identity groups with positive sentiment or intent**, use cases exploring a **fictional world** (for example, gaming, literature) and depictions at low intensity. |
+| Level 3 | Content that expresses **prejudiced, judgmental or opinionated views**, including offensive use of language, stereotyping, and depictions aimed at **identity groups with negative sentiment**. |
+| Level 4 – Medium | Content that **uses offensive, insulting language towards identity groups, including fantasies or harm at medium intensity**. |
+| Level 5 | Content that displays harmful instructions, **attacks against identity groups**, and **displays of harmful actions** with the **aim of furthering negative sentiments**. |
+| Level 6 – High | Content that displays **harmful actions, damage**, including promotion of severe harmful acts, radicalization, and non-consensual power exchange or abuse. |
+| Level 7 | Content of the highest severity and maturity that **endorses, glorifies, or promotes extreme forms of activity towards identity groups**, includes extreme or illegal forms of harm, and radicalization. |
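
As a concrete reading of the trimmed image scale, the following minimal Python sketch maps a full-scale severity level to the value the image classifier would return. The mapping is inferred from the description above, not an official formula.

```python
def to_trimmed_image_severity(full_scale_level: int) -> int:
    """Map a full 0-7 severity level onto the trimmed image scale.

    Each pair of adjacent levels collapses to one returned value:
    0-1 -> 0, 2-3 -> 2, 4-5 -> 4, 6-7 -> 6 (inferred, not official).
    """
    return (full_scale_level // 2) * 2
```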
## Next steps
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/overview.md
As you use custom Text Analytics for health, see the following reference documen
|APIs| Reference documentation| ||||
-|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-authoring) |
-|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2022-10-01-preview/text-analysis-runtime/submit-job) |
+|REST APIs (Authoring) | [REST API documentation](/rest/api/language/2023-04-01/text-analysis-authoring) |
+|REST APIs (Runtime) | [REST API documentation](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job) |
## Responsible AI
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
To get started, [connect your data source](../use-your-data-quickstart.md) using
Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information.
-## Ingesting your data into Azure Cognitive Search
-
-For documents and datasets with long text, you should use the available [data preparation script](https://go.microsoft.com/fwlink/?linkid=2244395) to ingest the data into cognitive search. The script chunks the data so that your response with the service will be more accurate. This script also supports scanned PDF file and images and ingests the data using [Document Intelligence](../../../ai-services/document-intelligence/overview.md).
-- ## Data formats and file types Azure OpenAI on your data supports the following filetypes:
There is an [upload limit](../quotas-limits.md), and there are some caveats abou
This will impact the quality of Azure Cognitive Search and the model response. +
+## Ingesting your data into Azure Cognitive Search
+
+> [!TIP]
+> For documents and datasets with long text, you should use the available [data preparation script](https://go.microsoft.com/fwlink/?linkid=2244395). The script chunks data so that your response with the service will be more accurate. This script also supports scanned PDF files and images.
+
+There are three different sources of data that you can use with Azure OpenAI on your data.
+* Blobs in an Azure storage container that you provide
+* Local files uploaded using the Azure OpenAI Studio
+* URLs/web addresses.
+
+Once data is ingested, an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index is created in your search resource to integrate the information with Azure OpenAI models.
+
+**Data ingestion from Azure storage containers**
+
+1. Ingestion assets are created in the Azure Cognitive Search resource and the Azure storage account. Currently these assets are: indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface) in the search resource, and a container (later called the chunks container) in the Azure storage account. You can specify the input Azure storage container using the [Azure OpenAI studio](https://oai.azure.com/), or the [ingestion API](../reference.md#start-an-ingestion-job).
+
+2. Data is read from the input container, and the contents are split into small chunks with a maximum of 1024 tokens each (see the sketch after this list). If vector search is enabled, the service calculates an embedding vector for each chunk. The output of this step (called the "preprocessed" or "chunked" data) is stored in the chunks container created in the previous step.
+
+3. The preprocessed data is loaded from the chunks container, and indexed in the Azure Cognitive Search index.
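
As a rough illustration of step 2, the sketch below splits text into chunks of at most 1024 tokens. It uses the open-source `tiktoken` tokenizer as a stand-in; the service's actual chunking logic isn't documented here.

```python
import tiktoken  # pip install tiktoken

def chunk_text(text: str, max_tokens: int = 1024) -> list[str]:
    """Split text into consecutive chunks of at most max_tokens tokens."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]
```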
++
+**Data ingestion from local files**
+
+Using the Azure OpenAI Studio, you can upload files from your machine. The service then stores the files to an Azure storage container and performs ingestion from the container.
+
+**Data ingestion from URLs**
+
+A crawling component first crawls the provided URL and stores its contents to an Azure Storage Container. The service then performs ingestion from the container.
+
+### Troubleshooting failed ingestion jobs
+
+To troubleshoot a failed job, check for errors or warnings in the API response or in Azure OpenAI Studio. Here are some of the common errors and warnings:
+
+**Quota Limitations Issues**
+
+*An index with the name X in service Y could not be created. Index quota has been exceeded for this service. You must either delete unused indexes first, add a delay between index creation requests, or upgrade the service for higher limits.*
+
+*Standard indexer quota of X has been exceeded for this service. You currently have X standard indexers. You must either delete unused indexers first, change the indexer 'executionMode', or upgrade the service for higher limits.*
+
+Resolution:
+
+Upgrade to a higher pricing tier or delete unused assets.
+
+**Preprocessing Timeout Issues**
+
+*Could not execute skill because the Web Api request failed*
+
+*Could not execute skill because Web Api skill response is invalid*
+
+Resolution:
+
+Break down the input documents into smaller documents and try again.
+
+**Permissions Issues**
+
+*This request is not authorized to perform this operation*
+
+Resolution:
+
+The storage account isn't accessible with the given credentials. Review the storage account credentials passed to the API, and ensure the storage account isn't hidden behind a private endpoint (if a private endpoint isn't configured for this resource).
+## Custom parameters
+
+In the **Data parameters** section in Azure OpenAI Studio, you can modify the following additional settings.
++
+|Parameter name | Description |
+|||
+|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. |
+| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. |
+ ## Virtual network support & private endpoint support
+See the following table for scenarios supported by virtual networks and private endpoints **when you bring your own Azure Cognitive Search index**.
+
+| Network access to the Azure OpenAI Resource | Network access to the Azure Cognitive search resource | Is vector search enabled? | Azure OpenAI studio | Chat with the model using the API |
+||-|||--|
+| Public | Public | Either | Supported | Supported |
+| Private | Public | Yes | Not supported | Supported |
+| Private | Public | No | Supported | Supported |
+| Regardless of resource access allowances | Private | Either | Not supported | Supported |
+
+Additionally, data ingestion has the following configuration support:
+
+| Network access to the Azure OpenAI Resource | Network access to the Azure Cognitive search resource | Azure OpenAI studio support | [Ingestion API](../reference.md#start-an-ingestion-job) support |
+||-|--|--|
+| Public | Public | Supported | Supported |
+| Private | Regardless of resource access allowances. | Not supported | Not supported |
+| Public | Private | Not supported | Not supported |
+++ ### Azure OpenAI resources You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service.
Learn more about the [manual approval workflow](/azure/private-link/private-endp
After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service.
-> [!NOTE]
-> Virtual networks & private endpoints are only supported for the API, and not currently supported for Azure OpenAI Studio.
### Storage accounts Storage accounts in virtual networks, firewalls, and private endpoints are currently not supported by Azure OpenAI on your data.
Storage accounts in virtual networks, firewalls, and private endpoints are curre
To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles.
-|Azure RBAC role |Needed when |
-|||
-|[Cognitive Services Contributor](../how-to/role-based-access-control.md#cognitive-services-contributor) | You want to use Azure OpenAI on your data. |
-|[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. |
-|[Search Service Contributor](/azure/role-based-access-control/built-in-roles#search-service-contributor) | You plan to create a new Azure Cognitive Search index. |
-|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. |
+|Azure RBAC role | Which resource needs this role? | Needed when |
+||||
+| [Cognitive Services OpenAI Contributor](../how-to/role-based-access-control.md#cognitive-services-openai-contributor) | The Azure Cognitive Search resource, to access Azure OpenAI resource. | You want to use Azure OpenAI on your data. |
+|[Search Index Data Reader](/azure/role-based-access-control/built-in-roles#search-index-data-reader) | The Azure OpenAI resource, to access the Azure Cognitive Search resource. | You want to use Azure OpenAI on your data. |
+|[Search Service Contributor](/azure/role-based-access-control/built-in-roles#search-service-contributor) | The Azure OpenAI resource, to access the Azure Cognitive Search resource. | You plan to create a new Azure Cognitive Search index. |
+|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | The Azure Cognitive Search and Azure OpenAI resources, to access the storage account. | You have an existing Blob storage container that you want to use, instead of creating a new one. |
+| [Cognitive Services OpenAI User](../how-to/role-based-access-control.md#cognitive-services-openai-user) | The web app, to access the Azure OpenAI resource. | You want to deploy a web app. |
+| [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Your subscription, to access Azure Resource Manager. | You want to deploy a web app. |
+| [Cognitive Services Contributor Role](/azure/role-based-access-control/built-in-roles#cognitive-services-contributor) | The Azure Cognitive Search resource, to access Azure OpenAI resource. | You want to deploy a [web app](#using-the-web-app). |
+++ ## Document-level access control
This system message can help improve the quality of the response by specifying t
> [!NOTE] > The system message is used to modify how GPT assistant responds to a user question based on retrieved documentation. It does not affect the retrieval process. If you'd like to provide instructions for the retrieval process, it is better to include them in the questions.
-> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors such as objectivity, and avoiding controversial statements. Unexpected behavior may occur if the system message contradicts with these behaviors.
+> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors such as objectivity, and avoiding controversial statements. Unexpected behavior might occur if the system message contradicts with these behaviors.
### Maximum response
Set a limit on the number of tokens per model response. The upper limit for Azur
### Limit responses to your data
-This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
+This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model might more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
### Search options
Azure OpenAI on your data provides several search options you can use when you a
| *hybrid (vector + keyword)* | A hybrid of vector search and keyword search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Performs similarity search over vector fields using vector embeddings, while also supporting flexible query parsing and full text search over alphanumeric fields using term queries.| | *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic and keyword search for retrieval. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Leverages vector embeddings, language understanding and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
-The optimal search option can vary depending on your dataset and use-case. You may need to experiment with multiple options to determine which works best for your use-case.
+The optimal search option can vary depending on your dataset and use-case. You might need to experiment with multiple options to determine which works best for your use-case.
### Index field mapping
While Power Virtual Agents has features that leverage Azure OpenAI such as [gene
> [!NOTE] > Deploying to Power Virtual Agents from Azure OpenAI is only available to US regions.
-> Power Virtual Agents supports Azure Cognitive Search indexes with keyword or semantic search only. Other data sources and advanced features may not be supported.
+> Power Virtual Agents supports Azure Cognitive Search indexes with keyword or semantic search only. Other data sources and advanced features might not be supported.
#### Using the web app
When customizing the app, we recommend:
##### Important considerations -- Publishing creates an Azure App Service in your subscription. It may incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
+- Publishing creates an Azure App Service in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
- By default, the app will only be accessible to you. To add authentication (for example, restrict access to the app to members of your Azure tenant): 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
openai.api_version = "2023-05-15" # subject to change
## Keyword argument for model
-OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of [deployments](create-resource.md?pivots=web-portal#deploy-a-model) and uses the `deployment_id` keyword argument to describe which model deployment to use. Azure OpenAI also supports the use of `engine` interchangeably with `deployment_id`.
+OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of [deployments](create-resource.md?pivots=web-portal#deploy-a-model) and uses the `deployment_id` keyword argument to describe which model deployment to use. Azure OpenAI also supports using `engine` interchangeably with `deployment_id`. `deployment_id` corresponds to the custom name you chose for your model during deployment. By convention, our docs often show `deployment_id` values that match the underlying model name, but if you chose a deployment name that doesn't match the model name, you must use that deployment name when working with models in Azure OpenAI.
For OpenAI `engine` still works in most instances, but it's deprecated and `model` is preferred.
embedding = openai.Embedding.create(
```python completion = openai.Completion.create( prompt="<prompt>",
- deployment_id="text-davinci-003"
+ deployment_id="text-davinci-003" # This must match the custom deployment name you chose for your model.
#engine="text-davinci-003" ) chat_completion = openai.ChatCompletion.create( messages="<messages>",
- deployment_id="gpt-4"
+ deployment_id="gpt-4" # This must match the custom deployment name you chose for your model.
#engine="gpt-4" ) embedding = openai.Embedding.create( input="<input>",
- deployment_id="text-embedding-ada-002"
+ deployment_id="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
#engine="text-embedding-ada-002" ) ```
inputs = ["A", "B", "C"] #max array size=16
embedding = openai.Embedding.create( input=inputs,
- deployment_id="text-embedding-ada-002"
+ deployment_id="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
#engine="text-embedding-ada-002" ) ```
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Previously updated : 09/15/2023 Last updated : 10/05/2023 recommendations: false
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse | | ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | | ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.|
-| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
+| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. | | ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. | | ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. |
Output formatting adjusted for ease of reading, actual output is a single block
| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.| |```function_call```| | Optional | | Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) |
-|```functions``` | [`FunctionDefinition[]`](#functiondefinition) | Optional | | A list of functions the model may generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|
+|```functions``` | [`FunctionDefinition[]`](#functiondefinition) | Optional | | A list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|
### ChatMessage
A single, role-attributed message within a chat completion interaction.
|||| | content | string | The text associated with this message payload.| | function_call | [FunctionCall](#functioncall)| The name and arguments of a function that should be called, as generated by the model. |
-| name | string | The `name` of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.|
+| name | string | The `name` of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. Can contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.|
|role | [ChatRole](#chatrole) | The role associated with this message payload | ### ChatRole
The name and arguments of a function that should be called, as generated by the
| Name | Type | Description| ||||
-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
+| arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
| name | string | The name of the function to call.| ### FunctionDefinition
-The definition of a caller-specified function that chat completions may invoke in response to matching user input. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
+The definition of a caller-specified function that chat completions can invoke in response to matching user input. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
|Name | Type| Description| ||||
The following parameters can be used inside of the `parameters` field inside of
| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. | | `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100-token limit, which counts towards the overall token limit.| | `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control)
-| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Used for [vector search](./concepts/use-your-data.md#search-options). |
-| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Used for [vector search](./concepts/use-your-data.md#search-options). |
+| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
+| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
+| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models-1) starting in API versions `2023-06-01-preview` and later.|
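
To see how these fit together, here's a hypothetical Python sketch of a `parameters` payload for each vector search option. Only `queryType`, `embeddingEndpoint`, `embeddingKey`, and `embeddingDeploymentName` come from this reference; `endpoint`, `key`, and `indexName` are assumed data-source fields, and all values are placeholders.

```python
# Hypothetical sketch: field names outside the table above (endpoint,
# key, indexName) and all values are assumptions/placeholders.

# Option 1: call the embedding deployment over its public endpoint.
params_public_embedding = {
    "endpoint": "https://YOUR_SEARCH_NAME.search.windows.net",
    "key": "YOUR_SEARCH_KEY",
    "indexName": "YOUR_INDEX_NAME",
    "queryType": "vector",
    "embeddingEndpoint": "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15",
    "embeddingKey": "YOUR_API_KEY",
}

# Option 2: reference the deployment by name inside the same Azure OpenAI
# resource, which also works with private networks and private endpoints.
params_private_embedding = {
    "endpoint": "https://YOUR_SEARCH_NAME.search.windows.net",
    "key": "YOUR_SEARCH_KEY",
    "indexName": "YOUR_INDEX_NAME",
    "queryType": "vector",
    "embeddingDeploymentName": "YOUR_ADA_DEPLOYMENT_NAME",
}
```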
+
+### Start an ingestion job
+
+```console
+curl -i -X PUT https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs/JOB_NAME?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-H "searchServiceEndpoint: https://YOUR_AZURE_COGNITIVE_SEARCH_NAME.search.windows.net" \
+-H "searchServiceAdminKey: YOUR_SEARCH_SERVICE_ADMIN_KEY" \
+-H "storageConnectionString: YOUR_STORAGE_CONNECTION_STRING" \
+-H "storageContainer: YOUR_INPUT_CONTAINER" \
+-d '{ "dataRefreshIntervalInMinutes": 10 }'
+```
+
+### Example response
+
+```json
+{
+ "id": "test-1",
+ "dataRefreshIntervalInMinutes": 10,
+ "completionAction": "cleanUpAssets",
+ "status": "running",
+ "warnings": [],
+ "progress": {
+ "stageProgress": [
+ {
+ "name": "Preprocessing",
+ "totalItems": 100,
+ "processedItems": 100
+ },
+ {
+ "name": "Indexing",
+ "totalItems": 350,
+ "processedItems": 40
+ }
+ ]
+ }
+}
+```
+
+| Parameters | Type | Required? | Default | Description |
+||||||
+| `dataRefreshIntervalInMinutes` | string | Required | 0 | The data refresh interval in minutes. If you want to run a single ingestion job without a schedule, set this parameter to `0`. |
+| `completionAction` | string | Optional | `cleanUpAssets` | What should happen to the assets created during the ingestion process upon job completion. Valid values are `cleanUpAssets` or `keepAllAssets`. `keepAllAssets` leaves all the intermediate assets for users interested in reviewing the intermediate results, which can be helpful for debugging assets. `cleanUpAssets` removes the assets after job completion. |
+| `searchServiceEndpoint` | string | Required |null | The endpoint of the search resource in which the data will be ingested. |
+| `searchServiceAdminKey` | string | Optional | null | If provided, the key will be used to authenticate with the `searchServiceEndpoint`. If not provided, the system-assigned identity of the Azure OpenAI resource will be used. In this case, the system-assigned identity must have "Search Service Contributor" role assignment on the search resource. |
+| `storageConnectionString` | string | Required | null | The connection string for the storage account where the input data is located. An account key has to be provided in the connection string. It should look something like `DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>` |
+| `storageContainer` | string | Required | null | The name of the container where the input data is located. |
+| `embeddingEndpoint` | string | Optional | null | Not required if you use semantic or keyword-only search. Required if you use vector, hybrid, or hybrid + semantic search. |
+| `embeddingKey` | string | Optional | null | The key of the embedding endpoint. This is required if the embedding endpoint is not empty. |
++
+### List ingestion jobs
+
+```console
+curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs?api-version=2023-10-01-preview \
+-H "api-key: YOUR_API_KEY"
+```
+
+### Example response
+
+```json
+{
+ "value": [
+ {
+ "id": "test-1",
+ "dataRefreshIntervalInMinutes": 10,
+ "completionAction": "cleanUpAssets",
+ "status": "succeeded",
+ "warnings": []
+ },
+ {
+ "id": "test-2",
+ "dataRefreshIntervalInMinutes": 10,
+ "completionAction": "cleanUpAssets",
+ "status": "failed",
+ "error": {
+ "code": "BadRequest",
+ "message": "Could not execute skill because the Web Api request failed."
+ },
+ "warnings": []
+ }
+ ]
+}
+```
+
+### Get the status of an ingestion job
+
+```console
+curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs/YOUR_JOB_NAME?api-version=2023-10-01-preview \
+-H "api-key: YOUR_API_KEY"
+```
+
+#### Example response body
+
+```json
+{
+ "id": "test-1",
+ "dataRefreshIntervalInMinutes": 10,
+ "completionAction": "cleanUpAssets",
+ "status": "succeeded",
+ "warnings": []
+}
+```
## Image generation
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
In this tutorial you learn how to:
- The OpenAI Python library should be at least version: `0.28.1`. - [Jupyter Notebooks](https://jupyter.org/) - An Azure OpenAI resource in a [region where `gpt-35-turbo-0613` fine-tuning is available](../concepts/models.md). If you don't have a resource the process of creating one is documented in our resource [deployment guide](../how-to/create-resource.md).-- Necessary [Role-based access control permissions](../how-to/role-based-access-control.md). To perform all the actions described in this tutorial requires the equivalent of `Cognitive Services Contributor` + `Cognitive Services OpenAI Contributor` + `Cognitive Services Usages Reader` depending on how the permissions in your environment are defined.
+- Fine-tuning access requires the **Cognitive Services OpenAI Contributor** role.
+- If you don't already have access to view quota and deploy models in Azure OpenAI Studio, you need [additional permissions](../how-to/role-based-access-control.md).
+ > [!IMPORTANT] > We strongly recommend reviewing the [pricing information](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) for fine-tuning prior to beginning this tutorial to make sure you are comfortable with the associated costs. In testing, this tutorial resulted in one training hour billed, in addition to the costs that are associated with fine-tuning inference, and the hourly hosting costs of having a fine-tuned model deployed. Once you have completed the tutorial, you should delete your fine-tuned model deployment otherwise you will continue to incur the hourly hosting cost.
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
CoreDNS can also be used to configure stub domains.
## Hosts plugin
-All built-in plugins are supported, so the [CoreDNS hosts][coredns hosts] plugin is available to customize as well.
+All built-in plugins are supported, so the [CoreDNS hosts][coredns hosts] plugin is available to customize `/etc/hosts` as well.
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: coredns-custom # this is the name of the configmap you can overwrite with your changes
- namespace: kube-system
-data:
- test.override: | # you may select any name here, but it must end with the .override file extension
- hosts {
- 10.0.0.1 example1.org
- 10.0.0.2 example2.org
- 10.0.0.3 example3.org
- fallthrough
- }
-```
+1. Create a file named `corednsms.yaml` and paste the following example configuration. Make sure to update the IP addresses and hostnames with the values for your own environment.
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: coredns-custom # this is the name of the configmap you can overwrite with your changes
+ namespace: kube-system
+ data:
+ test.override: | # you may select any name here, but it must end with the .override file extension
+ hosts {
+ 10.0.0.1 example1.org
+ 10.0.0.2 example2.org
+ 10.0.0.3 example3.org
+ fallthrough
+ }
+ ```
+
+2. Create the ConfigMap using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f corednsms.yaml
+ ```
+
+3. To reload the ConfigMap and restart CoreDNS without downtime, perform a rolling restart using [`kubectl rollout restart`][kubectl-rollout].
+
+ ```console
+ kubectl -n kube-system rollout restart deployment coredns
+ ```
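+
+4. Optionally, verify that the custom entries resolve by running a one-off lookup from a disposable test pod. This is only a quick sketch; the hostname comes from the example ConfigMap above.
+
+    ```console
+    kubectl run dnstest --rm -it --image=busybox --restart=Never -- nslookup example1.org
+    ```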
## Troubleshooting
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
The following network and FQDN/application rules are required for an AKS cluster
* IP address dependencies are for non-HTTP/S traffic (both TCP and UDP traffic). * FQDN HTTP/HTTPS endpoints can be placed in your firewall device. * Wildcard HTTP/HTTPS endpoints are dependencies that can vary with your AKS cluster based on a number of qualifiers.
-* AKS uses an admission controller to inject the FQDN as an environment variable to all deployments under kube-system and gatekeeper-system. This ensures all system communication between nodes and API server uses the API server FQDN and not the API server IP.
-* If you have an app or solution that needs to talk to the API server, you must add an **additional** network rule to allow **TCP communication to port 443 of your API server's IP**.
+* AKS uses an admission controller to inject the FQDN as an environment variable into all deployments under kube-system and gatekeeper-system. This ensures that all system communication between nodes and the API server uses the API server FQDN instead of the API server IP. You can get the same behavior for your own pods, in any namespace, by adding an annotation named `kubernetes.azure.com/set-kube-service-host-fqdn` to the pod spec. If that annotation is present, AKS sets the KUBERNETES_SERVICE_HOST variable to the domain name of the API server instead of the in-cluster service IP. This is useful when cluster egress is through a layer 7 firewall.
+* If you have an app or solution that needs to talk to the API server, you must either add an **additional** network rule to allow **TCP communication to port 443 of your API server's IP**, **or**, if you have a layer 7 firewall configured to allow traffic to the API server's domain name, set `kubernetes.azure.com/set-kube-service-host-fqdn` in your pod specs (see the sketch after this list).
* On rare occasions, if there's a maintenance operation, your API server IP might change. Planned maintenance operations that can change the API server IP are always communicated in advance. * Under certain circumstances, it might happen that traffic towards "md-*.blob.storage.azure.net" is required. This dependency is due to some internal mechanisms of Azure Managed Disks. You might also want to use the Storage [service tag](../virtual-network/service-tags-overview.md).
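A sketch of opting a pod into the FQDN injection described above; the pod name and image are placeholders, and the annotation value of "true" is an assumption (the behavior only requires the annotation to be present):

```console
# Create a pod that carries the annotation, then confirm the injected value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
  annotations:
    kubernetes.azure.com/set-kube-service-host-fqdn: "true"
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF

# KUBERNETES_SERVICE_HOST should now be the API server FQDN, not an IP
kubectl exec fqdn-demo -- printenv KUBERNETES_SERVICE_HOST
```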
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-AKS now automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
+AKS automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
All of the following criteria must be met in order for the stop to occur:
You can also check past API usage by enabling [Container Insights][container-ins
### Bypass validation to ignore API changes > [!NOTE]
-> This method requires you to use the Azure CLI version 2.53 or `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend to removing them as soon as possible after the upgrade completes.
+> This method requires you to use the Azure CLI version 2.53 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command, specifying `enable-force-upgrade`, and setting the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
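A sketch of that command with placeholder cluster details and an example end date for the override window:

```console
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-force-upgrade \
  --upgrade-override-until 2023-12-31T13:00:00Z
```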
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Follow these guidelines when using API Management to reach a backend API that im
* **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validate-content-policy.md) can also buffer response content and shouldn't be used with APIs that implement SSE.
-* **Avoid logging request/response body for Azure Monitor and Application Insights** - You can configure API request logging for Azure Monitor or Application Insights using diagnostic settings. The diagnostic settings allow you to log the request/response body at various stages of the request execution. For APIs that implement SSE, this can cause unexpected buffering which can lead to problems. Diagnostic settings for Azure Monitor and Application Insights configured at the global/All APIs scope apply to all APIs in the service. You can override the settings for individual APIs as needed. For APIs that implement SSE, ensure you have disabled request/response body logging for Azure Monitor and Application Insights.
+* **Avoid logging request/response body for Azure Monitor, Application Insights, and Event Hubs** - You can configure API request logging for Azure Monitor or Application Insights using diagnostic settings. The diagnostic settings allow you to log the request/response body at various stages of the request execution. For APIs that implement SSE, this can cause unexpected buffering, which can lead to problems. Diagnostic settings for Azure Monitor and Application Insights configured at the global/All APIs scope apply to all APIs in the service. You can override the settings for individual APIs as needed. When logging to Event Hubs, you configure the scope and amount of context information for request/response logging by using the [log-to-eventhubs](api-management-howto-log-event-hubs.md#configure-log-to-eventhub-policy) policy. For APIs that implement SSE, ensure you have disabled request/response body logging for Azure Monitor, Application Insights, and Event Hubs.
* **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md).
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
Enabling Azure DDoS Protection for API Management is supported only for instance
> If the instance is hosted on the `stv1` platform, you must [migrate](compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) to the `stv2` platform. * An Azure DDoS Protection [plan](../ddos-protection/manage-ddos-protection.md) * The plan you select can be in the same, or different, subscription than the virtual network and the API Management instance. If the subscriptions differ, they must be associated to the same Microsoft Entra tenant.
- * You may use a plan created using either the Network DDoS protection SKU or IP DDoS Protection SKU (preview). See [Azure DDoS Protection SKU Comparison](../ddos-protection/ddos-protection-sku-comparison.md).
+ * You may use a plan created using either the Network DDoS protection SKU or IP DDoS Protection SKU. See [Azure DDoS Protection SKU Comparison](../ddos-protection/ddos-protection-sku-comparison.md).
> [!NOTE] > Azure DDoS Protection plans incur additional charges. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
tenant-id="tenant ID or URL (for example, "contoso.onmicrosoft.com") of the Azure Active Directory service" header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)" query-parameter-name="name of query parameter used to pass the token (alternative, use header-name or token-value attribute to specify token)"
- token-value="expression returning the token as a stripng (alternatively, use header-name or query-parameter attribute to specify token)"
+ token-value="expression returning the token as a string (alternatively, use header-name or query-parameter attribute to specify token)"
failed-validation-httpcode="HTTP status code to return on failure" failed-validation-error-message="error message to return on failure" output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token">
app-service Configure Gateway Required Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-gateway-required-vnet-integration.md
Title: Configure gateway-required virtual network integration for your app
description: Integrate your app in Azure App Service with Azure virtual networks using gateway-required virtual network integration. Previously updated : 01/20/2023 Last updated : 10/17/2023
When gateway-required virtual network integration is enabled, there's a required
If certificates or network information is changed, select **Sync Network**. When you select **Sync Network**, you cause a brief outage in connectivity between your app and your virtual network. Your app isn't restarted, but the loss of connectivity could cause your site to not function properly.
+### Certificate renewal
+
+The certificate used by gateway-required virtual network integration has a lifespan of eight years. If your apps use gateway-required virtual network integration for longer than that, you'll have to renew the certificate. You can check whether your certificate has expired, or has less than six months to expiry, by visiting the VNet integration page in the Azure portal.
++
+You can renew your certificate when the portal shows a near-expiry or expired certificate. To renew the certificate, you need to disconnect and reconnect the virtual network. Reconnecting will cause a brief outage in connectivity between your app and your virtual network. Your app isn't restarted, but the loss of connectivity could cause your site to not function properly.
+ ## Pricing details Three charges are related to the use of the gateway-required virtual network integration feature:
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
To customize PHP_INI_USER, PHP_INI_PERDIR, and PHP_INI_ALL directives for linux
4. Create a directory called "ini" (for example, mkdir ini). 5. Change the current working directory to the "ini" folder you just created.
-You need to create an "ini" file to add your settings to. In this example, we use "extensions.ini." There are no file editors such as Vi, Vim, or Nano so you'll use echo to add the settings to the file. Change the "upload_max_filesize" from 2M to 50M. Use the following command to add the setting and create an "extensions.ini" file if one doesn't already exist.
+You need to create an "ini" file to add your settings to. In this example, we use "extensions.ini". There are no file editors such as Vi, Vim, or Nano so you'll use echo to add the settings to the file. Change the "upload_max_filesize" from 2M to 50M. Use the following command to add the setting and create an "extensions.ini" file if one doesn't already exist.
``` /home/site/wwwroot/ini>echo "upload_max_filesize=50M" >> extensions.ini
Then, go to the Azure portal and add an Application Setting to scan the "ini" di
1. Go to the [Azure portal](https://portal.azure.com) and select your App Service Linux PHP application. 2. Select Application Settings for the app. 3. Under the Application settings section, select **+ Add new setting**.
-4. For the App Setting Name, enter "PHP_INI_SCAN_DIR" and for value, enter "/home/site/wwwroot/ini."
+4. For the App Setting Name, enter "PHP_INI_SCAN_DIR" and for value, enter "/home/site/wwwroot/ini".
5. Select the save button. > [!NOTE]
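If you prefer to script this setting, here's a rough Azure CLI equivalent (app and resource group names are placeholders):

```console
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings PHP_INI_SCAN_DIR="/home/site/wwwroot/ini"
```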
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
| North Europe | ✅ | ✅ | ✅ | | Norway East | ✅ | ✅ | ✅ | | Norway West | ✅ | | ✅ |
-| Poland Central | ✅ | | |
+| Poland Central | ✅ | ✅ | |
| Qatar Central | ✅** | ✅** | | | South Africa North | ✅ | ✅ | ✅ | | South Africa West | ✅ | | ✅ |
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/azure-diagnostic-monitoring.md
- Title: Azure diagnostic monitoring for Azure Attestation
-description: Azure diagnostic monitoring for Azure Attestation
---- Previously updated : 11/14/2022----
-# Set up diagnostics with Microsoft Azure Attestation
-
-This article helps you create and configure diagnostic settings to send platform metrics and platform logs to different destinations. [Platform logs](../azure-monitor/essentials/platform-logs-overview.md) in Azure, including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform that they depend on. [Platform metrics](../azure-monitor/essentials/data-platform-metrics.md) are collected by default and are stored in the Azure Monitor Metrics database.
-
-Before you begin, make sure you've [set up Azure Attestation with Azure PowerShell](quickstart-powershell.md).
-
-Azure Attestation is enabled in the diagnostic settings and can be used to monitor activity. Set up [Azure Monitoring](../azure-monitor/overview.md) for the service endpoint by using the following code.
-
-```powershell
-
- Connect-AzAccount
-
- Set-AzContext -Subscription "<Subscription id>"
-
- $attestationProviderName="<Name of the attestation provider>"
-
- $attestationResourceGroup="<Name of the resource Group>"
-
- $attestationProvider=Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroup
-
- $storageAccount=New-AzStorageAccount -ResourceGroupName $attestationProvider.ResourceGroupName -Name "<Storage Account Name>" -SkuName Standard_LRS -Location "<Location>"
-
- Set-AzDiagnosticSetting -ResourceId $attestationProvider.Id -StorageAccountId $storageAccount.Id -Enabled $true
-
-```
-
-Activity logs are in the **Containers** section of the storage account. For more information, see [Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/tutorial-resource-logs.md).
attestation Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/enable-logging.md
+
+ Title: Enable logging in Azure Attestation
+description: Enable logging in Azure Attestation
++++ Last updated : 10/16/2023++++
+# Enable logging in Azure Attestation
+
+After you create one or more Azure Attestation providers, you'll likely want to monitor how and when your resources are accessed, and by whom. You can do this by enabling logging for Microsoft Azure Attestation, which saves information in an Azure storage account and/or Log Analytics workspace that you provide.
+
+## What is logged
+
+- All authenticated REST API requests, including failed requests because of access permissions, system errors, or bad requests.
+- Operations on the attestation provider, including setting of attestation policy and attest operations.
+- Unauthenticated requests that result in a 401 response. Examples are requests that lack a bearer token, are malformed or expired, or have an invalid token.
+
+## Prerequisites
+
+To complete this tutorial, you will need an Azure Attestation provider. You can create a new provider using one of these methods:
+
+- [Create an attestation provider using the Azure CLI](quickstart-azure-cli.md)
+- [Create an attestation provider using Azure PowerShell](quickstart-powershell.md)
+- [Create an attestation provider using the Azure portal](quickstart-portal.md)
+
+You will also need a destination for your logs. This can be an existing or new Azure storage account and/or Log Analytics workspace. You can create a new Azure storage account using one of these methods:
+
+- [Create a storage account using the Azure CLI](../storage/common/storage-account-create.md)
+- [Create a storage account using Azure PowerShell](../storage/common/storage-account-create.md)
+- [Create a storage account using the Azure portal](../storage/common/storage-account-create.md)
+
+You can create a new Log Analytics workspace using one of these methods:
+
+- [Create a Log Analytics workspace using the Azure CLI](../azure-monitor/logs/quick-create-workspace.md)
+- [Create a Log Analytics workspace using Azure PowerShell](../azure-monitor/logs/quick-create-workspace.md)
+- [Create a Log Analytics workspace using the Azure portal](../azure-monitor/logs/quick-create-workspace.md)
+
+ ## Enable logging
+
+ You can enable logging for Azure Attestation by using Azure PowerShell or the Azure portal.
+
+ ### Using PowerShell with storage account as destination
+
+```powershell
+
+ Connect-AzAccount
+
+ Set-AzContext -Subscription "<Subscription id>"
+
+ $attestationProviderName="<Name of the attestation provider>"
+
+ $attestationResourceGroup="<Name of the resource Group>"
+
+ $attestationProvider=Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroup
+
+ $storageAccount=New-AzStorageAccount -ResourceGroupName $attestationProvider.ResourceGroupName -Name "<Storage Account Name>" -SkuName Standard_LRS -Location "<Location>"
+
+ Set-AzDiagnosticSetting -ResourceId $attestationProvider.Id -StorageAccountId $storageAccount.Id -Enabled $true
+
+```
+
+ When logging is enabled, logs are automatically created for you in the **Containers** section of the specified storage account. Expect some delay before the logs appear in the containers section.
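+
+ If you prefer the Azure CLI over PowerShell, a roughly equivalent diagnostic setting might look like the following sketch (the setting name and resource IDs are placeholders):
+
+```console
+az monitor diagnostic-settings create \
+  --name attestation-logs \
+  --resource "<attestation-provider-resource-id>" \
+  --storage-account "<storage-account-resource-id>" \
+  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
+```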
+
+ ### Using portal
+
+To configure diagnostic settings in the Azure portal, follow these steps:
+
+1. From the resource pane menu, select **Diagnostic settings**, and then select **Add diagnostic setting**.
+2. Under **Category groups**, select both **audit** and **allLogs**.
+3. If Azure Log Analytics is the destination, select **Send to Log Analytics workspace** and choose your subscription and workspace from the drop-down menus. You might also select **Archive to a storage account** and choose your subscription and storage account from the drop-down menus.
+4. When you have selected your desired options, select **Save**.
+
+## Access your logs from storage account
+
+When logging is enabled, up to three containers will be automatically created in your specified storage account: **insights-logs-operational, insights-logs-auditevent and insights-logs-notprocessed**. Expect some delay before the logs appear in the containers section.
+
+**insights-logs-notprocessed** includes logs related to malformed requests. **insights-logs-auditevent** was created to provide early access to logs for customers using VBS. To view the logs, you have to download blobs.
+
+### Using PowerShell
+
+With Azure PowerShell, use [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob). To list all the blobs in this container, enter:
+
+```powershell
+$operationalBlob = Get-AzStorageBlob -Container "insights-logs-operational" -Context $storageAccount.Context
+
+$operationalBlob.Name
+```
+
+From the output of the Azure PowerShell cmdlet, you can see that the names of the blobs are in the following format:
+
+```
+resourceId=<ARM resource ID>/y=<year>/m=<month>/d=<day of month>/h=<hour>/m=<minute>/filename.json
+```
+
+The date and time values use Coordinated Universal Time.
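+
+To download one of the listed blobs for inspection, here's a sketch using the Azure CLI (the account name is a placeholder, and the blob name comes from the listing above):
+
+```console
+az storage blob download \
+  --account-name <storage-account-name> \
+  --container-name insights-logs-operational \
+  --name "<blob-name-from-the-listing>" \
+  --file operational-log.json
+```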
+
+### Using portal
+
+To access logs in the Azure portal, follow these steps:
+
+1. Open your storage account and select **Containers** from the resource pane menu.
+2. Select **insights-logs-operational** and follow the navigation shown in the following screenshot to locate a JSON file and view the logs.
+
+[ ![Screenshot of logs in Azure portal experience.](./media/view-logs-inline.png) ](./media/view-logs-expanded.png#lightbox)
+
+## Use Azure Monitor logs
+
+You can use Azure Monitor logs to review activity in Azure Attestation resources. In Azure Monitor logs, you use log queries to analyze data and get the information you need. For more information, see [Monitoring Azure Attestation](monitor-logs.md).
+
+## Next steps
+
+- For information on how to interpret logs, see [Azure Attestation logging](view-logs.md).
+- To learn more about using Azure Monitor for analyzing Azure Attestation logs, see [Monitoring Azure Attestation](monitor-logs.md).
attestation Logs Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/logs-data-reference.md
+
+ Title: Azure Attestation monitoring data reference
+description: Azure Attestation monitoring data reference
++++ Last updated : 10/16/2023++++
+# Data reference of Azure Attestation logs
+
+The following section provides reference details for Azure Attestation logs. See [Monitor Azure Attestation](monitor-logs.md) for details on collecting and analyzing monitoring data for Azure Attestation.
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for Azure Attestation. For full details, see [Azure Attestation logging](view-logs.md).
+
+## Azure Monitor Logs tables
+
+This section lists all the Azure Monitor Logs tables relevant to Azure Attestation and available for query by Log Analytics.
+
+For a reference of all Azure Monitor Logs / Log Analytics tables, including information about what columns are available for Azure Attestation, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+## Diagnostics tables
+
+Azure Attestation uses the [Azure Activity](/azure/azure-monitor/reference/tables/azureactivity) and [Azure Attestation Diagnostics](/azure/azure-monitor/reference/tables/azureattestationdiagnostics) tables to store resource log information.
+
attestation Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/monitor-logs.md
+
+ Title: Monitor Azure Attestation
+description: Monitoring Azure Attestation
++++ Last updated : 10/16/2023++++
+# Monitor Azure Attestation
+
+This article describes the monitoring data generated by Azure Attestation and how to analyze it. Azure Attestation uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+
+## Monitoring data
+
+Azure Attestation collects the same kind of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md).
+
+See [Azure Attestation monitoring data reference](../attestation/logs-data-reference.md) for detailed information on the monitoring logs generated by Azure Attestation.
+
+## Collection and routing
+
+Activity logs are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. For more information, see [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md).
+
+To create a diagnostic setting for Azure Attestation, seeΓÇ»[Azure Attestation logging](../attestation/enable-logging.md).
+
+## Analyze logs using log analytics
+
+Log Analytics is a tool in the Azure portal that's used to edit and run log queries against data in the Azure Monitor Logs store. To run complex queries in Log Analytics, select a Log Analytics workspace as one of the destinations when you create the diagnostic setting.
+
+Once the diagnostic setting is created, selecting **Logs** from the Azure Monitor menu opens Log Analytics with the query scope set to the current attestation provider. This means that log queries only include data from that resource. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+
+Here are some questions that log queries can help you answer as you monitor your Azure Attestation resources. The queries use the [Log Analytics query language](../azure-monitor/logs/log-query-overview.md).
+
+- Are there any authorization failures?
+- Are there any policy configuration failures?
+- Are there any slow requests?
+- Have there been any changes to attestation policy?
+- Who is calling this attestation provider?
+- How active has this Attestation provider been?
+
+## Next steps
+
+- For information on how to interpret logs, see [Azure Attestation logging](../attestation/view-logs.md).
attestation View Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/view-logs.md
Logging information will be available up to 10 minutes after the operation occur
## Interpret your Azure Attestation logs
-When logging is enabled, up to three containers may be automatically created for you in your specified storage account: **insights-logs-auditevent, insights-logs-operational, insights-logs-notprocessed**. It is recommended to only use **insights-logs-operational** and **insights-logs-notprocessed**. **insights-logs-auditevent** was created to provide early access to logs for customers using VBS. Future enhancements to logging will occur in the **insights-logs-operational** and **insights-logs-notprocessed**.
+When logging is enabled, up to three containers might be automatically created for you in your specified storage account: **insights-logs-auditevent, insights-logs-operational, insights-logs-notprocessed**. It is recommended to only use **insights-logs-operational** and **insights-logs-notprocessed**. **insights-logs-auditevent** was created to provide early access to logs for customers using VBS. Future enhancements to logging will occur in the **insights-logs-operational** and **insights-logs-notprocessed**.
**Insights-logs-operational** contains generic information across all TEE types.
The properties contain additional Azure attestation specific context:
| infoDataReceived | Information about the request received from the client. Includes some HTTP headers, the number of headers received, the content type and content length | ## Next steps-- [How to enable Microsoft Azure Attestation logging ](azure-diagnostic-monitoring.md)
+- [How to enable Microsoft Azure Attestation logging](enable-logging.md)
azure-arc Azure Data Studio Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/azure-data-studio-dashboards.md
# Azure Data Studio dashboards
-[Azure Data Studio](/sql/azure-data-studio/what-is) provides an experience similar to the Azure portal for viewing information about your Azure Arc resources. These views are called **dashboards** and have a layout and options similar to what you could see about a given resource in the Azure portal, but give you the flexibility of seeing that information locally in your environment in cases where you don't have a connection available to Azure.
+[Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) provides an experience similar to the Azure portal for viewing information about your Azure Arc resources. These views are called **dashboards** and have a layout and options similar to what you could see about a given resource in the Azure portal, but give you the flexibility of seeing that information locally in your environment in cases where you don't have a connection available to Azure.
## Connect to a data controller ### Prerequisites -- Download [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
+- Download [Azure Data Studio](/azure-data-studio/download-azure-data-studio)
- Azure Arc extension is installed ### Connect
azure-arc Connect Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md
GO
## Connect to Azure Arc-enabled SQL Managed Instance
-From your domain joined Windows-based client machine or a Linux-based domain aware machine, you can use `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/sql/azure-data-studio/download-azure-data-studio) to connect to the Azure Arc-enabled SQL Managed Instance using AD authentication.
+From your domain joined Windows-based client machine or a Linux-based domain aware machine, you can use `sqlcmd` utility, or open [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio (ADS)](/azure-data-studio/download-azure-data-studio) to connect to the Azure Arc-enabled SQL Managed Instance using AD authentication.
A domain-aware Linux-based machine is one where you are able to use Kerberos authentication using kinit. Such machine should have /etc/krb5.conf file set to point to the Active Directory domain (realm) being used. It should also have /etc/resolv.conf file set such that one can run DNS lookups against the Active Directory domain.
azure-arc Create Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server.md
For example:
} ```
-You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL server from your favorite tool: [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/) psql, pgAdmin, etc.
+You can use the PostgreSQL Instance endpoint to connect to the PostgreSQL server from your favorite tool: [Azure Data Studio](/azure-data-studio/download-azure-data-studio), [pgcli](https://www.pgcli.com/) psql, pgAdmin, etc.
[!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]
azure-arc Install Client Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/install-client-tools.md
The following table lists common tools required for creating and managing Azure
||||| | Azure CLI (`az`)<sup>1</sup> | Yes | Modern command-line interface for managing Azure services. Used to manage Azure services in general and also specifically Azure Arc-enabled data services using the CLI or in scripts for both indirectly connected mode (available now) and directly connected mode (available soon). ([More info](/cli/azure/)). | [Install](/cli/azure/install-azure-cli) | | `arcdata` extension for Azure (`az`) CLI | Yes | Command-line tool for managing Azure Arc-enabled data services as an extension to the Azure CLI (`az`) | [Install](install-arcdata-extension.md) |
-| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostrgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/sql/azure-data-studio/download-azure-data-studio) |
+| Azure Data Studio | Yes | Rich experience tool for connecting to and querying a variety of databases including Azure SQL, SQL Server, PostrgreSQL, and MySQL. Extensions to Azure Data Studio provide an administration experience for Azure Arc-enabled data services. | [Install](/azure-data-studio/download-azure-data-studio) |
| Azure Arc extension for Azure Data Studio | Yes | Extension for Azure Data Studio that provides a management experience for Azure Arc-enabled data services.| Install from the extensions gallery in Azure Data Studio.| | PostgreSQL extension in Azure Data Studio | No | PostgreSQL extension for Azure Data Studio that provides management capabilities for PostgreSQL. | <!--{need link} [Install](../azure-data-studio/data-virtualization-extension.md) --> Install from extensions gallery in Azure Data Studio.| | Kubernetes CLI (kubectl)<sup>2</sup> | Yes | Command-line tool for managing the Kubernetes cluster ([More info](https://kubernetes.io/docs/tasks/tools/install-kubectl/)). | [Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) \| [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) \| [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/) |
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
Azure Arc-enabled SQL Managed Instance supports various data tools that can help
| | | | | Azure portal | Yes | | Azure CLI | Yes |
-| [Azure Data Studio](/sql/azure-data-studio/what-is) | Yes |
+| [Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) | Yes |
| Azure PowerShell | No | | [BACPAC file (export)](/sql/relational-databases/data-tier-applications/export-a-data-tier-application) | Yes | | [BACPAC file (import)](/sql/relational-databases/data-tier-applications/import-a-bacpac-file-to-create-a-new-user-database) | Yes |
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For additional information about service tiers, see [High Availability with Azur
### User experience improvements
-The following improvements are available in [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+The following improvements are available in [Azure Data Studio](/azure-data-studio/download-azure-data-studio).
- Azure Arc and Azure CLI extensions now generally available. - Changed edit commands for SQL Managed Instance for Azure Arc dashboard to use `update`, reflecting Azure CLI changes. This works in indirect or direct mode.
This release is published November 3, 2021
#### Azure Data Studio
-Install or update to the latest version of [Arc extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-arc-extension).
+Install or update to the latest version of [Arc extension for Azure Data Studio](/azure-data-studio/extensions/azure-arc-extension).
#### Azure (`az`) CLI
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Deploying a new resource bridge consists of downloading the appliance image (~3.
Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime may happen during the handoff between the old Arc resource bridge to the new Arc resource bridge. Additional downtime may occur if prerequisites are not met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's ability to communicate.
+Upgrading takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach the minimum n-3 supported version. You can find your current appliance version on the Azure resource for your Arc resource bridge.
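+For example, a quick way to inspect the appliance resource with the Azure CLI (names are placeholders; the version is included in the command output):
+
+```console
+az arcappliance show --resource-group <resource-group> --name <appliance-name>
+```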
+ There are two ways to upgrade Arc resource bridge: cloud-managed upgrades managed by Microsoft, or manual upgrades where Azure CLI commands are performed by an admin. ## Cloud-managed upgrade
For example, to upgrade a resource bridge on Azure Stack HCI, run: `az arcapplia
## Private cloud providers
-Partner products that use Arc resource bridge may choose to handle upgrades differently, including enabling cloud-managed upgrade by default. This article will be updated to reflect any such changes.
+Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades.
+
+For Arc-enabled VMware, both cloud-managed upgrade and manual upgrade are supported.
-[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until Arc resource bridge version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For additional upgrades afterwards, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/faqs-arc-enabled-vms).
+[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until Arc resource bridge version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For additional upgrades afterwards, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+
+For Arc-enabled SCVMM, the upgrade feature isn't available yet. Review the steps for [performing the disaster recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
## Version releases
-The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. An updated extension is typically released on a monthly cadence at the end of the month. For detailed release info, refer to the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, refer to the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
## Notification and upgrade availability
If an Arc resource bridge is unable to be upgraded to a supported version, you m
- Learn about [Arc resource bridge maintenance operations](maintenance.md). - Learn about [troubleshooting Arc resource bridge](troubleshoot-resource-bridge.md). +
azure-fluid-relay Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/reference/service-limits.md
Operations are incremental updates sent over the websocket connection. The size
## Fluid summaries
-Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. More info [here](https://fluidframework.com/docs/concepts/summarizer).
+Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. If the size of the document grows above 95 MB, subsequent client load or join requests will fail. For more information, see [Fluid Framework Summarizer](https://fluidframework.com/docs/concepts/summarizer).
## Signals
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
In addition to the required ConnectionStringSetting [application setting](./func
## Set up change tracking (required)
-Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [Visual Studio Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
+Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [Visual Studio Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
1. Enable change tracking on the SQL database, substituting `your database name` with the name of the database where the table to be monitored is located:
azure-functions Functions Bindings Dapr Input Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-secret.md
To use the Dapr secret input binding, start by setting up a Dapr secret store co
To use the `daprSecret` in **Python v2**, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](./create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Input State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-state.md
To use the Dapr state input binding, start by setting up a Dapr state store comp
To use the `daprState` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](./create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Output Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-invoke.md
To use the Dapr service invocation output binding, learn more about [how to use
To use the `daprInvoke` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Output Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-publish.md
To use the Dapr publish output binding, start by setting up a Dapr pub/sub compo
To use the `daprPublish` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Output State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-state.md
To use the Dapr state output binding, start by setting up a Dapr state store com
To use the `daprState` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output.md
To use the Dapr output binding, start by setting up a Dapr output binding compon
To use the `daprBinding` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Trigger Svc Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-svc-invoke.md
To use a Dapr Service Invocation trigger, learn more about which components to u
To use the `daprServiceInvocationTrigger` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Trigger Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-topic.md
To use a Dapr Topic trigger, start by setting up a Dapr pub/sub component. You c
To use the `daprTopicTrigger` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Bindings Dapr Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger.md
To use the Dapr Input Binding trigger, start by setting up a Dapr input binding
To use the `daprBindingTrigger` in Python v2, set up your project with the correct dependencies.
-1. [Create and activate a virtual environment](https://learn.microsoft.com/azure/azure-functions/create-first-function-cli-python?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. [Create and activate a virtual environment](create-first-function-cli-python.md?tabs=macos%2Cbash%2Cazure-cli&pivots=python-mode-decorators#create-venv).
+1. In your `requirements.txt` file, add the following line:
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json-v1.md
Configuration setting for the [SendGrind output binding](functions-bindings-send
Configuration setting for [Service Bus triggers and bindings](functions-bindings-service-bus.md). ```json
-{ "extensions":
+{
"serviceBus": { "maxConcurrentCalls": 16, "prefetchCount": 100,
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
||| |Number of violations|The number of violations that trigger the alert.| |Evaluation period|The time period within which the number of violations occur. |
- |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data. If the query requires more data than the alert evaluation you can change the time range manually. If the query contains **ago** command, it will be changed automatically to 2 days (48 hours).|<br>
+ |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data. If the query requires more data than the alert evaluation you can change the time range manually. If the query contains **ago** command, it will be changed automatically to 2 days (48 hours).|
> [!NOTE] > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
[Azure Monitor](../overview.md) uses several IP addresses. Azure Monitor is made up of core platform metrics and logs in addition to Log Analytics and Application Insights. You might need to know IP addresses if the app or infrastructure that you're monitoring is hosted behind a firewall. > [!NOTE] > Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhook action groups, which also require inbound firewall rules.
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
# Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro > [!NOTE]
-> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) and provide [migration guidance](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=aspnetcore).
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](./opentelemetry-enable.md?tabs=python) and provide [migration guidance](./opentelemetry-python-opencensus-migrate.md?tabs=aspnetcore).
Follow these steps to migrate Python applications to the [Azure Monitor](../overview.md) [Application Insights](./app-insights-overview.md) [OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python).
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-multiple-workspaces.md
You can create multiple Data Collection Rules that point to the same Data Collec
} ``` -- Add an additional DCRA with the relevant Data Collection Rule. You *must* replace `<dcraName>`:
+- Add an additional Data Collection Rule Association (DCRA) with the relevant Data Collection Rule (DCR). This associates the DCR with the cluster. You must replace `<dcraName>`:
```json { "type": "Microsoft.Resources/deployments",
relabel_configs:
The source label is `__address__` because this label will always exist, so this relabel config will always be applied. The target label will always be `microsoft_metrics_account` and its value should be replaced with the corresponding label value for the workspace. + ### Example If you want to configure three different jobs to send the metrics to three different workspaces, then include the following in each data collection rule:
scrape_configs:
- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). - [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md).++
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
To set the rule for a specific webapp app, use *insights-activity-logs/ResourceI
## [CLI](#tab/cli)
-Use the [az storage account management-policy create](https://docs.microsoft.com/cli/azure/storage/account/management-policy?view=azure-cli-latest#az-storage-account-management-policy-create) command to create a lifecycle management policy. You must still set the retention in your diagnostic settings to *0*. See the Azure portal section above for more information.
+Use the [az storage account management-policy create](/cli/azure/storage/account/management-policy#az-storage-account-management-policy-create) command to create a lifecycle management policy. You must still set the retention in your diagnostic settings to *0*. See the Azure portal section above for more information.
Use the [az storage account management-policy create](https://docs.microsoft.com
az storage account management-policy create --account-name <storage account name> --resource-group <resource group name> --policy @<policy definition file> ```
-The sample policy definition file below sets the retention for all blobs in the container *insights-activity-logs* for the given subscription ID. For more information, see [Lifecycle management policy definition](https://learn.microsoft.com/azure/storage/blobs/lifecycle-management-overview#lifecycle-management-policy-definition).
+The sample policy definition file below sets the retention for all blobs in the container *insights-activity-logs* for the given subscription ID. For more information, see [Lifecycle management policy definition](../../storage/blobs/lifecycle-management-overview.md#lifecycle-management-policy-definition).
```json {
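A minimal sketch of such a policy file follows; the 90-day retention, rule name, and subscription placeholder are illustrative assumptions:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-activity-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "appendBlob" ],
          "prefixMatch": [ "insights-activity-logs/ResourceId=/SUBSCRIPTIONS/<your-subscription-id>" ]
        }
      }
    }
  ]
}
```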
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
na Previously updated : 05/16/2023 Last updated : 10/17/2023
Two settings are available for network features:
* You can create Basic volumes from Basic volume snapshots and Standard volumes from Standard volume snapshots. Creating a Basic volume from a Standard volume snapshot isn't supported. Creating a Standard volume from a Basic volume snapshot isn't supported.
-* When you restore a backup to a new volume, you can configure the new volume with Basic or Standard network features.
+* When you restore a backup to a new volume, you can configure the new volume with Basic or Standard network features.
+
+* When you change the network features option of existing volumes from Basic to Standard network features, access to existing Basic networking volumes might be lost if your UDR or NSG implementations prevent the Basic networking volumes from connecting to DNS and domain controllers. You might also lose the ability to update information, such as the site name, in the Active Directory connector if all volumes can't communicate with DNS and domain controllers. For guidance about UDRs and NSGs, see [Configure network features for an Azure NetApp Files volume](azure-netapp-files-network-topologies.md#udrs-and-nsgs).
## <a name="set-the-network-features-option"></a>Set network features option during volume creation
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
For the rule about hardcoded environment URLs, you can customize which URLs are
"datalake.azure.net", "azuredatalakestore.net", "azuredatalakeanalytics.net",
- "vault.azure.net",
"api.loganalytics.io", "api.loganalytics.iov1", "asazure.windows.net",
azure-resource-manager Bicep Functions Parameters File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-parameters-file.md
using './main.bicep'
param securePassword = getSecret('exampleSubscription', 'exampleResourceGroup', 'exampleKeyVault', 'exampleSecretPassword', 'exampleSecretVersion') ```
-## readEnvironmentVariable()
+## readEnvironmentVariable
`readEnvironmentVariable(variableName, [defaultValue])`
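For example, a `.bicepparam` file might read a value from the deployment agent's environment; the variable and parameter names here are illustrative:

```bicep
using './main.bicep'

// Reads DEPLOY_ENV from the environment; falls back to 'dev' if the variable isn't set.
param environmentName = readEnvironmentVariable('DEPLOY_ENV', 'dev')
```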
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep
description: Use loops to iterate over collections in Bicep Previously updated : 08/07/2023 Last updated : 10/17/2023 # Iterative loops in Bicep
Note in the preceding ARM JSON template, `languageVersion` must be set to `1.10-
## Next steps -- To set dependencies on resources that are created in a loop, see [Resource dependencies](resource-dependencies.md).
+- To learn about creating Bicep files, see [file](./file.md).
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
Title: Move Azure App Service resources across resource groups or subscriptions
description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 08/17/2023 Last updated : 10/17/2023 # Move App Service resources to a new resource group or subscription
When you move a Web App across subscriptions, the following guidance applies:
- If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment. - You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). - App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move.
+- App Service apps with virtual network integration cannot be moved. Remove the virtual network integration and reconnect it after the move.
- App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section. ## Find original resource group
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | azureFirewalls | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End with alphanumeric or underscore. | > | bastionHosts | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | connections | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
+> | dnsForwardingRuleset | resource group | 1-80 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. End alphanumeric. |
+> | dnsResolvers | resource group | 1-80 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. End alphanumeric. |
+> | dnsResolvers / inboundEndpoints | resource group | 1-80 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. End alphanumeric. |
+> | dnsResolvers / outboundEndpoints | resource group | 1-80 | Alphanumerics, underscores, and hyphens.<br><br>Start with alphanumeric. End alphanumeric. |
> | dnsZones | resource group | 1-63 characters<br><br>2 to 34 labels<br><br>Each label is a set of characters separated by a period. For example, **contoso.com** has 2 labels. | Each label can contain alphanumerics, underscores, and hyphens.<br><br>Each label is separated by a period. | > | expressRouteCircuits | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. | > | firewallPolicies | resource group | 1-80 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. End alphanumeric or underscore. |
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
After you create a Shared Private Endpoint, you can add a custom certificate as
You don't have to explicitly allow SignalR Service IP addresses in key vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md).
+## Certificate rotation
+
+If you don't specify a secret version when creating the custom certificate, Azure SignalR Service periodically checks for the latest version in Key Vault. When a new version is observed, it's applied automatically, usually within an hour.
+
+Alternatively, you can pin the custom certificate to a specific secret version in Key Vault. When you need to apply a new certificate, edit the secret version and then update the custom certificate proactively.
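+For example, you can list the versions of the certificate's secret to decide whether to pin one; the vault and secret names below are illustrative placeholders:
+
+```azurecli
+az keyvault secret list-versions --vault-name <your-vault> --name <your-cert-secret>
+```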
+ ## Cleanup If you don't plan to use the resources you've created in this article, you can delete the Resource Group.
If you don't plan to use the resources you've created in this article, you can d
+ [How to enable managed identity for Azure SignalR Service](howto-use-managed-identity.md) + [Managed identities for Azure SignalR Service](./howto-use-managed-identity.md) + [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
-+ [What is Azure DNS](../dns/dns-overview.md)
++ [What is Azure DNS](../dns/dns-overview.md)
azure-sql-edge Configure Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure-replication.md
You can configure an instance of Azure SQL Edge as the push subscriber for one-w
The following requirements and best practices are important to understand as you configure replication: -- You can configure replication by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms). You can also do so by running Transact-SQL statements on the publisher, by using either SQL Server Management Studio or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- You can configure replication by using [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms). You can also do so by running Transact-SQL statements on the publisher, by using either SQL Server Management Studio or [Azure Data Studio](/azure-data-studio/download-azure-data-studio).
- To replicate to an instance of Azure SQL Edge, you must use SQL Server authentication to sign in. - Replicated tables must have a primary key. - A single publication on SQL Server can support both Azure SQL Edge and SQL Server (on-premises and SQL Server in an Azure virtual machine) subscribers.
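For instance, the primary-key requirement above might be satisfied by a table like this purely illustrative one:

```sql
-- Illustrative table definition; the explicit primary key is required for replication.
CREATE TABLE dbo.SensorReadings
(
    ReadingId INT IDENTITY(1, 1) NOT NULL,
    SensorId INT NOT NULL,
    ReadingValue FLOAT NOT NULL,
    ReadingTime DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_SensorReadings PRIMARY KEY (ReadingId)
);
```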
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md
You can connect to an instance of Azure SQL Edge instance from any of these comm
- [sqlcmd](/sql/linux/sql-server-linux-setup-tools): **sqlcmd** client tools are already included in the container image of Azure SQL Edge. If you attach to a running container with an interactive bash shell, you can run the tools locally. SQL client tools *aren't* available on the ARM64 platform. - [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms)-- [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
+- [Azure Data Studio](/azure-data-studio/download-azure-data-studio)
- [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode) To connect to an Azure SQL Edge Database Engine from a network machine, you need the following:
To connect to an instance of Azure SQL Edge by using SQL Server Management Studi
To connect to an instance of Azure SQL Edge by using Visual Studio Code on a Windows, macOS or Linux machine, see [Visual Studio Code](/sql/visual-studio-code/sql-server-develop-use-vscode).
-To connect to an instance of Azure SQL Edge by using Azure Data Studio on a Windows, macOS or Linux machine, see [Azure Data Studio](/sql/azure-data-studio/quickstart-sql-server).
+To connect to an instance of Azure SQL Edge by using Azure Data Studio on a Windows, macOS or Linux machine, see [Azure Data Studio](/azure-data-studio/quickstart-sql-server).
## Next steps
azure-sql-edge Deploy Dacpac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-dacpac.md
SQL Database DACPAC and BACPAC packages can be deployed to SQL Edge using the `M
To deploy (or import) a SQL Database DAC package `(*.dacpac)` or a BACPAC file `(*.bacpac)` using Azure Blob storage and a zip file, follow these steps. 1. Create/extract a DAC package or export a BACPAC file using one of the following mechanisms.
- - Use [SQL Database Project Extension - Azure Data Studio](/sql/azure-data-studio/extensions/sql-database-project-extension-getting-started) to [create a new database project or export an existing database](/sql/azure-data-studio/extensions/sql-database-project-extension-getting-started)
+ - Use [SQL Database Project Extension - Azure Data Studio](/azure-data-studio/extensions/sql-database-project-extension-getting-started) to [create a new database project or export an existing database](/azure-data-studio/extensions/sql-database-project-extension-getting-started)
- Create or extract a SQL Database DAC package. See [Extracting a DAC from a database](/sql/relational-databases/data-tier-applications/extract-a-dac-from-a-database/) for information on how to generate a DAC package for an existing SQL Server database. - Exporting a deployed DAC package or a database. See [Export a Data-tier Application](/sql/relational-databases/data-tier-applications/export-a-data-tier-application/) for information on how to generate a BACPAC file for an existing SQL Server database.
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md
This quickstart is based on **scikit-learn** and uses the [Boston Housing datase
- If you're using Azure SQL Edge, and you haven't deployed an Azure SQL Edge module, follow the steps of [deploy SQL Edge using the Azure portal](deploy-portal.md). -- Install [Azure Data Studio](/sql/azure-data-studio/download).
+- Install [Azure Data Studio](/azure-data-studio/download-azure-data-studio).
- Install Python packages needed for this quickstart:
- 1. Open [New Notebook](/sql/azure-data-studio/sql-notebooks) connected to the Python 3 Kernel.
+ 1. Open [New Notebook](/azure-data-studio/notebooks/sql-kernel) connected to the Python 3 Kernel.
1. Select **Manage Packages** 1. In the **Installed** tab, look for the following Python packages in the list of installed packages. If any of these packages aren't installed, select the **Add New** tab, search for the package, and select **Install**. - **scikit-learn**
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md
Azure SQL Edge makes developing and maintaining applications easier and more pro
- [The Azure portal](https://portal.azure.com/) - A web-based application for managing all Azure services. - [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms/) - A free, downloadable client application for managing any SQL infrastructure, from SQL Server to SQL Database. - [SQL Server Data Tools in Visual Studio](/sql/ssdt/download-sql-server-data-tools-ssdt/) - A free, downloadable client application for developing SQL Server relational databases, SQL databases, Integration Services packages, Analysis Services data models, and Reporting Services reports.-- [Azure Data Studio](/sql/azure-data-studio/what-is/) - A free, downloadable, cross platform database tool for data professional using the Microsoft family of on-premises and cloud data platforms on Windows, macOS, and Linux.
+- [Azure Data Studio](/azure-data-studio/what-is-azure-data-studio) - A free, downloadable, cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, macOS, and Linux.
- [Visual Studio Code](https://code.visualstudio.com/docs) - A free, downloadable, open-source code editor for Windows, macOS, and Linux. It supports extensions, including the [mssql extension](https://aka.ms/mssql-marketplace) for querying Microsoft SQL Server, Azure SQL Database, and Azure Synapse Analytics. ## Next steps
azure-sql-edge Tutorial Deploy Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
In this three-part tutorial, you'll create a machine learning model to predict i
- Azure IoT Edge tools - .NET core cross-platform development - Container development tools
-1. Install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio/)
-1. Open Azure Data Studio and configure Python for notebooks. For details, see [Configure Python for Notebooks](/sql/azure-data-studio/sql-notebooks#configure-python-for-notebooks). This step can take several minutes.
+1. Install [Azure Data Studio](/azure-data-studio/download-azure-data-studio/)
+1. Open Azure Data Studio and configure Python for notebooks. For details, see [Configure Python for Notebooks](/azure-data-studio/notebooks/notebooks-python-kernel). This step can take several minutes.
1. Install the latest version of [Azure PowerShell](https://github.com/Azure/azure-powershell/releases/tag/v3.5.0-February2020). The following scripts require that AZ PowerShell be the latest version (3.5.0, Feb 2020). 1. Set up the environment to debug, run, and test the IoT Edge solution by installing [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). 1. Install Docker.
azure-sql-edge Tutorial Sync Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-sync-data-factory.md
This tutorial shows you how to use Azure Data Factory to incrementally sync data
If you haven't already created a database or table in your Azure SQL Edge deployment, use one of these methods to create one: -- Use [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms/) or [Azure Data Studio](/sql/azure-data-studio/download/) to connect to SQL Edge. Run a SQL script to create the database and table.
+- Use [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms/) or [Azure Data Studio](/azure-data-studio/download-azure-data-studio/) to connect to SQL Edge. Run a SQL script to create the database and table.
- Create a database and table by using [sqlcmd](/sql/tools/sqlcmd-utility/) by directly connecting to the SQL Edge module. For more information, see [Connect to the Database Engine by using sqlcmd](/sql/ssms/scripting/sqlcmd-connect-to-the-database-engine/). - Use SQLPackage.exe to deploy a DAC package file to the SQL Edge container. You can automate this process by specifying the SqlPackage file URI as part of the module's desired properties configuration. You can also directly use the SqlPackage.exe client tool to deploy a DAC package to SQL Edge.
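As a sketch of the sqlcmd option, creating a database and a table directly against the SQL Edge module might look like this; the host, credentials, and schema are illustrative placeholders:

```bash
# Connect to the SQL Edge module and create a database plus a keyed table.
sqlcmd -S <sql-edge-host>,1433 -U sa -P '<your-password>' \
  -Q "CREATE DATABASE EdgeSync; CREATE TABLE EdgeSync.dbo.Telemetry (Id INT PRIMARY KEY, UpdatedAt DATETIME2 NOT NULL);"
```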
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **Can a single Azure NetApp Files datastore be added to multiple clusters within the same Azure VMware Solution SDDC?**
+ Yes, you can select multiple clusters at the time of creating the datastore. Additional clusters may be added or removed after the initial creation as well.
+
+- **Can a single Azure NetApp Files datastore be added to multiple clusters within different Azure VMware Solution SDDCs?**
+ Yes, you can connect an Azure NetApp Files volume as a datastore to multiple clusters in different SDDCs. Each SDDC will need connectivity via the ExpressRoute gateway in the Azure NetApp Files virtual network.
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
This guide covers the following replication scenarios:
- **Enable Quiescence:** Pauses the VM to ensure a consistent copy is synced to the remote site.
- - **Destination Storage:** Remote datastore for the protected VMs, and in an Azure VMware Solution private cloud, which should be the vSAN datastore.
+ - **Destination Storage:** Remote datastore for the protected VMs, and in an Azure VMware Solution private cloud, which can be a vSAN datastore or an [Azure NetApp Files datastore](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md).
- **Compute Container:** Remote vSphere Cluster or Resource Pool.
azure-vmware Migrate Sql Server Always On Availability Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-availability-group.md
Title: Migrate Microsoft SQL Server Always On cluster to Azure VMware Solution
-description: Learn how to migrate Microsoft SQL Server Always On cluster to Azure VMware Solution.
+ Title: Migrate Microsoft SQL Server Always On Availability Group to Azure VMware Solution
+description: Learn how to migrate Microsoft SQL Server Always On Availability Group to Azure VMware Solution.
azure-vmware Migrate Sql Server Standalone Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md
In both cases, consider the size and criticality of the database being migrated.
For this how-to procedure, we have validated VMware HCX vMotion. VMware HCX Cold Migration is also valid, but it requires a longer downtime period.
-This scenario was validated using the following editions and configurations:
--- Microsoft SQL Server (2019 and 2022) -- Windows Server (2019 and 2022) Data Center edition -- Windows Server and SQL Server were configured following best practices and recommendations from Microsoft and VMware. -- The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.- :::image type="content" source="media/sql-server-hybrid-benefit/migrated-sql-standalone-cluster.png" alt-text="Diagram showing the architecture of Standalone SQL Server for Azure VMware Solution." border="false" lightbox="media/sql-server-hybrid-benefit/migrated-sql-standalone-cluster.png"::: ## Prerequisites
For production environments, or workloads with large database sizes or where the
Further downtime considerations are discussed in the next section.
+This scenario was validated using the following editions and configurations:
+
+- Microsoft SQL Server (2019 and 2022)
+- Windows Server (2019 and 2022) Data Center edition
+- Windows Server and SQL Server were configured following best practices and recommendations from Microsoft and VMware.
+- The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
++ ## Downtime considerations Downtime during a migration depends on the size of the database to be migrated and the speed of the private network connection to Azure cloud.
azure-web-pubsub Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-custom-domain.md
After you create a shared private endpoint, you can create a custom certificate
You don't have to explicitly allow Azure Web PubSub Service IPs in Key Vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md).
+## Certificate rotation
+
+If you don't specify a secret version when creating the custom certificate, Azure Web PubSub Service periodically checks for the latest version in Key Vault. When a new version is observed, it's applied automatically, usually within an hour.
+
+Alternatively, you can pin the custom certificate to a specific secret version in Key Vault. When you need to apply a new certificate, edit the secret version and then update the custom certificate proactively.
+ ## Next steps * [How to enable managed identity for Azure Web PubSub Service](howto-use-managed-identity.md) * [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
-* [What is Azure DNS](../dns/dns-overview.md)
+* [What is Azure DNS](../dns/dns-overview.md)
batch Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-best-practices.md
the settings set by the image used by each compute node. Although the Batch node
most secure settings available when possible, it can still be limited by operating system level settings. We recommend that you review your OS level defaults and set them appropriately for the most secure mode that is amenable for your workflow and organizational requirements. For more information, please visit
-[Manage TLS](https://learn.microsoft.com/windows-server/security/tls/manage-tls) for cipher suite order enforcement and
-[TLS registry settings](https://learn.microsoft.com/windows-server/security/tls/tls-registry-settings) for SSL/TLS version
+[Manage TLS](/windows-server/security/tls/manage-tls) for cipher suite order enforcement and
+[TLS registry settings](/windows-server/security/tls/tls-registry-settings) for SSL/TLS version
control for Schannel SSP. Note that some setting changes require a reboot to take effect. Utilizing a newer operating system with modern security defaults or a [custom image](batch-sig-images.md) with modified settings is recommended instead of application of such settings with a Batch start task.
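As an illustrative sketch only, under the assumption that your workload tolerates it, disabling TLS 1.0 server-side through the Schannel registry settings might look like the following; a reboot is required afterwards:

```powershell
# Assumed example: disable the server-side TLS 1.0 protocol via Schannel settings.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server'
New-Item -Path $path -Force | Out-Null                     # create the key if missing
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType 'DWord' -Force | Out-Null
```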
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
The following list presents the set of features that are currently available in
| -| -- | | -- | | -- | | Pre-call scenarios | Place new outbound call to a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ | | | Redirect (forward) a call to a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Set custom display name for the callee when making a call offer to a Microsoft Teams user | Only on Microsoft Teams desktop and web client | Only on Microsoft Teams desktop
- and web client |
+| | Set custom display name for the callee when making a call offer to a Microsoft Teams user | Only on Microsoft Teams desktop and web client | Only on Microsoft Teams desktop and web client |
| Mid-call scenarios | Add one or more endpoints to an existing call with a Microsoft Teams user | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ | | | Recognize user input through DTMF | ❌ | ❌ | ❌ | ❌ |
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md
+
+ Title: Azure Communication Services support for Managed Identity
+description: Learn about using Managed Identity with Azure Communication Services
++++ Last updated : 07/24/2023+++++
+# How to use Managed Identity with Azure Communication Services
+Azure Communication Services (ACS) is a fully managed communication platform that enables developers to build real-time communication features into their applications. By using Managed Identity with Azure Communication Services, you can simplify the authentication process for your application, while also increasing its security. This document covers how to use Managed Identity with Azure Communication Services.
+
+## Using Managed Identity with ACS
+
+ACS supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials.
+
+Your ACS resource can be assigned two types of identity:
+1. A **System Assigned Identity** which is tied to your resource and is deleted when your resource is deleted.
+ Your resource can only have one system-assigned identity.
+2. A **User Assigned Identity** which is an Azure resource that can be assigned to your ACS resource. This identity isn't deleted when your resource is deleted. Your resource can have multiple user-assigned identities.
+
+To use Managed Identity with ACS, follow these steps:
+
+1. Grant your Managed Identity access to the Communication Services resource. This assignment can be made through the Azure portal, the Azure CLI, or the Azure Communication Management SDKs.
+2. Use the Managed Identity to authenticate with ACS. Authentication can be done through the Azure SDKs or REST APIs that support Managed Identity, as in the sketch after these steps.
+
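+A minimal C# sketch of step 2, assuming your app runs on an Azure host whose managed identity has already been granted access to the resource; the endpoint URI is a placeholder:
+
+```csharp
+using Azure.Identity;
+using Azure.Communication.Identity;
+
+// DefaultAzureCredential picks up the hosting app's managed identity at runtime.
+var client = new CommunicationIdentityClient(
+    new Uri("https://<your-acs-resource>.communication.azure.com"),
+    new DefaultAzureCredential());
+
+// Data-plane calls now authenticate with the managed identity.
+var user = await client.CreateUserAsync();
+```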
+--
+
+## Add a system-assigned identity
+
+# [Azure portal](#tab/portal)
+
+1. In the left navigation of your app's page, scroll down to the **Settings** group.
+
+2. Select **Identity**.
+
+3. Within the **System assigned** tab, switch **Status** to **On**. Select **Save**.
+ :::image type="content" source="../media/managed-identity/managed-identity-system-assigned.png" alt-text="Screenshot that shows how to enable system assigned managed identity." lightbox="../media/managed-identity/managed-identity-system-assigned.png" :::
+# [Azure CLI](#tab/cli)
+
+Run the `az communication identity assign` command to assign a system-assigned identity:
+
+```azurecli-interactive
+az communication identity assign --system-assigned --name myApp --resource-group myResourceGroup
+```
+--
+
+## Add a user-assigned identity
+
+Assigning a user-assigned identity to your ACS resource requires that you first create the identity and then add its resource identifier to your Communication service resource.
+
+# [Azure portal](#tab/portal)
+
+First, you need to create a user-assigned managed identity resource.
+
+1. Create a user-assigned managed identity resource according to [these instructions](~/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
+
+2. In the left navigation for your app's page, scroll down to the **Settings** group.
+
+3. Select **Identity**.
+
+4. Select **User assigned** > **Add**.
+
+5. Search for the identity you created earlier, select it, and select **Add**.
+ :::image type="content" source="../media/managed-identity/managed-identity-user-assigned.png" alt-text="Screenshot that shows how to enable user assigned managed identity." lightbox="../media/managed-identity/managed-identity-user-assigned.png" :::
+
+# [Azure CLI](#tab/cli)
+
+1. Create a user-assigned identity.
+
+ ```azurecli-interactive
+ az identity create --resource-group <group-name> --name <identity-name>
+ ```
+
+2. Run the `az communication identity assign` command to assign a user-assigned identity:
+
+```azurecli-interactive
+az communication identity assign --name myApp --resource-group myResourceGroup --user-assigned <identity-id>
+```
+
+--
+
+## Managed Identity using ACS management SDKs
+Managed Identity can also be assigned to your ACS resource using the Azure Communication Management SDKs.
+This assignment can be achieved by introducing the identity property in the resource definition either on creation or when updating the resource.
+
+# [.NET](#tab/dotnet)
+You can assign your managed identity to your ACS resource using the Azure Communication Management SDK for .NET by setting the `Identity` property on the `CommunicationServiceResourceData`.
+
+For example:
+
+```csharp
+public async Task CreateResourceWithSystemAssignedManagedIdentity()
+{
+ ArmClient armClient = new ArmClient(new DefaultAzureCredential());
+ SubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();
+
+ //Create Resource group
+ ResourceGroupCollection rgCollection = subscription.GetResourceGroups();
+    // With the collection, we can create a new resource group with a specific name
+ string rgName = "myRgName";
+ AzureLocation location = AzureLocation.WestUS2;
+ ArmOperation<ResourceGroupResource> lro = await rgCollection.CreateOrUpdateAsync(WaitUntil.Completed, rgName, new ResourceGroupData(location));
+ ResourceGroupResource resourceGroup = lro.Value;
+
+    // Get the collection of Communication Services resources in the resource group
+ CommunicationServiceResourceCollection collection = resourceGroup.GetCommunicationServiceResources();
+ string communicationServiceName = "myCommunicationService";
+
+ // Create Communication Service Resource
+ var identity = new ManagedServiceIdentity(ManagedServiceIdentityType.SystemAssigned);
+ CommunicationServiceResourceData data = new CommunicationServiceResourceData("global")
+ {
+ DataLocation = "UnitedStates",
+ Identity = identity
+ };
+    var communicationServiceLro = await collection.CreateOrUpdateAsync(WaitUntil.Completed, communicationServiceName, data);
+ var resource = communicationServiceLro.Value;
+}
+```
+For more information on using the .NET Management SDK, see [Azure Communication Management SDK for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/communication/Azure.ResourceManager.Communication/README.md).
+
+For more information specific to managing your resource instance, see [Managing your Communication Service Resource instance](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/communication/Azure.ResourceManager.Communication/samples/Sample1_ManagingCommunicationService.md)
++
+# [JavaScript](#tab/javascript)
+
+For Node.js apps and JavaScript functions, samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/samples-dev/communicationServicesCreateOrUpdateSample.ts)
+
+For more information on using the JavaScript Management SDK, see [Azure Communication Management SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/README.md)
+
+# [Python](#tab/python)
+
+For Python apps and functions, code samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/generated_samples/communication_services/create_or_update_with_system_assigned_identity.py)
+
+For more information on using the Python Management SDK, see [Azure Communication Management SDK for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/README.md)
+# [Java](#tab/java)
+
+For Java apps and functions, code samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/src/samples/java/com/azure/resourcemanager/communication/generated/CommunicationServicesCreateOrUpdateSamples.java).
+
+For more information on using the Java Management SDK, see [Azure Communication Management SDK for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/README.md)
+
+# [GoLang](#tab/go)
+
+For Go apps and functions, code samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Go](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/services_client_example_test.go).
+
+For more information on using the Go Management SDK, see [Azure Communication Management SDK for Go](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/README.md)
++
+--
+> [!NOTE]
+> A resource can have both system-assigned and user-assigned identities at the same time. In this case, the `type` property would be `SystemAssigned,UserAssigned`.
+>
+>Removing all managed identity assignments from a resource can also be achieved by specifying the `type` property as `None`.
++
+## Next steps
+Now that you've learned how to enable Managed Identity with Azure Communication Services, consider implementing this feature in your own applications to simplify your authentication process and improve security.
+
+- [Managed Identities](~/articles/active-directory/managed-identities-azure-resources/overview.md)
+- [Manage user-assigned managed identities](~/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)
communication-services Add Voip Push Notifications Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-voip-push-notifications-event-grid.md
With Azure Communication Services, you can receive real-time event notifications
In this tutorial, we explore how to implement Azure Communication Services Calling with Azure Event Grid to receive push notifications on native platforms. Azure Event Grid is a serverless event routing service that makes it easy to build event-driven applications. This tutorial helps you set up and understand how to receive push notifications for incoming calls.
-You can take a look at [voice and video calling events](https://learn.microsoft.com/azure/event-grid/communication-services-voice-video-events) available using Event Grid.
+You can take a look at [voice and video calling events](../../event-grid/communication-services-voice-video-events.md) available using Event Grid.
## Current limitations with the Push Notification model
The current limitations of using the Native Calling SDK and [Push Notifications]
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A deployed Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). * A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../quickstarts/identity/access-tokens.md).
-* [The Azure Event Grid topic](https://learn.microsoft.com/azure/event-grid/custom-event-quickstart-portal): Create an Azure Event Grid topic in your Azure subscription, it's used to send events when incoming calls occur.
+* [The Azure Event Grid topic](../../event-grid/custom-event-quickstart-portal.md): Create an Azure Event Grid topic in your Azure subscription; it's used to send events when incoming calls occur.
* Optional: Complete the quickstart for [getting started with adding calling to your application](../quickstarts/voice-video-calling/getting-started-with-calling.md) * Optional [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) to build your own serverless applications. For example, you can host your authentication application in Azure Functions. * Optional, review the quickstart to learn how to [handle voice and video calling events](../quickstarts/voice-video-calling/handle-calling-events.md).
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
# Connect Azure Communications Gateway to Operator Connect or Teams Phone Mobile
-After you have deployed Azure Communications Gateway, you need to connect it to the Microsoft Phone System and to your core network. You also need to onboard to the Operator Connect or Teams Phone Mobile environments.
+After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Microsoft Phone System. You also need to onboard to the Operator Connect or Teams Phone Mobile environments.
This article describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile. When you have finished the steps in this article, you will be ready to [Prepare for live traffic](prepare-for-live-traffic-operator-connect.md) with Operator Connect, Teams Phone Mobile and Azure Communications Gateway.
communications-gateway Connect Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-teams-direct-routing.md
# Connect Azure Communications Gateway to Microsoft Teams Direct Routing
-After you have deployed Azure Communications Gateway, you need to connect it to the Microsoft Phone System and to your core network.
+After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Microsoft Phone System.
This article describes how to start setting up Azure Communications Gateway for Microsoft Teams Direct Routing. When you have finished the steps in this article, you can set up test users for test calls and prepare for live traffic.
communications-gateway Emergency Calling Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calling-teams-direct-routing.md
If a subscriber uses a Microsoft Teams client to make an emergency call and the
Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to: - Ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP).-- Configure the SIP trunks to Azure Communications Gateway in your tenant to support PIDF-LO. You typically do this when you [set up Direct Routing support](connect-teams-direct-routing.md#connect-your-tenant-to-azure-communications-gateway).
+- Configure the SIP trunks to Azure Communications Gateway in your tenant to support PIDF-LO. You typically set this configuration when you [set up Direct Routing support](connect-teams-direct-routing.md#connect-your-tenant-to-azure-communications-gateway).
For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing) and the [considerations for Direct Routing](/microsoftteams/considerations-direct-routing).
Microsoft Teams always sends location information on SIP INVITEs for emergency c
- When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location. - Static locations that your customers assign.
+## ELIN support for Direct Routing (preview)
+
+ELIN (Emergency Location Identifier Number) is the traditional method for signaling dynamic emergency location information for networks that don't support PIDF-LO. With Direct Routing, the Microsoft Phone System can add an ELIN (a phone number) representing the location to the message body. If ELIN support (preview) is configured, Azure Communications Gateway replaces the caller's number with this phone number when forwarding the call to your network. The Public Safety Answering Point (PSAP) can then look up this number to identify the location of the caller.
+
+> [!IMPORTANT]
+> If you require ELIN support (preview), discuss your requirements with a Microsoft representative.
+ ## Next steps - Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
You can arrange more interworking function as part of your initial network desig
[!INCLUDE [microsoft-phone-system-requires-e164-numbers](includes/communications-gateway-e164-for-phone-system.md)] ## RTP and SRTP media
communications-gateway Interoperability Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-teams-direct-routing.md
Azure Communications Gateway offers multiple media interworking options. For exa
For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+### Microsoft Phone System media bypass support (preview)
+
+Azure Communications Gateway has preview support for Direct Routing media bypass. Direct Routing media bypass allows media to flow directly between Azure Communications Gateway and Microsoft Teams clients in some scenarios instead of always sending it through the Microsoft Phone System. Media continues to flow through Azure, because both Azure Communications Gateway and Microsoft Phone System are located in Azure.
+
+If you believe that media bypass support (preview) would be useful for your deployment, discuss your requirements with a Microsoft representative.
+ ## Topology hiding with domain delegation The domain for your Azure Communications Gateway deployment is visible to customer administrators in their Microsoft 365 admin center. By default, each Azure Communications Gateway deployment receives an automatically generated domain name similar to `a1b2c3d4efghij5678.commsgw.azure.example.com`.
communications-gateway Prepare For Live Traffic Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-teams-direct-routing.md
Last updated 10/09/2023
# Prepare for live traffic with Microsoft Teams Direct Routing and Azure Communications Gateway
-Before you can launch your Operator Connect or Teams Phone Mobile service, you and your onboarding team must:
+Before you can launch your Microsoft Teams Direct Routing service, you and your onboarding team must:
- Test your service. - Prepare for launch.
You must have completed the following procedures.
- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) - [Deploy Azure Communications Gateway](deploy.md)
+- [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md)
- [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md) - [Configure a test customer for Microsoft Teams Direct Routing](configure-test-customer-teams-direct-routing.md) - [Configure test numbers for Microsoft Teams Direct Routing](configure-test-numbers-teams-direct-routing.md)
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
There are 3 ways to complete the pre-migration assessment, we recommend you to u
### Azure Cosmos DB Migration for MongoDB extension
-The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads on Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
+The [Azure Cosmos DB Migration for MongoDB extension](/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads on Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
> [!NOTE]
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
Last updated 04/20/2023
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] > [!IMPORTANT]
-> Database Migration Assistant is a preliminary legacy utility meant to assist you with the pre-migration steps. Microsoft recommends you to use the [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) for all pre-migration steps.
+> Database Migration Assistant is a preliminary legacy utility meant to assist you with the pre-migration steps. Microsoft recommends you to use the [Azure Cosmos DB Migration for MongoDB extension](/azure-data-studio/extensions/database-migration-for-mongo-extension) for all pre-migration steps.
### Programmatic discovery using the Database Migration Assistant
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
This document describes the various options to lift and shift your MongoDB workl
Assessment involves finding out whether you're using the [features and syntax that are supported](./compatibility.md). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning.
-The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads on Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
+The [Azure Cosmos DB Migration for MongoDB extension](/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads on Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources.
> [!TIP] > We recommend you to go through [the supported features and syntax](./compatibility.md) in detail, as well as perform a proof-of-concept prior to the actual migration.
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 03/13/2023 Last updated : 10/17/2023 # Pay for your Azure subscription by wire transfer
-This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for a Microsoft Online Services Program account also called pay-as-you-go account). If you signed up for Azure through a Microsoft representative, then your default payment method is already be set to *wire transfer*.
+This article helps you set up your Azure subscription to pay by wire transfer.
+It applies to you if you are:
+
+- A customer with a Microsoft Customer Agreement (MCA)
+- A customer who signed up for Azure through the Azure website (for a Microsoft Online Services Program account, also called a pay-as-you-go account).
-If you switch to pay by wire transfer, that means you pay your bill within 30 days of the invoice date by wire transfer.
+If you signed up for Azure through a Microsoft representative, then your default payment method is already set to *wire transfer*, so these steps aren't needed.
+
-When you request to change your payment method to wire transfer, there are two possible results:
+When you switch to pay by wire transfer, you must pay your bill within 30 days of the invoice date by wire transfer.
-- You're automatically approved and you're prompted for information about your company.-- You're not automatically approved, but you can submit a request to Azure support.
Users with a Microsoft Customer Agreement must always [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to Azure support to enable pay by wire transfer.
-Users with a Microsoft Customer Agreement must always submit a request to Azure support to enable pay by wire transfer.
+Customers who have a Microsoft Online Services Program (pay-as-you-go) account can use the Azure portal to [Request to pay by wire transfer](#request-to-pay-by-wire-transfer).
> [!IMPORTANT] > * Pay by wire transfer is only available for customers using Azure on behalf of a company.
If you're not automatically approved, you can submit a request to Azure support
- Company Name (as registered under VAT or Government Website): - Company Address (as registered under VAT or Government Website): - Company Website:
- - Country:
+ - Country/region:
- TAX ID/ VAT ID: - Company Established on (Year): - Any prior business with Microsoft:
If you're not automatically approved, you can submit a request to Azure support
1. Go to the Azure home page. Search for **Cost Management** and select it (not Cost Management + Billing). It's a green hexagon-shaped symbol. 1. You should see the overview page. If you don't see Properties in the left menu, at the top of the page under Scope, select **Go to billing account**.
- 1. In the left menu, select **Properties**. On the properties page you should see your billing account ID shown as a GUID ID value. It's your Commerce Account ID.
+ 1. In the left menu, select **Properties**. On the properties page, you should see your billing account ID shown as a GUID ID value. It's your Commerce Account ID.
-If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request.
+If we need to run a credit check because of the amount of credit that you need, you're sent a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request.
## Switch to pay by wire transfer after approval
With a Microsoft Customer Agreement, you can switch your billing profile to wire
### Switch Azure subscription to wire transfer
-Follow the steps below to switch your Azure subscription to pay by wire transfer. *Once you switch to payment by wire transfer, you can't switch back to a credit card*.
+Use the following steps to switch your Azure subscription to pay by wire transfer. *Once you switch to payment by wire transfer, you can't switch back to a credit card*.
1. Go to the Azure portal to sign in as the Account Administrator. Search for and select **Cost Management + Billing**. :::image type="content" source="./media/pay-by-invoice/search.png" alt-text="Screenshot showing search for Cost Management + Billing in the Azure portal." lightbox="./media/pay-by-invoice/search.png" :::
On the Payment methods page, select **Pay by wire transfer**.
### Switch billing profile to wire transfer
-Follow the steps below to switch a billing profile to wire transfer. Only the person who signed up for Azure can change the default payment method of a billing profile.
+Use the following steps to switch a billing profile to wire transfer. Only the person who signed up for Azure can change the default payment method of a billing profile.
1. Go to the Azure portal view your billing information. Search for and select **Cost Management + Billing**. 1. In the menu, choose **Billing profiles**.
Payments made by wire transfer have processing times that vary, depending on the
- Wire transfers (domestic) - Four business days. Two days to arrive, plus two days to post. - Wire transfers (international) - Seven business days. Five days to arrive, plus two days to post.
-If your account is approved for payment by wire transfer, the instructions for payment can be found on the invoice.
+When your account is approved for wire transfer payment, the instructions for payment can be found on the invoice.
## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
tags: billing
Previously updated : 04/11/2023 Last updated : 10/17/2023 # Troubleshoot issues when you sign up for a new account in the Azure portal
-You may experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
+You might experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
> [!NOTE] > If you already have an existing account and are looking for guidance to troubleshoot sign-in issues, see [Troubleshoot Azure subscription sign-in issues](./troubleshoot-sign-in-issue.md).
When you get the text message or telephone call, enter the code that you receive
#### No verification text message or phone call
-Although the sign-up verification process is typically quick, it may take up to four minutes for a verification code to be delivered.
+Although the sign-up verification process is typically quick, it might take up to four minutes for a verification code to be delivered.
Here are some other tips:
#### Credit card declined or not accepted
-Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions. To see what else may cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
+Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions. To see what else might cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
#### Credit card form doesn't support my billing address
Use the following steps to update your browser's cookie settings.
### I saw a charge on my free trial account
-You may see a small, temporary verification hold on your credit card account after you sign up. This hold is removed within three to five days. If you're worried about managing costs, read more about [Analyzing unexpected charges](../understand/analyze-unexpected-charges.md).
+You might see a small, temporary verification hold on your credit card account after you sign up. This hold is removed within three to five days. If you're worried about managing costs, read more about [Analyzing unexpected charges](../understand/analyze-unexpected-charges.md).
## Agreement
Check that you're using the correct sign-in credentials. Then, check the benefit
- If you can't verify your status, contact [Visual Studio Subscription Support](https://visualstudio.microsoft.com/subscriptions/support/). - Microsoft for Startups - Sign in to the [Microsoft for Startups portal](https://startups.microsoft.com/#start-two) to verify your eligibility status for Microsoft for Startups.
- - If you can't verify your status, you can get help on the [Microsoft for Startups forums](https://www.microsoftpartnercommunity.com/t5/Microsoft-for-Startups/ct-p/Microsoft_Startups).
+ - If you can't verify your status, you can get help by creating a [Microsoft for Startups support request](https://support.microsoft.com/supportrequestform/354fe60a-ba6d-92ad-208a-6a41387aa9d8).
- Cloud Partner Program
- - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for other benefits.
+ - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you might be eligible for other benefits.
- If you can't verify your status, contact [Cloud Partner Program Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx). ### Can't activate new Azure In Open subscription
data-factory How To Diagnostic Logs And Metrics For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-diagnostic-logs-and-metrics-for-managed-airflow.md
Similarly, you can create custom queries according to your needs using any table
For more information:
-1. [https://learn.microsoft.com/azure/azure-monitor/logs/log-analytics-tutorial](https://learn.microsoft.com/azure/azure-monitor/logs/log-analytics-tutorial)
+1. [Log Analytics Tutorial](../azure-monitor/logs/log-analytics-tutorial.md)
2. [Kusto Query Language (KQL) overview - Azure Data Explorer | Microsoft Learn](/azure/data-explorer/kusto/query/)
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
### Create source tables in your SQL Server database
-1. Open [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), and connect to your SQL Server database.
+1. Open [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/azure-data-studio/download-azure-data-studio), and connect to your SQL Server database.
2. In **Server Explorer (SSMS)** or in the **Connections pane (Azure Data Studio)**, right-click the database and choose **New Query**.
If you don't have an Azure subscription, create a [free](https://azure.microsoft
### Create destination tables in your Azure SQL Database
-1. Open [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), and connect to your SQL Server database.
+1. Open [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) or [Azure Data Studio](/azure-data-studio/download-azure-data-studio), and connect to your SQL Server database.
2. In **Server Explorer (SSMS)** or in the **Connections pane (Azure Data Studio)**, right-click the database and choose **New Query**.
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 09/28/2023 Last updated : 10/17/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
To verify that AKS is enabled, go to your Azure Stack Edge resource in the Azure
## Specify static IP pools (optional)
-An **optional** step where you can assign IP pools for the virtual network used by Kubernetes pods.
+An **optional** step where you can assign IP pools for the virtual network used by Kubernetes pods.
+
+> [!NOTE]
+> SAP customers can skip this step.
You can specify a static IP address pool for each virtual network that is enabled for Kubernetes. The virtual network enabled for Kubernetes generates a `NetworkAttachmentDefinition` that's created for the Kubernetes cluster.
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 10/05/2023 Last updated : 10/17/2023
A quorum is always maintained on your Azure Stack Edge cluster to remain online
For an Azure Stack Edge cluster with two nodes, if a node fails, then a cluster witness provides the third vote so that the cluster stays online (since the cluster is left with two out of three votes - a majority). A cluster witness is required on your Azure Stack Edge cluster. You can set up the witness in the cloud or in a local fileshare using the local UI of your device. - For more information about the cluster witness, see [Cluster witness on Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md).
+ - For more information about witness in the cloud, see [Configure cloud witness](azure-stack-edge-gpu-manage-cluster.md#configure-cloud-witness).
## Infrastructure cluster
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 10/05/2023 Last updated : 10/17/2023 # Update your Azure Stack Edge Pro GPU
For information on what's new in this update, go to [Release notes](azure-stack-
*Update package cannot be installed as its dependencies are not met.* -- You can update to 2203 from 2207 or later, and then install 2309.
+- You can update to 2303 from 2207 or later, and then install 2309.
Supported update paths:
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Previously updated : 10/06/2023 Last updated : 10/17/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following things:
- Your Azure Stack Edge Pro 2 physical device, unpacked, and rack mounted. - One power cable (included in the device package).-- At least one 1-GbE RJ-45 network cable to connect to the Port 1. Port 1 and Port 2 the two 10/1-GbE network interfaces on your device.
+- Use 10GBASE-T RJ-45 network cables (CAT-5e or CAT-6) to connect to Port 1 and Port 2. They can operate at either 1 Gbps or 10 Gbps.
- One 100-GbE QSFP28 passive direct attached cable (Microsoft validated) for each data network interface Port 3 and Port 4 to be configured. Here is an example of the QSFP28 DAC connector: ![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Review the latest cloud support information for Defender for Cloud plans and fea
Availability | This feature is available in the Premium, Standard, Basic, and Developer tiers of Azure API Management. API gateways | Azure API Management<br/><br/> Defender for APIs currently doesn't onboard APIs that are exposed using the API Management [self-hosted gateway](../api-management/self-hosted-gateway-overview.md), or managed using API Management [workspaces](../api-management/workspaces-overview.md). API types | Currently, Defender for APIs discovers and analyzes REST APIs.
-Multi-region support | In multi-regional managed and self-hosted Azure API Management deployments, security insights (data classification, authentication check, unused and external APIs) aren't supported in secondary regions. In such cases, data residency requirements are still met.ΓÇ»
+Multi-region support | There is currently limited support for API security insights for APIs published in Azure API Management multi-region deployments. Security insights, including data classifications and assessments of inactive, unauthenticated, and external APIs, are limited to API traffic sent to the primary region (security insights aren't supported for secondary regions). All security detections and subsequently generated security alerts work for API traffic sent to both primary and secondary regions.
## Defender CSPM integration
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your Microsoft Defender for IoT deployment for OT monitoring is managed through
If you're looking to manage Enterprise IoT plans, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md).
+This article is relevant for commercial Defender for IoT customers. If you're a government customer, contact your Microsoft sales representative for more information.
+ ## Prerequisites Before performing the procedures in this article, make sure that you have:
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
# Get Azure recommendations to migrate your SQL Server database (Preview)
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
+The [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension) helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
Learn how to use this unified experience, collecting performance data from your source SQL Server instance to get right-sized Azure recommendations for your Azure SQL targets.
The diagram presents the workflow for Azure recommendations in the Azure SQL Mig
To get started with Azure recommendations (Preview) for your SQL Server database migration, you must meet the following prerequisites: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Ensure that the login you use to connect the source SQL Server instance, has the [minimum permissions](#minimum-permissions). ## Supported sources and targets
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |
-| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | SQL Server | Azure Synapse Analytics | | | |
-| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
WHERE STEP in (3,4,6);
- **Cause**: The selected tables for the migration don't exist in the target Azure SQL Database. -- **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)
+- **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)
- **Message**: DatabaseSizeMoreThanMax: `The source database size <Source Database Size> exceeds the maximum allowed size of the target database <Target Database Size>. Check if the target database has enough space.` - **Cause**: The target database doesn't have enough space. -- **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension).
+- **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension).
- **Message**: NoTablesFound: `Some of the source tables don't exist in the target database. Missing tables: <TableList>`.
Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure
## Next steps -- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension)
+- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension)
- For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations) - For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
# Migrate databases at scale using automation (Preview)
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) brings together a simplified assessment, recommendation, and migration experience that delivers the following capabilities:
+The [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension) brings together a simplified assessment, recommendation, and migration experience that delivers the following capabilities:
- An enhanced assessment mechanism can evaluate SQL Server instances, identifying databases ready for migration to the different Azure SQL targets. - An SKU recommendation engine (Preview) that collects performance data from the source SQL Server instance on-premises, generating right-sized SKU recommendations based on your Azure SQL target. - A reliable Azure service powered by Azure Database Migration Service that orchestrates data movement activities to deliver a seamless migration experience.
Pre-requisites that are common across all supported migration scenarios using Az
> Azure account is only required when running the migration steps and is not required for assessment or Azure recommendation steps process. * Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/create-configure-managed-instance-powershell-quickstart), [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/sql-vm-create-powershell-quickstart), or [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) > [!IMPORTANT]
- > If your target is Azure SQL Database you have to migrate database schema from source to target using [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or, [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+ > If your target is Azure SQL Database you have to migrate database schema from source to target using [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or, [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
> > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
# Migrate databases by using the Azure SQL Migration extension for Azure Data Studio
-Learn how to use the unified experience in [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension). Helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
+Learn how to use the unified experience in the [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension). It helps you assess your database requirements, get right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
The Azure SQL Migration extension for Azure Data Studio offers these key benefits:
SQL Server to SQL Server on an Azure virtual machine|[Online](./tutorial-sql-ser
SQL Server to Azure SQL Database | [Offline](./tutorial-sql-server-azure-sql-database-offline.md) > [!IMPORTANT]
-> If your target is Azure SQL Database, make sure you deploy the database schema before you begin the migration. You can use tools like the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+> If your target is Azure SQL Database, make sure you deploy the database schema before you begin the migration. You can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
The following video explains recent updates and features added to the Azure SQL Migration extension for Azure Data Studio:
The following list describes each step in the workflow:
(3) **Network file share**: A Server Message Block (SMB) network file share where backup files are stored for the databases to be migrated. Azure storage blob containers and Azure storage file share also are supported.
-(4) **Azure Data Studio**: Download and install the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+(4) **Azure Data Studio**: Download and install the [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension).
(5) **Azure Database Migration Service**: An Azure service that orchestrates migration pipelines to do data movement activities from an on-premises environment to Azure. Database Migration Service is associated with the Azure Data Factory self-hosted integration runtime and provides the capability to register and monitor the self-hosted integration runtime.
For the list of Azure regions that support database migrations by using the Azur
## Next steps -- Learn how to install the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- Learn how to install the [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension).
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
In this tutorial, you learn how to:
Before you begin the tutorial: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) or [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
dms Tutorial Sql Server Azure Sql Database Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline.md
The following section describes how to use Azure Database Migration Service with
Before you begin the tutorial: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Have an Azure account that's assigned to one of the following built-in roles: - Contributor for the target instance of Azure SQL Database
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
+- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
> > If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task.
To open the Migrate to Azure SQL wizard:
> [!NOTE] > If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select. >
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
### Create a Database Migration Service instance
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the **db_datareader** role, and that the login for the target SQL Server instance is a member of the **db_owner** role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
+- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
- If you're using Database Migration Service for the first time, make sure that the `Microsoft.DataMigration` [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
> > If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task.
Before you begin the tutorial:
> [!NOTE] > In an offline migration, application downtime starts when the migration starts. >
- > Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+ > Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
### Monitor the database migration
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
This tutorial describes an offline migration from SQL Server to Azure SQL Manage
Before you begin the tutorial: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Have an Azure account that's assigned to one of the following built-in roles: - Contributor for the target instance of Azure SQL Managed Instance and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
This article describes an online database migration from SQL Server to Azure SQL
To complete this tutorial, you need to:
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio)
+* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
This tutorial describes an offline migration from SQL Server to SQL Server on Az
Before you begin the tutorial: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Have an Azure account that's assigned to one of the following built-in roles: - Contributor for the target instance of SQL Server on Azure Virtual Machines and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
This article describes an online migration from SQL Server to a SQL Server on Az
To complete this tutorial, you need to:
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio)
+* [Install the Azure SQL migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account.
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
TDE provides a solution to this problem, with real-time I/O encryption/decryptio
When you migrate a TDE-protected database, the certificate (asymmetric key) used to open the database encryption key (DEK) must also be moved along with the source database. Therefore, you need to recreate the server certificate in the `master` database of the target SQL Server for that instance to access the database files.
-You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to help you migrate TDE-enabled databases (preview) from an on-premises instance of SQL Server to Azure SQL.
+You can use the [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension) to help you migrate TDE-enabled databases (preview) from an on-premises instance of SQL Server to Azure SQL.
The TDE-enabled database migration process automates manual tasks such as backing up the database certificate keys (DEK), copying the certificate files from the on-premises SQL Server to the Azure SQL target, and then reconfiguring TDE for the target database again.
In this tutorial, you learn how to migrate the example `AdventureWorksTDE` encry
Before you begin the tutorial: -- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
-- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
- Run Azure Data Studio as Administrator.
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
You can create a system topic in two ways:
When you use the Azure portal, you're always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
-When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/controlplane-version2022-06-15/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods.
+When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/controlplane-preview/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods.
> [!IMPORTANT] > We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
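For illustration, here's a minimal Azure PowerShell sketch of that recommended order: create the system topic first, then create a subscription on it. It assumes the `Az.EventGrid` module is installed, and every name, resource ID, and endpoint below is a placeholder rather than a value from this article.

```azurepowershell
# Placeholder values throughout; replace them with your own resources.
# 1. Explicitly create the system topic (here, for a storage account source).
New-AzEventGridSystemTopic -ResourceGroupName "<resource-group>" `
    -Name "<system-topic-name>" `
    -Location "<region>" `
    -TopicType "Microsoft.Storage.StorageAccounts" `
    -Source "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

# 2. Create an event subscription on that system topic.
New-AzEventGridSystemTopicEventSubscription -ResourceGroupName "<resource-group>" `
    -SystemTopicName "<system-topic-name>" `
    -EventSubscriptionName "<subscription-name>" `
    -Endpoint "<webhook-endpoint-url>"
```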
firewall-manager Configure Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/configure-ddos.md
Title: Configure Azure DDoS Protection Plan using Azure Firewall Manager
-description: Learn how to use Azure Firewall Manager to configure Azure DDoS Protection Plan Standard
+description: Learn how to use Azure Firewall Manager to configure Azure DDoS Protection Plan
Now you can associate the DDoS Protection Plan with the secured virtual network.
1. Select **Virtual Networks**. 1. Select the check box for **Hub-vnet-01**. 1. Select **Manage Security**, **Add DDoS Protection Plan**.
-1. For **DDoS protection standard**, select **Enable**.
+1. For **DDoS protection plan**, select **Enable**.
1. For **DDoS protection plan**, select **DDoS-plan-01**. 1. Select **Add**. 1. After the deployment completes, select **Refresh**.
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
Now you must ensure that network traffic gets routed through your firewall.
3. Under **Settings**, select **Security configuration**. 4. Under **Internet traffic**, select **Azure Firewall**. 5. Under **Private traffic**, select **Send via Azure Firewall**.
-6. Under **Inter-hub**, select **Enabled** to enable the Virtual WAN routing intent feature. Routing intent is the mechanism through which you can configure Virtual WAN to route branch-to-branch (on-premises to on-premises) traffic via Azure Firewall deployed in the Virtual WAN Hub. For more information regarding pre-requisites and considerations associated with the routing intent feature, see [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
-7. Select **Save**.
-8. Select **OK** on the **Warning** dialog.
+ > [!NOTE]
+ > If you're using public IP address ranges for private networks in a virtual network or an on-premises branch, you need to explicitly specify these IP address prefixes. Select the **Private Traffic Prefixes** section and then add them alongside the RFC1918 address prefixes.
+7. Under **Inter-hub**, select **Enabled** to enable the Virtual WAN routing intent feature. Routing intent is the mechanism through which you can configure Virtual WAN to route branch-to-branch (on-premises to on-premises) traffic via Azure Firewall deployed in the Virtual WAN Hub. For more information regarding prerequisites and considerations associated with the routing intent feature, see [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md).
+8. Select **Save**.
+9. Select **OK** on the **Warning** dialog.
:::image type="content" source="./media/secure-cloud-network/9a-firewall-warning.png" alt-text="Screenshot of Secure Connections." lightbox="./media/secure-cloud-network/9a-firewall-warning.png":::
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
The minimum Azure PowerShell version requirement is 6.5.0. For more information,
- Allocate Firewall Premium ```azurepowershell
- $azfw = Get-AzFirewall -Name -Name "<firewall-name>" -ResourceGroupName "<resource-group-name>"
+ $azfw = Get-AzFirewall -Name "<firewall-name>" -ResourceGroupName "<resource-group-name>"
    $hub = Get-AzVirtualHub -ResourceGroupName "<resource-group-name>" -Name "<vWANhub-name>"
    $azfw.Sku.Tier = "Premium"
    $azfw.Allocate($hub.Id)
You can attach a Premium policy to the new Premium Firewall using the Azure port
## Next steps -- [Learn more about Azure Firewall Premium features](premium-features.md)
+- [Learn more about Azure Firewall Premium features](premium-features.md)
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
Azure Front Door and Azure CDN are both Azure services that offer global content
The following table provides a comparison between Azure Front Door and Azure CDN services.
-| Features and optimizations | Front Door Standard | Front Door Premium | Azure CDN Classic | Azure CDN Standard Microsoft | Azure CDN Standard Edgio | Azure CDN Premium Edgio |
+| Features and optimizations | Front Door Standard | Front Door Premium | Front Door Classic | Azure CDN Standard Microsoft | Azure CDN Standard Edgio | Azure CDN Premium Edgio |
| | | | | | | | | **Delivery and acceleration** | | | | | | | | Static file delivery | Yes | Yes | Yes | Yes | Yes | Yes |
The following table provides a comparison between Azure Front Door and Azure CDN
| HTTPS support | Yes | Yes | Yes | Yes | Yes | Yes | | Custom domain HTTPS | Yes | Yes | Yes | Yes | Yes | Yes | | Bring your own certificate | Yes | Yes | Yes | Yes | Yes | Yes |
-| Supported TLS Versions | TLS1.2, TLS1.0 | TLS1.2, TLS1.0 | TLS1.2, TLS1.0 | TLS 1.2, TLS 1.0/1.1 | "TLS 1.2, TLS 1.3" | TLS 1.2, TLS 1.3 |
+| Supported TLS Versions | TLS1.2, TLS1.0 | TLS1.2, TLS1.0 | TLS1.2, TLS1.0 | TLS 1.2, TLS 1.0/1.1 | TLS 1.2, TLS 1.3 | TLS 1.2, TLS 1.3 |
| **Caching** | | | | | | | | Query string caching | Yes | Yes | Yes | Yes | Yes | Yes | | Cache manage (purge, rules, and compression) | Yes | Yes | Yes | Yes | Yes | Yes |
The following table provides a comparison between Azure Front Door and Azure CDN
## Next steps * Learn how to [create an Azure Front Door](create-front-door-portal.md).
-* Learn how about the [Azure Front Door architecture](front-door-routing-architecture.md).
+* Learn about the [Azure Front Door architecture](front-door-routing-architecture.md).
hdinsight Apache Hbase Build Java Maven Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-build-java-maven-linux.md
description: Learn how to use Apache Maven to build a Java-based Apache HBase ap
Previously updated : 09/23/2022 Last updated : 10/17/2023 # Build Java applications for Apache HBase
The steps in this document use [Apache Maven](https://maven.apache.org/) to crea
* An SSH client. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
-* If using PowerShell, you'll need the [AZ Module](/powershell/azure/).
+* If using PowerShell, you need the [AZ Module](/powershell/azure/).
* A text editor. This article uses Microsoft Notepad.
The steps in this document use [Apache Maven](https://maven.apache.org/) to crea
The environment used for this article was a computer running Windows 10. The commands were executed in a command prompt, and the various files were edited with Notepad. Modify accordingly for your environment.
-From a command prompt, enter the commands below to create a working environment:
+From a command prompt, enter the following commands to create a working environment:
```cmd IF NOT EXIST C:\HDI MKDIR C:\HDI
cd C:\HDI
mkdir conf ```
- This command creates a directory named `hbaseapp` at the current location, which contains a basic Maven project. The second command changes the working directory to `hbaseapp`. The third command creates a new directory, `conf`, which will be used later. The `hbaseapp` directory contains the following items:
+ This command creates a directory named `hbaseapp` at the current location, which contains a basic Maven project. The second command changes the working directory to `hbaseapp`. The third command creates a new directory, `conf`, which can be used later. The `hbaseapp` directory contains the following items:
* `pom.xml`: The Project Object Model ([POM](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html)) contains information and configuration details used to build the project. * `src\main\java\com\microsoft\examples`: Contains your application code. * `src\test\java\com\microsoft\examples`: Contains tests for your application.
-2. Remove the generated example code. Delete the generated test and application files `AppTest.java`, and `App.java` by entering the commands below:
+2. Remove the generated example code. Delete the generated test and application files `AppTest.java`, and `App.java` by entering the following commands:
```cmd DEL src\main\java\com\microsoft\examples\App.java
cd C:\HDI
## Update the Project Object Model
-For a full reference of the pom.xml file, see https://maven.apache.org/pom.html. Open `pom.xml` by entering the command below:
+For a full reference of the pom.xml file, see https://maven.apache.org/pom.html. Open `pom.xml` by entering the following command:
```cmd notepad pom.xml
scp sshuser@CLUSTERNAME-ssh.azurehdinsight.net:/etc/hbase/conf/hbase-site.xml ./
### Implement a CreateTable class
-Enter the command below to create and open a new file `CreateTable.java`. Select **Yes** at the prompt to create a new file.
+Enter the following command to create and open a new file `CreateTable.java`. Select **Yes** at the prompt to create a new file.
```cmd notepad src\main\java\com\microsoft\examples\CreateTable.java ```
-Then copy and paste the Java code below into the new file. Then close the file.
+Then copy and paste the following Java code into the new file. Then close the file.
```java package com.microsoft.examples;
This code is the `CreateTable` class, which creates a table named `people` and p
### Implement a SearchByEmail class
-Enter the command below to create and open a new file `SearchByEmail.java`. Select **Yes** at the prompt to create a new file.
+Enter the following command to create and open a new file `SearchByEmail.java`. Select **Yes** at the prompt to create a new file.
```cmd notepad src\main\java\com\microsoft\examples\SearchByEmail.java ```
-Then copy and paste the Java code below into the new file. Then close the file.
+Then copy and paste the following Java code into the new file. Then close the file.
```java package com.microsoft.examples;
The `SearchByEmail` class can be used to query for rows by email address. Becaus
### Implement a DeleteTable class
-Enter the command below to create and open a new file `DeleteTable.java`. Select **Yes** at the prompt to create a new file.
+Enter the following command to create and open a new file `DeleteTable.java`. Select **Yes** at the prompt to create a new file.
```cmd notepad src\main\java\com\microsoft\examples\DeleteTable.java ```
-Then copy and paste the Java code below into the new file. Then close the file.
+Then copy and paste the following Java code into the new file. Then close the file.
```java package com.microsoft.examples;
The following steps use the Azure PowerShell [AZ module](/powershell/azure/new-a
2. Save the `hbase-runner.psm1` file in the `hbaseapp` directory.
-3. Register the modules with Azure PowerShell. Open a new Azure PowerShell window and edit the command below by replacing `CLUSTERNAME` with the name of your cluster. Then enter the following commands:
+3. Register the modules with Azure PowerShell. Open a new Azure PowerShell window and edit the following command by replacing `CLUSTERNAME` with the name of your cluster. Then enter the following commands:
```powershell cd C:\HDI\hbaseapp
hdinsight Hdinsight Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-known-issues.md
Title: Azure HDInsight known issues
description: Track known issues and the ETA for the fix in Azure HDInsight Previously updated : 10/04/2023 Last updated : 10/13/2023 # Azure HDInsight known issues
This page lists known issues for the Azure HDInsight service. Before submitting
For service level outages or degradation notifications, check the [Azure service health status](https://azure.status.microsoft/status) page.
-## Currently active known issues
+Azure HDInsight has the following known issues:
-Select the **Title** to view more information about that specific known issue.
+| HDInsight component | Issue description |
+|-|-|
+| Kafka | [Kafka 2.4.1 has validation error in ARM templates](#kafka-241-has-validation-error-in-arm-templates) |
+| Spark | [Conda version regression in recent HDInsight release](#conda-version-regression-in-recent-hdinsight-release)|
+| Platform | [Cluster reliability issue](#cluster-reliability-issue) observed with Azure HDInsight clusters using images older than March 2022|
+
+## Known issues summary
+
+### Kafka 2.4.1 has validation error in ARM templates
+**Issue published date**: October 13, 2023
+
+When submitting cluster creation requests using ARM templates, Runbooks, PowerShell, Azure CLI, and other automation tools, you might receive a BadRequest error message if you specify clusterType="Kafka", HDI version = "5.0" and Kafka version = "2.4.1".
+
+#### Troubleshooting steps
+
+When using [templates or automation tools](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#cluster-setup-methods) to create HDInsight Kafka clusters, choose componentVersion = "2.4". This enables you to successfully create a Kafka 2.4.1 cluster in HDInsight 5.0.
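As an illustration, here's a hedged Azure PowerShell sketch of the same workaround; the cluster name, credentials, storage values, and node counts are placeholders, not values from this article.

```azurepowershell
# A sketch only; every name, credential, and size below is a placeholder.
# Pin the Kafka component to version "2.4" so HDInsight 5.0 validation accepts the request.
$config = New-AzHDInsightClusterConfig |
    Add-AzHDInsightComponentVersion -ComponentName "Kafka" -ComponentVersion "2.4"

New-AzHDInsightCluster -Config $config `
    -ClusterName "<cluster-name>" `
    -ResourceGroupName "<resource-group-name>" `
    -Location "<region>" `
    -ClusterType Kafka `
    -Version "5.0" `
    -ClusterSizeInNodes 4 `
    -DisksPerWorkerNode 2 `
    -HttpCredential (Get-Credential -Message "Cluster login") `
    -SshCredential (Get-Credential -Message "SSH user") `
    -DefaultStorageAccountName "<storage-account>.blob.core.windows.net" `
    -DefaultStorageAccountKey "<storage-account-key>" `
    -DefaultStorageContainer "<container-name>"
```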
+
+#### Resources
+
+- [Create HDInsight clusters using automation](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#cluster-setup-methods)
+- [Supported HDInsight versions](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions)
+- [HDInsight Kafka cluster](/azure/hdinsight/kafka/apache-kafka-introduction)
+
+### Conda version regression in recent HDInsight release
+**Issue published date**: October 13, 2023
+
+In the latest Azure HDInsight release, the conda version was mistakenly downgraded to version 4.2.9. This regression will be fixed in an upcoming release, but currently it can impact Spark job execution and result in script action failures. Conda 4.3.30 is the expected version in 5.0 and 5.1 clusters, so follow these steps to mitigate the issue.
+
+
+#### Recommended Steps
+
+1. SSH to any VM in the cluster.
+2. Switch to the root user: `sudo su`
+3. Check the conda version: `/usr/bin/anaconda/bin/conda info`
+4. If the version is 4.2.9, run the following [script action](/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#script-action-to-a-running-cluster) on all nodes to upgrade the cluster to conda version 4.3.30 (see the PowerShell sketch after this list):
+
+ `https://hdiconfigactions2.blob.core.windows.net/hdi-sre-workspace/conda_update_4_3_30_patch.sh`
+
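If you'd rather drive step 4 from Azure PowerShell than the portal, the following sketch submits the same script action. The cluster name is a placeholder, and the node-type list assumes the patch should reach head, worker, and ZooKeeper nodes.

```azurepowershell
# A sketch only; the cluster name is a placeholder. The URI is the patch script from step 4.
$scriptUri = "https://hdiconfigactions2.blob.core.windows.net/hdi-sre-workspace/conda_update_4_3_30_patch.sh"

# Submit the script action to head, worker, and ZooKeeper nodes so every node is patched.
Submit-AzHDInsightScriptAction -ClusterName "<cluster-name>" `
    -Name "conda-update-4-3-30" `
    -Uri $scriptUri `
    -NodeTypes HeadNode, WorkerNode, ZookeeperNode `
    -PersistOnSuccess
```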
+#### Recommended Documents
+
+- [Script action to a running cluster](/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#script-action-to-a-running-cluster)
+- [Safely manage Python environment on Azure HDInsight using Script Action](/azure/hdinsight/spark/apache-spark-python-package-installation)
+- [Supported HDInsight versions](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions)
+
+### Cluster reliability issue
+**Issue published date**: October 13, 2023
+
+As part of the proactive reliability management of Azure HDInsight, we recently came across a potential reliability issue on HDInsight clusters that use images dated February 2022 or older.
+
+#### Issue Background
+
+In HDInsight images dated prior to March 2022, a known bug was discovered on one particular AzLinux build. The `waagent`, a lightweight process that manages virtual machines, was unstable and resulted in VM outages. HDInsight clusters that consumed the AzLinux build have experienced service outages, job failures, and adverse effects on features like IPSec and Autoscale.
+
+#### Required action
+
+If your cluster was created prior to February 2022, we advise rebuilding it with the latest HDInsight image by November 10, 2023. Cluster images dated prior to March 2022 will not be supported beyond this date. These images will not receive security updates, bug fixes, or patches, leaving them highly susceptible to vulnerabilities.
+
+> [!IMPORTANT]
+> It's recommended to keep your clusters updated to the latest HDInsight version on a regular basis. Using clusters based on the latest HDInsight image ensures that they have the latest operating system patches, security patches, bug fixes, and library versions, and it minimizes risk and potential security vulnerabilities.
+>
+
+#### FAQ
+
+##### What happens if a VM outage occurs in HDInsight clusters that use these impacted images?
+
+Recovery of such virtual machines isn't a straightforward restart; it can result in several hours of outage and requires manual intervention from the Microsoft support team.
+
+##### Is this issue fixed in the latest HDInsight images?
+
+Yes, this issue is fixed in HDInsight images dated March 1, 2022, or later. We advise moving to the latest stable version to ensure the SLA and service reliability.
+
+##### How do we determine the date of the HDInsight image our clusters are built on?
+
+The last 10 digits in your image version indicate the date and time of the HDInsight image. For example, the cluster image version "5.0.3000.1.2208310943" indicates that the image is dated August 31, 2022. Learn how to [verify your HDInsight image version](/azure/hdinsight/view-hindsight-cluster-image-version).
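As a quick illustration, the timestamp can be pulled out of the version string in a shell; the version string here is only the example from above:

```bash
# The last 10 digits of the image version encode the date and time as yyMMddHHmm.
IMAGE_VERSION="5.0.3000.1.2208310943"
STAMP="${IMAGE_VERSION##*.}"          # 2208310943
echo "20${STAMP:0:2}-${STAMP:2:2}-${STAMP:4:2} ${STAMP:6:2}:${STAMP:8:2}"   # 2022-08-31 09:43
```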
+
+#### Resources
+
+- [Create HDInsight clusters using automation](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#cluster-setup-methods)
+- [Supported HDInsight versions](/azure/hdinsight/hdinsight-component-versioning#supported-hdinsight-versions)
+- [HDInsight Kafka cluster](/azure/hdinsight/kafka/apache-kafka-introduction)
+- [Verify your HDInsight image version](/azure/hdinsight/view-hindsight-cluster-image-version)
-| Issue ID | Area |Title | Issue publish date|
-|||-|-|
-| 450 | Cluster Creation | [Linux VM agent 9.9.9.9](https://github.com/Azure/SelfHelpContent/blob/master/articles/microsoft.hdinsight/asc-hdinsight-vmagent9999.md)| October 12, 2023 |
-| 451 | Spark Library management | [Conda version regression in recent HDInsight release](https://github.com/Azure/SelfHelpContent/blob/master/articles/microsoft.hdinsight/asc-hdinsight-condaregressionversion429.md)| October 12, 2023 |
-| 452 | Cluster Creation | [ARM templates not accepting kafka version 2.4.1](https://github.com/Azure/SelfHelpContent/blob/master/articles/microsoft.hdinsight/asc-hdinsight-kafkaversion241.md)| October 12, 2023 |
## Recently closed known issues

Select the **Title** to view more information about that specific known issue. Fixed issues are removed after 60 days.
-| Issue ID | Area |Title | Issue publish date| Status|
+| Issue ID | Area | Title | Issue publish date | Status |
|||-|-|-|
|NA|NA|NA|NA|NA|
hdinsight Apache Interactive Query Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-interactive-query-get-started.md
description: An introduction to Interactive Query, also called Apache Hive LLAP,
Previously updated : 08/23/2022 Last updated : 10/16/2023 #Customer intent: As a developer new to Interactive Query in Azure HDInsight, I want to have a basic understanding of Interactive Query so I can decide if I want to use it rather than build my own cluster.
You can access the Hive service in the Interactive Query cluster only via Apache
For information about creating a HDInsight cluster, see [Create Apache Hadoop clusters in HDInsight](../hdinsight-hadoop-provision-linux-clusters.md). Choose the Interactive Query cluster type. > [!IMPORTANT]
-> The minimum headnode size for Interactive Query clusters is Standard_D13_v2. See the [Azure VM Sizing Chart](../../cloud-services/cloud-services-sizes-specs.md#dv2-series)for more information.
+> The minimum headnode size for Interactive Query clusters is Standard_D13_v2. For more information, see the [Azure Virtual Machine Sizing Chart](../../cloud-services/cloud-services-sizes-specs.md#dv2-series).
## Execute Apache Hive queries from Interactive Query
hdinsight Optimize Hbase Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/optimize-hbase-ambari.md
Title: Optimize Apache HBase with Apache Ambari in Azure HDInsight
description: Use the Apache Ambari web UI to configure and optimize Apache HBase. Previously updated : 09/19/2022 Last updated : 10/16/2023 # Optimize Apache HBase with Apache Ambari in Azure HDInsight
The following configurations are important to improve the performance of read-he
### Block cache size
-The block cache is the read cache. Its size is controlled by the `hfile.block.cache.size` parameter. The default value is 0.4, which is 40 percent of the total region server memory. The larger the block cache size, the faster the random reads will be.
+The block cache is the read cache. The `hfile.block.cache.size` parameter controls the block cache size. The default value is 0.4, which is 40 percent of the total region server memory. The larger the block cache size, the faster random reads will be.
1. To modify this parameter, navigate to the **Settings** tab in the HBase **Configs** tab, and then locate **% of RegionServer Allocated to Read Buffers**.
The block cache is the read cache. Its size is controlled by the `hfile.block.ca
### Memstore size
-All edits are stored in the memory buffer, called a *Memstore*. This buffer increases the total amount of data that can be written to disk in a single operation. It also speeds access to the recent edits. The Memstore size is defined by the following two parameters:
+All edits are stored in the memory buffer, called a *Memstore*. This buffer increases the total amount of data that can be written to disk in a single operation. It also speeds access to the recent edits. The following two parameters define the Memstore size:
* `hbase.regionserver.global.memstore.UpperLimit`: Defines the maximum percentage of the region server that Memstore combined can use.
To optimize for random reads, you can reduce the Memstore upper and lower limits
### Number of rows fetched when scanning from disk
-The `hbase.client.scanner.caching` setting defines the number of rows read from disk when the `next` method is called on a scanner. The default value is 100. The higher the number, the fewer the remote calls made from the client to the region server, resulting in faster scans. However, this setting will also increase memory pressure on the client.
+The `hbase.client.scanner.caching` setting defines the number of rows read from disk when the `next` method is called on a scanner. The default value is 100. The higher the number, the fewer the remote calls made from the client to the region server, resulting in faster scans. However, this setting also increases memory pressure on the client.
:::image type="content" source="./media/optimize-hbase-ambari/hbase-num-rows-fetched.png" alt-text="Apache HBase number of rows fetched" border="true":::
The following configurations are important to improve the performance of write-h
### Maximum region file size
-HBase stores data in an internal file format, called *HFile*. The property `hbase.hregion.max.filesize` defines the size of a single HFile for a region. A region is split into two regions if the sum of all HFiles in a region is greater than this setting.
+HBase stores data in an internal file format, called `HFile`. The property `hbase.hregion.max.filesize` defines the size of a single `HFile` for a region. A region is split into two regions if the combined size of all `HFiles` in the region is greater than this setting.
:::image type="content" source="./media/optimize-hbase-ambari/hbase-hregion-max-filesize.png" alt-text="`Apache HBase HRegion max filesize`" border="true":::
The larger the region file size, the smaller the number of splits. You can incre
* The property `hbase.hregion.memstore.flush.size` defines the size at which Memstore is flushed to disk. The default size is 128 MB.
-* The HBase region block multiplier is defined by `hbase.hregion.memstore.block.multiplier`. The default value is 4. The maximum allowed is 8.
+* The `hbase.hregion.memstore.block.multiplier` property defines the HBase region block multiplier. The default value is 4. The maximum allowed value is 8.
* HBase blocks updates if the Memstore reaches (`hbase.hregion.memstore.flush.size` * `hbase.hregion.memstore.block.multiplier`) bytes. With the default values, that's 128 MB * 4 = 512 MB.
The larger the region file size, the smaller the number of splits. You can incre
## Define Memstore size
-Memstore size is defined by the `hbase.regionserver.global.memstore.upperLimit` and `hbase.regionserver.global.memstore.lowerLimit` parameters. Setting these values equal to each other reduces pauses during writes (also causing more frequent flushing) and results in increased write performance.
+The `hbase.regionserver.global.memstore.upperLimit` and `hbase.regionserver.global.memstore.lowerLimit` parameters define the Memstore size. Setting these values equal to each other reduces pauses during writes (also causing more frequent flushing) and results in increased write performance.
## Set Memstore local allocation buffer
-Memstore local allocation buffer usage is determined by the property `hbase.hregion.memstore.mslab.enabled`. When enabled (true), this setting prevents heap fragmentation during heavy write operation. The default value is true.
+The property `hbase.hregion.memstore.mslab.enabled` defines Memstore local allocation buffer usage. When enabled (true), this setting prevents heap fragmentation during heavy write operations. The default value is true.
:::image type="content" source="./media/optimize-hbase-ambari/hbase-hregion-memstore-mslab-enabled.png" alt-text="hbase.hregion.memstore.mslab.enabled" border="true":::
hdinsight Apache Spark Run Machine Learning Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-run-machine-learning-automl.md
Title: Run Azure Machine Learning workloads on Apache Spark in HDInsight
description: Learn how to run Azure Machine Learning workloads with automated machine learning (AutoML) on Apache Spark in Azure HDInsight. Previously updated : 09/15/2022 Last updated : 10/16/2023 # Run Azure Machine Learning workloads with automated machine learning on Apache Spark in HDInsight
-Azure Machine Learning simplifies and accelerates the building, training, and deployment of machine learning models. In automated machine learning (AutoML), you start with training data that has a defined target feature. Iterate through combinations of algorithms and feature selections automatically select the best model for your data based on the training scores. HDInsight allows customers to provision clusters with hundreds of nodes. AutoML running on Spark in an HDInsight cluster allows users to use compute capacity across these nodes to run training jobs in a scale-out fashion, and to run multiple training jobs in parallel. This allows users to run AutoML experiments while sharing the compute with their other big data workloads.
+Azure Machine Learning simplifies and accelerates the building, training, and deployment of machine learning models. In automated machine learning (AutoML), you start with training data that has a defined target feature. AutoML iterates through combinations of algorithms and feature selections, and automatically selects the best model for your data based on the training scores. HDInsight allows customers to provision clusters with hundreds of nodes. AutoML running on Spark in an HDInsight cluster allows users to use compute capacity across these nodes to run training jobs in a scale-out fashion, and to run multiple training jobs in parallel. It allows users to run AutoML experiments while sharing the compute with their other big data workloads.
## Install Azure Machine Learning on an HDInsight cluster
In the [automated machine learning configuration](/python/api/azureml-train-auto
## Next steps
-* For more information on using Azure ML Automated ML capabilities, see [New automated machine learning capabilities in Azure Machine Learning](https://azure.microsoft.com/blog/new-automated-machine-learning-capabilities-in-azure-machine-learning-service/)
+* For more information on using Azure Machine Learning Automated ML capabilities, see [New automated machine learning capabilities in Azure Machine Learning](https://azure.microsoft.com/blog/new-automated-machine-learning-capabilities-in-azure-machine-learning-service/)
* [AutoML project from Microsoft Research](https://www.microsoft.com/research/project/automl/)
hdinsight Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/transport-layer-security.md
description: Transport layer security (TLS) and secure sockets layer (SSL) are c
Previously updated : 09/29/2022 Last updated : 10/16/2023 # Transport layer security in Azure HDInsight Connections to the HDInsight cluster via the public cluster endpoint `https://CLUSTERNAME.azurehdinsight.net` are proxied through cluster gateway nodes. These connections are secured using a protocol called TLS. Enforcing higher versions of TLS on gateways improves the security for these connections.
-By default, Azure HDInsight clusters accept TLS 1.2 connections on public HTTPS endpoints. You can control the minimum TLS version supported on the gateway nodes during cluster creation using either the Azure portal, or a Resource Manager template. For the portal, select the TLS version from the **Security + networking** tab during cluster creation. For a Resource Manager template at deployment time, use the **minSupportedTlsVersion** property. For a sample template, see [HDInsight minimum TLS 1.2 Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-minimum-tls/azuredeploy.json). This property supports one value: "1.2", which correspond to TLS 1.2+.
+By default, Azure HDInsight clusters accept TLS 1.2 connections on public HTTPS endpoints. You can control the minimum TLS version supported on the gateway nodes during cluster creation using either the Azure portal, or a Resource Manager template. For the portal, select the TLS version from the **Security + networking** tab during cluster creation. For a Resource Manager template at deployment time, use the **minSupportedTlsVersion** property. For a sample template, see [HDInsight minimum TLS 1.2 Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-minimum-tls/azuredeploy.json). This property supports one value: "1.2", which corresponds to TLS 1.2+.
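The same setting can also be applied from the Azure CLI at creation time via the `--minimal-tls-version` parameter, which corresponds to the `minSupportedTlsVersion` template property. A minimal sketch with hypothetical resource names:

```azurecli
# Sketch: enforce TLS 1.2+ on the gateway at cluster creation time.
# Resource names are hypothetical; --minimal-tls-version maps to the
# minSupportedTlsVersion property in a Resource Manager template.
az hdinsight create \
    --name contoso-cluster \
    --resource-group contoso-rg \
    --type spark \
    --http-user admin \
    --http-password $CLUSTER_PASSWORD \
    --storage-account contosostorage \
    --minimal-tls-version 1.2
```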
## Next steps
hdinsight Use Pig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-pig.md
description: Learn how to use Pig with Apache Hadoop on HDInsight.
Previously updated : 09/23/2022 Last updated : 10/16/2023 # Use Apache Pig with Apache Hadoop on HDInsight
The following image shows a summary of what each transformation does to the data
## <a id="run"></a>Run the Pig Latin job
-HDInsight can run Pig Latin jobs by using a variety of methods. Use the following table to decide which method is right for you, then follow the link for a walkthrough.
+HDInsight can run Pig Latin jobs by using various methods. Use the following table to decide which method is right for you, then follow the link for a walkthrough.
## Pig and SQL Server Integration Services
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
At this time, the Azure API for FHIR service returns the following fields in the
|FhirResourceType|String|The resource type for which the operation was executed
|LogCategory|String|The log category (we're currently returning 'AuditLogs' LogCategory)
|Location|String|The location of the server that processed the request (for example, South Central US)
-|OperationDuration|Int|The time it took to complete this request in seconds
+|OperationDuration|Int|The time it took to complete this request, in seconds. Note: This value is always set to 0 due to a known issue.
|OperationName|String| Describes the type of operation (for example, update, search-type) |RequestUri|String|The request URI |ResultType|String|The available values currently are **Started**, **Succeeded**, or **Failed**
In this article, you learned how to enable Audit Logs for Azure API for FHIR. Fo
>[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Previously updated : 10/11/2023 Last updated : 10/13/2023
healthcare-apis Dicom Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md
Title: Overview of DICOM change feed - Azure Health Data Services
-description: In this article, you'll learn the concepts of DICOM change feed.
+description: In this article, you learn the concepts of DICOM change feed.
Make sure to specify the version as part of the URL when making requests. More i
## API Design
-The API exposes two `GET` endpoints for interacting with the change feed. A typical flow for consuming the change feed is [provided below](#usage).
+The API exposes two `GET` endpoints for interacting with the change feed. A typical flow for consuming the change feed is provided in the [Usage](#usage) section.
Verb | Route | Returns | Description : | :-- | :- | :
Content-Type: application/json
Name | Type | Description | Default | Min | Max | :-- | :- | :- | : | :-- | :-- | offset | long | The exclusive starting sequence number for events | `0` | `0` | |
-limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned will be 15. | `10` | `1` | `100` |
+limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned is 15. | `10` | `1` | `100` |
includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |

## Latest change feed
includeMetadata | bool | Indicates whether or not to include the metadata | `tru
#### Version 2

1. An application regularly queries the change feed on some time interval
- * For example, if querying every hour, a query for the change feed may look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
- * If starting from the beginning, the change feed query may omit the `startTime` to read all of the changes up to, but excluding, the `endTime`
- * E.g. `/changefeed?endTime=2023-05-10T17:00:00Z`
-2. Based on the `limit` (if provided), an application continues to query for additional pages of change events if the number of returned events is equal to the `limit` (or default) by updating the offset on each subsequent query
- * For example, if the `limit` is `100`, and 100 events are returned, then the subsequent query would include `offset=100` to fetch the next "page" of results. The below queries demonstrate the pattern:
+ * For example, if querying every hour, a query for the change feed might look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If starting from the beginning, the change feed query might omit the `startTime` to read all of the changes up to, but excluding, the `endTime`
+ * For example: `/changefeed?endTime=2023-05-10T17:00:00Z`
+2. Based on the `limit` (if provided), an application continues to query for more pages of change events if the number of returned events is equal to the `limit` (or default) by updating the offset on each subsequent query
+ * For example, if the `limit` is `100`, and 100 events are returned, then the subsequent query would include `offset=100` to fetch the next "page" of results. The following queries demonstrate the pattern (a scripted version is sketched after this list):
 * `/changefeed?offset=0&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
 * `/changefeed?offset=100&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
 * `/changefeed?offset=200&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
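A minimal shell sketch of this paging pattern, assuming a hypothetical service URL, a `$token` variable that holds a valid access token, and a response body that is a JSON array of events:

```bash
# Page through one hour of change feed events, 100 at a time.
BASE="https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com/v2"
WINDOW="startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z"
offset=0
while : ; do
  page=$(curl -s -H "Authorization: Bearer $token" \
    "$BASE/changefeed?offset=$offset&limit=100&$WINDOW")
  count=$(echo "$page" | jq 'length')   # requires jq
  echo "fetched $count events at offset $offset"
  [ "$count" -lt 100 ] && break         # fewer than the limit means last page
  offset=$((offset + 100))
done
```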
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
Previously updated : 10/6/2023 Last updated : 10/13/2023 # Get an access token
-To use the DICOM service, users and applications need to prove their identity and permissions by getting an access token. An access token is a string that identifies a user or an application and grants them permission to access a resource. Using access tokens enhances security by preventing unauthorized access and reducing the need for repeated authentication.
+To use the DICOM&reg; service, users and applications need to prove their identity and permissions by getting an access token. An access token is a string that identifies a user or an application and grants them permission to access a resource. Using access tokens enhances security by preventing unauthorized access and reducing the need for repeated authentication.
## Use the Azure command-line interface
You can use a token with the DICOM service [using cURL](dicomweb-standard-apis-c
Try It curl -X GET --header "Authorization: Bearer $token" https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version of REST API>/changefeed ```+
healthcare-apis Get Started With Analytics Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md
Previously updated : 07/14/2023 Last updated : 10/13/2023 # Get Started using DICOM Data in Analytics Workloads
-This article details how to get started using DICOM data in analytics workloads with Azure Data Factory and Microsoft Fabric.
+This article details how to get started using DICOM&reg; data in analytics workloads with Azure Data Factory and Microsoft Fabric.
## Prerequisites

Before getting started, ensure you have done the following steps:
From the Azure portal, open the Azure Data Factory instance and select **Launch
Azure Data Factory pipelines read from _data sources_ and write to _data sinks_, typically other Azure services. These connections to other services are managed as _linked services_. The pipeline in this example will read data from a DICOM service and write its output to a storage account, so a linked service must be created for both. #### Create linked service for the DICOM service
-1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
+1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
:::image type="content" source="media/data-factory-linked-services.png" alt-text="Screenshot of the Linked services screen in Azure Data Factory." lightbox="media/data-factory-linked-services.png":::
-2. On the New linked service panel, search for "REST". Select the **REST** tile and then **Continue**.
+2. On the New linked service panel, search for "REST". Select the **REST** tile and then **Continue**.
:::image type="content" source="media/data-factory-rest.png" alt-text="Screenshot of the New Linked services panel with REST tile selected." lightbox="media/data-factory-rest.png":::
Azure Data Factory pipelines read from _data sources_ and write to _data sinks_,
:::image type="content" source="media/data-factory-linked-service-dicom.png" alt-text="Screenshot of the New linked service panel with DICOM service details." lightbox="media/data-factory-linked-service-dicom.png":::
-4. In the **Base URL** field, enter the Service URL for your DICOM service. For example, a DICOM service named `contosoclinic` in the `contosohealth` workspace will have the Service URL `https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com`.
+4. In the **Base URL** field, enter the Service URL for your DICOM service. For example, a DICOM service named `contosoclinic` in the `contosohealth` workspace will have the Service URL `https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com`.
5. For Authentication type, select **System Assigned Managed Identity**.
-6. For **AAD resource**, enter `https://dicom.healthcareapis.azure.com`. Note, this URL is the same for all DICOM service instances.
+6. For **AAD resource**, enter `https://dicom.healthcareapis.azure.com`. This URL is the same for all DICOM service instances.
7. After populating the required fields, select **Test connection** to ensure the identity's roles are correctly configured. 8. When the connection test is successful, select **Create**. #### Create linked service for Azure Data Lake Storage Gen2
-1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
+1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
-2. On the New linked service panel, search for "Azure Data Lake Storage Gen2". Select the **Azure Data Lake Storage Gen2** tile and then **Continue**.
+2. On the New linked service panel, search for "Azure Data Lake Storage Gen2". Select the **Azure Data Lake Storage Gen2** tile and then **Continue**.
:::image type="content" source="media/data-factory-adls.png" alt-text="Screenshot of the New Linked services panel with Azure Data Lake Storage Gen2 tile selected." lightbox="media/data-factory-adls.png":::
Azure Data Factory pipelines read from _data sources_ and write to _data sinks_,
### Create a pipeline for DICOM data Azure Data Factory pipelines are a collection of _activities_ that perform a task, like copying DICOM metadata to Delta tables. This section details the creation of a pipeline that regularly synchronizes DICOM data to Delta tables as data is added to, updated in, and deleted from a DICOM service.
-1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the plus (+) to add a new resource. Select **Pipeline** and then **Template gallery** from the menu.
+1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the plus (+) to add a new resource. Select **Pipeline** and then **Template gallery** from the menu.
:::image type="content" source="media/data-factory-create-pipeline-menu.png" alt-text="Screenshot of the New Pipeline from Template Gallery." lightbox="media/data-factory-create-pipeline-menu.png":::
-2. In the Template gallery, search for "DICOM". Select the **Copy DICOM Metadata Changes to ADLS Gen2 in Delta Format** tile and then **Continue**.
+2. In the Template gallery, search for "DICOM". Select the **Copy DICOM Metadata Changes to ADLS Gen2 in Delta Format** tile and then **Continue**.
:::image type="content" source="media/data-factory-gallery-dicom.png" alt-text="Screenshot of the DICOM template selected in Template gallery." lightbox="media/data-factory-gallery-dicom.png":::
Azure Data Factory pipelines are a collection of _activities_ that perform a tas
4. Select **Use this template** to create the new pipeline. ## Scheduling a pipeline
-Pipelines are scheduled by _triggers_. There are different types of triggers including _schedule triggers_, which allows pipelines to be triggered on a wall-clock schedule, and _manual triggers_, which triggers pipelines on-demand. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see the [pipeline execution and triggers article](../../data-factory/concepts-pipeline-execution-triggers.md).
+Pipelines are scheduled by _triggers_. There are different types of triggers including _schedule triggers_, which allow pipelines to be triggered on a wall-clock schedule, and _manual triggers_, which trigger pipelines on demand. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see the [pipeline execution and triggers article](../../data-factory/concepts-pipeline-execution-triggers.md).
### Create a new tumbling window trigger
-1. Select **Author** from the navigation menu. Select the pipeline for the DICOM service and select **Add trigger** and **New/Edit** from the menu bar.
+1. Select **Author** from the navigation menu. Select the pipeline for the DICOM service and select **Add trigger** and **New/Edit** from the menu bar.
:::image type="content" source="media/data-factory-add-trigger.png" alt-text="Screenshot of the pipeline view of Data Factory Studio with the Add trigger button on the menu bar selected." lightbox="media/data-factory-add-trigger.png":::
Pipelines are scheduled by _triggers_. There are different types of triggers inc
5. To configure a pipeline that runs hourly, set the recurrence to **1 Hour**.
-6. Expand the **Advanced** section and enter a **Delay** of **15 minutes**. This will allow any pending operations at the end of an hour to complete before processing.
+6. Expand the **Advanced** section and enter a **Delay** of **15 minutes**. This will allow any pending operations at the end of an hour to complete before processing.
7. Set the **Max concurrency** to **1** to ensure consistency across tables.
-8. Select **Ok** to continue configuring the trigger run parameters.
+8. Select **Ok** to continue configuring the trigger run parameters.
### Configure trigger run parameters Triggers not only define when to run a pipeline, they also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines a few parameters detailed in the table below. Note, if no value is supplied during configuration, the listed default value will be used for each parameter.
Triggers not only define when to run a pipeline, they also include [parameters](
> > Learn more about [trigger types](../../data-factory/concepts-pipeline-execution-triggers.md#trigger-type-comparison).
-4. Select **Save** to create the new trigger. Be sure to select **Publish** on the menu bar to begin your trigger running on the defined schedule.
+4. Select **Save** to create the new trigger. Be sure to select **Publish** on the menu bar to begin your trigger running on the defined schedule.
:::image type="content" source="media/data-factory-publish.png" alt-text="Screenshow showing the Publish button on the main menu bar." lightbox="media/data-factory-publish.png":::
-After the trigger is published, it can be triggered manually using the **Trigger now** option. If the start time was set for a value in the past, the pipeline will start immediately.
+After the trigger is published, it can be triggered manually using the **Trigger now** option. If the start time was set for a value in the past, the pipeline will start immediately.
## Monitoring pipeline runs Trigger runs and their associated pipeline runs can be monitored in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose.
Trigger runs and their associated pipeline runs can be monitored in the **Monito
[Microsoft Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of [Microsoft Fabric Lakehouse](/fabric/data-engineering/lakehouse-overview), data in OneLake can be managed, structured, and analyzed in a single location. Any data outside of OneLake, written to Azure Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of FabricΓÇÖs suite of tools. ### Creating shortcuts
-1. Navigate to the lakehouse created in the prerequisites. In the **Explorer** view, select the triple-dot menu (...) next to the **Tables** folder.
+1. Navigate to the lakehouse created in the prerequisites. In the **Explorer** view, select the triple-dot menu (...) next to the **Tables** folder.
2. Select **New shortcut** to create a new shortcut to the storage account that contains the DICOM analytics data.
Trigger runs and their associated pipeline runs can be monitored in the **Monito
4. Under **Connection settings**, enter the **URL** used in the [Linked Services](#create-linked-service-for-azure-data-lake-storage-gen2) section above. 5. Select an existing connection or create a new connection, selecting the Authentication kind you want to use.
Trigger runs and their associated pipeline runs can be monitored in the **Monito
6. Select **Next**.
-7. Enter a **Shortcut Name** that represents the data created by the Azure Data Factory pipeline. For example, for the `instance` Delta table, the shortcut name should probably be **instance**.
+7. Enter a **Shortcut Name** that represents the data created by the Azure Data Factory pipeline. For example, for the `instance` Delta table, the shortcut name should probably be **instance**.
8. Enter the **Sub Path** that matches the `ContainerName` parameter from [run parameters](#configure-trigger-run-parameters) configuration and the name of the table for the shortcut. For example, use "/dicom/instance" for the Delta table with the path `instance` in the `dicom` container.
After the shortcuts have been created, expanding a table will show the names and
:::image type="content" source="media/fabric-shortcut-schema.png" alt-text="Screenshot of the table columns listed in the explorer view." lightbox="media/fabric-shortcut-schema.png"::: ### Running notebooks
-Once the tables have been created in the lakehouse, they can be queried from [Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). Notebooks may be created directly from the lakehouse by selecting **Open Notebook** from the menu bar.
+Once the tables have been created in the lakehouse, they can be queried from [Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). Notebooks may be created directly from the lakehouse by selecting **Open Notebook** from the menu bar.
On the notebook page, the contents of the lakehouse can still be viewed on the left-hand side, including the newly added tables. At the top of the page, select the language for the notebook (the language may also be configured for individual cells). The following example will use Spark SQL.
This query will select all of the contents from the `instance` table. When ready
:::image type="content" source="media/fabric-notebook.png" alt-text="Screenshot of a notebook with sample Spark SQL query." lightbox="media/fabric-notebook.png":::
-After a few seconds, the results of the query should appear in a table beneath the cell like (the time may be longer if this is the first Spark query in the session as the Spark context will need to be initialized).
+After a few seconds, the results of the query should appear in a table beneath the cell (the time might be longer if this is the first Spark query in the session because the Spark context needs to be initialized).
:::image type="content" source="media/fabric-notebook-results.png" alt-text="Screenshot of a notebook with sample Spark SQL query and results." lightbox="media/fabric-notebook-results.png":::
Learn more about Azure Data Factory pipelines:
* [Pipelines and activities in Azure Data Factory](../../data-factory/concepts-pipelines-activities.md)
-Learn more about using Microsoft Fabric notebooks:
+* [How to use Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook)
-* [How to use Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook)
+
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
# Get started with the DICOM service
-This article outlines the basic steps to get started with the DICOM service in [Azure Health Data Services](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the DICOM&reg; service in [Azure Health Data Services](../healthcare-apis-overview.md).
-As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and to deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts. You'll need a workspace to provision a DICOM service. A FHIR service is optional and is needed only if you connect imaging data with electronic health records of the patient via DICOMcast.
+As a prerequisite, you need an Azure subscription and permissions to create Azure resource groups and to deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts. You need a workspace to provision a DICOM service. A FHIR&reg; service is optional and is needed only if you connect imaging data with electronic health records of the patient via DICOMcast.
[![Screenshot of Get Started with DICOM diagram.](media/get-started-with-dicom.png)](media/get-started-with-dicom.png#lightbox)
-## Create a workspace in your Azure Subscription
+## Create a workspace in your Azure subscription
You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) a
## Access the DICOM service
-The DICOM service is secured by Microsoft Entra ID that can't be disabled. To access the service API, you must create a client application that's also referred to as a service principal in Microsoft Entra ID and grant it with the right permissions.
+The DICOM service is secured by Microsoft Entra ID, which can't be disabled. To access the service API, you must create a client application (also referred to as a service principal) in Microsoft Entra ID and grant it the right permissions.
### Register a client application
You can grant access permissions or assign roles from the [Azure portal](../conf
### Perform create, read, update, and delete (CRUD) transactions
-You can perform create, read (search), update and delete (CRUD) transactions against the DICOM service in your applications or by using tools such as Postman, REST Client, cURL, and Python. Because the DICOM service is secured by default, you must obtain an access token and include it in your transaction request.
+You can perform create, read (search), update, and delete (CRUD) transactions against the DICOM service in your applications or by using tools such as Postman, REST Client, cURL, and Python. Because the DICOM service is secured by default, you must obtain an access token and include it in your transaction request.
#### Get an access token
DICOMcast is currently available as an [open source](https://github.com/microsof
## Next steps
-This article described the basic steps to get started using the DICOM service. For information about deploying the DICOM service in the workspace, see
+[Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md)
->[!div class="nextstepaction"]
->[Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md
description: The DICOM service is a cloud-based solution for storing, managing,
Previously updated : 10/06/2023 Last updated : 10/13/2023 # What is the DICOM service?
-The DICOM service is a cloud-based solution that enables healthcare organizations to store, manage, and exchange medical imaging data securely and efficiently with any DICOMweb&trade;-enabled systems or applications. The DICOM service is part of [Azure Health Data Services](../healthcare-apis-overview.md).
+The DICOM&reg; service is a cloud-based solution that enables healthcare organizations to store, manage, and exchange medical imaging data securely and efficiently with any DICOMweb-enabled systems or applications. The DICOM service is part of [Azure Health Data Services](../healthcare-apis-overview.md).
The DICOM service offers many benefits, including:
The DICOM service offers many benefits, including:
## Use imaging data to enable healthcare scenarios
-To effectively treat patients, research treatments, diagnose illnesses, or get an overview of a patient's health history, organizations need to integrate data across several sources. The DICOM service enables imaging data to persist securely in the Microsoft cloud and allows it to reside with electronic health records (EHR) and healthcare device (IoT) data in the same Azure subscription.
+To effectively treat patients, research treatments, diagnose illnesses, or get an overview of a patient's health history, organizations need to integrate data across several sources. The DICOM service enables imaging data to persist in the Microsoft cloud and allows it to reside with electronic health records (EHR) and healthcare device (IoT) data in the same Azure subscription.
-FHIR supports integration of other types of data directly, or through references. With the DICOM service, organizations are able to store references to imaging data in FHIR and enable queries that cross clinical and imaging datasets. This capability enables organizations to deliver better healthcare. For example:
+FHIR&reg; supports integration of other types of data directly, or through references. With the DICOM service, organizations are able to store references to imaging data in FHIR and enable queries that cross clinical and imaging datasets. This capability enables organizations to deliver better healthcare. For example:
-- **Image back up**. Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, or organizations can use the DICOM service to back up their images with unlimited storage and access. There's no need to deidentify PHI data because the service is validated for PHI compliance.
+- **Image backup**. Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, or organizations can use the DICOM service to back up their images with unlimited storage and access. There's no need to deidentify PHI data because the service is validated for PHI compliance.
- **Image exchange and collaboration**. Share an image, a subset of images, or an entire image library instantly with or without related EHR data.
Your organization needs an Azure subscription to configure and run the component
[Use DICOMweb standard APIs](dicomweb-standard-apis-with-dicom-services.md)
-> [!NOTE]
-> FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Previously updated : 02/15/2022 Last updated : 10/13/2023
-# Pull DICOM changes using the Change Feed
+# Pull DICOM changes using the change feed
-DICOM Change Feed offers customers the ability to go through the history of the DICOM service and act on the create and delete events in the service. This how-to guide describes how to consume Change Feed.
+The DICOM&reg; change feed offers customers the ability to go through the history of the DICOM service and act on the create and delete events in the service. This how-to guide describes how to consume the change feed.
The Change Feed is accessed using REST APIs. These APIs along with sample usage of Change Feed are documented in the [Overview of DICOM Change Feed](dicom-change-feed-overview.md). The version of the REST API should be explicitly specified in the request URL as called out in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
do
To view and access the **ChangeFeedRetrieveService.cs** code example, see [Consume Change Feed](https://github.com/microsoft/dicom-server/blob/main/converter/dicom-cast/src/Microsoft.Health.DicomCast.Core/Features/DicomWeb/Service/ChangeFeedRetrieveService.cs).
-## Next Steps
+## Next steps
-This how-to guide describes how to consume Change Feed. Change Feed allows you to monitor the history of the DICOM service. For information about the DICOM service, see
+For information, see the [DICOM service overview](dicom-services-overview.md).
->[!div class="nextstepaction"]
->[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
# DICOM service open-source projects
-This article describes our open-source projects on GitHub that provide source code and instructions to connect DICOM service with other tools, services, and products.
+This article describes our open-source projects on GitHub that provide source code and instructions to connect DICOM&reg; service with other tools, services, and products.
## DICOM service GitHub projects
This article describes our open-source projects on GitHub that provide source co
## Next steps
-For more information about using the DICOM service, see
+[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
-For more information about DICOM cast, see
-
->[!div class="nextstepaction"]
->[DICOM cast overview](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
At this time, the FHIR service returns the following fields in a diagnostic log:
|`FhirResourceType` | String| The resource type for which the operation was executed.| |`LogCategory` | String| The log category. (In this article, we're returning `AuditLogs`.)| |`Location` | String| The location of the server that processed the request. For example: `South Central US`.|
-|`OperationDuration` | Int| The time it took to complete this request, in seconds.|
+|`OperationDuration` | Int| The time it took to complete this request, in seconds. Note: This column value is always 0 due to a known issue.|
|`OperationName` | String| The type of operation. For example: `update` or `search-type`.| |`RequestUri` | String| The request URI.| |`ResultType` | String| The status of the log. Available values are `Started`, `Succeeded`, or `Failed`.|
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Previously updated : 09/01/2023 Last updated : 10/13/2023 # What is Azure Health Data Services?
-Azure Health Data Services is a cloud-based solution that helps you collect, store, and analyze health data from different sources and formats. It supports various healthcare standards, such as FHIR and DICOM, and converts data from legacy or proprietary device formats to FHIR.
+Azure Health Data Services is a cloud-based solution that helps you collect, store, and analyze health data from different sources and formats. It supports various healthcare standards, such as FHIR&reg; and DICOM&reg;, and converts data from legacy or proprietary device formats to FHIR.
Azure Health Data Services enables you to:
In addition, Azure Health Data Services has a business model and infrastructure
## Next steps
-To work with Azure Health Data Services, first you need to create an [Azure workspace](workspace-overview.md).
+[Workspace overview](workspace-overview.md)
-Follow the steps in this quickstart guide:
+[Create a workspace](healthcare-apis-quickstart.md)
-> [!div class="nextstepaction"]
-> [Create a workspace](healthcare-apis-quickstart.md)
---
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
key-vault Vs Key Vault Add Connected Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vs-key-vault-add-connected-service.md
- Title: Add Key Vault support to your ASP.NET project using Visual Studio - Azure Key Vault | Microsoft Docs
-description: Use this tutorial to help you learn how to add Key Vault support to an ASP.NET or ASP.NET Core web application.
------ Previously updated : 4/28/2023--
-# Add Key Vault to your web application by using Visual Studio Connected Services
-
-In this tutorial, you will learn how to easily add everything you need to start using Azure Key Vault to manage your secrets for web projects in Visual Studio, whether you are using ASP.NET Core or any type of ASP.NET project. By using the Connected Services feature in Visual Studio, you can have Visual Studio automatically add all the NuGet packages and configuration settings you need to connect to Key Vault in Azure.
-
-For details on the changes that Connected Services makes in your project to enable Key Vault, see [Key Vault Connected Service - What happened to my ASP.NET project](#how-your-aspnet-framework-project-is-modified) or [Key Vault Connected Service - What happened to my ASP.NET Core project](#how-your-aspnet-core-project-is-modified).
-
-## Prerequisites
--- **An Azure subscription**. If you don't have a subscription, sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).-- **Visual Studio 2019 version 16.3** or later [Download it now](https://aka.ms/vsdownload?utm_source=mscom&utm_campaign=msdocs).-
-## Add Key Vault support to your project
-
-Before you begin, make sure that you're signed into Visual Studio. Sign in with the same account that you use for your Azure subscription. Then open an ASP.NET 4.7.1 or later, or ASP.NET Core web project, and do the following steps. The steps shown are for Visual Studio 2022 version 17.4. The flow might be slightly different for other versions of Visual Studio.
-
-1. In **Solution Explorer**, right-click the project that you want to add the Key Vault support to, and choose **Add** > **Connected Service**. Under **Service Dependencies**, choose the **+** icon.
- The Connected Service page appears with services you can add to your project.
-1. In the menu of available services, choose **Azure Key Vault** and click **Next**.
-
- ![Choose "Azure Key Vault"](../media/vs-key-vault-add-connected-service/key-vault-connected-service.png)
-
-1. Select the subscription you want to use, and then if you already have a Key Vault you want to use, select it and click **Next**.
-
- ![Screenshot Select your subscription](../media/vs-key-vault-add-connected-service/key-vault-connected-service-select-vault.png)
-
-1. If you don't have an existing Key Vault, click on **Create new Key Vault**. You'll be asked to provide the resource group, location, and SKU.
-
- ![Screenshot of "Create Azure Key Vault" screen](../media/vs-key-vault-add-connected-service/create-new-key-vault.png)
-
-1. In the **Configure Key Vault** screen, you can change the name of the environment variable.
-
- ![Screenshot of Connect to Azure Key Vault screen.](../media/vs-key-vault-add-connected-service/connect-to-azure-key-vault.png)
-
-1. Click **Next** to review a summary of the changes and then **Finish**.
-
-Now, connection to Key Vault is established and you can access your secrets in code. If you just created a new key vault, test it by creating a secret that you can reference in code. You can create a secret by using the [Azure portal](../secrets/quick-create-portal.md), [PowerShell](../secrets/quick-create-powershell.md), or the [Azure CLI](../secrets/quick-create-cli.md).
-
-See code examples of working with secrets at [Azure Key Vault Secrets client library for .NET - Code examples](../secrets/quick-create-net.md?tabs=azure-cli#code-examples).
-
-## Troubleshooting
-
-If your Key Vault is running on a different Microsoft account than the one you're logged in to Visual Studio (for example, the Key Vault is running on your work account, but Visual Studio is using your private account) you get an error in your Program.cs file, that Visual Studio can't get access to the Key Vault. To fix this issue:
-
-1. Go to the [Azure portal](https://portal.azure.com) and open your Key Vault.
-
-1. Choose **Access policies**, then **Add Access Policy**, and choose the account you are logged in with as Principal.
-
-1. In Visual Studio, choose **File** > **Account Settings**.
-Select **Add an account** from the **All account** section. Sign in with the account you have chosen as Principal of your access policy.
-
-1. Choose **Tools** > **Options**, and look for **Azure Service Authentication**. Then select the account you just added to Visual Studio.
-
-Now, when you debug your application, Visual Studio connects to the account your Key Vault is located on.
-
-## How your ASP.NET Core project is modified
-
-This section identifies the exact changes made to an ASP.NET project when adding the Key Vault connected service using Visual Studio.
-
-### Added references for ASP.NET Core
-
-Affects the project file .NET references and NuGet package references.
-
-| Type | Reference |
-| | |
-| NuGet | Microsoft.AspNetCore.AzureKeyVault.HostingStartup |
-
-### Added files for ASP.NET Core
--- `ConnectedService.json` added, which records some information about the Connected Service provider, version, and a link the documentation.-
-### Project file changes for ASP.NET Core
--- Added the Connected Services ItemGroup and `ConnectedServices.json` file.-
-### launchsettings.json changes for ASP.NET Core
--- Added the following environment variable entries to both the IIS Express profile and the profile that matches your web project name:-
- ```json
- "environmentVariables": {
- "ASPNETCORE_HOSTINGSTARTUP__KEYVAULT__CONFIGURATIONENABLED": "true",
- "ASPNETCORE_HOSTINGSTARTUP__KEYVAULT__CONFIGURATIONVAULT": "<your keyvault URL>"
- }
- ```
-
-### Changes on Azure for ASP.NET Core
--- Created a resource group (or used an existing one).-- Created a Key Vault in the specified resource group.-
-## How your ASP.NET Framework project is modified
-
-This section identifies the exact changes made to an ASP.NET project when adding the Key Vault connected service using Visual Studio.
-
-### Added references for ASP.NET Framework
-
-Affects the project file .NET references and `packages.config` (NuGet references).
-
-| Type | Reference |
-| | |
-| .NET; NuGet | Azure.Identity |
-| .NET; NuGet | Azure.Security.KeyVault.Keys |
-| .NET; NuGet | Azure.Security.KeyVault.Secrets |
-
-> [!IMPORTANT]
-> By default Azure.Identity 1.1.1 is installed, which does not support Visual Studio Credential. You can update package reference manually to 1.2+ use Visual Studio Credential.
-
-### Added files for ASP.NET Framework
--- `ConnectedService.json` added, which records some information about the Connected Service provider, version, and a link to the documentation.-
-### Project file changes for ASP.NET Framework
--- Added the Connected Services ItemGroup and ConnectedServices.json file.-- References to the .NET assemblies described in the [Added references](#added-references-for-aspnet-framework) section.-
-## Next steps
-
-If you followed this tutorial, your Key Vault permissions are set up to run with your own Azure subscription, but that might not be desirable for a production scenario. You can create a managed identity to manage Key Vault access for your app. See [How to Authenticate to Key Vault](./authentication.md) and [Assign a Key Vault access policy](./assign-access-policy-portal.md).
-
-Learn more about Key Vault development by reading the [Key Vault Developer's Guide](developers-guide.md).
-
-If your goal is to store configuration for an ASP.NET Core app in an Azure Key Vault, see [Azure Key Vault configuration provider in ASP.NET Core](/aspnet/core/security/key-vault-configuration).
-
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
Visual Studio supports several workloads including **Web & cloud** and **Desktop
[Azure Data Studio](https://github.com/microsoft/azuredatastudio) is a multi-database, cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, macOS, and Linux.
-1. Download the [Azure Data Studio *system* installer for Windows](https://go.microsoft.com/fwlink/?linkid=2127432). To find installers for other supported operating systems, go to the [Azure Data Studio](/sql/azure-data-studio/download) download page.
+1. Download the [Azure Data Studio *system* installer for Windows](https://go.microsoft.com/fwlink/?linkid=2127432). To find installers for other supported operating systems, go to the [Azure Data Studio](/azure-data-studio/download-azure-data-studio) download page.
1. On the **License Agreement** page, select **I accept the agreement**, and then select **Next**.
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
The following methods are Azure's most commonly used methods to enable outbound
| 1 | Use the frontend IP address(es) of a load balancer for outbound via outbound rules | Static, explicit | Yes, but not at scale | OK |
| 2 | Associate a NAT gateway to the subnet | Dynamic, explicit | Yes | Best |
| 3 | Assign a public IP to the virtual machine | Static, explicit | Yes | OK |
-| 4 | [Default outbound access](../virtual-network/ip-services/default-outbound-access.md) use | Implicit | No | Worst |
+| 4 | [Default outbound access](../virtual-network/ip-services/default-outbound-access.md) | Implicit | No | Worst |
:::image type="content" source="./media/load-balancer-outbound-connections/outbound-options.png" alt-text="Diagram of Azure outbound options.":::
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Batch endpoints can be used to perform long batch operations over large amounts of data. Such data can be placed in different places. Some types of batch endpoints can also receive literal parameters as inputs. In this tutorial, we cover how to specify those inputs, and the different types and locations supported.
-## Prerequisites
+## Before invoking an endpoint
-* This example assumes that you've created a batch endpoint with at least one deployment. To create an endpoint, follow the steps at [How to use batch endpoints for production workloads](how-to-use-batch-endpoints.md).
+To successfully invoke a batch endpoint and create jobs, ensure the following requirements are met:
-* You would need permissions to run a batch endpoint deployment. Read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) for details.
+* You have permissions to run a batch endpoint deployment. Read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) to learn the specific permissions needed.
+
+* You have a valid Microsoft Entra ID token representing a security principal to invoke the endpoint. This principal can be a user principal or a service principal. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. For testing purposes, you can use your own credentials for the invocation as mentioned below.
+
+ # [Azure CLI](#tab/cli)
+
+ Use the Azure CLI to log in using either interactive or device code authentication:
+
+ ```azurecli
+ az login
+ ```
+
+ # [Python](#tab/sdk)
+
+ Use the Azure Machine Learning SDK for Python to log in:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ ml_client = MLClient.from_config(DefaultAzureCredential())
+ ```
+
+ If running outside of Azure Machine Learning compute, you need to indicate the workspace where the endpoint is deployed:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
+ # [REST](#tab/rest)
+
+ The simplest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following command:
+
+ ```azurecli
+ az account get-access-token --resource https://ml.azure.com --query "accessToken" --output tsv
+ ```
+
+ > [!TIP]
+ > When working with REST, we recommend invoking batch endpoints using a service principal. See [Running jobs using a service principal (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest#running-jobs-using-a-service-principal) to learn how to get a token for a Service Principal using REST.
+
+
+
+ To learn more about how to authenticate with multiple types of credentials, read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md).
+
+* The **compute cluster** where the endpoint is deployed has access to read the input data.
+
+ > [!TIP]
+ > If you are using a credential-less data store or external Azure Storage Account as data input, ensure you [configure compute clusters for data access](how-to-authenticate-batch-endpoint.md#configure-compute-clusters-for-data-access). **The managed identity of the compute cluster** is used **for mounting** the storage account. The identity of the job (invoker) is still used to read the underlying data, allowing you to achieve granular access control.
## Understanding inputs and outputs
Batch endpoints provide a durable API that consumers can use to create batch job
:::image type="content" source="./media/concept-endpoints/batch-endpoint-inputs-outputs.png" alt-text="Diagram showing how inputs and outputs are used in batch endpoints.":::
-The number and type of inputs and outputs depend on the [type of batch deployment](concept-endpoints-batch.md#batch-deployments). Model deployments always require 1 data input and produce 1 data output. However, pipeline component deployments provide a more general construct to build endpoints. You can indicate any number of inputs and outputs.
+Batch endpoints support two types of inputs:
+
+* [Data inputs](#data-inputs), which are pointers to a specific storage location or Azure Machine Learning asset.
+* [Literal inputs](#literal-inputs), which are literal values (like numbers or strings) that you want to pass to the job.
+
+The number and type of inputs and outputs depend on the [type of batch deployment](concept-endpoints-batch.md#batch-deployments). Model deployments always require 1 data input and produce 1 data output. Literal inputs are not supported. However, pipeline component deployments provide a more general construct to build endpoints. You can indicate any number of inputs (data and literal) and outputs.
The following table summarizes it:

| Deployment type | Number of inputs | Supported input types | Number of outputs | Supported output types |
|--|--|--|--|--|
-| [Model deployment](concept-endpoints-batch.md#model-deployments) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-inputs) |
+| [Model deployment](concept-endpoints-batch.md#model-deployments) | 1 | [Data inputs](#data-inputs) | 1 | [Data outputs](#data-outputs) |
| [Pipeline component deployment (preview)](concept-endpoints-batch.md#pipeline-component-deployment-preview) | [0..N] | [Data inputs](#data-inputs) and [literal inputs](#literal-inputs) | [0..N] | [Data outputs](#data-outputs) |

> [!TIP]
-> Inputs and outputs are always named. Those names serve as keys to indentify them and pass the actual value during invocation. For model deployments, since they always require 1 input and output, the name is ignored during invocation. You can assign the name its best describe your use case, like "salest_estimations".
+> Inputs and outputs are always named. Those names serve as keys to identify them and pass the actual value during invocation. For model deployments, since they always require one input and one output, the name is ignored during invocation. You can assign the name that best describes your use case, like "sales_estimation".
+
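As a quick illustration of how those names act as keys, here's a minimal Python SDK sketch of an invocation using the `azure-ai-ml` package; the endpoint name and the `heart_dataset`/`score` input and output names are hypothetical:

```python
from azure.ai.ml import Input, MLClient, Output
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(DefaultAzureCredential())

# Dictionary keys must match the input/output names declared by the deployment.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",  # hypothetical endpoint name
    inputs={
        "heart_dataset": Input(type=AssetTypes.URI_FOLDER, path="azureml:heart-dataset@latest"),
    },
    outputs={
        "score": Output(type=AssetTypes.URI_FILE, path="azureml://datastores/workspaceblobstore/paths/scores/predictions.csv"),
    },
)
```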
-## Data inputs
+### Data inputs
Data inputs refer to inputs that point to a location where data is placed. Since batch endpoints usually consume large amounts of data, you can't pass the input data as part of the invocation request. Instead, you indicate the location where the batch endpoint should look for the data. Input data is mounted and streamed on the target compute to improve performance. Batch endpoints support reading files located in the following storage options:
-* [Azure Machine Learning Data Assets](#input-data-from-a-data-asset). The following types are supported:
- * Data assets of type Folder (`uri_folder`).
- * Data assets of type File (`uri_file`).
- * Datasets of type `FileDataset` (Deprecated).
-* [Azure Machine Learning Data Stores](#input-data-from-data-stores). The following stores are supported:
- * Azure Blob Storage
- * Azure Data Lake Storage Gen1
- * Azure Data Lake Storage Gen2
-* [Azure Storage Accounts](#input-data-from-azure-storage-accounts). The following storage containers are supported:
- * Azure Data Lake Storage Gen1
- * Azure Data Lake Storage Gen2
- * Azure Blob Storage
-
-> [!TIP]
-> Local data folders/files can be used when executing batch endpoints from the Azure Machine Learning CLI or Azure Machine Learning SDK for Python. However, that operation will result in the local data to be uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
+* [Azure Machine Learning Data Assets](#input-data-from-a-data-asset), including Folder (`uri_folder`) and File (`uri_file`).
+* [Azure Machine Learning Data Stores](#input-data-from-data-stores), including Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
+* [Azure Storage Accounts](#input-data-from-azure-storage-accounts), including Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, and Azure Blob Storage.
+* Local data folders/files (Azure Machine Learning CLI or Azure Machine Learning SDK for Python). However, that operation results in the local data being uploaded to the default Azure Machine Learning data store of the workspace you're working on, as sketched after the following note.
> [!IMPORTANT]
> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work, but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) won't support V1 datasets.
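For the local-folder case, here's a minimal Python sketch (the `data` folder and the endpoint name are hypothetical); the folder is uploaded to the workspace's default data store before the job runs:

```python
from azure.ai.ml import Input, MLClient
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(DefaultAzureCredential())

# A relative local path; the SDK uploads it to the default data store on invoke.
local_input = Input(type=AssetTypes.URI_FOLDER, path="data")

job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",  # hypothetical endpoint name
    inputs={"heart_dataset": local_input},
)
```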
-#### Input data from a data asset
+### Literal inputs
+
+Literal inputs refer to inputs that can be represented and resolved at invocation time, like strings, numbers, and boolean values. You typically use literal inputs to pass parameters to your endpoint as part of a pipeline component deployment. Batch endpoints support the following literal types:
+
+- `string`
+- `boolean`
+- `float`
+- `integer`
+
+Literal inputs are only supported in pipeline component deployments. See [Create jobs with literal inputs](#create-jobs-with-literal-inputs) to learn how to indicate them.
+
+### Data outputs
+
+Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name, and Azure Machine Learning automatically assigns a unique path to each named output. However, you can indicate another path if required. Batch endpoints only support writing outputs to blob-based Azure Machine Learning data stores.
++
+## Create jobs with data inputs
+
+The following examples show how to create jobs taking data inputs from [data assets](#input-data-from-a-data-asset), [data stores](#input-data-from-data-stores), and [Azure Storage Accounts](#input-data-from-azure-storage-accounts).
+
+### Input data from a data asset
Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
Azure Machine Learning data assets (formerly known as datasets) are supported as
Use the Azure Machine Learning CLI, Azure Machine Learning SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You need them later.
-1. Create a data input:
+1. Create the input or request:
# [Azure CLI](#tab/cli)
Azure Machine Learning data assets (formerly known as datasets) are supported as
> The data asset's ID looks like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`. You can also use `azureml:/<dataset_name>@latest` as a way to indicate the input.
-1. Run the deployment:
+1. Run the endpoint:
# [Azure CLI](#tab/cli)
Azure Machine Learning data assets (formerly known as datasets) are supported as
Content-Type: application/json
```
-#### Input data from data stores
+### Input data from data stores
Data from Azure Machine Learning registered data stores can be directly referenced by batch deployment jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
Data from Azure Machine Learning registered data stores can be directly referenc
1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/data` to the folder `heart-disease-uci-unlabeled` in the blob storage account. Ensure you have done that before moving forward.
-1. Create a data input:
+1. Create the input or request:
# [Azure CLI](#tab/cli)
Data from Azure Machine Learning registered data stores can be directly referenc
> [!TIP]
> You can also use `azureml://datastores/<data-store>/paths/<data-path>` as a way to indicate the input, as shown in the sketch after this procedure.
-1. Run the deployment:
+1. Run the endpoint:
# [Azure CLI](#tab/cli)
Data from Azure Machine Learning registered data stores can be directly referenc
Content-Type: application/json
```
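As referenced in the tip above, here's a minimal sketch of a data store input; the data store and path names are hypothetical:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Reference a path on a registered Azure Machine Learning data store.
input = Input(
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/heart-disease-uci-unlabeled",
)
```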
-#### Input data from Azure Storage Accounts
+### Input data from Azure Storage Accounts
Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:

> [!NOTE]
-> Check the section [Security considerations when reading data](#security-considerations-when-reading-data) for learn more about additional configuration required to successfully read data from storage accoutns.
+> Check the section [configure compute clusters for data access](how-to-authenticate-batch-endpoint.md#configure-compute-clusters-for-data-access) to learn more about additional configuration required to successfully read data from storage accounts. A small sketch of such an input follows this procedure.
-1. Create a data input:
+1. Create the input or request:
# [Azure CLI](#tab/cli)
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
} ```
-1. Run the deployment:
+1. Run the endpoint:
# [Azure CLI](#tab/cli)
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
Authorization: Bearer <TOKEN>
Content-Type: application/json
```
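As referenced in the note above, here's a minimal sketch of a storage account input; the URL is hypothetical, and private accounts additionally require the compute cluster identity configuration described earlier:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# A cloud URL pointing directly at a storage container path.
input = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://<account-name>.blob.core.windows.net/<container>/<path>",
)
```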
-
-
-### Security considerations when reading data
-
-Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used:
-
-| Data input type | Credential in store | Credentials used | Access granted by |
-||||-|
-| Data store | Yes | Data store's credentials in the workspace | Credentials |
-| Data store | No | Identity of the job | Depends on type |
-| Data asset | Yes | Data store's credentials in the workspace | Credentials |
-| Data asset | No | Identity of the job | Depends on store |
-| Azure Blob Storage | Not apply | Identity of the job + Managed identity of the compute cluster | RBAC |
-| Azure Data Lake Storage Gen1 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX |
-| Azure Data Lake Storage Gen2 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
-
-The managed identity of the compute cluster is used for mounting and configuring external data storage accounts. However, the identity of the job is still used to read the underlying data allowing you to achieve granular access control. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
-
-> [!NOTE]
-> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same deployment) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.
-## Literal inputs
+## Create jobs with literal inputs
-Literal inputs refer to inputs that can be represented and resolved at invocation time, like strings, numbers, and boolean values. You typically use literal inputs to pass parameters to your endpoint as part of a pipeline component deployment.
-
-Batch endpoints support the following literal types:
-
-- `string`
-- `boolean`
-- `float`
-- `integer`
-
-The following example shows how to indicate an input named `score_mode`, of type `string`, with a value of `append`:
+Pipeline component deployments can take literal inputs. The following example shows how to indicate an input named `score_mode`, of type `string`, with a value of `append`:
# [Azure CLI](#tab/cli)
Content-Type: application/json
```
-## Data outputs
-Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name and Azure Machine Learning automatically assign a unique path to each named output. However, you can indicate another path if required. Batch Endpoints only support writing outputs in blob Azure Machine Learning data stores.
+## Create jobs with data outputs
The following example shows how to change the location where an output named `score` is placed. For completeness, these examples also configure an input named `heart_dataset`.
The following example shows how to change the location where an output named `sc
Content-Type: application/json
```
+## Invoke a specific deployment
+
+Batch endpoints can host multiple deployments under the same endpoint. The default deployment is used unless the user indicates otherwise. You can change the deployment that is used as follows:
+
+# [Azure CLI](#tab/cli)
+
+Use the argument `--deployment-name` or `-d` to indicate the name of the deployment:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input $INPUT_DATA
+```
+
+# [Python](#tab/sdk)
+
+Use the parameter `deployment_name` to indicate the name of the deployment:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ deployment_name=deployment.name,
+ inputs={
+ "heart_dataset": input,
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+Add the header `azureml-model-deployment` to your request, including the name of the deployment you want to invoke.
+
+__Request__
+
+```http
+POST jobs HTTP/1.1
+Host: <ENDPOINT_URI>
+Authorization: Bearer <TOKEN>
+Content-Type: application/json
+azureml-model-deployment: DEPLOYMENT_NAME
+```
+
## Next steps

* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
-* [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
+* [Customize outputs in model deployments](how-to-deploy-model-custom-output.md).
+* [Create a custom scoring pipeline with inputs and outputs](how-to-use-batch-scoring-pipeline.md).
* [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
Previously updated : 10/10/2022 Last updated : 10/10/2023
To invoke a batch endpoint, the user must present a valid Microsoft Entra token
You can either use one of the [built-in security roles](../role-based-access-control/built-in-roles.md) or create a new one. In any case, the identity used to invoke the endpoints must be granted the permissions explicitly. See [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) for instructions to assign them.

> [!IMPORTANT]
-> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data) for more details.
+> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Configure compute clusters for data access](#configure-compute-clusters-for-data-access) for more details.
## How to run jobs using different types of credentials
The following examples show different ways to start batch deployment jobs using
In this case, we want to execute a batch endpoint using the identity of the user currently logged in. Follow these steps:
-> [!NOTE]
-> When working on Azure Machine Learning studio, batch endpoints/deployments are always executed using the identity of the current user logged in.
-
# [Azure CLI](#tab/cli)

1. Use the Azure CLI to log in using either interactive or device code authentication:
In this case, we want to execute a batch endpoint using a service principal alre
### Running jobs using a managed identity
-You can use managed identities to invoke batch endpoint and deployments. Please notice that this manage identity doesn't belong to the batch endpoint, but it is the identity used to execute the endpoint and hence create a batch job. Both user assigned and system assigned identities can be use in this scenario.
+You can use managed identities to invoke batch endpoints and deployments. Notice that this managed identity doesn't belong to the batch endpoint; rather, it's the identity used to execute the endpoint and hence create a batch job. Both user-assigned and system-assigned identities can be used in this scenario.
# [Azure CLI](#tab/cli)
-On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag. For more details, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
```azurecli
az login --identity
```
You can also use the Azure CLI to get an authentication token for the managed id
+## Configure compute clusters for data access
+
+Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials might be used to read the underlying data. Use the following table to understand which credentials are used:
+
+| Data input type | Credential in store | Credentials used | Access granted by |
+||||-|
+| Data store | Yes | Data store's credentials in the workspace | Access key or SAS |
+| Data asset | Yes | Data store's credentials in the workspace | Access key or SAS |
+| Data store | No | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Data asset | No | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Azure Blob Storage | Not apply | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Azure Data Lake Storage Gen1 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX |
+| Azure Data Lake Storage Gen2 | Not apply | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
+
+For those items in the table where **Identity of the job + Managed identity of the compute cluster** is displayed, the managed identity of the compute cluster is used **for mounting** and configuring storage accounts. However, the identity of the job is still used to read the underlying data allowing you to achieve granular access control. That means that in order to successfully read data from storage, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account.
+
+To configure the compute cluster for data access, follow these steps (or see the SDK sketch after them):
+
+1. Go to [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Navigate to __Compute__, then __Compute clusters__, and select the compute cluster your deployment is using.
+
+1. Assign a managed identity to the compute cluster:
+
+ 1. In the __Managed identity__ section, verify if the compute has a managed identity assigned. If not, select the option __Edit__.
+
+ 1. Select __Assign a managed identity__ and configure it as needed. You can use a System-Assigned Managed Identity or a User-Assigned Managed Identity. If using a System-Assigned Managed Identity, it's named "[workspace name]/computes/[compute cluster name]".
+
+ 1. Save the changes.
+
+ :::image type="content" source="media/how-to-authenticate-batch-endpoint/guide-manage-identity-cluster.gif" alt-text="Animation showing the steps to assign a managed identity to a cluster.":::
+
+1. Go to the [Azure portal](https://portal.azure.com) and navigate to the associated storage account where the data is located. If your data input is a Data Asset or a Data Store, look for the storage account where those assets are placed.
+
+1. Assign Storage Blob Data Reader access level in the storage account:
+
+ 1. Go to the section __Access control (IAM)__.
+
+ 1. Select the __Role assignments__ tab, and then select __Add__ > __Add role assignment__.
+
+ 1. Look for the role named __Storage Blob Data Reader__, select it, and select __Next__.
+
+ 1. Select __Select members__.
+
+ 1. Look for the managed identity you have created. If using a System-Assigned Managed Identity, it's named __"[workspace name]/computes/[compute cluster name]"__.
+
+ 1. Add the account, and complete the wizard.
+
+ :::image type="content" source="media/how-to-authenticate-batch-endpoint/guide-manage-identity-assign.gif" alt-text="Animation showing the steps to assign the created managed identity to the storage account.":::
+
+1. Your endpoint is ready to receive jobs and input data from the selected storage account.
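As an alternative to the portal steps above, here's a sketch of assigning a system-assigned managed identity to the cluster with the Azure Machine Learning SDK for Python; the cluster name `batch-cluster` is hypothetical, and the Storage Blob Data Reader role assignment on the storage account still has to be granted separately:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import IdentityConfiguration
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(DefaultAzureCredential())

# Fetch the existing cluster and enable a system-assigned managed identity.
compute = ml_client.compute.get("batch-cluster")  # hypothetical cluster name
compute.identity = IdentityConfiguration(type="system_assigned")
ml_client.compute.begin_create_or_update(compute).result()
```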
+
## Next steps

* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md)
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Title: Create Data Assets
description: Learn how to create Azure Machine Learning data assets-
When you create your data asset, you need to set the data asset type. Azure Mach
|**Table**<br> Reference a data table | `mltable` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables.<br><br>Read unstructured data (images, text, audio, etc.) that is spread across **multiple** storage locations. |

> [!NOTE]
-> Please do not use embedded newlines in csv files unless you register the data as an MLTable. Embedded newlines in csv files might cause misaligned field values when you read the data. MLTable has this parameter [`support_multi_line`](https://learn.microsoft.com/azure/machine-learning/reference-yaml-mltable?view=azureml-api-2#read-transformations)in `read_delimited` transformation to interpret quoted line breaks as one record.
+> Please do not use embedded newlines in csv files unless you register the data as an MLTable. Embedded newlines in csv files might cause misaligned field values when you read the data. MLTable has this parameter [`support_multi_line`](../machine-learning/reference-yaml-mltable.md?view=azureml-api-2&preserve-view=true#read-transformations) in the `read_delimited` transformation to interpret quoted line breaks as one record.
When you consume the data asset in an Azure Machine Learning job, you can either *mount* or *download* the asset to the compute node(s). For more information, please read [Modes](how-to-read-write-data-v2.md#modes).
machine-learning How To Monitor Model Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-model-performance.md
The component input DataFrame should contain a `mltable` with the processed data
### Component output signature
+The component output port should have the following signature.
+
+ | signature name | type | description |
+ ||||
+ | signal_metrics | mltable | The ml table that contains the computed metrics. The schema is defined in the following signal_metrics schema section. |
+
+#### signal_metrics schema
The component output DataFrame should contain four columns: `group`, `metric_name`, `metric_value`, and `threshold_value`:

| signature name | type | description | example value |
create_monitor:
  type: custom
  component_id: azureml:my_custom_signal:1.0.0
  input_data:
- test_data_1:
+ production_data:
input_data:
- type: mltable
- path: azureml:Direct:1
- data_context: test
- test_data_2:
- input_data:
- type: mltable
- path: azureml:Direct:1
+ type: uri_folder
+ path: azureml:my_production_data:1
      data_context: test
      data_window:
        trailing_window_size: P30D
        trailing_window_offset: P7D
      pre_processing_component: azureml:custom_preprocessor:1.0.0
  metric_thresholds:
- - metric_name: std_dev
+ - metric_name: std_deviation
      threshold: 2
  alert_notification:
    emails:
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
To run a deploy fine-tuned model job from Azure Machine Learning, in order to de
1. Select the **Deploy** button and give the deployment name. The model is deployed to the default Azure OpenAI resource linked to your workspace.

### Finetuning using code based samples
-To enable users to quickly get started with code based finetuning, we have published samples (both Python notebooks and CLI examples) to the azureml-examples gut repo -
+To enable users to quickly get started with code based finetuning, we have published samples (both Python notebooks and CLI examples) to the azureml-examples git repo -
* [SDK example](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/azure_openai)
* [CLI example](https://github.com/Azure/azureml-examples/tree/main/cli/foundation-models/azure_openai)

### Troubleshooting

Here are some steps to help you resolve any of the following issues with your Azure OpenAI in Azure Machine Learning experience.
-Currently, only a maximum of 10 workspaces can be designated for a particular subscription. If a user creates more workspaces, they will get access to the models but their jobs will fail.
-
You might receive any of the following errors when you try to deploy an Azure OpenAI model.

- **Only one deployment can be made per model name and version**
  - **Fix**: Go to the [Azure OpenAI Studio](https://oai.azure.com/portal) and delete the deployments of the model you're trying to deploy.
- **Failed to create deployment**
- - **Fix**: Azure OpenAI failed to create. This is due to Quota issues, make sure you have enough quota for the deployment.
--- **Failed to fetch Azure OpenAI deployments**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
--- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Azure OpenAI failed to create. This is due to quota issues; make sure you have enough quota for the deployment. The default quota for fine-tuned models is 2 deployments per customer.
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. Due to one of, the following reasons. You aren't in correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix**: Unable to create the resource. You either aren't in the correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource or make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
- **Model Not Deployable**
  - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand, you need to [apply for access to use GPT-4 models](/azure/ai-services/openai/concepts/models#gpt-4-models).

-- **Resource Create Failed**
- - **Fix**: We tried to automatically create the Azure OpenAI resource but the operation failed. Try again on a new workspace.
+- **Finetuning job Failed**
+ - **Fix**: Currently, only a maximum of 10 workspaces can be designated for a particular subscription for new fine-tunable models. If a user creates more workspaces, they will get access to the models, but their jobs will fail. Try to limit the number of workspaces per subscription to 10.
## Next steps
machine-learning Tutorial Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-model.md
dependencies:
- pandas>=1.1,<1.2
- pip:
  - inference-schema[numpy-support]==1.3.0
- - mlflow== 1.26.1
- - azureml-mlflow==1.42.0
+ - mlflow==2.4.1
+ - azureml-mlflow==1.51.0
- psutil>=5.8,<5.9
- tqdm>=4.59,<4.60
- ipykernel~=6.0
migrate Troubleshoot Spring Boot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-spring-boot-discovery.md
For errors related to the access policy on the Key Vault, follow these steps to
1. Go to your Azure Migrate project.
2. Navigate to Azure Migrate: Discovery and assessment > **Overview** > **Manage** > **Appliances** and find the name of the Kubernetes-based appliance whose service principal you need to find.
-
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/troubleshoot.png" alt-text="Screenshot of Appliances screen.":::
3. You can also find the Key Vault associated with the appliance by selecting the appliance name and finding the Key Vault name in appliance properties.
4. Go to your workstation and open PowerShell as an administrator.
5. Install the [ARM Client](https://github.com/projectkudu/ARMClient/releases/download/v1.9/ARMClient.zip) zip folder.
For errors related to the access policy on the Key Vault, follow these steps to
8. After successfully signing in, run the following command by adding the appliance name:

```
-armclient get /subscriptions/<subscription>/resourceGroups/<resourceGroup> /providers/Microsoft.Kubernetes/connectedClusters/<applianceName>/Providers/Microsoft.KubernetesConfiguration/extensions/credential-sync-agent ?api-version=2022-03-01
+armclient get /subscriptions/<subscription>/resourceGroups/<resourceGroup>/providers/Microsoft.Kubernetes/connectedClusters/<applianceName>/Providers/Microsoft.KubernetesConfiguration/extensions/credential-sync-agent?api-version=2022-03-01
```
9. The response lists the identity associated with the appliance extension. Note down the `Principal ID` field in the response under the `identity` section.
10. Go to the Azure portal and check if the Principal ID has the required access on the Azure Key Vault chosen for secret processing.
-11. Go to the Key Vault, go to access policies, select the Principal ID from list and check the permissions it has OR create a new access policy specifically for the Principal ID you found by running the command above.
+11. Go to the Key Vault, go to access policies, select the Principal ID from the list and check the permissions it has, or create a new access policy specifically for the Principal ID you found by running the command.
12. Ensure that the following permissions are assigned to the Principal ID: *Secret permission* and both *Secret Management Operations and Privileged Management Operations*.
+**Error code** | **Action**
+- | -
+**404** | Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to delete the credentials from the Key Vault.
+**406** | Ensure that the extension identity for <Microsoft.ConnectedCredentials> has the required permissions to access the credentials on the Key Vault.
+**407** | Check the firewall rules on the Key Vault, allow access from the appliance IP address and retry the operation.
+**413** | Ensure that the Key Vault is accessible from the appliance on-premises and the extension identity for <Microsoft.ConnectedCredentials> has required permissions to perform operations on secrets stored in Key Vault.
+**415** | Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to access the credentials on the Key Vault.
+**416** | Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has purge permissions on the Key Vault.
+**418** | Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to delete the credentials from Key Vault.
+
+## Operator error
+
+**Error code** | **Error message** | **Possible causes** | **Remediation**
+- | - | - | -
+400 | An internal error occurred. | The operation failed due to an unhandled error. | Retry the operation. If the issue persists, contact Support.
+401 | An attempt to update the resource specification faulted. | The operator service account might not have permissions to modify the custom resource. | Review permissions granted to the operator service account making sure it has privileges to update the custom resource, and retry the operation.
+402 | The transfer mode provided is not valid or supported. | The transfer mode used during credential resource creation isn't supported. | Review the transfer mode ensuring it is valid and supported, and retry the operation.
+403 | Failed to save credentials on the Kubernetes-based appliance. | The service account `connectedcredentials-sa` of the appliance operator <Microsoft.ConnectedCredentials> might not have required permissions to save credentials on the Kubernetes cluster. | Retry the operation after granting the required access to the appliance operator service account.
+404 | Failed to delete credentials from Key Vault (after syncing them to on-premises appliance).| The extension identity for <Microsoft.ConnectedCredentials> might not have required permissions to delete the credentials from Key Vault. | Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to delete the credentials from Key Vault.
+405 | Failed to access the secret on the Key Vault. | The Key Vault cannot be found as it could have been deleted. | Check if the Key Vault exists. If it exists, retry the operation after some time, else recreate a Key Vault with the same name and in the same Subscription and Resource Group as the Azure Migrate project.
+406 | Failed to access the secret on the Key Vault.| The extension identity for <Microsoft.ConnectedCredentials> might not have required permissions to access the credentials on the Key Vault. | Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to access the credentials on the Key Vault.
+407 | Failed to connect with the endpoint of Key Vault. | The firewall rules associated with the Key Vault could be restricting access. | Check the firewall rules on the Key Vault, allow access from the appliance IP address and retry the operation.
+408 | Failed to access secret on Key Vault due to a conflicting operation.| The secret was modified by another operation causing a conflict in accessing the secret.| Ensure no other appliance extension on a different Kubernetes cluster is accessing secrets on the same Key Vault.
+409 | An attempt to access the secret on the Key Vault faulted because of the unsupported region. | Unsupported region. | Ensure the region is supported by the Key Vault, and retry the operation.
+410 | An attempt to access the secret faulted because of unsupported SKU. | Unsupported SKU. | Ensure the SKU is supported by the Key Vault and is valid, and retry the operation.
+411 | An attempt to access the secret faulted because the resource group is found to be missing.|Resource group is deleted. | Ensure the resource group exists and is valid, and retry the operation.
+412 | An attempt to access the secret faulted because the access was denied. | The Certificate associated with the extension's identity is expired. | NA
+413 | Failed to access the secret on Key Vault due to an unknown error. | Key Vault access policies and network access rules are not configured properly. | The Key Vault access or network policies might not be configured properly. Ensure that the Key Vault is accessible from the appliance on-premises and the extension identity for <Microsoft.ConnectedCredentials> has required permissions to perform operations on secrets stored in Key Vault.
+414 | Failed to access the secret on Key Vault as it couldn't be found. | The secret could have been deleted while the operation was in progress. | Check if the secret exists in the Key Vault. If it doesn't exist, delete the existing credentials, and add it again.
+415 | Failed to access the secret on the Key Vault. | The extension identity for <Microsoft.ConnectedCredentials> might not have required permissions to access the credentials on the Key Vault. | Ensure that the extension identity for Microsoft.ConnectedCredentials has required permissions to access the credentials on the Key Vault.
+416 | Failed to delete credentials from Key Vault (after syncing them to on-premises appliance). | The extension identity for <Microsoft.ConnectedCredentials> might not have required permissions to delete the credentials from Key Vault. | Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has purge permissions on the Key Vault. If the Key Vault has been enabled with purge protection, secret will be automatically cleaned up after the purge protection window.
+417 | Failed to access the secret on the Key Vault. | The Key Vault can't be found as it could have been deleted.| Check if the Key Vault exists. If it exists, then retry the operation after some time, else recreate a Key Vault with the same name and in the same Subscription and Resource Group as the Azure Migrate project.
+418 | Failed to purge a deleted secret from the Key Vault. | The secret delete operation might not have completed.| Check if the credentials exist on the Key Vault. Ensure that the extension identity for <Microsoft.ConnectedCredentials> has required permissions to delete the credentials from Key Vault.
++

## Next steps

Set up an appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [physical servers](how-to-set-up-appliance-physical.md).
migrate Tutorial Discover Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md
ms. Last updated 09/28/2023-+

# Tutorial: Discover Spring Boot applications running in your datacenter (preview)
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+## Supported geographies
+
+|**Geography**|
+|-|
+|Asia Pacific|
+|Korea|
+|Japan|
+|United States|
+|Europe|
+|United Kingdom|
+|Canada|
+|Australia|
+|France|
+ ## Prerequisites - Before you follow this tutorial to discover Spring Boot applications, make sure you've performed server discovery using the Azure Migrate appliance using the following tutorials:
If you don't have an Azure subscription, create a [free account](https://azure.m
- [Discover GCP instances](tutorial-discover-gcp.md)
- Ensure that you have performed software inventory by providing the server credentials on the appliance configuration manager. [Learn more](how-to-discover-applications.md).
-
## Set up Kubernetes-based appliance
After you have performed server discovery and software inventory using the Azure
1. Go to the [Azure portal](https://aka.ms/migrate/springboot). Sign in with your Azure account and search for Azure Migrate.
2. On the **Overview** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
-1. Select the project where you have set up the Azure Migrate appliance as part of prerequisites above.
-1. You would see a message above Azure Migrate: Discovery and assessment tile to onboard a Kubernetes-based appliance to enable discovery of Spring Boot applications.
-5. You can proceed by selecting the link on the message, which will help you get started with onboarding Kubernetes-based appliance.
-6. In Step 1: Set up an appliance, select **Bring your own Kubernetes cluster** - You must bring your own Kubernetes cluster running on-premises, connect it to Azure Arc and use the installer script to set up the appliance.
+1. Select the project where you have set up the Azure Migrate appliance as part of the prerequisites.
+1. You see a message above the Azure Migrate: Discovery and assessment tile to onboard a Kubernetes-based appliance to enable discovery of Spring Boot applications.
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/discover-banner-inline.png" alt-text="Screenshot shows the banner for discovery and assessment of web apps." lightbox="./media/tutorial-discover-spring-boot/discover-banner-expanded.png":::
-**Support** | **Details**
-- | -
-**Validated Kubernetes distros** | See [Azure Arc-enabled Kubernetes validation](https://learn.microsoft.com/azure/azure-arc/kubernetes/validation-program).
-**Hardware configuration required** | 6 GB RAM, with 30GB storage, 4 Core CPU
-**Network Requirements** | Access to the following endpoints: <br/><br/> - api.snapcraft.io <br/><br/> - https://dc.services.visualstudio.com/v2/track <br/><br/> - [Azure Arc-enabled Kubernetes network requirements](https://learn.microsoft.com/azure/azure-arc/kubernetes/network-requirements?tabs=azure-cloud) <br/><br/> - [Azure CLI endpoints for proxy bypass](https://learn.microsoft.com/cli/azure/azure-cli-endpoints?tabs=azure-cloud)
+5. You can proceed by selecting the link on the message, which helps you get started with onboarding the Kubernetes-based appliance.
+
+ > [!Note]
+ > We recommend you choose a Kubernetes cluster with disk encryption for its services. [Learn more](#encryption-at-rest) about encrypting data at rest in Kubernetes.
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/onboard-kubernetes-inline.png" alt-text="Screenshot displays the Onboard Kubernetes appliance screen." lightbox="./media/tutorial-discover-spring-boot/onboard-kubernetes-expanded.png":::
+
+6. In Step 1: Set up an appliance, select **Bring your own Kubernetes cluster** - You must bring your own Kubernetes cluster running on-premises, connect it to Azure Arc and use the installer script to set up the appliance.
-#### Bring your own Kubernetes cluster (alternate option)
+#### Bring your own Kubernetes cluster
-1. In **Step 2: Choose connected cluster**, you need to select an existing Azure Arc connected cluster from your subscription. If you do not have an existing connected cluster, you can Arc enable a Kubernetes cluster running on-premises by following the steps [here](https://learn.microsoft.com/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli).
+1. In **Step 2: Choose connected cluster**, you need to select an existing Azure Arc connected cluster from your subscription. If you do not have an existing connected cluster, you can Arc enable a Kubernetes cluster running on-premises by following the steps [here](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli).
> [!Note]
- > You can only select an existing connected cluster, deployed in the same region as that of your Azure Migrate project
+ > You can only select an existing connected cluster, deployed in the same region as that of your Azure Migrate project.
-2. In Step 3: Provide appliance details for Azure Migrate, the appliance name is pre-populated, but you can choose to provide your own friendly name to the appliance.
+ :::image type="content" source="./media/tutorial-discover-spring-boot/choose-cluster-inline.png" alt-text="Screenshot displays Choose cluster option in the Onboard Kubernetes appliance screen." lightbox="./media/tutorial-discover-spring-boot/choose-cluster-expanded.png":::
-3. You can select a key vault from the drop-down or **Create new** key vault. This key vault is used to process the credentials provided in the project to start discovery of Spring Boot applications.
+2. In Step 3: Provide appliance details for Azure Migrate, the appliance name is prepopulated, but you can choose to provide your own friendly name to the appliance.
+
+3. You can select a Key Vault from the drop-down or **Create new** Key Vault. This Key Vault is used to process the credentials provided in the project to start discovery of Spring Boot applications.
> [!Note]
- > The Key Vault can be chosen or created in the same subscription and region as that of the Azure Migrate project. When creating/selecting a key vault, make sure that purge protection is disabled else there be will issues in processing of credentials through the key vault.
+ > The Key Vault can be chosen or created in the same subscription and region as that of the Azure Migrate project. When creating/selecting a Key Vault, make sure that purge protection is disabled; otherwise, there will be issues in processing credentials through the Key Vault.
-4. After providing the appliance name and key vault, select **Generate script** to generate an installer script that you can copy and paste on a Linux server on-premises. Before executing the script, ensure that you meet the following prerequisites on the Linux server:
+4. After providing the appliance name and Key Vault, select **Generate script** to generate an installer script that you can copy and paste on a Linux server on-premises. Before executing the script, ensure that you meet the following prerequisites on the Linux server:
**Support** | **Details**
- | -
**Supported Linux OS** | Ubuntu 20.04, RHEL 9
- **Hardware configuration required** | 6 GB RAM, with 30GB storage, 4 Core CPU
+ **Hardware configuration required** | 6 GB RAM, with 30 GB storage on root volume, 4 Core CPU
**Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](https://learn.microsoft.com/cli/azure/azure-cli-endpoints?tabs=azure-cloud)

5. After copying the script, go to your Linux server, save the script as *Deploy.sh* on the server.
+#### Connect using an outbound proxy server
+If your machine is behind an outbound proxy server, requests must be routed via the outbound proxy server. Follow these steps to provide proxy settings:
+1. Open the terminal on the server and execute the following command setup environment variables as a root user:
+ `sudo su -`
+2. On the deployment machine, set the environment variables needed for `deploy.sh` to use the outbound proxy server:
+ ```
+ export HTTP_PROXY="<proxy-server-ip-address>:<port>"
+ export HTTPS_PROXY="<proxy-server-ip-address>:<port>"
+ export NO_PROXY=""
+ ```
+3. If your proxy uses a certificate, provide the absolute path to the certificate.
+ `export PROXY_CERT=""`
+
+> [!Note]
+> The machine uses proxy details while installing the required prerequisites to run the `deploy.sh` script. It won't override the proxy settings of the Azure Arc-enabled Kubernetes cluster.
#### Execute the installer script

After you have saved the script on the Linux server, follow these steps:

> [!Note]
-> - If you have chosen to deploy a packaged Kubernetes cluster and are running the installation script on any other Linux OS except Ubuntu, ensure to install the snap module by following the instructions [here](https://snapcraft.io/docs/installing-snap-on-red-hat), before executing the script.
-> - Also, ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL/Centos), you can use the `yum install curl` command.
+> - Run this script after you connect to the terminal of a Linux machine that meets the networking prerequisites and OS compatibility requirements.
+> - Ensure that you have curl installed on the server. For Ubuntu, you can install it using the command `sudo apt-get install curl`, and for other OS (RHEL/Centos), you can use the `yum install curl` command.
+
+> [!Important]
+> Don't edit the script unless you want to clean up the setup.
1. Open the terminal on the server and execute the following command to execute the script as a root user:
After you have saved the script on the Linux server, follow these steps:
1. Installing required CLI extensions.
2. Registering Azure Resource Providers.
3. Checking for prerequisites like connectivity to required endpoints.
- 4. Setting up MicroK8s Kubernetes cluster.
5. Installing the required operators on the cluster.
6. Creating the required Migrate resources.

After the script is executed successfully, configure the appliance through the portal.
-> [!Note]
-> If you encounter any issue during script execution, you need to run the script in *delete* mode by adding the following after line #19 in the `deploy.sh` script:
->
-> export DELETE= ΓÇ£trueΓÇ¥
+##### Reinstallation
+
+If you encounter any issue during script execution, you need to run the script in *delete* mode by adding the following after line #19 in the `deploy.sh` script:
+
+`export DELETE="true"`
+ The *delete* mode helps to clean up any existing components installed on the server so that you can do a fresh installation. After running the script in *delete* mode, remove the line from the script and execute it again in the default mode.
+## Encryption at rest
+
+As you're bringing your own Kubernetes cluster, there's a shared responsibility to ensure that the secrets are secured.
+- We recommend you choose a Kubernetes cluster with disk encryption for its services.
+- [Learn more](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) about encrypting data at rest in Kubernetes. A minimal sketch follows this list.
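As an illustration of the linked guidance, here's a minimal, hedged sketch of enabling envelope encryption of Secrets on a self-managed cluster; the file path and key name are assumptions, and managed Kubernetes offerings usually expose equivalent settings through their own configuration:

```bash
# Minimal sketch (assumed path /etc/kubernetes/enc/enc.yaml, key name key1).
# Write an EncryptionConfiguration that encrypts Secrets with AES-CBC:
sudo tee /etc/kubernetes/enc/enc.yaml > /dev/null <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
EOF
# Then start kube-apiserver with:
#   --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
```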
++
## Configure Kubernetes-based appliance

After successfully setting up the appliance using the installer script, you need to configure the appliance by following these steps:

1. Go to the Azure Migrate project where you started onboarding the Kubernetes-based appliance.
2. On the **Azure Migrate: Discovery and assessment** tile, select the appliance count for **Pending action** under appliances summary.
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/pending-action-inline.png" alt-text="Screenshot displays the Pending action option." lightbox="./media/tutorial-discover-spring-boot/pending-action-expanded.png":::
+ 3. In **Overview** > **Manage** > **Appliances**, a filtered list of appliances appears with actions pending.
-4. Find the Kubernetes-based appliance that you have just set up and select **Credentials unavailable** status to configure the appliance.
+4. Find the Kubernetes-based appliance that you have set up and select **Credentials unavailable** status to configure the appliance.
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/appliances-inline.png" alt-text="Screenshot displays the details of the appliance." lightbox="./media/tutorial-discover-spring-boot/appliances-expanded.png":::
5. In the **Manage credentials** page, add the credentials to initiate discovery of the Spring Boot applications running on your servers.
+
+ :::image type="content" source="./media/tutorial-discover-spring-boot/manage-appliances-inline.png" alt-text="Screenshot displays the Manage credentials option." lightbox="./media/tutorial-discover-spring-boot/manage-appliances-expanded.png":::
+ 6. Select **Add credentials**, choose a credential type from Linux (non-domain) or Domain credentials, provide a friendly name, username, and password. Select **Save**.
- >[!Note]
- > - The credentials added on the portal are processed via the Azure Key Vault chosen in the initial steps of onboarding the Kubernetes-based appliance. The credentials are then synced (saved in an encrypted format) to the Kubernetes cluster on the appliance and removed from the Azure Key Vault.
- > - After the credentials have been successfully synced, they would be used for discovery of the specific workload in the next discovery cycle.
+ > [!Note]
+ > - The credentials added on the portal are processed via the Azure Key Vault chosen in the initial steps of onboarding the Kubernetes-based appliance. The credentials are then synced (saved in an encrypted format) to the Kubernetes cluster on the appliance and removed from the Azure Key Vault.
+ > - After the credentials have been successfully synced, they would be used for discovery of the specific workload in the next discovery cycle.
7. After adding a credential, refresh the page to see the **Sync status** of the credential. If the status is **Incomplete**, you can select the status to review the error encountered and take the recommended action. After the credentials have been successfully synced, wait 24 hours before you review the discovered inventory by filtering for the specific workload on the **Discovered servers** page.
-> [!Note]
-> You can add/update credentials any time by navigating to **Azure Migrate: Discovery and assessment** > **Overview** > **Manage** > **Appliances** page, selecting **Manage credentials** from the options available in the Kubernetes-based appliance.
+ > [!Note]
+ > You can add/update credentials any time by navigating to **Azure Migrate: Discovery and assessment** > **Overview** > **Manage** > **Appliances** page, selecting **Manage credentials** from the options available in the Kubernetes-based appliance.
+
+## Overview of Discovery results
+
+The **Discovered servers** screen provides the following information:
+- Displays all running Spring Boot workloads on your server-based environment.
+- Lists the basic information of each server in a table format.
++
+Select any web app to view its details. The **Web apps** screen provides the following information:
+- Provides a comprehensive view of each Spring Boot process on each server.
+- Displays the detailed information of each process, including:
+ - JDK version and Spring Boot version.
+ - Environment variable names and JVM options that are configured.
+ - Application configuration and certificate files in use.
+ - Location of JAR file for the process on the server.
+ - Static content locations and binding ports.
++
## Next steps

- [Assess Spring Boot](tutorial-assess-spring-boot.md) apps for migration.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Azure Database for MySQL Flexible Server now supports [generated invisible prima
- **MySQL extension for Azure Data Studio (Preview)**
- When working with multiple databases across data platforms and cloud deployment models, performing the most common tasks on all your databases using a single tool enhances productivity several fold. With the MySQL extension for Azure Data Studio, you can now connect to and modify MySQL databases along with your other databases, taking advantage of the modern editor experience and capabilities in Azure Data Studio, such as IntelliSense, code snippets, source control integration, native Jupyter Notebooks, an integrated terminal, and more. Use this new tooling with any MySQL server hosted on-premises, on virtual machines, on managed MySQL in other clouds, and on Azure Database for MySQL – Flexible Server. [Learn more](/sql/azure-data-studio/quickstart-mysql).
+ When working with multiple databases across data platforms and cloud deployment models, performing the most common tasks on all your databases using a single tool enhances productivity several fold. With the MySQL extension for Azure Data Studio, you can now connect to and modify MySQL databases along with your other databases, taking advantage of the modern editor experience and capabilities in Azure Data Studio, such as IntelliSense, code snippets, source control integration, native Jupyter Notebooks, an integrated terminal, and more. Use this new tooling with any MySQL server hosted on-premises, on virtual machines, on managed MySQL in other clouds, and on Azure Database for MySQL – Flexible Server. [Learn more](/azure-data-studio/quickstart-mysql).
- **Enhanced metrics for better monitoring**
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
Following described are the ways to review your migration schedule once you have
## Pre-requisite checks for in-place auto-migration * The Single Server instance should be in **ready state** and should not be in stopped state during the planned maintenance window for automigration to take place.
-* For Single Server instance with **SSL enabled**, ensure you have both certificates (**BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA**) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string create a combined CA certificate before scheduled auto-migration by following steps [here](../single-server/concepts-certificate-rotation.md#create-a-combined-ca-certificate) to ensure business continuity post-migration.
+* For Single Server instance with **SSL enabled**, ensure you have all three certificates (**[BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [DigiCertGlobalRootG2 Root CA](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and [DigiCertGlobalRootCA Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)**) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string create a combined CA certificate with all three certificates before scheduled auto-migration to ensure business continuity post-migration.
* The MySQL engine doesn't guarantee any sort order if there is no 'ORDER BY' clause present in queries. Post in-place automigration, you may observe a change in the sort order. If preserving sort order is crucial, ensure your queries are updated to include an 'ORDER BY' clause before the scheduled in-place automigration.

## How is the target MySQL Flexible Server auto-provisioned?
mysql Migrate Single Flexible Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md
az account set --subscription <subscription id>
- The source Azure Database for MySQL - Single Server and the target Azure Database for MySQL - Flexible Server must be in the same subscription, resource group, region, and on the same MySQL version. MySQL Import across subscriptions, resource groups, regions, and versions isn't possible.
- MySQL versions supported by Azure MySQL Import are 5.7 and 8.0. If you are on a different major MySQL version on Single Server, make sure to upgrade your version on your Single Server instance before triggering the import command.
+- If the Azure Database for MySQL - Single Server instance has the server parameter 'lower_case_table_names' set to 2 and your application uses partitioned tables, MySQL Import will result in corrupted partition tables. The recommendation is to set 'lower_case_table_names' to 1 for your Azure Database for MySQL - Single Server instance to proceed with a corruption-free MySQL Import operation (a CLI sketch for updating the parameter follows this list).
- MySQL Import for Single Servers with Legacy Storage architecture (General Purpose storage V1) isn't supported. You must upgrade your storage to the latest storage architecture (General Purpose storage V2) to trigger a MySQL Import operation. Find your storage type and upgrade steps by following directions [here](../single-server/concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
- MySQL Import to an existing Azure MySQL Flexible Server isn't supported. The CLI command initiates the import of a new Azure MySQL Flexible Server.
- If the flexible target server is provisioned as non-HA (High Availability disabled) when updating the CLI command parameters, it can later be switched to Same-Zone HA but not Zone-Redundant HA.
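As an illustration, the following hedged Azure CLI sketch checks and updates the parameter on a Single Server instance; the resource group and server names are placeholders, and whether the value can be changed may depend on your engine version:

```bash
# Check the current value of lower_case_table_names (names are placeholders)
az mysql server configuration show \
  --resource-group myResourceGroup --server-name mySingleServer \
  --name lower_case_table_names

# Set it to 1 before triggering the MySQL Import operation
az mysql server configuration set \
  --resource-group myResourceGroup --server-name mySingleServer \
  --name lower_case_table_names --value 1
```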
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Title: Create Azure Native Dynatrace Service resource
description: This article describes how to use the Azure portal to create an instance of Dynatrace. Previously updated : 02/02/2023 Last updated : 10/16/2023
postgresql How To Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-cli.md
Connect to the VM *myVm* from the internet as follows:
Address: 10.1.3.4
```
-3. Test the private link connection for the PostgreSQL server using any available client. The following example uses [Azure Data studio](/sql/azure-data-studio/download) to do the operation.
+3. Test the private link connection for the PostgreSQL server using any available client. The following example uses [Azure Data Studio](/azure-data-studio/download-azure-data-studio) to do the operation.
4. In **New connection**, enter or select this information:
postgresql How To Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-portal.md
After you've created **myVm**, connect to it from the internet as follows:
Address: 10.1.3.4
```
-3. Test the private link connection for the PostgreSQL server using any available client. In the example below I have used [Azure Data studio](/sql/azure-data-studio/download) to do the operation.
+3. Test the private link connection for the PostgreSQL server using any available client. In the example below, [Azure Data Studio](/azure-data-studio/download-azure-data-studio) is used to do the operation.
4. In **New connection**, enter or select this information:
private-5g-core Monitor Private 5G Core Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-alerts.md
Alerts help track important events in your network by sending a notification con
:::image type="content" source="media/packet-core-alerts-signal-list.png" alt-text="Screenshot of Azure portal showing alert signal selection menu." lightbox="media/packet-core-alerts-signal-list.png":::
-1. Select the signal you want the alert to be based on and follow the rest of the create instructions. For more information on alert options and setting actions groups used for notification, please refer to [the Azure Monitor alerts create and edit documentation](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric).
+1. Select the signal you want the alert to be based on and follow the rest of the create instructions. For more information on alert options and setting actions groups used for notification, please refer to [the Azure Monitor alerts create and edit documentation](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric).
1. Once you've reached the end of the create instructions, select **Review + create** to create your alert.
1. Verify that your alert rule was created by navigating to the alerts page for your packet core (see steps 1 and 2) and finding it in the list of alert rules on the page.

## Next steps
-- [Learn more about Azure Monitor alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-overview).
+- [Learn more about Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md).
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
Title: Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files | Microsoft Docs
-description: Establish high availability for SAP NW on Azure virtual machines (VMs) RHEL with NFS on Azure Files.
+description: Establish high availability for SAP NetWeaver on Azure Virtual Machines Red Hat Enterprise Linux (RHEL) with NFS on Azure Files.
tags: azure-resource-manager
Last updated 08/23/2023
-# High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux with NFS on Azure Files
+# High availability for SAP NetWeaver on VMs on RHEL with NFS on Azure Files
[dbms-guide]:dbms-guide-general.md
[deployment-guide]:deployment-guide.md
[sap-hana-ha]:sap-hana-high-availability-rhel.md
-This article describes how to deploy and configure VMs, install the cluster framework, and install an HA SAP NetWeaver system, using [NFS on Azure Files](../../storage/files/files-nfs-protocol.md). The example configurations use VMs that run on Red Hat Enterprise Linux (RHEL).
+This article describes how to deploy and configure virtual machines (VMs), install the cluster framework, and install a high-availability (HA) SAP NetWeaver system by using [NFS on Azure Files](../../storage/files/files-nfs-protocol.md). The example configurations use VMs that run on Red Hat Enterprise Linux (RHEL).
## Prerequisites

* [Azure Files documentation][afs-azure-doc]
* SAP Note [1928533], which has:
- * List of Azure VM sizes that are supported for the deployment of SAP software
- * Important capacity information for Azure VM sizes
- * Supported SAP software, and operating system (OS) and database combinations
- * Required SAP kernel version for Windows and Linux on Microsoft Azure
+ * A list of Azure VM sizes that are supported for the deployment of SAP software.
+ * Important capacity information for Azure VM sizes.
+ * Supported SAP software and operating system (OS) and database combinations.
+ * Required SAP kernel version for Windows and Linux on Microsoft Azure.
* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux 7.x
-* SAP Note [2772999] has recommended OS settings for Red Hat Enterprise Linux 8.x
-* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux
+* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux 7.x.
+* SAP Note [2772999] has recommended OS settings for Red Hat Enterprise Linux 8.x.
+* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
* SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure.
* SAP Note [2243692] has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351] has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* SAP Note [1999351] has more troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
* [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
* [Azure Virtual Machines deployment for SAP on Linux][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
-* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081)
-* General RHEL documentation
+* [SAP Netweaver in Pacemaker cluster](https://access.redhat.com/articles/3150081)
+* General RHEL documentation:
  * [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
  * [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
  * [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
- * [Configuring ASCS/ERS for SAP Netweaver with standalone resources in RHEL 7.5](https://access.redhat.com/articles/3569681)
+ * [Configuring ASCS/ERS for SAP NetWeaver with Standalone Resources in RHEL 7.5](https://access.redhat.com/articles/3569681)
  * [Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker on RHEL](https://access.redhat.com/articles/3974941)
* Azure-specific RHEL documentation:
  * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
This article describes how to deploy and configure VMs, install the cluster fram
## Overview
-To deploy the SAP NetWeaver application layer, you need shared directories like `/sapmnt/SID` and `/usr/sap/trans` in the environment. Additionally, when deploying an HA SAP system, you need to protect and make highly available file systems like `/sapmnt/SID` and `/usr/sap/SID/ASCS`.
+To deploy the SAP NetWeaver application layer, you need shared directories like `/sapmnt/SID` and `/usr/sap/trans` in the environment. Additionally, when you deploy an HA SAP system, you need to protect and make highly available file systems like `/sapmnt/SID` and `/usr/sap/SID/ASCS`.
-Now you can place these file systems on [NFS on Azure Files](../../storage/files/files-nfs-protocol.md). NFS on Azure Files is an HA storage solution. This solution offers synchronous Zone redundant storage (ZRS) and is suitable for SAP ASCS/ERS instances deployed across Availability Zones. You still need a Pacemaker cluster to protect single point of failure components like SAP Netweaver central services(ASCS/SCS).
+Now you can place these file systems on [NFS on Azure Files](../../storage/files/files-nfs-protocol.md). NFS on Azure Files is an HA storage solution. This solution offers synchronous zone-redundant storage (ZRS) and is suitable for SAP ASCS/ERS instances deployed across availability zones. You still need a Pacemaker cluster to protect single point of failure components like SAP NetWeaver central services (ASCS/SCS).
The example configurations and installation commands use the following instance numbers:

| Instance name | Instance number |
| - | - |
-| ABAP SAP Central Services (ASCS) | 00 |
+| ABAP SAP central services (ASCS) | 00 |
| ERS | 01 |
-| Primary Application Server (PAS) | 02 |
-| Additional Application Server (AAS) | 03 |
+| Primary application server (PAS) | 02 |
+| Additional application server (AAS) | 03 |
| SAP system identifier | NW1 |
- This diagram shows a typical SAP Netweaver HA architecture. The "sapmnt" and "saptrans" file systems are deployed on NFS shares on Azure Files. The SAP central services are protected by a Pacemaker cluster. The clustered VMs are behind an Azure load balancer. The NFS shares are mounted through private end point.
+ This diagram shows a typical SAP NetWeaver HA architecture. The `sapmnt` and `saptrans` file systems are deployed on NFS shares on Azure Files. The SAP central services are protected by a Pacemaker cluster. The clustered VMs are behind an instance of Azure Load Balancer. The NFS shares are mounted through private endpoints.
:::image-end:::

## Prepare infrastructure
-This document assumes that you've already deployed an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), subnet and resource group.
+This document assumes that you already deployed an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), subnet, and resource group.
-1. Deploy your VMs. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload). You can deploy VMs in availability zones, if the Azure region supports zones, or in availability sets. If you need additional IP addresses for your VMs, deploy and attach a second NIC. DonΓÇÖt add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
+1. Deploy your VMs. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload). You can deploy VMs in availability zones, if the Azure region supports zones, or in availability sets. If you need more IP addresses for your VMs, deploy and attach a second NIC. Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
-2. For your virtual IPs, deploy and configure an Azure [load balancer](../../load-balancer/load-balancer-overview.md). It's recommended to use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
+1. For your virtual IPs, deploy and configure an instance of [Load Balancer](../../load-balancer/load-balancer-overview.md). We recommend that you use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Configure two frontend IPs: one for ASCS (`10.90.90.10`) and one for ERS (`10.90.90.9`).
- 2. Create a backend pool and add both VMs, which will be part of the cluster.
- 3. Create the health probe for ASCS. The probe port is `62000`. Create the probe port for ERS. The ERS probe port is `62101`. When you configure the Pacemaker resources later on, you must use matching probe ports.
- 4. Configure the load balancing rules for ASCS and ERS. Select the corresponding front IPs, health probes, and the backend pool. Select HA ports, increase the idle timeout to 30 minutes, and enable floating IP.
+ 1. Configure two front-end IPs. One is for ASCS (`10.90.90.10`) and one is for ERS (`10.90.90.9`).
+ 1. Create a back-end pool and add both VMs, which will be part of the cluster.
+ 1. Create the health probe for ASCS. The probe port is `62000`. Create the probe port for ERS. The ERS probe port is `62101`. When you configure the Pacemaker resources later on, you must use matching probe ports.
+    1. Configure the load-balancing rules for ASCS and ERS. Select the corresponding front IPs, health probes, and the back-end pool. Select HA ports, increase the idle timeout to 30 minutes, and enable floating IP. If you prefer to script this setup, see the Azure CLI sketch after this list.
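If you script your infrastructure instead, the following hedged Azure CLI sketch outlines the same setup for the ASCS front end (repeat the probe and rule for ERS); every resource name here is an assumption:

```bash
# Sketch only: resource group, vnet, subnet, and LB names are assumptions
az network lb create --resource-group rg-sap --name lb-nw1 --sku Standard \
  --vnet-name vnet-sap --subnet subnet-sap \
  --frontend-ip-name frontend.NW1.ASCS --private-ip-address 10.90.90.10 \
  --backend-pool-name backend.NW1

# Health probe on the ASCS probe port 62000
az network lb probe create --resource-group rg-sap --lb-name lb-nw1 \
  --name health.NW1.ASCS --protocol tcp --port 62000 --interval 5

# HA-ports rule with floating IP and a 30-minute idle timeout
az network lb rule create --resource-group rg-sap --lb-name lb-nw1 \
  --name lb.NW1.ASCS --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name frontend.NW1.ASCS --backend-pool-name backend.NW1 \
  --probe-name health.NW1.ASCS --floating-ip true --idle-timeout 30
```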
### Deploy Azure Files storage account and NFS shares
-NFS on Azure Files, runs on top of [Azure Files Premium storage][afs-azure-doc]. Before setting up NFS on Azure Files, see [How to create an NFS share](../../storage/files/storage-files-how-to-create-nfs-shares.md?tabs=azure-portal).
+NFS on Azure Files runs on top of [Azure Files premium storage][afs-azure-doc]. Before you set up NFS on Azure Files, see [How to create an NFS share](../../storage/files/storage-files-how-to-create-nfs-shares.md?tabs=azure-portal).
There are two options for redundancy within an Azure region:

* [Locally redundant storage (LRS)](../../storage/common/storage-redundancy.md#locally-redundant-storage), which offers local, in-zone synchronous data replication.
-* [Zone redundant storage (ZRS)](../../storage/common/storage-redundancy.md#zone-redundant-storage), which replicates your data synchronously across the three [availability zones](../../availability-zones/az-overview.md) in the region.
+* [Zone-redundant storage (ZRS)](../../storage/common/storage-redundancy.md#zone-redundant-storage), which replicates your data synchronously across the three [availability zones](../../availability-zones/az-overview.md) in the region.
-Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate redundancy. Review the [availability of Azure Files by Azure region][afs-avail-matrix] under **Premium Files Storage**. If your scenario benefits from ZRS, [verify that Premium File shares with ZRS are supported in your Azure region](../../storage/common/storage-redundancy.md#zone-redundant-storage).
+Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate redundancy. Review the [availability of Azure Files by Azure region][afs-avail-matrix] under **Premium Files Storage**. If your scenario benefits from ZRS, [verify that premium file shares with ZRS are supported in your Azure region](../../storage/common/storage-redundancy.md#zone-redundant-storage).
-It's recommended to access your Azure Storage account through an [Azure Private Endpoint](../../storage/files/storage-files-networking-endpoints.md?tabs=azure-portal). Make sure to deploy the Azure Files storage account endpoint and the VMs, where you need to mount the NFS shares, in the same Azure VNet or peered Azure VNets.
+We recommend that you access your Azure Storage account through an [Azure private endpoint](../../storage/files/storage-files-networking-endpoints.md?tabs=azure-portal). Make sure to deploy the Azure Files storage account endpoint and the VMs, where you need to mount the NFS shares, in the same Azure virtual network or peered Azure virtual networks.
-1. Deploy a File Storage account named `sapafsnfs`. In this example, we use ZRS. If you're not familiar with the process, see [Create a storage account](../../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#create-a-storage-account) for the Azure portal.
+1. Deploy an Azure Files storage account named `sapafsnfs`. In this example, we use ZRS. If you're not familiar with the process, see [Create a storage account](../../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#create-a-storage-account) for the Azure portal.
-2. In the **Basics** tab, use these settings:
+1. On the **Basics** tab, use these settings:
1. For **Storage account name**, enter `sapafsnfs`.
- 2. For **Performance**, select **Premium**.
- 3. For **Premium account type**, select **FileStorage**.
- 4. For **Replication**, select zone redundancy (ZRS).
-3. Select **Next**.
-4. In the **Advanced** tab, deselect **Require secure transfer for REST API Operations**. If you don't deselect this option, you can't mount the NFS share to your VM. The mount operation will time out.
-5. Select **Next**.
-6. In the **Networking** section, configure these settings:
+ 1. For **Performance**, select **Premium**.
+ 1. For **Premium account type**, select **FileStorage**.
+ 1. For **Replication**, select **zone redundancy (ZRS)**.
+1. Select **Next**.
+1. On the **Advanced** tab, clear **Require secure transfer for REST API Operations**. If you don't clear this option, you can't mount the NFS share to your VM. The mount operation will time out.
+1. Select **Next**.
+1. In the **Networking** section, configure these settings:
1. Under **Networking connectivity**, for **Connectivity method**, select **Private endpoint**.
- 2. Under **Private endpoint**, select **Add private endpoint**.
-7. In the **Create private endpoint** pane, select your **Subscription**, **Resource group**, and **Location**.
+ 1. Under **Private endpoint**, select **Add private endpoint**.
+1. On the **Create private endpoint** pane, select your **Subscription**, **Resource group**, and **Location**.
For **Name**, enter `sapafsnfs_pe`. For **Storage sub-resource**, select **file**.
- Under **Networking**, for **Virtual network**, select the VNet and subnet to use. Again, you can use the VNet where your SAP VMs are, or a peered VNet.
+ Under **Networking**, for **Virtual network**, select the virtual network and subnet to use. Again, you can use the virtual network where your SAP VMs are or a peered virtual network.
Under **Private DNS integration**, accept the default option **Yes** for **Integrate with private DNS zone**. Make sure to select your **Private DNS Zone**. Select **OK**.
-8. On the **Networking** tab again, select **Next**.
-9. On the **Data protection** tab, keep all the default settings.
-10. Select **Review + create** to validate your configuration.
-11. Wait for the validation to finish. Fix any issues before continuing.
-12. On the **Review + create** tab, select **Create**.
+1. On the **Networking** tab again, select **Next**.
+1. On the **Data protection** tab, keep all the default settings.
+1. Select **Review + create** to validate your configuration.
+1. Wait for the validation to finish. Fix any issues before you continue.
+1. On the **Review + create** tab, select **Create**.
Next, deploy the NFS shares in the storage account you created. In this example, there are two NFS shares, `sapnw1` and `saptrans`.

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select or search for **Storage accounts**.
-3. On the **Storage accounts** page, select **sapafsnfs**.
-4. On the resource menu for **sapafsnfs**, select **File shares** under **Data storage**.
-5. On the **File shares** page, select **File share**.
+1. Select or search for **Storage accounts**.
+1. On the **Storage accounts** page, select **sapafsnfs**.
+1. On the resource menu for **sapafsnfs**, under **Data storage**, select **File shares**.
+1. On the **File shares** page, select **File share**.
1. For **Name**, enter `sapnw1`, `saptrans`.
- 2. Select an appropriate share size. For example, **128 GB**. Consider the size of the data stored on the share, IOPs and throughput requirements. For more information, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
- 3. Select **NFS** as the protocol.
- 4. Select **No root Squash**. Otherwise, when you mount the shares on your VMs, you can't see the file owner or group.
+ 1. Select an appropriate share size. For example, **128 GB**. Consider the size of the data stored on the share and IOPS and throughput requirements. For more information, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
+ 1. Select **NFS** as the protocol.
+ 1. Select **No root Squash**. Otherwise, when you mount the shares on your VMs, you can't see the file owner or group.
> [!IMPORTANT]
-> The share size above is just an example. Make sure to size your shares appropriately. Size not only based on the size of the of data stored on the share, but also based on the requirements for IOPS and throughput. For details see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
+> The preceding share size is only an example. Make sure to size your shares appropriately. Size is not only based on the size of the data stored on the share but also based on the requirements for IOPS and throughput. For more information, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
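If you create the shares from a script rather than the portal, a hedged Azure CLI sketch might look like this; the account and share names reuse the example values, and the flags assume a current Azure CLI:

```bash
# Sketch only: create an NFS share with root squash disabled (quota in GiB)
az storage share-rm create --resource-group rg-sap \
  --storage-account sapafsnfs --name sapnw1 --quota 128 \
  --enabled-protocols NFS --root-squash NoRootSquash
```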
The SAP file systems that don't need to be mounted via NFS can also be deployed on [Azure disk storage](../../virtual-machines/disks-types.md#premium-ssds). In this example, you can deploy `/usr/sap/NW1/D02` and `/usr/sap/NW1/D03` on Azure disk storage.
The SAP file systems that don't need to be mounted via NFS can also be deployed
When you plan your deployment with NFS on Azure Files, consider the following important points:
-* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-model)
-* Size your NFS shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets)
-* Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, consult [Troubleshoot Azure file shares performance](../../storage/files/files-troubleshoot-performance.md)
+* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-model).
+* Size your NFS shares not only based on capacity requirements but also on IOPS and throughput requirements. For more information, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
+* Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, see [Troubleshoot Azure file share performance](../../storage/files/files-troubleshoot-performance.md).
* For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
-* If your SAP system has a heavy batch jobs load, you may have millions of job logs. If the SAP batch job logs are stored in the file system, pay special attention to the sizing of the `sapmnt` share. As of SAP_BASIS 7.52 the default behavior for the batch job logs is to be stored in the database. For details see [Job log in the database][2360818].
-* Deploy a separate `sapmnt` share for each SAP system
-* Don't use the `sapmnt` share for any other activity, such as interfaces, or `saptrans`
-* Don't use the `saptrans` share for any other activity, such as interfaces, or `sapmnt`
-* Avoid consolidating the shares for too many SAP systems in a single storage account. There are also [Storage account performance scale targets](../../storage/files/storage-files-scale-targets.md#storage-account-scale-targets). Be careful to not exceed the limits for the storage account, too.
-* In general, don't consolidate the shares for more than 5 SAP systems in a single storage account. This guideline helps avoid exceeding the storage account limits and simplifies performance analysis.
-* In general, avoid mixing shares like `sapmnt` for non-production and production SAP systems in the same storage account.
-* We recommend deploying on RHEL 8.4 or higher to benefit from [NFS client improvements](../../storage/files/files-troubleshoot-linux-nfs.md#ls-hangs-for-large-directory-enumeration-on-some-kernels).
+* If your SAP system has a heavy batch jobs load, you might have millions of job logs. If the SAP batch job logs are stored in the file system, pay special attention to the sizing of the `sapmnt` share. As of SAP_BASIS 7.52, the default behavior for the batch job logs is to be stored in the database. For more information, see [Job log in the database][2360818].
+* Deploy a separate `sapmnt` share for each SAP system.
+* Don't use the `sapmnt` share for any other activity, such as interfaces, or `saptrans`.
+* Don't use the `saptrans` share for any other activity, such as interfaces, or `sapmnt`.
+* Avoid consolidating the shares for too many SAP systems in a single storage account. There are also [storage account performance scale targets](../../storage/files/storage-files-scale-targets.md#storage-account-scale-targets). Be careful not to exceed the limits for the storage account, too.
+* In general, don't consolidate the shares for more than five SAP systems in a single storage account. This guideline helps avoid exceeding the storage account limits and simplifies performance analysis.
+* In general, avoid mixing shares like `sapmnt` for nonproduction and production SAP systems in the same storage account.
+* We recommend that you deploy on RHEL 8.4 or higher to benefit from [NFS client improvements](../../storage/files/files-troubleshoot-linux-nfs.md#ls-hangs-for-large-directory-enumeration-on-some-kernels).
* Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions automatically redirect to a healthy zone. You don't have to remount the NFS shares on your VMs.
-* If you're deploying your VMs across Availability Zones, use [Storage account with ZRS](../../storage/common/storage-redundancy.md#zone-redundant-storage) in the Azure regions that supports ZRS.
+* If you're deploying your VMs across availability zones, use a [storage account with ZRS](../../storage/common/storage-redundancy.md#zone-redundant-storage) in the Azure regions that support ZRS.
* Azure Files doesn't currently support automatic cross-region replication for disaster recovery scenarios.
-## Setting up (A)SCS
+## Set up (A)SCS
In this example, you deploy the resources manually through the [Azure portal](https://portal.azure.com/#home).
-### Deploy Azure Load Balancer via Azure portal
+### Deploy Azure Load Balancer via the Azure portal
-After you deploy the VMs for your SAP system, create a load balancer. Then, use the VMs in the backend pool.
+After you deploy the VMs for your SAP system, create a load balancer. Then, use the VMs in the back-end pool.
-1. Create an internal, standard load balancer.
- 1. Create the frontend IP addresses
- 1. IP address 10.90.90.10 for the ASCS
- 1. Open the load balancer, select frontend IP pool, and click Add
- 2. Enter the name of the new frontend IP pool (for example **frontend.NW1.ASCS**)
- 3. Set the Assignment to Static and enter the IP address (for example **10.90.90.10**)
- 4. Click OK
- 2. IP address 10.90.90.9 for the ASCS ERS
- * Repeat the steps above under "a" to create an IP address for the ERS (for example **10.90.90.9** and **frontend.NW1.ERS**)
- 2. Create a single back-end pool:
+1. Create an internal, Standard instance of Load Balancer.
+ 1. Create the front-end IP addresses.
+ 1. IP address 10.90.90.10 for the ASCS:
+ 1. Open the load balancer, select the front-end IP pool, and select **Add**.
+ 1. Enter the name of the new front-end IP pool (for example, **frontend.NW1.ASCS**).
+ 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.90.90.10**).
+ 1. Select **OK**.
+ 1. IP address 10.90.90.9 for the ASCS ERS:
+ * Repeat the preceding steps under "a" to create an IP address for the ERS (for example, **10.90.90.9** and **frontend.NW1.ERS**).
+ 1. Create a single back-end pool:
1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **backend.NW1**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the ASCS cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 3. Create the health probes
- 1. Port 620**00** for ASCS
- 1. Open the load balancer, select health probes, and click Add
- 2. Enter the name of the new health probe (for example **health.NW1.ASCS**)
- 3. Select TCP as protocol, port 620**00**, keep Interval 5
- 4. Click OK
- 2. Port 621**01** for ASCS ERS
- * Repeat the steps above under "c" to create a health probe for the ERS (for example 621**01** and **health.NW1.ERS**)
- 4. Load-balancing rules
- 1. Create a backend pool for the ASCS
- 1. Open the load balancer, select Load-balancing rules and click Add
- 2. Enter the name of the new load balancer rule (for example **lb.NW1.ASCS**)
- 3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**)
- 4. Select **HA ports**
- 5. Increase idle timeout to 30 minutes
- 6. **Make sure to enable Floating IP**
- 7. Click OK
- * Repeat the steps above to create load balancing rules for ERS (for example **lb.NW1.ERS**)
+ 1. Enter the name of the new back-end pool (for example, **backend.NW1**).
+ 1. Select **NIC** for **Backend Pool Configuration**.
+ 1. Select **Add a virtual machine**.
+ 1. Select the VMs of the ASCS cluster.
+ 1. Select **Add**.
+ 1. Select **Save**.
+ 1. Create the health probes.
+ 1. Port 620**00** for ASCS:
+ 1. Open the load balancer, select **Health probes**, and select **Add**.
+ 1. Enter the name of the new health probe (for example, **health.NW1.ASCS**).
+ 1. Select **TCP** as the protocol and the port 620**00** and keep **Interval 5**.
+ 1. Select **OK**.
+ 1. Port 621**01** for ASCS ERS:
+ * Repeat the preceding steps under "c" to create a health probe for the ERS (for example, 621**01** and **health.NW1.ERS**).
+ 1. Create load-balancing rules.
+ 1. Create a back-end pool for the ASCS:
+ 1. Open the load balancer, select **Load-balancing rules**, and select **Add**.
+ 1. Enter the name of the new load balancer rule (for example, **lb.NW1.ASCS**).
+ 1. Select the front-end IP address for ASCS, the back-end pool, and the health probe you created earlier (for example, **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**).
+ 1. Select **HA ports**.
+ 1. Increase the idle timeout to **30 minutes**.
+ 1. Make sure to enable **Floating IP**.
+ 1. Select **OK**.
+ * Repeat the preceding steps to create load-balancing rules for ERS (for example, **lb.NW1.ERS**).
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard instance of Load Balancer, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For more information on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> Don't enable TCP timestamps on Azure VMs placed behind Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter **net.ipv4.tcp_timestamps** to **0**. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
-### Create Pacemaker cluster
+### Create a Pacemaker cluster
-Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md) to create a basic Pacemaker cluster for this (A)SCS server.
+Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md) to create a basic Pacemaker cluster for this (A)SCS server.
-### Prepare for SAP NetWeaver installation
+### Prepare for an SAP NetWeaver installation
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
+The following items are prefixed with:
-1. **[A]** Setup host name resolution
+- **[A]**: Applicable to all nodes
+- **[1]**: Only applicable to node 1
+- **[2]**: Only applicable to node 2
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
- Replace the IP address and the hostname in the following commands
+1. **[A]** Set up hostname resolution.
+
+ You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands:
```bash
sudo vi /etc/hosts
```
- Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
+ Insert the following lines to `/etc/hosts`. Change the IP address and hostname to match your environment.
```bash
# IP address of cluster node 1
The following items are prefixed with either **[A]** - applicable to all nodes,
10.90.90.9 sapers
```
-2. **[A]** Install NFS client and other requirements
+1. **[A]** Install the NFS client and other requirements.
```bash
sudo yum -y install nfs-utils resource-agents resource-agents-sap
```
-3. **[1]** Create the SAP directories on the NFS share.
- Mount temporarily the NFS share **sapnw1** one of the VMs and create the SAP directories that will be used as nested mount points.
+1. **[1]** Create the SAP directories on the NFS share.
+ Mount the NFS share **sapnw1** temporarily on one of the VMs, and create the SAP directories that will be used as nested mount points.
```bash
# mount temporarily the volume
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo rmdir /saptmp
```
-4. **[A]** Create the shared directories
+1. **[A]** Create the shared directories.
```bash
sudo mkdir -p /sapmnt/NW1
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chattr +i /usr/sap/NW1/ERS01
```
-5. **[A]** Check version of resource-agents-sap
+1. **[A]** Check the version of `resource-agents-sap`.
- Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7
+ Make sure that the version of the installed `resource-agents-sap` package is at least `3.9.5-124.el7`.
```bash
sudo yum info resource-agents-sap
```
-6. **[A]** Add mount entries
+1. **[A]** Add mount entries.
```bash
vi /etc/fstab
The following items are prefixed with either **[A]** - applicable to all nodes,
mount -a
```
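As a hedged illustration, `/etc/fstab` entries for the example account and shares might look like the following; the storage account name and export paths are assumptions to adapt to your environment:

```bash
# Hedged example /etc/fstab entries for the example account and shares
sapafsnfs.file.core.windows.net:/sapafsnfs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
sapafsnfs.file.core.windows.net:/sapafsnfs/saptrans /usr/sap/trans nfs noresvport,vers=4,minorversion=1,sec=sys 0 0
```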
-7. **[A]** Configure SWAP file
+1. **[A]** Configure the SWAP file.
```bash
sudo vi /etc/waagent.conf
The following items are prefixed with either **[A]** - applicable to all nodes,
ResourceDisk.SwapSizeMB=2000
```
- Restart the Agent to activate the change
+ Restart the agent to activate the change.
```bash
sudo service waagent restart
```
-8. **[A]** RHEL configuration
+1. **[A]** Configure RHEL.
- Configure RHEL as described in SAP Note [2002167] for RHEL 7.x, SAP Note [2772999] for RHEL 8.x or SAP note [3108316] for RHEL 9.x.
+ Configure RHEL as described in SAP Note [2002167] for RHEL 7.x, SAP Note [2772999] for RHEL 8.x, or SAP Note [3108316] for RHEL 9.x.
-### Installing SAP NetWeaver ASCS/ERS
+### Install SAP NetWeaver ASCS/ERS
-1. **[1]** Configure cluster default properties
+1. **[1]** Configure the cluster default properties.
```bash
# If using RHEL 7.x
The following items are prefixed with either **[A]** - applicable to all nodes,
pcs resource defaults update migration-threshold=3
```
-2. **[1]** Create a virtual IP resource and health-probe for the ASCS instance
+1. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
```bash
sudo pcs node standby sap-cl2
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-NW1_ASCS
```
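As a hedged illustration of what such a resource group can contain, the following sketch creates a file system, virtual IP, and probe listener for the ASCS group; the device path, options, and names are assumptions to adapt:

```bash
# Sketch: file system mount for the ASCS instance directory (path assumed)
sudo pcs resource create fs_NW1_ASCS Filesystem \
  device='sapafsnfs.file.core.windows.net:/sapafsnfs/sapnw1/usrsapNW1ascs' \
  directory='/usr/sap/NW1/ASCS00' fstype='nfs' \
  options='noresvport,vers=4,minorversion=1,sec=sys' \
  --group g-NW1_ASCS

# Virtual IP matching the load balancer front end for ASCS
sudo pcs resource create vip_NW1_ASCS IPaddr2 ip=10.90.90.10 \
  --group g-NW1_ASCS

# azure-lb answers the load balancer health probe on port 62000
sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \
  --group g-NW1_ASCS
```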
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ Make sure that the cluster status is okay and that all resources are started. Which node the resources are running on isn't important.
```bash
sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
```
-3. **[1]** Install SAP NetWeaver ASCS
+1. **[1]** Install SAP NetWeaver ASCS.
- Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example **sapascs**, **10.90.90.10** and the instance number that you used for the probe of the load balancer, for example **00**.
+ Install SAP NetWeaver ASCS as the root on the first node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ASCS, for example, **sapascs** and **10.90.90.10**, and the instance number that you used for the probe of the load balancer, for example, **00**.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to `sapinst`.
```bash
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chgrp sapsys /usr/sap/NW1/ASCS00
```
-4. **[1]** Create a virtual IP resource and health-probe for the ERS instance
+1. **[1]** Create a virtual IP resource and health probe for the ERS instance.
```bash
sudo pcs node unstandby sap-cl2
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-NW1_AERS
```
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ Make sure that the cluster status is okay and that all resources are started. Which node the resources are running on isn't important.
```bash
sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl2
```
-5. **[2]** Install SAP NetWeaver ERS
+1. **[2]** Install SAP NetWeaver ERS.
- Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example **sapers**, **10.90.90.9** and the instance number that you used for the probe of the load balancer, for example **01**.
+ Install SAP NetWeaver ERS as the root on the second node by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the ERS, for example, **sapers** and **10.90.90.9**, and the instance number that you used for the probe of the load balancer, for example, **01**.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to `sapinst`.
```bash
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chgrp sapsys /usr/sap/NW1/ERS01
```
-6. **[1]** Adapt the ASCS/SCS and ERS instance profiles
+1. **[1]** Adapt the ASCS/SCS and ERS instance profiles.
- * ASCS/SCS profile
+ * ASCS/SCS profile:
- ```bash
- sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
+ ```bash
+ sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
- # Change the restart command to a start command
- #Restart_Program_01 = local $(_EN) pf=$(_PF)
- Start_Program_01 = local $(_EN) pf=$(_PF)
+ # Change the restart command to a start command
+ #Restart_Program_01 = local $(_EN) pf=$(_PF)
+ Start_Program_01 = local $(_EN) pf=$(_PF)
- # Add the keep alive parameter, if using ENSA1
- enque/encni/set_so_keepalive = true
- ```
+ # Add the keep alive parameter, if using ENSA1
+ enque/encni/set_so_keepalive = true
+ ```
- For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
+ For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP Note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
- * ERS profile
+ * ERS profile:
- ```bash
- sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
+ ```bash
+ sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
- # Change the restart command to a start command
- #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
- Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
+ # Change the restart command to a start command
+ #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
+ Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
- # remove Autostart from ERS profile
- # Autostart = 1
- ```
+ # remove Autostart from ERS profile
+ # Autostart = 1
+ ```
-7. **[A]** Configure Keep Alive
+1. **[A]** Configure Keep Alive.
- The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1/ENSA2. Read [SAP Note 1410736][1410736] for more information.
+ The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this action, set a parameter in the SAP NetWeaver ASCS/SCS profile, if you're using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1 and ENSA2. For more information, see SAP Note [1410736][1410736].
```bash
# Change the Linux system configuration
sudo sysctl net.ipv4.tcp_keepalive_time=300
```
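The `sysctl` command above applies the setting only until the next reboot. To persist it across reboots, a small sketch follows; the file name is an assumption, and any `/etc/sysctl.d/*.conf` file works:

```bash
# Persist the keepalive setting across reboots (file name is an assumption)
echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/99-sap-keepalive.conf
sudo sysctl --system
```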
-8. **[A]** Update the /usr/sap/sapservices file
+1. **[A]** Update the `/usr/sap/sapservices` file.
- To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from /usr/sap/sapservices file.
+ To prevent the start of the instances by the `sapinit` startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
```bash
sudo vi /usr/sap/sapservices
The following items are prefixed with either **[A]** - applicable to all nodes,
# LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers -D -u nw1adm
```
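   If you prefer not to edit the file by hand, a `sed` one-liner can comment out the managed instances. This is a sketch; the patterns `ASCS00` and `ERS01` are assumptions based on the instance names in this example.

```bash
# Comment out every line that references the Pacemaker-managed instances (patterns are illustrative)
sudo sed -i '/ASCS00\|ERS01/s/^/# /' /usr/sap/sapservices
```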
-9. **[1]** Create the SAP cluster resources.
+1. **[1]** Create the SAP cluster resources.
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ If you use enqueue server 1 architecture (ENSA1), define the resources as shown here:
```bash
sudo pcs property set maintenance-mode=true
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs property set maintenance-mode=false
```
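   The full resource definitions are elided in this excerpt. As a condensed, hedged sketch of what an ENSA1 configuration typically looks like (the resource and group names, scores, and constraint values are assumptions to adapt to your SID and storage):

```bash
# Illustrative ENSA1 resource sketch; names and values are assumptions.
sudo pcs resource create rsc_sap_NW1_ASCS00 SAPInstance \
  InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \
  AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
  --group g-NW1_ASCS

sudo pcs resource create rsc_sap_NW1_ERS01 SAPInstance \
  InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
  AUTOMATIC_RECOVER=false IS_ERS=true \
  --group g-NW1_AERS

# ENSA1: keep ERS away from ASCS, and pull ASCS to the ERS node on failover
sudo pcs constraint colocation add g-NW1_AERS with g-NW1_ASCS -5000
sudo pcs constraint order start g-NW1_ASCS then stop g-NW1_AERS kind=Optional symmetrical=false
```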
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:
+ SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ If you use enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent `resource-agents-sap-4.1.1-12.el7.x86_64` or newer and define the resources as shown here:
```bash
sudo pcs property set maintenance-mode=true
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs property set maintenance-mode=false
```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
+ If you're upgrading from an older version and switching to enqueue server 2, see SAP Note [2641322](https://launchpad.support.sap.com/#/notes/2641322).
> [!NOTE]
- > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup.
+ > The timeouts in the preceding configuration are only examples and might need to be adapted to the specific SAP setup.
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ Make sure that the cluster status is okay and that all resources are started. Which node the resources are running on isn't important.
```bash
sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
```
-10. **[1]** Execute below step to configure priority-fencing-delay (applicable only as of pacemaker-2.0.4-6.el8 or higher)
+1. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only for pacemaker-2.0.4-6.el8 or higher).
> [!NOTE]
- > If you have two-node cluster, you have option to configure priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
+ > If you have a two-node cluster, you have the option to configure the `priority-fencing-delay` cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521).
>
- > The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8 version or higher. If you are setting up priority-fencing-delay on existing cluster, make sure to unset `pcmk_delay_max` option in fencing device.
+ > The property `priority-fencing-delay` is applicable for pacemaker-2.0.4-6.el8 version or higher. If you set up `priority-fencing-delay` on an existing cluster, make sure to clear the `pcmk_delay_max` setting in the fencing device.
```bash
sudo pcs resource defaults update priority=1
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs property set priority-fencing-delay=15s
```
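   To clear `pcmk_delay_max` on an existing fencing device, you can set the option to an empty value. A sketch, assuming the fencing device is named `rsc_st_azure`:

```bash
# Remove the pcmk_delay_max option from the fencing device (the device name is an assumption)
sudo pcs stonith update rsc_st_azure pcmk_delay_max=
```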
-11. **[A]** Add firewall rules for ASCS and ERS on both nodes.
+1. **[A]** Add firewall rules for ASCS and ERS on both nodes.
```bash
# Probe Port of ASCS
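# Illustrative commands; the probe ports (62000 for ASCS00, 62101 for ERS01) are
# assumptions that must match your load balancer health probe configuration.
sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62000/tcp
# Probe Port of ERS
sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
sudo firewall-cmd --zone=public --add-port=62101/tcp
```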
The following items are prefixed with either **[A]** - applicable to all nodes,
## SAP NetWeaver application server preparation
- Some databases require that the database instance installation is executed on an application server. Prepare the application server virtual machines to be able to use them in these cases.
+ Some databases require that the database instance installation runs on an application server. Prepare the application server VMs to be able to use them in these cases.
+
+ The following steps assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise, some of the steps (like configuring hostname resolution) aren't needed.
- The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.
+ The following items are prefixed with:
- The following items are prefixed with either **[A]** - applicable to both PAS and AAS, **[P]** - only applicable to PAS or **[S]** - only applicable to AAS.
+ - **[A]**: Applicable to both PAS and AAS
+ - **[P]**: Only applicable to PAS
+ - **[S]**: Only applicable to AAS
-1. **[A]** Setup host name resolution
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
- Replace the IP address and the hostname in the following commands:
+1. **[A]** Set up hostname resolution.
+ You can either use a DNS server or modify the `/etc/hosts` file on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the hostname in the following commands:
```bash
sudo vi /etc/hosts
```
- Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.
+ Insert the following lines to `/etc/hosts`. Change the IP address and hostname to match your environment.
```bash
10.90.90.7 sap-cl1
The following items are prefixed with either **[A]** - applicable to all nodes,
10.90.90.13 sapa02
```
-1. **[A]** Create the sapmnt directory
+1. **[A]** Create the `sapmnt` directory.
```bash
sudo mkdir -p /sapmnt/NW1
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chattr +i /usr/sap/trans
```
-1. **[A]** Install NFS client and other requirements
+1. **[A]** Install the NFS client and other requirements.
```bash
sudo yum -y install nfs-utils uuidd
```
-1. **[A]** Add mount entries
+1. **[A]** Add mount entries.
```bash
vi /etc/fstab
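# Illustrative NFS entries to add; the server name and export paths are assumptions.
# sapnfs.contoso.local:/sapmnt/NW1 /sapmnt/NW1 nfs nfsvers=4.1,sec=sys 0 0
# sapnfs.contoso.local:/usr/sap/trans /usr/sap/trans nfs nfsvers=4.1,sec=sys 0 0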
The following items are prefixed with either **[A]** - applicable to all nodes,
mount -a
```
-1. **[A]** Configure SWAP file
+1. **[A]** Configure the SWAP file.
```bash
sudo vi /etc/waagent.conf
The following items are prefixed with either **[A]** - applicable to all nodes,
ResourceDisk.SwapSizeMB=2000
```
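   For reference, a swap file on the resource disk is typically enabled with the following pair of settings (the size value is an example):

```bash
# In /etc/waagent.conf -- example values
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000
```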
- Restart the Agent to activate the change
+ Restart the agent to activate the change.
```bash
sudo service waagent restart
```
-## Install database
+## Install the database
-In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this installation. For more information on how to install SAP HANA in Azure, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux][sap-hana-ha]. For a list of supported databases, see [SAP Note 1928533][1928533].
+In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this installation. For more information on how to install SAP HANA in Azure, see [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux][sap-hana-ha]. For a list of supported databases, see SAP Note [1928533][1928533].
-Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the database.
+Install the SAP NetWeaver database instance as root by using a virtual hostname that maps to the IP address of the load balancer front-end configuration for the database.
-You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to `sapinst`.
```bash
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
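# Illustrative command; SWPM's default HTTPS port 4237 is an assumption to verify for your SWPM version.
sudo firewall-cmd --zone=public --add-port=4237/tcp
```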
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root
Follow these steps to install an SAP application server.
-1. **[A]** Prepare application server
+1. **[A]** Prepare the application server.
- Follow the steps in the chapter [SAP NetWeaver application server preparation](#sap-netweaver-application-server-preparation) above to prepare the application server.
+ Follow the steps in the previous section [SAP NetWeaver application server preparation](#sap-netweaver-application-server-preparation) to prepare the application server.
-2. **[A]** Install SAP NetWeaver application server.
+1. **[A]** Install the SAP NetWeaver application server.
   Install a primary or additional SAP NetWeaver application server.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to `sapinst`.
```bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
```
-3. **[A]** Update SAP HANA secure store
+1. **[A]** Update the SAP HANA secure store.
Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
- Run the following command to list the entries as `<sapsid>adm`
+ Run the following command to list the entries as `<sapsid>adm`.
```bash
hdbuserstore List
```
- This should list all entries and should look similar to
+ All entries should be listed and look similar to:
```bash
DATA FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.DAT
Follow these steps to install an SAP application server.
DATABASE: NW1
```
- In this example, the IP address of the default entry points to the VM, not the load balancer. Change the entry to point to the virtual hostname of the load balancer. Make sure to use the same port and database name. For example, `30313` and `NW1` in the sample output.
+ In this example, the IP address of the default entry points to the VM, not the load balancer. Change the entry to point to the virtual hostname of the load balancer. Make sure to use the same port and database name. For example, use `30313` and `NW1` in the sample output.
```bash
su - nw1adm
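# Illustrative sketch; the key, virtual hostname, port, user, and password are assumptions.
hdbuserstore SET DEFAULT nw1-db:30313@NW1 SAPABAP1 "<password>"
hdbuserstore List
```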
Follow these steps to install an SAP application server.
## Test cluster setup
-Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](./high-availability-guide-rhel.md#test-the-cluster-setup).
+Thoroughly test your Pacemaker cluster. For more information, see [Execute the typical failover tests](./high-availability-guide-rhel.md#test-the-cluster-setup).
## Next steps
-* To deploy cost optimization scenario where PAS and AAS instance is deployed with SAP NetWeaver HA cluster on RHEL, see [Install SAP Dialog Instance with SAP ASCS/SCS high availability VMs on RHEL](high-availability-guide-rhel-with-dialog-instance.md)
-* [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md)
-* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
-* [Azure Virtual Machines deployment for SAP][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](../../virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md).
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* To deploy a cost-optimization scenario where the PAS and AAS instance is deployed with SAP NetWeaver HA cluster on RHEL, see [Install SAP dialog instance with SAP ASCS/SCS high-availability VMs on RHEL](high-availability-guide-rhel-with-dialog-instance.md).
+* See [HA for SAP NW on Azure VMs on RHEL for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md).
+* See [Azure Virtual Machines planning and implementation for SAP][planning-guide].
+* See [Azure Virtual Machines deployment for SAP][deployment-guide].
+* See [Azure Virtual Machines DBMS deployment for SAP][dbms-guide].
+* To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure (large instances), see [SAP HANA (large instances) high availability and disaster recovery on Azure](../../virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery.md).
+* To learn how to establish HA and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha].
sap High Availability Guide Rhel With Hana Ascs Ers Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md
Once you've Installed, configured and set-up the **HANA Cluster**, follow the st
Based on your storage, follow the steps described in the following guides to configure the `SAPInstance` resource for the SAP ASCS/SCS and SAP ERS instances in the cluster.
-* NFS on Azure Files - [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md#prepare-for-sap-netweaver-installation)
+* NFS on Azure Files - [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md#prepare-for-an-sap-netweaver-installation)
* Azure NetApp Files - [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation)

## Test the cluster setup
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Configure SWAP file
- Create a swap file as defined in [Create a SWAP file for an Azure Linux VM](https://learn.microsoft.com/troubleshoot/azure/virtual-machines/create-swap-file-linux-vm)
+ Create a swap file as defined in [Create a SWAP file for an Azure Linux VM](/troubleshoot/azure/virtual-machines/create-swap-file-linux-vm)
```bash
#!/bin/sh
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Preview features that transition to general availability are removed from this l
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
|---|---|---|---|
-| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-ranking.md#eknn) | Vector search | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. | Available in the 2023-10-01-Preview REST API. |
+| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-overview.md#eknn) | Vector search | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. | Available in the 2023-10-01-Preview REST API. |
| [**Prefilters in vector search**](vector-search-how-to-query.md) | Vector search | Evaluates filter criteria before query execution, reducing the amount of content that needs to be searched. | Available in the 2023-10-01-Preview REST API. |
| [**2023-10-01-Preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) | Vector search | New preview version of the Search REST APIs that changes the definition for [vector fields](vector-search-how-to-create-index.md) and [vector queries](vector-search-how-to-query.md). This API version introduces breaking changes from **2023-07-01-Preview**, otherwise it's inclusive of all previous preview features. If you're using earlier previews, switch to **2023-10-01-Preview** with no loss of functionality, assuming you make updates to vector code. | Public preview, [Search REST API 2023-10-01-Preview](/rest/api/searchservice/index). Announced in October 2023. |
| [**Vector search**](vector-search-overview.md) | Vector search | Adds vector fields to a search index for similarity search scenarios over vector representations of text, image, and multilingual content. | Public preview using the [Search REST API 2023-07-01-Preview](/rest/api/searchservice/index-preview) and Azure portal. |
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
You should get a status HTTP 201 success.
+ Vector fields must be `"type": "Collection(Edm.Single)"` with `"dimensions"` and `"vectorSearchProfile"` properties. See [Create or Update Index](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) for property descriptions. A sketch of such a field follows this list.
-+ The `"vectorSearch"` section is an array of Approximate Nearest Neighbors (ANN) algorithm configurations and profiles. Supported algorithms include HNSW and eKNN. See [Relevance scoring in vector search](vector-search-ranking.md) for details.
++ The `"vectorSearch"` section is an array of Approximate Nearest Neighbors (ANN) algorithm configurations and profiles. Supported algorithms include HNSW and exhaustive KNN. See [Relevance scoring in vector search](vector-search-ranking.md) for details. + [Optional]: The `"semantic"` configuration enables reranking of search results. You can rerank results in queries of type `"semantic"` for string fields that are specified in the configuration. See [Semantic Search overview](semantic-search-overview.md) to learn more.
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Vector search is a method of information retrieval where documents and queries a
### Why use vector search
-Vectors can overcome the limitations of traditional keyword-based search by using machine learning models to capture the meaning of words and phrases in context, rather than relying solely on lexical analysis and matching of individual query terms. By capturing the intent of the query, vector search can return more relevant results that match the user's needs, even if the exact terms aren't present in the document. Additionally, vector search can be applied to different types of content, such as images and videos, not just text. This enables new search experiences such as multi-modal search or cross-language search.
+Vectors can overcome the limitations of traditional keyword-based search by using machine learning models to capture the meaning of words and phrases in context, rather than relying solely on lexical analysis and matching of individual query terms. By capturing the intent of the query, vector search can return more relevant results that match the user's needs, even if the exact terms aren't present in the document.
+
+Additionally, vector search can be applied to different types of content, such as images and videos, not just text. This enables new search experiences such as multi-modal search or cross-language search in multi-lingual applications.
### Embeddings and vectorization
-*Embeddings* are a specific type of vector representation of content or a query, created by machine learning models that capture the semantic meaning of text or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/ai-services/openai/concepts/understand-embeddings).
+*Embeddings* are a specific type of vector representation of content or a query, created by machine learning models that capture the semantic meaning of text or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [Understand embeddings (Azure OpenAI)](/azure/ai-services/openai/concepts/understand-embeddings).
The effectiveness of vector search in retrieving relevant information depends on the effectiveness of the embedding model in distilling the meaning of documents and queries into the resulting vector. The best models are well-trained on the types of data they're representing. You can evaluate existing models such as Azure OpenAI text-embedding-ada-002, bring your own model that's trained directly on the problem space, or fine-tune a general-purpose model. Azure Cognitive Search doesn't impose constraints on which model you choose, so pick the best one for your data.
-In order to create effective embeddings for vector search, it's important to take input size limitations into account. Therefore, we recommend following the [guidelines for chunking data](vector-search-how-to-chunk-documents.md) before generating embeddings. This best practice ensures that the embeddings accurately capture the relevant information and enable more efficient vector search.
+In order to create effective embeddings for vector search, it's important to take input size limitations into account. We recommend following the [guidelines for chunking data](vector-search-how-to-chunk-documents.md) before generating embeddings. This best practice ensures that the embeddings accurately capture the relevant information and enable more efficient vector search.
### What is the embedding space?
In order to create effective embeddings for vector search, it's important to tak
For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same.
-Popular vector similarity metrics include the following, which are all supported by Azure Cognitive Search.
+<a name="eknn"></a>
+
+### Nearest neighbors search
+
+In vector search, the search engine searches through the vectors within the embedding space to identify those that are near the query vector. This technique is called *nearest neighbor search*. Nearest neighbors help quantify the similarity between items: a high degree of vector similarity indicates that the original data was similar too. To facilitate fast nearest neighbor search, the search engine performs optimizations or employs data structures or data partitioning to reduce the search space. Each vector search algorithm approaches this problem differently, trading off characteristics such as latency, throughput, recall, and memory. Similarity metrics provide the mechanism for computing the distance between vectors.
+
+Azure Cognitive Search currently supports the following algorithms:
+
++ Hierarchical Navigable Small World (HNSW): HNSW is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. It organizes high-dimensional data points into a hierarchical graph structure that enables fast and scalable similarity search while allowing a tunable trade-off between search accuracy and computational cost. Because the algorithm requires all data points to reside in memory for fast random access, this algorithm consumes [vector index size](vector-search-index-size.md) quota.
+
++ Exhaustive K-nearest neighbors (KNN): Calculates the distances between the query vector and all data points. It's computationally intensive, so it works best for smaller datasets. Because the algorithm doesn't require fast random access of data points, this algorithm doesn't consume vector index size quota. However, this algorithm provides the global set of nearest neighbors.
-+ `euclidean` (also known as `L2 norm`): This measures the length of the vector difference between two vectors.
-+ `cosine`: This measures the angle between two vectors, and isn't affected by differing vector lengths.
-+ `dotProduct`: This measures both the length of each of the pair of two vectors, and the angle between them. For normalized vectors, this is identical to `cosine` similarity, but slightly more performant.
+Within an index definition, you can specify one or more algorithms, and then for each vector field specify which algorithm to use:
+
++ [Create a vector index](vector-search-how-to-create-index.md) to specify an algorithm in the index and on fields.
+
++ For `exhaustiveKnn`, use [2023-10-01-Preview](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) REST APIs or Azure SDK beta libraries that target the 2023-10-01-Preview version.
+
+Algorithm parameters that are used to initialize the index during index creation are immutable and can't be changed after the index is built. Some parameters that affect the query-time characteristics might be modified.
+
+In addition, fields that specify the HNSW algorithm also support exhaustive KNN search by using the [query request](vector-search-how-to-query.md) parameter `"exhaustive": true`. The opposite isn't true, however. If a field is indexed for `exhaustiveKnn`, you can't use HNSW in the query because the additional data structures that enable efficient search don't exist.
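For illustration, a minimal sketch of a `vectorSearch` index section that defines both algorithms (the names and parameter values are assumptions; check the 2023-10-01-Preview API reference for the exact schema):

```json
"vectorSearch": {
  "algorithms": [
    {
      "name": "my-hnsw",
      "kind": "hnsw",
      "hnswParameters": { "m": 4, "efConstruction": 400, "efSearch": 500, "metric": "cosine" }
    },
    {
      "name": "my-eknn",
      "kind": "exhaustiveKnn",
      "exhaustiveKnnParameters": { "metric": "cosine" }
    }
  ],
  "profiles": [
    { "name": "my-vector-profile", "algorithm": "my-hnsw" }
  ]
}
```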
### Approximate Nearest Neighbors
-Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While this approach sacrifices some accuracy, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You can adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application.
+Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing.
-Azure Cognitive Search uses Hierarchical Navigable Small Worlds (HNSW), which is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. REST API 2023-10-01-Preview adds support for Exhaustive K-Nearest Neighbors (eKNN), which calculates the distance between all pairs of data points.
+ANN algorithms sacrifice some accuracy, but offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy against efficiency in modern information retrieval applications. You can adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application.
-> [!NOTE]
+Azure Cognitive Search uses HNSW for its ANN algorithm.
+
+<!-- > [!NOTE]
> Finding the true set of [_k_ nearest neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) requires comparing the input vector exhaustively against all vectors in the dataset. While each vector similarity calculation is relatively fast, performing these exhaustive comparisons across large datasets is computationally expensive and slow due to the sheer number of comparisons. For example, if a dataset contains 10 million 1,000-dimensional vectors, computing the distance between the query vector and all vectors in the dataset would require scanning 37 GB of data (assuming single-precision floating point vectors) and a high number of similarity calculations.
>
-> To address this challenge, approximate nearest neighbor (ANN) search methods are used to trade off recall for speed. These methods can efficiently find a small set of candidate vectors that are similar to the query vector and have high likelihood to be in the globally most similar neighbors. Each algorithm has a different approach to reducing the total number of vectors comparisons, but they all share the ability to balance accuracy and efficiency by tweaking the algorithm configuration parameters.
+> To address this challenge, approximate nearest neighbor (ANN) search methods are used to trade off recall for speed. These methods can efficiently find a small set of candidate vectors that are similar to the query vector and have high likelihood to be in the globally most similar neighbors. Each algorithm has a different approach to reducing the total number of vectors comparisons, but they all share the ability to balance accuracy and efficiency by tweaking the algorithm configuration parameters. -->
## Next steps
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
Title: Vector search scoring
+ Title: Vector relevance and ranking
-description: Explains the concepts behind vector relevance scoring, including how matches are found in vector space and ranked in search results.
+description: Explains the concepts behind vector relevance and scoring, including how matches are found in vector space and ranked in search results.
Last updated 10/13/2023
-# Searching and relevance in vector search
+# Relevance and ranking in vector search
> [!IMPORTANT]
> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme).
-This article is for developers who need a deeper understanding of relevance scoring for vector queries in Azure Cognitive Search.
+In vector query execution, the search engine looks for similar vectors to find the best candidates to return in search results. Depending on how you indexed the vector content, the search for relevant matches is either exhaustive, or constrained to near neighbors for faster processing. Once candidates are found, similarity metrics are used to score each result based on the strength of the match. This article explains the algorithms used to determine relevance and the similarity metrics used for scoring.
-## Vector search supported algorithms
+## Determine relevance in vector search
-Azure Cognitive Search provides the following scoring algorithms for vector search:
+The algorithms that determine relevance are exhaustive k-nearest neighbors (KNN) and Hierarchical Navigable Small World (HNSW).
-+ `exhaustiveKnn`: Calculates the distances between the query vector and all data points, making it very computationally intensive for large datasets. Because the algorithm does not require fast random access of data points, this algorithm will **not** consume vector index size quota.
-+ `hnsw`: Organizes high-dimensional data points into a hierarchical graph structure that enables fast and scalable similarity search while maintaining a trade-off between search accuracy and computational cost. Because the algorithm requires all data points to reside in memory for fast random access, this algorithm will consume vector index size quota.
+Exhaustive KNN performs a brute-force search that enables users to search the entire vector space for matches that are most similar to the query. It does this by calculating the distances between all pairs of data points and finding the exact `k` nearest neighbors for a query point.
-Vector search algorithms are specified in a search index, and then specified on the field definition (also in the index):
+HNSW is an algorithm used for efficient approximate nearest neighbor (ANN) search in high-dimensional spaces. It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency.
-+ [Create a vector index](vector-search-how-to-create-index.md)
+Only fields marked as `searchable` in the index, or as `searchFields` in the query, are used for searching and scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
-Algorithm parameters that are used to initialize the index during index creation are *immutable* and cannot be changed after the index is built. Some parameters that affect the query-time characteristics may be modified.Some of these parameters can be modified in a [query request](vector-search-how-to-query.md).
+### When to use exhaustive KNN
-Each algorithm has different memory requirements, which affect [vector index size](vector-search-index-size.md), predicated on memory usage. When evaluating algorithms, remember:
+This algorithm is intended for scenarios where high recall is of utmost importance, and users are willing to accept the trade-offs in search performance. Because it's computationally intensive, use exhaustive KNN for small to medium datasets, or when precision requirements outweigh query performance considerations.
-+ `hnsw`, which accesses HNSW graphs stored in memory, adds overhead to vector index size because these additional data structures consume space, and fast random access requires the full index to be loaded into memory.
-+ `exhaustiveKnn` doesn't load the entire vector index into memory. As such, it has no vector index size overhead, meaning it doesn't contribute to vector index size.
+Another use is to build a dataset to evaluate approximate nearest neighbor algorithm recall. Exhaustive KNN can be used to build the ground truth set of nearest neighbors.
-<a name="eknn"></a>
+Exhaustive KNN support is available through [2023-10-01-Preview REST API](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) and in Azure SDK client libraries that target that REST API version.
-### Exhaustive K-Nearest Neighbors (KNN)
+### When to use HNSW
-Exhaustive KNN support is available through [2023-10-01-Preview REST API](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) and it enables users to search the entire vector space for matches that are most similar to the query. This algorithm is intended for scenarios where high recall is of utmost importance, and users are willing to accept the trade-offs in search performance.
+HNSW is recommended for most scenarios due to its efficiency when searching over larger data sets. Internally, HNSW creates extra data structures for faster search. However, you aren't locked into using them on every search. HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. For example, at query time, you can specify options for exhaustive search, even if the vector field is indexed for HNSW.
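For example, a query against an HNSW field can request exhaustive search at query time. A sketch of such a request body follows; the field name and vector values are placeholders, and the shape assumes the 2023-10-01-Preview API:

```json
{
  "select": "title",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [ 0.01, -0.02, 0.03 ],
      "fields": "contentVector",
      "k": 5,
      "exhaustive": true
    }
  ]
}
```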
-Exhaustive KNN performs a brute-force search by calculating the distances between all pairs of data points. It guarantees finding the exact `k` nearest neighbors for a query point. Because it's computationally intensive, use Exhaustive KNN for small to medium datasets, or when precision requirements outweigh query performance considerations.
+## How nearest neighbor search works
-### Hierarchical Navigable Small World (HNSW)
-
-HNSW is an algorithm used for efficient [approximate nearest neighbor (ANN)](vector-search-overview.md#approximate-nearest-neighbors) search in high-dimensional spaces. It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency.
-
-HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. You can create multiple configurations if you need optimizations for specific scenarios, but only one configuration can be specified on each vector field.
-
-## How HNSW ranking works
-
-Vector queries execute against an embedding space consisting of vectors generated from the same embedding model. In a typical application, the input value within a query request is fed into the same machine learning model that generated embeddings in the vector index. The output is a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the vectors that are closest to the query vector, and returning the associated documents as the search result.
+Vector queries execute against an embedding space consisting of vectors generated from the same embedding model. Generally, the input value within a query request is fed into the same machine learning model that generated embeddings in the vector index. The output is a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the vectors that are closest to the query vector, and returning the associated documents as the search result.
For example, if a query request is about hotels, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about hotels. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
-### Indexing vectors with the HNSW algorithm
+When vector fields are indexed for exhaustive KNN, the query executes against "all neighbors". For fields indexed for HNSW, the search engine uses an HNSW graph to search over a subset of nodes within the vector index.
+
+### Creating the HNSW graph
The goal of indexing a new vector into an HNSW graph is to add it to the graph structure in a manner that allows for efficient nearest neighbor search. The following steps summarize the process:
In the HNSW algorithm, a vector query search operation is executed by navigating
1. Completion: The search completes when the desired number of nearest neighbors have been identified, or when other stopping criteria are met. This desired number of nearest neighbors is governed by the query-time parameter `k`.
-Only fields marked as `searchable` in the index, or `searchFields` in the query, are used for scoring. Only fields marked as `retrievable`, or fields specified in `select` in the query, are returned in search results, along with their search score.
-
## Similarity metrics used to measure nearness
-A similarity metric measures the distance between neighboring vectors. Commonly used similarity metrics include `cosine`, `euclidean` (also known as `l2 norm`), and `dotProduct`, which are listed in the following table.
+A similarity metric measures the distance between neighboring vectors.
| Metric | Description |
|--|-|
-| `cosine` | Calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity), so if you're using Azure OpenAI, specify `cosine` in the vector configuration.|
-| `euclidean` | Calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. |
-| `dotProduct` | Calculates the products of vectors' magnitudes and the angle between them. |
+| `cosine` | This metric measures the angle between two vectors, and isn't affected by differing vector lengths. Mathematically, it calculates the angle between two vectors. Cosine is the similarity metric used by [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/understand-embeddings#cosine-similarity), so if you're using Azure OpenAI, specify `cosine` in the vector configuration.|
+| `dotProduct` | This metric measures both the length of each pair of two vectors, and the angle between them. Mathematically, it calculates the products of vectors' magnitudes and the angle between them. For normalized vectors, this is identical to `cosine` similarity, but slightly more performant. |
+| `euclidean` | (also known as `l2 norm`) This metric measures the length of the vector difference between two vectors. Mathematically, it calculates the Euclidean distance between two vectors, which is the l2-norm of the difference of the two vectors. |
-For normalized embedding spaces, `dotProduct` is equivalent to the `cosine` similarity, but is more efficient.
+## Scores in vector search results
+
+Whenever results are ranked, the **`@search.score`** property contains the value used to order the results.
+
+| Search method | Parameter | Scoring algorithm | Range |
+||--|-|-|
+| vector search | `@search.score` | HNSW or KNN algorithm, using the similarity metric specified in the algorithm configuration. | 0.333 - 1.00 (Cosine) |
If you're using the `cosine` metric, it's important to note that the calculated `@search.score` isn't the cosine value between the query vector and the document vectors. Instead, Cognitive Search applies transformations such that the score function is monotonically decreasing, meaning score values will always decrease in value as the similarity becomes worse. This transformation ensures that search scores are usable for ranking purposes.
double ScoreToSimilarity(double score)
Having the original cosine value can be useful in custom solutions that set up thresholds to trim low-quality results.
-## Scores in a vector search results
-
-Whenever results are ranked, **`@search.score`** property contains the value used to order the results.
-
-The following table identifies the scoring property returned on each match, algorithm, and range.
-
-| Search method | Parameter | Scoring algorithm | Range |
-||--|-|-|
-| vector search | `@search.score` | HNSW or KNN algorithm, using the similarity metric specified in the algorithm configuration. | 0.333 - 1.00 (Cosine) |
-
## Number of ranked results in a vector query response

A vector query specifies the `k` parameter, which determines how many nearest neighbors of the query vector should be found in vector space and returned in the results. If `k` is larger than the number of documents in the index, then the number of documents determines the upper limit of what can be returned.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
|--|--|--|
-| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-ranking.md#eknn) | Feature | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. Available in the 2023-10-01-Preview REST API only. |
+| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-overview.md#eknn) | Feature | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. Available in the 2023-10-01-Preview REST API only. |
| [**Prefilters in vector search**](vector-search-how-to-query.md) | Feature | Evaluates filter criteria before query execution, reducing the amount of content that needs to be searched. Available in the 2023-10-01-Preview REST API only, through a new `vectorFilterMode` property on the query that can be set to `preFilter` (default) or `postFilter`, depending on your requirements. |
| [**2023-10-01-Preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) | API | New preview version of the Search REST APIs that changes the definition for [vector fields](vector-search-how-to-create-index.md) and [vector queries](vector-search-how-to-query.md). This API version introduces breaking changes from **2023-07-01-Preview**, otherwise it's inclusive of all previous preview features. We recommend [creating new indexes](vector-search-how-to-create-index.md) for **2023-10-01-Preview**. You might encounter an HTTP 400 on some features on a migrated index, even if you migrated correctly.|
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build |
| **Applicable Technologies** | Generic |
| **Attributes** | N/A |
-| **References** | [Authentication Scenarios for Microsoft Entra ID](../../active-directory/develop/authentication-vs-authorization.md), [Microsoft Entra code Samples](../../active-directory/azuread-dev/sample-v1-code.md), [Microsoft Entra developer's guide](../../active-directory/develop/index.yml) |
+| **References** | [Authentication Scenarios for Microsoft Entra ID](../../active-directory/develop/authentication-vs-authorization.md), [Microsoft Entra code Samples](/azure/active-directory/azuread-dev/sample-v1-code), [Microsoft Entra developer's guide](../../active-directory/develop/index.yml) |
| **Steps** | <p>Microsoft Entra ID simplifies authentication for developers by providing identity as a service, with support for industry-standard protocols such as OAuth 2.0 and OpenID Connect. Below are the five primary application scenarios supported by Microsoft Entra ID:</p><ul><li>Web Browser to Web Application: A user needs to sign in to a web application that is secured by Microsoft Entra ID</li><li>Single Page Application (SPA): A user needs to sign in to a single page application that is secured by Microsoft Entra ID</li><li>Native Application to Web API: A native application that runs on a phone, tablet, or PC needs to authenticate a user to get resources from a web API that is secured by Microsoft Entra ID</li><li>Web Application to Web API: A web application needs to get resources from a web API secured by Microsoft Entra ID</li><li>Daemon or Server Application to Web API: A daemon application or a server application with no web user interface needs to get resources from a web API secured by Microsoft Entra ID</li></ul><p>Please refer to the links in the references section for low-level implementation details</p>|

## <a id="msal-distributed-cache"></a>Override the default MSAL token cache with a distributed cache
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
Title: Customer Lockbox for Microsoft Azure
-description: Technical overview of Customer Lockbox for Microsoft Azure, which provides control over cloud provider access when Microsoft may need to access customer data.
+description: Technical overview of Customer Lockbox for Microsoft Azure, which provides control over cloud provider access when Microsoft might need to access customer data.
Last updated 08/14/2023
> [!NOTE]
> To use this feature, your organization must have an [Azure support plan](https://azure.microsoft.com/support/plans/) with a minimal level of **Developer**.
-Most operations, support, and troubleshooting performed by Microsoft personnel and sub-processors do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data, whether in response to a customer-initiated support ticket or a problem identified by Microsoft. Microsoft Azure services that have the potential to access customer data are required to onboard to Customer Lockbox for Microsoft Azure.
+Most operations, support, and troubleshooting performed by Microsoft personnel and sub-processors do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Azure provides an interface for customers to review and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data, whether in response to a customer-initiated support ticket or a problem identified by Microsoft.
This article covers how to enable Customer Lockbox and how Lockbox requests are initiated, tracked, and stored for later reviews and audits.
+## Supported services
+
+The following services are currently supported for Customer Lockbox:
+
+- Azure API Management
+- Azure App Service
+- Azure Cognitive Search
+- Azure Cognitive Services
+- Azure Container Registry
+- Azure Data Box
+- Azure Data Explorer
+- Azure Data Factory
+- Azure Data Manager for Energy
+- Azure Database for MySQL
+- Azure Database for MySQL Flexible Server
+- Azure Database for PostgreSQL
+- Azure Databricks
+- Azure Edge Zone Platform Storage
+- Azure Energy
+- Azure Functions
+- Azure HDInsight
+- Azure Health Bot
+- Azure Intelligent Recommendations
+- Azure Kubernetes Service
+- Azure Load Testing (CloudNative Testing)
+- Azure Logic Apps
+- Azure Monitor
+- Azure Red Hat OpenShift
+- Azure Spring Apps
+- Azure SQL Database
+- Azure SQL Managed Instance
+- Azure Storage
+- Azure Subscription Transfers
+- Azure Synapse Analytics
+- Azure Unified Vision Service
+- Commerce AI (Intelligent Recommendations)
+- DevCenter / DevBox
+- ElasticSan
+- Kusto (Dashboards)
+- Microsoft Azure Attestation
+- OpenAI
+- Spring Cloud
+- Unified Vision Service
+- Virtual Machines in Azure
+
## Enable Customer Lockbox

You can now enable Customer Lockbox from the [Administration module](https://aka.ms/customerlockbox/administration) in the Customer Lockbox blade.
The following steps outline a typical workflow for a Customer Lockbox request.
   - The scope of the resource
   - Whether the requester is an isolated identity or using multifactor authentication
   - Permissions levels
- Based on the JIT rule, this request may also include an approval from Internal Microsoft Approvers. For example, the approver might be the Customer support lead or the DevOps Manager.
+ Based on the JIT rule, this request might also include an approval from Internal Microsoft Approvers. For example, the approver might be the Customer support lead or the DevOps Manager.
1. When the request requires direct access to customer data, a Customer Lockbox request is initiated. For example, remote desktop access to a customer's virtual machine. The request is now in a **Customer Notified** state, waiting for the customer's approval before granting access.
sentinel Azure Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/azure-ddos-protection.md
# Azure DDoS Protection connector for Microsoft Sentinel
-Connect to Azure DDoS Protection Standard logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+Connect to Azure DDoS Protection logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Upload Indicators Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/upload-indicators-api.md
This section covers the first three of the five components discussed earlier. Yo
Acquire a Microsoft Entra access token with [OAuth 2.0 authentication](../active-directory/fundamentals/auth-oauth2.md). [V1.0 and V2.0](../active-directory/develop/access-tokens.md#token-formats) are valid tokens accepted by the API.
-To get a v1.0 token, use [ADAL](../active-directory/azuread-dev/active-directory-authentication-libraries.md) or send requests to the REST API in the following format:
+To get a v1.0 token, use [ADAL](/azure/active-directory/azuread-dev/active-directory-authentication-libraries) or send requests to the REST API in the following format:
- POST `https://login.microsoftonline.com/{{tenantId}}/oauth2/token`
- Headers for using Microsoft Entra App:
  - grant_type: "client_credentials"
service-fabric How To Managed Cluster Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ddos-protection.md
+
+ Title: Use Azure DDoS Protection in a Service Fabric managed cluster
+description: This article describes how to use Azure DDoS Protection in a Service Fabric managed cluster.
+ Last updated : 09/05/2023
+# Use Azure DDoS Protection in a Service Fabric managed cluster
+
+[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), combined with application design best practices, provides enhanced DDoS mitigation features to defend against [Distributed denial of service (DDoS) attacks](https://www.microsoft.com/en-us/security/business/security-101/what-is-a-ddos-attack). It's automatically tuned to help protect your specific Azure resources in a virtual network. There are a [number of benefits to using Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md#azure-ddos-protection-key-features).
+
+Service Fabric managed clusters support Azure DDoS Network Protection and let you associate your virtual machine scale sets with an [Azure DDoS Network Protection plan](../ddos-protection/ddos-protection-sku-comparison.md). You create the plan, and then pass its resource ID in the managed cluster ARM template.
+
+## Use DDoS Protection in a Service Fabric managed cluster
+
+### Requirements
+
+Use Service Fabric API version 2023-07-01-preview or later.
+
+### Steps
+
+The following steps describe how to use DDoS Network Protection in a Service Fabric managed cluster:
+
+1. Follow the steps in the [Quickstart: Create and configure Azure DDoS Network Protection](../ddos-protection/manage-ddos-protection.md) to create a DDoS Network Protection plan through the Azure portal, [Azure PowerShell](../ddos-protection/manage-ddos-protection-powershell.md), or Azure CLI. Note the `ddosProtectionPlanName` and `ddosProtectionPlanId` for use in a later step.
+
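+ For example, with Azure PowerShell (a minimal sketch; the resource group, plan name, and region are hypothetical):
+
+ ```powershell
+ # Requires the Az.Network module.
+ $plan = New-AzDdosProtectionPlan -ResourceGroupName myRg -Name myDdosPlan -Location westus2
+ $plan.Name   # the ddosProtectionPlanName for a later step
+ $plan.Id     # the ddosProtectionPlanId for a later step
+ ```
+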
+2. Link your DDoS Protection plan to the virtual network that the Service Fabric managed cluster manages for you. To do this, you must grant SFMC permission to join your DDoS Protection plan with the virtual network. This permission is granted by assigning SFMC the "Network Contributor" Azure role, as described in the following steps:
+
+ A. Get the `Id` of the Service Fabric Resource Provider service principal in your subscription.
+
+ ```powershell
+ # Sign in and make sure the correct subscription is selected.
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ # Look up the Service Fabric Resource Provider (SFRP) service principal.
+ Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
+ ```
+
+ > [!NOTE]
+ > Make sure you're in the correct subscription; the principal ID changes if the subscription is in a different tenant.
+
+ ```powershell
+ ServicePrincipalNames : {74cb6831-0dbb-4be1-8206-fd4df301cdc2}
+ ApplicationId : 74cb6831-0dbb-4be1-8206-fd4df301cdc2
+ ObjectType : ServicePrincipal
+ DisplayName : Azure Service Fabric Resource Provider
+ Id : 00000000-0000-0000-0000-000000000000
+ ```
+
+ Note the **Id** of the previous output as **principalId** for use in a later step.
+
+ |Role definition name|Role definition ID|
+ |-|-|
+ |Network Contributor|4d97b98b-1d4f-4787-a291-c67834d212e7|
+
+ Note the `Role definition name` and `Role definition ID` property values for use in a later step.
+
+
+ B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the DDoS Protection Plan with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of the template with the principal ID and role definition ID determined in the first step.
++
+ ```json
+ "variables": {
+ "sfApiVersion": "2023-07-01-preview",
+ "ddosProtectionPlanName": "YourDDoSProtectionPlan",
+ "ddosProtectionPlanId": "[concat('/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRg/providers/Microsoft.Network/ddosProtectionPlans/', variables('ddosProtectionPlanName'))]",
+ "sfrpPrincipalId": "00000000-0000-0000-0000-000000000000",
+ "ddosProtectionPlanRoleAssignmentID": "[guid(variables('ddosProtectionPlanId'), 'SFRP-Role')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-04-01-preview",
+ "name": "[variables('ddosProtectionPlanRoleAssignmentID')]",
+ "scope": "[concat('Microsoft.Network/ddosProtectionPlans/', variables('ddosProtectionPlanName'))]",
+ "properties": {
+ "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', '4d97b98b-1d4f- 4787-a291-c67834d212e7')]",
+ "principalId": "[variables('sfrpPrincipalId')]"
+ }
+ }
+ ]
+ ```
++
+ Alternatively, you can add the role assignment via PowerShell, using the principal ID determined in the first step and the Network Contributor role definition ID.
+
+ ```powershell
+ # Assign the Network Contributor role (ID 4d97b98b-1d4f-4787-a291-c67834d212e7)
+ # to the SFRP service principal, scoped to the DDoS protection plan resource.
+ New-AzRoleAssignment -ObjectId <sfrpPrincipalId> `
+ -RoleDefinitionId "4d97b98b-1d4f-4787-a291-c67834d212e7" `
+ -ResourceName <resourceName> `
+ -ResourceType <resourceType> `
+ -ResourceGroupName <resourceGroupName>
+ ```
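+
+ Optionally, verify the assignment (a sketch; `<sfrpPrincipalId>` is the **Id** from step A):
+
+ ```powershell
+ # Confirm the SFRP service principal now holds Network Contributor.
+ Get-AzRoleAssignment -ObjectId <sfrpPrincipalId> |
+     Where-Object { $_.RoleDefinitionName -eq "Network Contributor" }
+ ```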
+
+3. Use a [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) that assigns roles and adds DDoS Protection configuration as part of the Service Fabric managed cluster creation. Update the template with the `principalId`, `ddosProtectionPlanName`, and `ddosProtectionPlanId` values obtained in the previous steps.
+4. You can also modify your existing ARM template and add the new property `ddosProtectionPlanId` under the Microsoft.ServiceFabric/managedClusters resource; it takes the resource ID of the DDoS Network Protection plan.
+
+ #### ARM template:
+
+ ```JSON
+ {
+     "apiVersion": "2023-07-01-preview",
+     "type": "Microsoft.ServiceFabric/managedclusters",
+     "properties": {
+         "ddosProtectionPlanId": "[parameters('ddosProtectionPlanId')]"
+     }
+ }
+ ```
service-fabric How To Managed Cluster Troubleshoot Snat Port Exhaustion Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-troubleshoot-snat-port-exhaustion-issues.md
+
+ Title: Troubleshoot SNAT Port exhaustion issues in a Service Fabric managed cluster
+description: This article describes how to troubleshoot SNAT Port exhaustion issues in a Service Fabric managed cluster.
++++++ Last updated : 09/05/2023++
+# Troubleshoot SNAT Port exhaustion issues in a Service Fabric managed cluster
+
+This article provides more information on, and troubleshooting methodologies for, exhaustion of source network address translation (SNAT) ports in a Service Fabric managed cluster. To learn more about SNAT ports, see [Source Network Address Translation for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
+
+## How to troubleshoot exhaustion of source network address translation (SNAT) ports
+
+There are a few solutions that let you avoid SNAT port limitations with a Service Fabric managed cluster:
+
+1. If your destination is an external endpoint outside of Azure, using [Azure NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) gives you 64k outbound SNAT ports that are usable by the resources sending traffic through it. Azure NAT gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. NAT gateway also gives you a dedicated outbound address that you don't share with anybody. A NAT gateway's [unique method of consuming SNAT ports](../load-balancer/troubleshoot-outbound-connection.md#deploy-nat-gateway-for-outbound-internet-connectivity) helps resolve common SNAT exhaustion and connection issues. A NAT gateway is highly recommended if your service is initiating repeated TCP or UDP outbound connections to the same destination. Here is how you can [configure a Service Fabric managed cluster to use a NAT gateway](../service-fabric/how-to-managed-cluster-nat-gateway.md); a minimal creation sketch also appears after this list.
+
+2. If your destination is an Azure service that supports [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), you can avoid SNAT port exhaustion issues by using [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) (supported on all node types). To configure service endpoints, you need to add the following to the ARM template for the cluster resource and deploy:
+
+ #### ARM template:
+
+ ```JSON
+ "serviceEndpoints": [
+ {
+ "service": "Microsoft.Storage",
+ "locations":[ "southcentralus", "westus"]
+ },
+ {
+ "service": "Microsoft.ServiceBus"
+ }
+ ]
+ ```
+
+3. With [Bring your own load balancer](../service-fabric/how-to-managed-cluster-networking.md#bring-your-own-azure-load-balancer), you can define your own outbound rules or attach multiple outgoing [public IP addresses](../service-fabric/how-to-managed-cluster-networking.md#enable-public-ip) to provide more SNAT ports (supported on secondary node types).
+
+4. For smaller scale deployments, you can consider assigning a [public IP to a node](../service-fabric/how-to-managed-cluster-networking.md#enable-public-ip) (supported on secondary node types). If a public IP is assigned to a node, all ports provided by the public IP are available to the node. Unlike with a load balancer or a NAT gateway, the ports are only accessible to the single node associated with the IP address.
+
+5. [Design your applications](../load-balancer/troubleshoot-outbound-connection.md#design-connection-efficient-applications) to use connections efficiently. Connection efficiency can reduce or eliminate SNAT port exhaustion in your deployed applications.
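+
+As referenced in option 1, here's a minimal Azure PowerShell sketch that creates a NAT gateway with a dedicated static public IP (resource names and region are hypothetical; associating the gateway with the managed cluster's subnet is covered in the linked NAT gateway article):
+
+```powershell
+# Requires the Az.Network module.
+$pip = New-AzPublicIpAddress -ResourceGroupName myRg -Name myNatIp `
+    -Location westus2 -Sku Standard -AllocationMethod Static
+New-AzNatGateway -ResourceGroupName myRg -Name myNatGateway `
+    -Location westus2 -Sku Standard -PublicIpAddress $pip -IdleTimeoutInMinutes 4
+```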
+
+General strategies for mitigating SNAT port exhaustion are discussed in the [Problem-solving section](../load-balancer/load-balancer-outbound-connections.md) of the **Outbound connections of Azure** documentation. If you require more help at any point in this article, contact the Azure experts at the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
service-health Resource Health Vm Annotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-vm-annotation.md
Title: Resource Health virtual machine Health Annotations description: Messages, meanings and troubleshooting for virtual machines resource health statuses. Previously updated : 10/02/2023 Last updated : 10/17/2023 # Resource Health virtual machine Health Annotations
-Virtual Machine (VM) health annotations inform you of any ongoing activity that influences the availability of your VMs (see [Resource types and health checks](resource-health-checks-resource-types.md)). Annotations carry metadata that help you rationalize the exact impact to availability.
+Virtual Machine (VM) health annotations inform you of any ongoing activity that influences the availability of your VMs (see [Resource types and health checks](resource-health-checks-resource-types.md)). Annotations carry metadata that helps you rationalize the exact impact to availability.
-Here are more details on important attributes we recently added, to help you understand below annotations you may observe in [Resource Health](resource-health-overview.md), [Azure Resource Graph](/azure/governance/resource-graph/overview) and [Event Grid System](/azure/event-grid/event-schema-health-resources?tabs=event-grid-event-schema) topics:
+Here are more details on important, recently added attributes that help you understand the following annotations, which you might observe in [Resource Health](resource-health-overview.md), [Azure Resource Graph](/azure/governance/resource-graph/overview), and [Event Grid System](/azure/event-grid/event-schema-health-resources?tabs=event-grid-event-schema) topics:
-- **Context**: Informs whether VM availability was influenced due to Azure or user orchestrated activity. This can assume values of _Platform Initiated | Customer Initiated | VM Initiated | Unknown_-- **Category**: Informs whether VM availability was influenced due to planned or unplanned activity. This is only applicable to 'Platform-Initiated' events. This can assume values of _Planned | Unplanned | Not Applicable | Unknown_-- **ImpactType**: Informs the type of impact to VM availability. This can assume values of:
+- **Context**: Informs whether VM availability was influenced by Azure-orchestrated or user-orchestrated activity. It can assume values of _Platform Initiated | Customer Initiated | VM Initiated | Unknown_.
+- **Category**: Informs whether VM availability was influenced by planned or unplanned activity, and is only applicable to 'Platform-Initiated' events. It can assume values of _Planned | Unplanned | Not Applicable | Unknown_.
+- **ImpactType**: Informs the type of impact to VM availability. It can assume values of:
- - *Downtime Reboot or Downtime Freeze*: Informs when VM is Unavailable due to Azure orchestrated activity (e.g., VirtualMachineStorageOffline, LiveMigrationSucceeded etc.). The reboot or freeze distinction can help you discern the type of downtime impact faced.
+ - *Downtime Reboot or Downtime Freeze*: Informs when VM is Unavailable due to Azure orchestrated activity (for example, VirtualMachineStorageOffline or LiveMigrationSucceeded). The reboot or freeze distinction can help you discern the type of downtime impact faced.
+ - *Degraded*: Informs when Azure predicts a HW failure on the host server or detects potential degradation in performance. (for example, VirtualMachinePossiblyDegradedDueToHardwareFailure)
+ - *Informational*: Informs when an authorized user or process triggers a control plane operation (for example, VirtualMachineDeallocationInitiated, VirtualMachineRestarted). This category also captures cases of platform actions due to customer defined thresholds or conditions. (for example, VirtualMachinePreempted)
- - *Degraded*: Informs when Azure predicts a HW failure on the host server or detects potential degradation in performance. (e.g., VirtualMachinePossiblyDegradedDueToHardwareFailure)
- - *Informational*: Informs when an authorized user or process triggers a control plane operation (e.g., VirtualMachineDeallocationInitiated, VirtualMachineRestarted). This category also captures cases of platform actions due to customer defined thresholds or conditions. (E.g., VirtualMachinePreempted)
+>[!Note]
+> A VM's availability impact start and end time is **only** applicable to degraded annotations, and doesn't apply to downtime or informational annotations.
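+
+For example, you can query these annotations and their attributes at scale with Azure Resource Graph. Here's a minimal PowerShell sketch; it assumes the Az.ResourceGraph module and the `healthresources` table documented in the Azure Resource Graph docs:
+
+```powershell
+# List VM annotations; the properties bag carries attributes such as the
+# context, category, and impact type described above.
+Search-AzGraph -Query @"
+healthresources
+| where type =~ 'microsoft.resourcehealth/resourceannotations'
+| project id, annotation = properties
+"@
+```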
The following table summarizes all the annotations that the platform emits today:
The following table summarizes all the annotations that the platform emits today:
| VirtualMachineStorageOffline | The Virtual Machine is either currently undergoing a reboot or experiencing an application freeze due to a temporary loss of access to disk. No other action is required at this time, while the platform is working on reestablishing disk connectivity. | <ul><li>**Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot | | VirtualMachineFailedToSecureBoot | Applicable to Azure Confidential Compute Virtual Machines when guest activity such as unsigned booting components leads to a guest OS issue preventing the Virtual Machine from booting securely. You can attempt to retry deployment after ensuring OS boot components are signed by trusted publishers. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational | | LiveMigrationSucceeded | The Virtual Machine was briefly paused as a Live Migration operation was successfully performed on your Virtual Machine. This operation was carried out either as a repair action, for allocation optimization or as part of routine maintenance workflows. No other action is required at this time. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Freeze |
-| LiveMigrationFailure | A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, could not be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs are not applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Freeze |
+| LiveMigrationFailure | A Live Migration operation was attempted on your Virtual Machine as either a repair action, for allocation optimization or as part of routine maintenance workflows. This operation, however, couldn't be successfully completed and may have resulted in a brief pause of your Virtual Machine. No other action is required at this time. <br/> Also note that [M Series](../virtual-machines/m-series.md), [L Series](../virtual-machines/lasv3-series.md) VM SKUs aren't applicable for Live Migration. For more information, see [Live Migration](../virtual-machines/maintenance-and-updates.md#live-migration). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Freeze |
| VirtualMachineAllocated | The Virtual Machine is in the process of being set up as requested by an authorized user or process. No other action is required at this time. | <ul><li>**Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineDeallocationInitiated | The Virtual Machine is in the process of being stopped and deallocated as requested by an authorized user or process. No other action is required at this time. | <ul><li>**Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineHostCrashed | The Virtual Machine has unexpectedly crashed due to the underlying host server experiencing a software failure or due to a failed hardware component. While the Virtual Machine is rebooting, the local data remains unaffected. You may attempt to redeploy the Virtual Machine to a different host server if you continue to experience issues. | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot | | VirtualMachineMigrationInitiatedForPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Planned Maintenance](../virtual-machines/maintenance-and-updates.md). | <ul><li>**Context**: Platform Initiated<li>**Category**: Planned<li>**ImpactType**: Downtime Reboot | | VirtualMachineRebootInitiatedForPlannedMaintenance | The Virtual Machine is undergoing a reboot as part of routine maintenance workflows orchestrated by the platform. No other action is required at this time. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). | <ul><li> **Context**: Platform Initiated<li>**Category**: Planned<li>**ImpactType**: Downtime Reboot | | VirtualMachineHostRebootedForRepair | The Virtual Machine is undergoing a reboot due to the underlying host server experiencing unexpected failures. While the Virtual Machine is rebooting, the local data remains unaffected. For more information, see [understanding Virtual Machine reboots in Azure](/troubleshoot/azure/virtual-machines/understand-vm-reboot). | <ul><li> **Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
-| VirtualMachineMigrationInitiatedForRepair | The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/). | <ul><li>**Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
-| VirtualMachinePlannedFreezeStarted | This virtual machine is undergoing freeze impact due to a routine update. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Informational |
-| VirtualMachinePlannedFreezeSucceeded | This virtual machine has successfully undergone a routine update that resulted in freeze impact. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li>**Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
-| VirtualMachinePlannedFreezeFailed | This virtual machine underwent a routine update that may have resulted in freeze impact. However this update failed to successfully complete. The platform will automatically coordinate recovery actions, as necessary. This update was to ensure the underlying platform is up to date with the latest improvements. No additional action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
-| VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li> **ImpactType**: Informational |
+| VirtualMachineMigrationInitiatedForRepair | The Virtual Machine is being migrated to a different host server due to the underlying host server experiencing unexpected failures. Since the Virtual Machine is being migrated to a new host server, the local data won't persist. For more information, see [Service Healing](https://azure.microsoft.com/blog/service-healing-auto-recovery-of-virtual-machines/). | <ul><li>**Context**: Platform Initiated<li>**Category**: Unplanned<li>**ImpactType**: Downtime Reboot |
+| VirtualMachinePlannedFreezeStarted | This virtual machine is undergoing freeze impact due to a routine update. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Informational |
+| VirtualMachinePlannedFreezeSucceeded | This virtual machine has successfully undergone a routine update that resulted in freeze impact. This update is necessary to ensure the underlying platform is up to date with the latest improvements. No action is required at this time. | <ul><li>**Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
+| VirtualMachinePlannedFreezeFailed | This virtual machine underwent a routine update that may have resulted in freeze impact. However this update failed to successfully complete. The platform will automatically coordinate recovery actions, as necessary. This update was to ensure the underlying platform is up to date with the latest improvements. No action is required at this time. | <ul><li> **Context**: Platform Initiated <li>**Category**: Planned<li>**ImpactType**: Downtime Freeze |
+| VirtualMachineRedeployInitiatedByControlPlaneDueToPlannedMaintenance | The Virtual Machine is being migrated to a different host server as part of routine maintenance workflows triggered by an authorized user or process. Since the Virtual Machine is being migrated to a new host server, the local data won't persist. For more information, see [Maintenance and updates](../virtual-machines/maintenance-and-updates.md). | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li> **ImpactType**: Informational |
| VirtualMachineMigrationScheduledForDegradedHardware | The Virtual Machine is experiencing degraded availability as it is running on a host server with a degraded hardware component which is predicted to fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the degradation of the underlying hardware. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned <li>**ImpactType**: Degraded | | VirtualMachinePossiblyDegradedDueToHardwareFailure | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server with a degraded hardware component that will fail soon. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Degraded | | VirtualMachineScheduledForServiceHealing | The Virtual Machine is experiencing an imminent risk to its availability as it is running on a host server that is experiencing fatal errors. Live Migration will be attempted to safely migrate your Virtual Machine to a healthy host server; however, the operation may fail depending on the failure signature encountered by the host server. <br/> We strongly advise you to redeploy your Virtual Machine to avoid unexpected failures by the redeploy deadline specified. For more information, see [Advancing failure prediction and mitigation](https://azure.microsoft.com/blog/advancing-failure-prediction-and-mitigation-introducing-narya/). | <ul><li>**Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Degraded |
-| VirtualMachinePreempted | If you are running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Informational |
+| VirtualMachinePreempted | If you're running a Spot/Low Priority Virtual Machine, it has been preempted either due to capacity recall by the platform or due to billing-based eviction where cost exceeded user defined thresholds. No other action is required at this time. For more information, see [Spot Virtual Machines](../virtual-machines/spot-vms.md). | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned<li>**ImpactType**: Informational |
| VirtualMachineRebootInitiatedByControlPlane | The Virtual Machine is undergoing a reboot as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational |
-| VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data will not persist. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li>**ImpactType**: Informational |
+| VirtualMachineRedeployInitiatedByControlPlane | The Virtual Machine is being migrated to a different host server as requested by an authorized user or process from within the Virtual machine. No other action is required at this time. Since the Virtual Machine is being migrated to a new host server, the local data won't persist. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable <li>**ImpactType**: Informational |
| VirtualMachineSizeChanged | The Virtual Machine is being resized as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational | |VirtualMachineConfigurationUpdated | The Virtual Machine configuration is being updated as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineStartInitiatedByControlPlane |The Virtual Machine is starting as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineStopInitiatedByControlPlane | The Virtual Machine is stopping as requested by an authorized user or process. No other action is required at this time. | <ul><li> **Context**: Customer Initiated<li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineStoppedInternally | The Virtual Machine is stopping as requested by an authorized user or process, or due to a guest activity from within the Virtual Machine. No other action is required at this time. | <ul><li> **Context**: Customer Initiated <li>**Category**: Not Applicable<li>**ImpactType**: Informational | | VirtualMachineProvisioningTimedOut | The Virtual Machine provisioning has failed due to Guest OS issues or incorrect user run scripts. You can attempt to either re-create this Virtual Machine or if this Virtual Machine is part of a virtual machine scale set, you can try reimaging it. | <ul><li> **Context**: Platform Initiated <li> **Category**: Unplanned <li> **ImpactType**: Informational |
-| AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature is not functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue. | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned <li> **ImpactType**: Degraded |
+| AccelnetUnhealthy | Applicable if Accelerated Networking is enabled for your Virtual Machine – We have detected that the Accelerated Networking feature isn't functioning as expected. You can attempt to redeploy your Virtual Machine to potentially mitigate the issue. | <ul><li> **Context**: Platform Initiated <li>**Category**: Unplanned <li> **ImpactType**: Degraded |
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
zone_pivot_groups: spring-apps-plan-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise
- This quickstart shows how to deploy a Spring Boot web application to Azure Spring Apps. The sample project is a simple ToDo application to add tasks, mark when they're complete, and then delete them. The following screenshot shows the application: :::image type="content" source="./media/quickstart-deploy-web-app/todo-app.png" alt-text="Screenshot of a sample web application in Azure Spring Apps." lightbox="./media/quickstart-deploy-web-app/todo-app.png":::
The following diagram shows the architecture of the system:
::: zone pivot="sc-consumption-plan,sc-standard"
-This article describes the following options for creating resources and deploying them to Azure Spring Apps:
+This article provides the following options for deploying to Azure Spring Apps:
-- Azure portal: Use the Azure portal to create resources and deploy applications step by step. The Azure portal is suitable for developers who are using Azure cloud services for the first time.-- Azure Developer CLI: Use the Azure Developer CLI to create resources and deploy applications through simple commands, and to cover application code and infrastructure as code files needed to provision the Azure resources. The Azure Developer CLI is suitable for developers who are familiar with Azure cloud services.
+- The **Azure portal** option is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
+- The **Azure portal + Maven plugin** option provides a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
+- The **Azure Developer CLI** option is a more efficient way to automatically create resources and deploy applications through simple commands. The Azure Developer CLI uses a template to provision the Azure resources needed and to deploy the application code. This option is suitable for Spring developers who are familiar with Azure cloud services.
+++
+This article provides the following options for deploying to Azure Spring Apps:
+
+- The Azure portal is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
+- The Azure CLI is a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
::: zone-end
This article describes the following options for creating resources and deployin
### [Azure portal](#tab/Azure-portal)
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
+ - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
This article describes the following options for creating resources and deployin
::: zone pivot="sc-enterprise"
+### [Azure portal](#tab/Azure-portal-ent)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+### [Azure CLI](#tab/Azure-CLI)
+ - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring` - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ++ ::: zone-end ::: zone pivot="sc-standard"
This article describes the following options for creating resources and deployin
## 5. Validate the web app
-Now you can access the deployed app to see whether it works. Use the following steps to validate:
+Now you can access the deployed app to see whether it works.
::: zone pivot="sc-enterprise"
-1. After the deployment is complete, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
+### [Azure portal](#tab/Azure-portal-ent)
+
+Use the following steps to validate:
+
+1. After the deployment finishes, you can find the application URL from the deployment outputs:
+
+ :::image type="content" source="media/quickstart-deploy-web-app/web-app-url-standard.png" alt-text="Diagram that shows the enterprise app URL of the ARM deployment outputs." border="false" lightbox="media/quickstart-deploy-web-app/web-app-url-standard.png":::
+
+1. Access the application with the output application URL. The page should appear as you saw in localhost.
+
+1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following steps to validate:
+
+1. After the deployment finishes, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
1. To investigate any deployment issue, check the app's logs by using the following command:
Now you can access the deployed app to see whether it works. Use the following s
--name ${APP_NAME} ``` ++ ::: zone-end
-1. Access the application with the output application URL. The page should appear as you saw in localhost.
+### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to validate:
+
+1. After the deployment finishes, you can find the application URL from the deployment outputs:
-1. From the navigation menu of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
+ :::image type="content" source="media/quickstart-deploy-web-app/web-app-url-consumption.png" alt-text="Diagram that shows the consumption app URL of the ARM deployment outputs." border="false" lightbox="media/quickstart-deploy-web-app/web-app-url-consumption.png":::
- :::image type="content" source="media/quickstart-deploy-web-app/logs.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps logs page." lightbox="media/quickstart-deploy-web-app/logs.png":::
+1. Access the application URL. The page should appear as you saw in localhost.
+
+1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
+
+Access the application with the output application URL. The page should appear as you saw in localhost.
+
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
+
+Access the application with the output endpoint. The page should appear as you saw in localhost.
+++++
+### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to validate:
+
+1. After the deployment finishes, find the application URL from the deployment outputs:
+
+ :::image type="content" source="media/quickstart-deploy-web-app/web-app-url-standard.png" alt-text="Diagram that shows the standard app URL of the ARM deployment outputs." border="false" lightbox="media/quickstart-deploy-web-app/web-app-url-standard.png":::
+
+1. Access the application URL. The page should appear as you saw in localhost.
+
+1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
+
+Access the application with the output application URL. The page should appear as you saw in localhost.
+
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
+
+Access the application with the output endpoint. The page should appear as you saw in localhost.
++ ::: zone-end ## 6. Clean up resources
+Be sure to delete the resources you created in this article when you no longer need them. You can delete the Azure resource group, which deletes all the resources it contains.
+ ::: zone pivot="sc-standard, sc-consumption-plan" [!INCLUDE [clean-up-resources-portal-or-azd](includes/quickstart-deploy-web-app/clean-up-resources.md)]
Now you can access the deployed app to see whether it works. Use the following s
::: zone pivot="sc-enterprise"
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. To delete the resource group, use the following command:
+### [Azure portal](#tab/Azure-portal-ent)
++
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to delete the entire resource group, including the newly created service:
```azurecli az group delete --name ${RESOURCE_GROUP} ``` ++ ::: zone-end ## 7. Next steps
az group delete --name ${RESOURCE_GROUP}
> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md) > [!div class="nextstepaction"]
-> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
+> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
> [!div class="nextstepaction"] > [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md) > [!div class="nextstepaction"]
-> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
+> [Quickstart: Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
::: zone pivot="sc-standard, sc-consumption-plan" > [!div class="nextstepaction"]
-> [Run the Pet Clinic microservice on Azure Spring Apps](./quickstart-sample-app-introduction.md)
+> [Introduction to the sample app](./quickstart-sample-app-introduction.md)
::: zone-end ::: zone pivot="sc-enterprise" > [!div class="nextstepaction"]
-> [Run the polyglot ACME fitness store apps on Azure Spring Apps](./quickstart-sample-app-acme-fitness-store-introduction.md)
+> [Introduction to the Fitness Store sample app](./quickstart-sample-app-acme-fitness-store-introduction.md)
::: zone-end For more information, see the following articles: - [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).-- [Spring on Azure](/azure/developer/java/spring/)-- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
+- [Azure for Spring developers](/azure/developer/java/spring/)
+- [Spring Cloud Azure documentation](/azure/developer/java/spring-framework/)
static-web-apps Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/database-mysql.md
To complete this tutorial, you need to have an existing Azure Database for MySQL
||| | [Azure Database for MySQL Flexible Server](/azure/mysql/flexible-server/quickstart-create-server-portal) | If you need to create a database, follow the steps in the [create an Azure Database for MySQL Flexible Server](/azure/mysql/flexible-server/quickstart-create-server-portal) guide. If you plan to use connection string authentication for your web app, ensure that you create your database with MySQL authentication. You can change this setting later if you want to use managed identity. | | [Existing static web app](getting-started.md) | If you don't already have one, follow the steps in the [getting started](getting-started.md) guide to create a *No Framework* static web app. |
-| [Azure Data Studio, with the MySQL extension](/sql/azure-data-studio/quickstart-mysql) | If you don't already have Azure Data Studio installed, follow the guide to install [Azure Data Studio, with the MySQL extension](/sql/azure-data-studio/quickstart-mysql). Alternatively, you may use any other tool to query your MySQL database, such as MySQL Workbench. |
+| [Azure Data Studio, with the MySQL extension](/azure-data-studio/quickstart-mysql) | If you don't already have Azure Data Studio installed, follow the guide to install [Azure Data Studio, with the MySQL extension](/azure-data-studio/quickstart-mysql). Alternatively, you may use any other tool to query your MySQL database, such as MySQL Workbench. |
Begin by configuring your database to work with the Azure Static Web Apps database connection feature.
To use your Azure database for local development, you need to retrieve the conne
## Create sample data
-Create a sample table and seed it with sample data to match the tutorial. Here, you can use [Azure Data Studio](/sql/azure-data-studio/quickstart-mysql), but you may use MySQL Workbench or any other tool.
+Create a sample table and seed it with sample data to match the tutorial. Here, you can use [Azure Data Studio](/azure-data-studio/quickstart-mysql), but you may use MySQL Workbench or any other tool.
-1. In Azure Data Studio, [create a connection to your Azure MySQL Flexible Server](/sql/azure-data-studio/quickstart-mysql#connect-to-mysql).
+1. In Azure Data Studio, [create a connection to your Azure MySQL Flexible Server](/azure-data-studio/quickstart-mysql#connect-to-mysql).
1. Right-click your server, and create a new database. Enter `MyTestPersonDatabase` as the database name, and set the charset to `utf8mb4` and the collation to `utf8mb4_0900_ai_ci`.
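
   If you prefer a query over the UI, here's a minimal SQL sketch that creates the same database (the name, charset, and collation match the tutorial values above):

   ```sql
   -- Create the tutorial database with the required charset and collation.
   CREATE DATABASE MyTestPersonDatabase
     CHARACTER SET utf8mb4
     COLLATE utf8mb4_0900_ai_ci;
   ```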
static-web-apps Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/database-postgresql.md
To complete this tutorial, you need to have an existing Azure Database for Postg
||| | [Azure Database for PostgreSQL Flexible Server](/azure/postgresql/flexible-server/quickstart-create-server-portal) or [Azure Database for PostgreSQL Single Server Database](/azure/postgresql/single-server/quickstart-create-server-database-portal) | If you don't already have one, follow the steps in the [create an Azure Database for PostgreSQL Flexible Server database](/azure/postgresql/flexible-server/quickstart-create-server-portal) guide, or in the [create an Azure Database for PostgreSQL Single Server database](/azure/postgresql/single-server/quickstart-create-server-database-portal) guide. If you plan to use connection string authentication for Static Web Apps' database connections, ensure that you create your Azure Database for PostgreSQL Server with PostgreSQL authentication. You can change this value if you want to use managed identity later on. | | [Existing static web app](getting-started.md) | If you don't already have one, follow the steps in the [getting started](getting-started.md) guide to create a *No Framework* static web app. |
-| [Azure Data Studio, with the PostgreSQL extension](/sql/azure-data-studio/quickstart-postgres) | If you don't already have Azure Data Studio installed, follow the guide to install [Azure Data Studio, with the PostgreSQL extension](/sql/azure-data-studio/quickstart-postgres). Alternatively, you may use any other tool to query your PostgreSQL database, such as PgAdmin. |
+| [Azure Data Studio, with the PostgreSQL extension](/azure-data-studio/quickstart-postgres) | If you don't already have Azure Data Studio installed, follow the guide to install [Azure Data Studio, with the PostgreSQL extension](/azure-data-studio/quickstart-postgres). Alternatively, you may use any other tool to query your PostgreSQL database, such as PgAdmin. |
Begin by configuring your database to work with the Azure Static Web Apps database connection feature.
To use your Azure database for local development, you need to retrieve the conne
## Create sample data
-Create a sample table and seed it with sample data to match the tutorial. This tutorial uses [Azure Data Studio](/sql/azure-data-studio/quickstart-postgres), but you may use PgAdmin or any other tool.
+Create a sample table and seed it with sample data to match the tutorial. This tutorial uses [Azure Data Studio](/azure-data-studio/quickstart-postgres), but you may use PgAdmin or any other tool.
-1. In Azure Data Studio, [create a connection to your Azure Database for PostgreSQL Server](/sql/azure-data-studio/quickstart-postgres#connect-to-postgresql)
+1. In Azure Data Studio, [create a connection to your Azure Database for PostgreSQL Server](/azure-data-studio/quickstart-postgres#connect-to-postgresql)
1. Right-click your server, and select **New Query**. Run the following query to create a database named `MyTestPersonDatabase`.
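
   The query is a one-line statement along these lines (a sketch; the quoted identifier preserves the mixed-case name):

   ```sql
   CREATE DATABASE "MyTestPersonDatabase";
   ```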
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
Title: Set a blob's access tier
description: Learn how to specify a blob's access tier when you upload it, or how to change the access tier for an existing blob. - Last updated 08/10/2023-+ ms.devlang: powershell, azurecli
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
Previously updated : 06/20/2023 Last updated : 10/16/2023
Whichever option you choose, after you've migrated and verified that all your wo
> [!div class="mx-imgBorder"] > ![Checkbox to provide consent](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-consent.png)
- A progress bar appears along with a sub status message. You can use these indicators to gauge the progress of the migration.
+ A progress bar appears along with a substatus message. You can use these indicators to gauge the progress of the migration. Because the time to complete each task varies, the progress bar won't advance at a consistent rate. For example, the progress bar might quickly advance to 50 percent, but then take a bit more time to complete the remaining 50 percent.
> [!div class="mx-imgBorder"] > ![Screenshot of progress bar when migrating data.](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-progress.png)
Whichever option you choose, after you've migrated and verified that all your wo
> [!div class="mx-imgBorder"] > ![Consent checkbox](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-consent.png)
- A progress bar appears along with a sub status message. You can use these indicators to gauge the progress of the migration.
+ A progress bar appears along with a substatus message. You can use these indicators to gauge the progress of the migration. Because the time to complete each task varies, the progress bar won't advance at a consistent rate. For example, the progress bar might quickly advance to 50 percent, but then take a bit more time to complete the remaining 50 percent.
> [!div class="mx-imgBorder"] > ![Screenshot of progress bar when performing a complete migration.](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-progress.png)
The following table shows the approximate speed of each migration processing tas
| Processing task | Speed | |-|| | Data copy | 9 TB per hour |
-| Data validation | 9 million files per hour |
-| Metadata copy | 4 million files and folders per hour |
-| Metadata processing | 25 million files and folders per hour |
-| Additional metadata processing (data copy option)<sup>1</sup> | 50 million files and folders per hour |
+| Data validation | 9 million files or folders per hour |
+| Metadata copy | 4 million files or folders per hour |
+| Metadata processing | 25 million files or folders per hour |
+| Additional metadata processing (data copy option)<sup>1</sup> | 50 million files or folders per hour |
<sup>1</sup> The additional metadata processing time applies only if you choose the **Copy data to a new Gen2 account** option. This processing time does not apply if you choose the **Complete migration to a new gen2 account** option.
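
For example, at these approximate rates, a migration of 18 TB spread across 9 million files would need about 2 hours for the data copy and roughly 1 more hour for data validation; actual durations vary with workload shape.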
storage Storage Blob Index How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-index-how-to.md
Title: Use blob index tags to manage and find data on Azure Blob Storage
description: See examples of how to use blob index tags to categorize, manage, and query for blob objects. - Last updated 07/21/2022-+ ms.devlang: csharp
stream-analytics Write To Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/write-to-delta-lake.md
# Azure Stream Analytics - write to Delta Lake table - Delta Lake is an open format that brings reliability, quality and performance to data lakes. Azure Stream Analytics allows you to directly write streaming data to your delta lake tables without writing a single line of code. A Stream Analytics job can be configured to write through a native delta lake output connector, either to a new or a pre-created Delta table in an Azure Data Lake Storage Gen2 account. This connector is optimized for high-speed ingestion to delta tables in append mode and also provides exactly once semantics, which guarantees that no data is lost or duplicated. Ingesting real-time data streams from Azure Event Hubs into Delta tables allows you to perform ad-hoc interactive or batch analytics. ## Delta Lake configuration - To write data in Delta Lake, you need to connect to an Azure Data Lake Storage Gen2 account. The following table lists the properties related to Delta Lake configuration. |Property Name |Description |
To see the full list of ADLS Gen2 configuration, see [ADLS Gen2 Overview](blob-s
### Delta Path name - The Delta Path Name is used to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2. You can choose to use one or more path segments to define the path to the delta table and the delta table name. A path segment is the string between consecutive delimiter characters (for example, the forward slash `/`) that corresponds to the name of a virtual directory.
Example output files:
1. Under the chosen container, directory path would be `WestEurope/CA/factory1`, delta table folder name would be **device-table**. 2. Under the chosen container, directory path would be `Test`, delta table folder name would be **demo**. 3. Under the chosen container, delta table folder name would be **mytable**.
+
+## Creating a new table
+
+If there isn't already a Delta Lake table with the same name in the location specified by the Delta Path name, by default, Azure Stream Analytics creates a new Delta table. This new table is created with the following configuration:
+- [Writer Version 2](https://github.com/delta-io/delt#writer-version-requirements)
+- [Reader Version 1](https://github.com/delta-io/delt#reader-version-requirements)
+- The table will be [Append-Only](https://github.com/delta-io/delt#append-only-tables)
+- The table schema will be created with the schema of the first record encountered.
## Writing to the table
-To create a new Delta Lake table, you need to specify a Delta Path Name that doesn't lead to any existing tables. If there's already a Delta Lake table existing with the same name and in the location specified by the Delta Path name, by default, Azure Stream Analytics writes new records to the existing table.
+If there's already a Delta Lake table with the same name in the location specified by the Delta Path name, by default, Azure Stream Analytics writes new records to the existing table.
### Exactly once delivery -
-The transaction log enables Delta Lake to guarantee exactly once processing. Azure Stream Analytics also provides exactly once delivery when output data to Azure Data Lake Storage Gen2 during one single job run.
+The transaction log enables Delta Lake to guarantee exactly once processing. Azure Stream Analytics also provides exactly once delivery when outputting data to Azure Data Lake Storage Gen2 during a single job run.
### Schema enforcement - Schema enforcement means that all new writes to a table are enforced to be compatible with the target table's schema at write time, to ensure data quality. All records of output data are projected to the schema of the existing table. If the output is being written to a new delta table, the table schema will be created with the first record. If the incoming data has one extra column compared to the existing table schema, it will be written in the table without the extra column. If the incoming data is missing one column compared to the existing table schema, it will be written in the table with the column being null.
+If there's no intersection between the schema of the Delta table and the schema of a record of the streaming job, it's considered an instance of schema conversion failure. This isn't the only case that's considered a schema conversion failure.
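+
+For example (an illustration with hypothetical column names): if the table schema is `(deviceId, temperature)`, a record `{deviceId, temperature, humidity}` is written without `humidity`, a record `{deviceId}` is written with `temperature` as null, and a record containing only `humidity` shares no columns with the table, so it fails schema conversion.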
When schema conversion fails, the job behavior follows the [output data error handling policy](stream-analytics-output-error-policy.md) configured at the job level.
+### Delta Log checkpoints
++
+The Stream Analytics job creates Delta Log checkpoints periodically.
+ ## Limitations -- Dynamic partition key isn't supported.-- Writing to Delta lake is append only.
+- Dynamic partition key (specifying the name of a column of the record schema in the Delta Path) isn't supported.
+- Multiple partition columns aren't supported. If you want multiple partition columns, use a composite key in the query and then specify it as the partition column.
+ - A composite key can be created in the query, for example: `SELECT CONCAT(col1, col2) AS compositeColumn INTO [blobOutput] FROM [input]`.
+- Writing to Delta Lake is append only.
- Schema checking in query testing isn't available.-- Checkpoints for delta lake aren't taken by Stream Analytics.
+- Small file compaction is not performed by Stream Analytics.
+- All data files will be created without compression.
+- The [Date and Decimal types](https://github.com/delta-io/delt#valid-feature-names-in-table-features) are not supported.
+- Writing to existing tables of Writer Version 7 or above with writer features will fail.
+ - Example: Writing to existing tables with [Deletion Vectors](https://github.com/delta-io/delt#deletion-vectors) enabled will fail.
+ - The exceptions here are the [changeDataFeed and appendOnly Writer Features](https://github.com/delta-io/delt#valid-feature-names-in-table-features).
## Next steps
synapse-analytics Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/connect-overview.md
Get connected to the Synapse SQL capability in Azure Synapse Analytics.
## Supported tools for serverless SQL pool
-[Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5, you can use it to connect and query only.
+[Azure Data Studio](/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5; you can use it only to connect and query.
## Find your server name
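If you're working from the command line, one way to look up the serverless SQL endpoint is to query the workspace resource. A minimal sketch, assuming placeholder names and that your workspace exposes the `connectivityEndpoints.sqlOnDemand` property:

```azurecli-interactive
# Minimal sketch: read the serverless (on-demand) SQL endpoint of a workspace.
# Workspace and resource group names are placeholders.
az synapse workspace show \
    --name myWorkspace \
    --resource-group myResourceGroup \
    --query "connectivityEndpoints.sqlOnDemand" \
    --output tsv
```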
synapse-analytics Get Started Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-azure-data-studio.md
> * [sqlcmd](get-started-connect-sqlcmd.md) > * [SSMS](get-started-ssms.md)
-You can use [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio?view=azure-sqldw-latest&preserve-view=true) to connect to and query Synapse SQL in Azure Synapse Analytics.
+You can use [Azure Data Studio](/azure-data-studio/download-azure-data-studio?view=azure-sqldw-latest&preserve-view=true) to connect to and query Synapse SQL in Azure Synapse Analytics.
## Connect
Explore other ways to connect to Synapse SQL:
- [Visual Studio](..//sql/get-started-visual-studio.md) - [sqlcmd](get-started-connect-sqlcmd.md)
-Visit [Use Azure Data Studio to connect and query data using a dedicated SQL pool in Azure Synapse Analytics](/sql/azure-data-studio/quickstart-sql-dw), for more information.
+For more information, see [Use Azure Data Studio to connect and query data using a dedicated SQL pool in Azure Synapse Analytics](/azure-data-studio/quickstart-sql-dw).
synapse-analytics Get Started Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-ssms.md
You can use [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-
### Supported tools for serverless SQL pool
-[Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5, you can use it to connect and query only.
+[Azure Data Studio](/azure-data-studio/download-azure-data-studio) is fully supported starting from version 1.18.0. SSMS is partially supported starting from version 18.5; you can use it only to connect and query.
## Prerequisites
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Usually, this problem occurs for one of two reasons:
Your query might fail with the error message `Websocket connection was closed unexpectedly.` This message means that your browser connection to Synapse Studio was interrupted, for example, because of a network issue. - To resolve this issue, rerun your query. -- Try [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) for the same queries instead of Synapse Studio for further investigation.
+- For further investigation, try [Azure Data Studio](/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) for the same queries instead of Synapse Studio.
- If this message occurs often in your environment, get help from your network administrator. You can also check firewall settings, and check the [Troubleshooting guide](../troubleshoot/troubleshoot-synapse-studio.md). - If the issue continues, create a [support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md) through the Azure portal.
synapse-analytics Tutorial Connect Power Bi Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-connect-power-bi-desktop.md
To complete this tutorial, you need the following prerequisites:
Optional: -- A SQL query tool, such as [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio), or [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).
+- A SQL query tool, such as [Azure Data Studio](/azure-data-studio/download-azure-data-studio), or [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms).
Values for the following parameters:
update-center Update Manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/update-manager-faq.md
Azure Update Manager provides a SaaS solution to manage and govern software upda
Following are the benefits of using Azure Update - Oversee update compliance for your entire fleet of machines in Azure (Azure VMs), on premises, and multicloud environments (Arc-enabled Servers). - View and deploy pending updates to secure your machines [instantly](updates-maintenance-schedules.md#update-nowone-time-update).-- Manage [extended security updates (ESUs)](https://learn.microsoft.com/azure/azure-arc/servers/prepare-extended-security-updates) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get consistent experience for deployment of ESUs and other updates.
+- Manage [extended security updates (ESUs)](../azure-arc/servers/prepare-extended-security-updates.md) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get a consistent experience for deployment of ESUs and other updates.
- Define recurring time windows during which your machines receive updates and might undergo reboots using [scheduled patching](scheduled-patching.md). Enforce machines grouped together based on standard Azure constructs (Subscriptions, Location, Resource Group, Tags etc.) to have common patch schedules using [dynamic scoping](dynamic-scope-overview.md). Sync patch schedules for Windows machines in relation to Patch Tuesday, the unofficial name for Microsoft's monthly release of security updates. - Enable incremental rollout of updates to Azure VMs in off-peak hours using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) and reduce reboots by enabling [hotpatching](updates-maintenance-schedules.md#hotpatching). - Automatically [assess](assessment-options.md#periodic-assessment) machines for pending updates every 24 hours, and flag machines that are out of compliance. Enforce enabling periodic assessments on multiple machines at scale using [Azure Policy](periodic-assessment-at-scale.md).
Customers using Automation Update Management moving to Azure Update Manager won'
If you have purchased Defender for Servers Plan 2, you won't have to pay to remediate the unhealthy resources for the above two recommendations. But if you're using any other Defender for Servers plan for your Arc machines, Azure Update Manager charges those machines at the daily prorated rate of $0.16/server. ### Is Azure Update Manager chargeable on Azure Stack HCI?
-Azure Update Manager is not charged for machines hosted Azure Stack HCI clusters that have been enabled for Azure benefits and Azure Arc VM management. [Learn more](https://learn.microsoft.com/azure-stack/hci/manage/azure-benefits?tabs=wac#azure-benefits-available-on-azure-stack-hci).
+Azure Update Manager is not charged for machines hosted on Azure Stack HCI clusters that have been enabled for Azure benefits and Azure Arc VM management. [Learn more](/azure-stack/hci/manage/azure-benefits?tabs=wac#azure-benefits-available-on-azure-stack-hci).
## Update Manager support and integration
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
To enable the VM property, follow these steps:
## Hotpatching
-[Hotpatching](https://learn.microsoft.com/windows-server/get-started/hotpatch?context=%2Fazure%2Fvirtual-machines%2Fcontext%2Fcontext) allows you to install OS security updates on supported *Windows Server Datacenter: Azure Edition* virtual machines that don't require a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process.
+[Hotpatching](/windows-server/get-started/hotpatch?context=%2Fazure%2Fvirtual-machines%2Fcontext%2Fcontext) allows you to install OS security updates that don't require a reboot after installation on supported *Windows Server Datacenter: Azure Edition* virtual machines. It works by patching the in-memory code of running processes without the need to restart the process.
Following are the features of Hotpatching:
Following are the features of Hotpatching:
:::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the Hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png":::
-Hotpatching property is available as a setting in Azure Update Manager that you can enable by using Update settings flow. For more information, see [Hotpatch for virtual machines and supported platforms](https://learn.microsoft.com/windows-server/get-started/hotpatch).
+The Hotpatching property is available as a setting in Azure Update Manager that you can enable by using the update settings flow. For more information, see [Hotpatch for virtual machines and supported platforms](/windows-server/get-started/hotpatch).
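As a rough illustration, hotpatching can also be enabled when creating a VM from the CLI. This is a sketch, assuming a supported *Azure Edition* image; the image URN and resource names are placeholders:

```azurecli-interactive
# Minimal sketch: create a VM with hotpatching enabled.
# The image URN and resource names are placeholders; hotpatching requires a
# supported Windows Server Azure Edition image and platform-orchestrated patching.
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-hotpatch:latest \
    --admin-username azureuser \
    --enable-hotpatching true \
    --patch-mode AutomaticByPlatform
```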
## Automatic extension upgrade
virtual-desktop Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md
To enable diagnostics for your scaling plan for pooled host pools:
1. Open the [Azure portal](https://portal.azure.com).
-1. In the search bar, enter **Azure Virtual Desktop**, then select the service from the drop-down menu.
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
1. Select **Scaling plans**, then select the scaling plan you'd like the report to track.
To enable diagnostics for your scaling plan for personal host pools:
1. Select **Save**. ++ ## Find autoscale diagnostic logs in Azure Storage After you've configured your diagnostic settings, you can find the logs by following these instructions:
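You can also create the diagnostic setting from the CLI instead of the portal. A minimal sketch; the scaling plan resource ID, storage account ID, and the `Autoscale` log category are assumptions to adapt to your environment:

```azurecli-interactive
# Minimal sketch: send scaling plan logs to a storage account.
# The resource ID, storage account ID, and "Autoscale" category are placeholders.
az monitor diagnostic-settings create \
    --name autoscale-logs \
    --resource "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.DesktopVirtualization/scalingPlans/<plan>" \
    --storage-account "<storageAccountResourceId>" \
    --logs '[{"category": "Autoscale", "enabled": true}]'
```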
The following JSON file is an example of what you'll see when you open a report:
- [Assign your scaling plan to new or existing host pools](autoscale-new-existing-host-pool.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md). - For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
+- View our [autoscale FAQ](autoscale-faq.yml) for answers to commonly asked questions.
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
Before you start using Azure Virtual Desktop Insights, you'll need to set up the
> [!NOTE] > Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal.
-## Open Azure Virtual Desktop Insights
-
-To open Azure Virtual Desktop Insights:
-
-1. Go to the Azure portal and select **Azure Virtual Desktop Insights**.
-1. Select **Workbooks**, then select **Check configuration**.
- ## Log Analytics settings
-To start using Azure Virtual Desktop Insights, you'll need at least one Log Analytics workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment. If you already have a workspace set up, skip ahead to [Set up using the configuration workbook](#set-up-using-the-configuration-workbook). To set one up, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+To start using Azure Virtual Desktop Insights, you'll need at least one Log Analytics workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment. If you already have a workspace set up, skip ahead to [Set up the configuration workbook](#set-up-the-configuration-workbook). To set one up, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
>[!NOTE] >Standard data storage charges for Log Analytics will apply. To start, we recommend you choose the pay-as-you-go model and adjust as you scale your deployment and take in more data. To learn more, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
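For reference, a workspace can also be created from the CLI; a minimal sketch with placeholder names:

```azurecli-interactive
# Minimal sketch: create a designated Log Analytics workspace for session hosts.
# Resource group, workspace name, and location are placeholders.
az monitor log-analytics workspace create \
    --resource-group myResourceGroup \
    --workspace-name myAvdWorkspace \
    --location eastus
```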
-## Set up using the configuration workbook
+## Set up the configuration workbook
-If it's your first time opening Azure Virtual Desktop Insights, you'll need set up Azure Virtual Desktop Insights for your Azure Virtual Desktop environment. To configure your resources:
+If it's your first time opening Azure Virtual Desktop Insights, you'll need to set up Azure Virtual Desktop Insights for your Azure Virtual Desktop environment. To configure your resources:
-1. Open Azure Virtual Desktop Insights in the Azure portal at [`aka.ms/avdi`](https://aka.ms/avdi), then select **configuration workbook**.
-1. Select an environment to configure from the drop-down lists for **Subscription**, **Resource Group**, and **Host Pool**.
+1. Open Azure Virtual Desktop Insights in the Azure portal at [`aka.ms/avdi`](https://aka.ms/avdi).
+1. Select **Workbooks**, then select **Check Configuration**.
+1. Select an Azure Virtual Desktop environment to configure from the drop-down lists for **Subscription**, **Resource Group**, and **Host Pool**.
The configuration workbook sets up your monitoring environment and lets you check the configuration after you've finished the setup process. It's important to check your configuration if items in the dashboard aren't displaying correctly, or when the product group publishes updates that require new settings.
virtual-desktop Onedrive Remoteapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/onedrive-remoteapp.md
You can use Microsoft OneDrive alongside a RemoteApp in Azure Virtual Desktop (p
> [!IMPORTANT] > - You should only use OneDrive with a RemoteApp for testing purposes as it requires an Insider Preview build of Windows 11 for your session hosts. >
-> - You can't use the OneDrive setting **Start OneDrive automatically when I sign in to Windows**, which starts OneDrive when a user signs in. Instead, you need to configure OneDrive to launch by configuring a registry value, which is described in this article.
+> - You can't use the setting **Start OneDrive automatically when I sign in to Windows** in the OneDrive preferences, which starts OneDrive when a user signs in. Instead, you need to make OneDrive launch by setting a registry value, as described in this article.
## User experience
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 10/06/2023 Last updated : 10/17/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-|
-| Public | 1.2.4583 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Public | 1.2.4677 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
| Insider | 1.2.4677 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.4677 (Insider)
+## Updates for version 1.2.4677
-*Date published: October 3, 2023*
+*Date published: October 17, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
- Added new parameters for multiple monitor configuration when connecting to a remote resource using the [Uniform Resource Identifier (URI) scheme](uri-scheme.md). - Added support for the following languages: Czech (Czechia), Hungarian (Hungary), Indonesian (Indonesia), Korean (Korea), Portuguese (Portugal), Turkish (Türkiye).
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Wi
*Date published: October 6, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1cTP6), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1cTP7), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1cRlf)
- Fixed the [CVE-2023-5217](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-5217) security vulnerability.
Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Wi
*Date published: September 19, 2023*
-Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1byOF), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1bwjL), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1byOV)
- In this release, we've made the following changes: - Fixed an issue where, when using the default display settings and a change is made to the system display settings, the connection bar doesn't show when hovering over the top of the screen after it's hidden.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Previously updated : 11/22/2022 Last updated : 07/25/2023 -
+
+ # Automatic instance repairs for Azure Virtual Machine Scale Sets
-Enabling automatic instance repairs for Azure Virtual Machine Scale Sets helps achieve high availability for applications by maintaining a set of healthy instances. The [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) may find that an instance is unhealthy. Automatic instance repairs will automatically perform instance repairs by deleting the unhealthy instance and creating a new one to replace it.
+> [!IMPORTANT]
+> The **Reimage** and **Restart** repair actions are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Some aspects of this feature may change prior to general availability (GA).
++
+Enabling automatic instance repairs for Azure Virtual Machine Scale Sets helps achieve high availability for applications by maintaining a set of healthy instances. If an unhealthy instance is found by the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md), automatic instance repairs attempt to recover the instance by triggering repair actions such as deleting the unhealthy instance and creating a new one to replace it, reimaging the unhealthy instance (Preview), or restarting the unhealthy instance (Preview).
## Requirements for using automatic instance repairs
The scale set should have application health monitoring for instances enabled. H
**Configure endpoint to provide health status**
-Before enabling automatic instance repairs policy, ensure that your scale set instances have application endpoint configured to emit the application health status. To configure health status on Application Health extension, you can use either [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) or [Rich Health States](./virtual-machine-scale-sets-health-extension.md#rich-health-states). To configure health status using Load balancer health probes, see [probe up behavior](../load-balancer/load-balancer-custom-probe-overview.md).
+Before enabling automatic instance repairs policy, ensure that your scale set instances have an application endpoint configured to emit the application health status. To configure health status on Application Health extension, you can use either [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) or [Rich Health States](./virtual-machine-scale-sets-health-extension.md#rich-health-states). To configure health status using Load balancer health probes, see [probe up behavior](../load-balancer/load-balancer-custom-probe-overview.md).
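For example, the Application Health extension can be added from the CLI. A minimal sketch for a Linux scale set; the scale set name, port, and request path are placeholders for your application's health endpoint:

```azurecli-interactive
# Minimal sketch: configure the Application Health extension on a scale set.
# Scale set name, port, and request path are placeholders.
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name ApplicationHealthLinux \
    --publisher Microsoft.ManagedServices \
    --version 1.0 \
    --settings '{"protocol": "http", "port": 80, "requestPath": "/health"}'
```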
For instances marked as "Unhealthy" or "Unknown" (*Unknown* state is only available with [Application Health extension - Rich Health States](./virtual-machine-scale-sets-health-extension.md#unknown-state)), automatic repairs are triggered by the scale set. Ensure the application endpoint is correctly configured before enabling the automatic repairs policy, to avoid unintended instance repairs while the endpoint is being configured.
For instances marked as "Unhealthy" or "Unknown" (*Unknown* state is only availa
Automatic repairs policy is supported for compute API version 2018-10-01 or higher.
+The `repairAction` setting for Reimage (Preview) and Restart (Preview) is supported for compute API versions 2021-11-01 or higher.
+ **Restrictions on resource or subscription moves** Resource or subscription moves are currently not supported for scale sets when the automatic repairs feature is enabled.
This feature is currently not supported for Service Fabric scale sets.
**Restriction for VMs with provisioning errors**
-Automatic repairs don't currently support scenarios where a VM instance is marked *Unhealthy* due to a provisioning failure. VMs must be successfully initialized to enable health monitoring and automatic repair capabilities.
+Automatic repairs currently do not support scenarios where a VM instance is marked *Unhealthy* due to a provisioning failure. VMs must be successfully initialized to enable health monitoring and automatic repair capabilities.
## How do automatic instance repairs work?
-Automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, then the scale set performs repair action by deleting the unhealthy instance and creating a new one to replace it. The latest Virtual Machine Scale Set model is used to create the new instance. This feature can be enabled in the Virtual Machine Scale Set model by using the *automaticRepairsPolicy* object.
+The automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, the scale set performs a preconfigured repair action on the unhealthy instance. Automatic instance repairs can be enabled in the Virtual Machine Scale Set model by using the `automaticRepairsPolicy` object.
+
+### Available repair actions
+
+> [!CAUTION]
+> The `repairAction` setting is currently in PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions`, and your compute API version must be 2021-11-01 or higher.
+> For more information, see [set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md).
+
+There are three available repair actions for automatic instance repairs – Replace, Reimage (Preview), and Restart (Preview). The default repair action is Replace, but you can switch to Reimage (Preview) or Restart (Preview) by enrolling in the preview and modifying the `repairAction` setting under the `automaticRepairsPolicy` object.
+
+- **Replace** deletes the unhealthy instance and creates a new instance to replace it. The latest Virtual Machine Scale Set model is used to create the new instance. This repair action is the default.
+
+- **Reimage** applies the reimage operation to the unhealthy instance.
+
+- **Restart** applies the restart operation to the unhealthy instance.
+
+The following table compares the differences between all three repair actions:
+
+| Repair action | VM instance ID preserved? | Private IP preserved? | Managed data disk preserved? | Managed OS disk preserved? | Local (temporary) disk preserved? |
+|--|--|--|--|--|--|
+| Replace | No | No | No | No | No |
+| Reimage | Yes | Yes | Yes | No | Yes |
+| Restart | Yes | Yes | Yes | Yes | Yes |
+
+For details on updating your repair action under automatic repairs policy, see the [configure a repair action on automatic repairs policy](#configure-a-repair-action-on-automatic-repairs-policy) section.
### Batching
The automatic instance repairs process works as follows:
1. [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) ping the application endpoint inside each virtual machine in the scale set to get application health status for each instance. 2. If the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy".
-3. When an instance is found to be unhealthy, the scale set triggers a repair action by deleting the unhealthy instance, and creating a new one to replace it.
+3. When an instance is found to be unhealthy, the scale set applies the configured repair action (default is *Replace*) to the unhealthy instance.
4. Instance repairs are performed in batches. At any given time, no more than 5% of the total instances in the scale set are repaired. If a scale set has fewer than 20 instances, the repairs are done for one unhealthy instance at a time. 5. The above process continues until all unhealthy instances in the scale set are repaired.
If an instance in a scale set is protected by applying one of the [protection po
## Terminate notification and automatic repairs
-If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during automatic repair operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service ΓÇô scheduled events ΓÇô and instance deletion is delayed during the configured delay timeout. However, the creation of a new instance to replace the unhealthy one doesn't wait for the delay timeout to complete.
+If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during a *Replace* operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service – scheduled events – and instance deletion is delayed during the configured delay timeout. However, the creation of a new instance to replace the unhealthy one doesn't wait for the delay timeout to complete.
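As a sketch of how the two features combine, terminate notification can be enabled on the scale set model with a generic `--set`. The property path follows the scale set model, and all names are placeholders:

```azurecli-interactive
# Minimal sketch: enable terminate notification with a 10-minute delay timeout.
# Resource names are placeholders.
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set virtualMachineProfile.scheduledEventsProfile.terminateNotificationProfile.enable=true \
          virtualMachineProfile.scheduledEventsProfile.terminateNotificationProfile.notBeforeTimeout=PT10M
```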
## Enabling automatic repairs policy when creating a new scale set
For enabling automatic repairs policy while creating a new scale set, ensure tha
You can also use this [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-automatic-repairs-slb-health-probe) to deploy a Virtual Machine Scale Set. The scale set has a load balancer health probe and automatic instance repairs enabled with a grace period of 30 minutes.
-### Azure portal
+### [Azure portal](#tab/portal-1)
The following steps enable the automatic repairs policy when creating a new scale set.
The following steps enabling automatic repairs policy when creating a new scale
1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 10 and 90 minutes. 1. When you're done creating the new scale set, select **Review + create** button.
-### REST API
+### [REST API](#tab/rest-api-1)
The following example shows how to enable automatic instance repair in a scale set model. Use API version 2018-10-01 or higher.
PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupNa
} ```
-### Azure PowerShell
+### [Azure PowerShell](#tab/powershell-1)
The automatic instance repair feature can be enabled while creating a new scale set by using the [New-AzVmssConfig](/powershell/module/az.compute/new-azvmssconfig) cmdlet. This sample script walks through the creation of a scale set and associated resources using the configuration file: [Create a complete Virtual Machine Scale Set](./scripts/powershell-sample-create-complete-scale-set.md). You can configure automatic instance repairs policy by adding the parameters *EnableAutomaticRepair* and *AutomaticRepairGracePeriod* to the configuration object for creating the scale set. The following example enables the feature with a grace period of 30 minutes.
New-AzVmssConfig `
-AutomaticRepairGracePeriod "PT30M" ```
-### Azure CLI 2.0
+### [Azure CLI 2.0](#tab/cli-1)
The following example enables the automatic repairs policy while creating a new scale set using *[az vmss create](/cli/azure/vmss#az-vmss-create)*. First create a resource group, then create a new scale set with automatic repairs policy grace period set to 30 minutes.
az vmss create \
The above example uses an existing load balancer and health probe for monitoring application health status of instances. If you prefer using an application health extension for monitoring, you can do the following instead: create a scale set, configure the application health extension, and enable the automatic instance repairs policy. You can enable that policy by using *az vmss update*, as explained in the next section. ++ ## Enabling automatic repairs policy when updating an existing scale set Before enabling automatic repairs policy in an existing scale set, ensure that all the [requirements](#requirements-for-using-automatic-instance-repairs) for opting in to this feature are met. The application endpoint should be correctly configured for scale set instances to avoid triggering unintended repairs while the endpoint is getting configured. To enable the automatic instance repair in a scale set, use *automaticRepairsPolicy* object in the Virtual Machine Scale Set model. After updating the model of an existing scale set, ensure that the latest model is applied to all the instances of the scale set. Refer to the instructions on [how to bring VMs up-to-date with the latest scale set model](./virtual-machine-scale-sets-upgrade-policy.md).
-### Azure portal
+### [Azure portal](#tab/portal-2)
You can modify the automatic repairs policy of an existing scale set through the Azure portal. > [!NOTE] > Enable the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) on your Virtual Machine Scale Sets before you start the next steps.
-1. Go to an existing Virtual Machine Scale Set.
-2. Under **Settings** in the menu on the left, select **Health and repair**.
-3. Enable the **Monitor application health** option.
+1. Go to an existing Virtual Machine Scale Set.
+1. Under **Settings** in the menu on the left, select **Health and repair**.
+1. Enable the **Monitor application health** option.
If you're monitoring your scale set by using the Application Health extension:
-4. Choose **Application Health extension** from the Application Health monitor dropdown list.
-5. From the **Protocol** dropdown list, choose the network protocol used by your application to report health. Select the appropriate protocol based on your application requirements. Protocol options are **HTTP, HTTPS**, or **TCP**.
-6. In the **Port number** configuration box, type the network port used to monitor application health.
-7. For **Path**, provide the application endpoint path (for example, "/") used to report application health.
+1. Choose **Application Health extension** from the Application Health monitor dropdown list.
+1. From the **Protocol** dropdown list, choose the network protocol used by your application to report health. Select the appropriate protocol based on your application requirements. Protocol options are **HTTP, HTTPS**, or **TCP**.
+1. In the **Port number** configuration box, type the network port used to monitor application health.
+1. For **Path**, provide the application endpoint path (for example, "/") used to report application health.
-> [!NOTE]
-> The Application Health extension will ping this path inside each virtual machine in the scale set to get application health status for each instance. If you're using [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) and the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy". For more health state options, explore [Rich Health States](./virtual-machine-scale-sets-health-extension.md#binary-versus-rich-health-states).
+ > [!NOTE]
+ > The Application Health extension will ping this path inside each virtual machine in the scale set to get application health status for each instance. If you're using [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) and the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy". For more health state options, explore [Rich Health States](./virtual-machine-scale-sets-health-extension.md#binary-versus-rich-health-states).
If you're monitoring your scale set using SLB Health probes:
-8. Choose **Load balancer probe** from the Application Health monitor dropdown list.
-9. For the Load Balancer health probe, select an existing health probe or create a new health probe for monitoring.
+- Choose **Load balancer probe** from the Application Health monitor dropdown list.
+- For the Load Balancer health probe, select an existing health probe or create a new health probe for monitoring.
To enable automatic repairs:
-10. Locate the **Automatic repair policy** section. Automatic repairs can be used to delete unhealthy instances from the scale set and create new ones to replace them.
-11. Turn **On** the **Automatic repairs** option.
-12. In **Grace period (min)**, specify the grace period in minutes. Allowed values are between 10 and 90 minutes.
-6. When you're done, select **Save**.
+1. Locate the **Automatic repair policy** section.
+1. Turn **On** the **Automatic repairs** option.
+1. In **Grace period (min)**, specify the grace period in minutes. Allowed values are between 10 and 90 minutes.
+1. When you're done, select **Save**.
-### REST API
+### [REST API](#tab/rest-api-2)
The following example enables the policy with grace period of 40 minutes. Use API version 2018-10-01 or higher.
PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupNa
} ```
-### Azure PowerShell
+### [Azure PowerShell](#tab/powershell-2)
Use the [Update-AzVmss](/powershell/module/az.compute/update-azvmss) cmdlet to modify the configuration of automatic instance repair feature in an existing scale set. The following example updates the grace period to 40 minutes.
Update-AzVmss `
-AutomaticRepairGracePeriod "PT40M" ```
-### Azure CLI 2.0
+### [Azure CLI 2.0](#tab/cli-2)
The following example demonstrates how to update the automatic instance repairs policy of an existing scale set, using *[az vmss update](/cli/azure/vmss#az-vmss-update)*.
az vmss update \
--automatic-repairs-grace-period 30 ``` ++
+## Configure a repair action on automatic repairs policy
+
+> [!CAUTION]
+> The `repairAction` setting is currently in PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions`, and your compute API version must be 2021-11-01 or higher.
+> For more information, see [set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md).
+
+The `repairAction` setting under the `automaticRepairsPolicy` object allows you to specify the desired repair action performed in response to an unhealthy instance. If you're updating the repair action on an existing automatic repairs policy, you must first disable automatic repairs on the scale set and then re-enable it with the updated repair action. This process is illustrated in the examples below.
+
+### [REST API](#tab/rest-api-3)
+
+This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy. Use API version 2021-11-01 or higher.
+
+**Disable the existing automatic repairs policy on your scale set**
+```
+PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
+```
+
+```json
+{
+ "properties": {
+ "automaticRepairsPolicy": {
+ "enabled": "false"
+ }
+ }
+}
+```
+
+**Re-enable automatic repairs policy with the desired repair action**
+```
+PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
+```
+```json
+{
+ "properties": {
+ "automaticRepairsPolicy": {
+ "enabled": "true",
+ "gracePeriod": "PT40M",
+ "repairAction": "Reimage"
+ }
+ }
+}
+```
+
+### [Azure CLI](#tab/cli-3)
+
+This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy, using *[az vmss update](/cli/azure/vmss#az-vmss-update)*.
+
+**Disable the existing automatic repairs policy on your scale set**
+```azurecli-interactive
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
+ --enable-automatic-repairs false
+```
+
+**Re-enable automatic repairs policy with the desired repair action**
+```azurecli-interactive
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
+ --enable-automatic-repairs true \
+ --automatic-repairs-grace-period 30 \
+ --automatic-repairs-action Replace
+```
+
+### [Azure PowerShell](#tab/powershell-3)
+
+This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy, using [Update-AzVmss](/powershell/module/az.compute/update-azvmss). Use PowerShell Version 7.3.6 or higher.
+
+**Disable the existing automatic repairs policy on your scale set**
+```azurepowershell-interactive
+Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet" `
+ -EnableAutomaticRepair $false
+```
+
+**Re-enable automatic repairs policy with the desired repair action**
+```azurepowershell-interactive
+Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet" `
+ -EnableAutomaticRepair $true `
+ -AutomaticRepairGracePeriod "PT40M" `
+ -AutomaticRepairAction "Restart"
+```
+++ ## Viewing and updating the service state of automatic instance repairs policy
-### REST API
+### [REST API](#tab/rest-api-4)
Use [Get Instance View](/rest/api/compute/virtualmachinescalesets/getinstanceview) with API version 2019-12-01 or higher for Virtual Machine Scale Sets to view the *serviceState* for automatic repairs under the *orchestrationServices* property.
GET '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/provider
} ```
-### Azure CLI
+### [Azure CLI](#tab/cli-4)
Use the [get-instance-view](/cli/azure/vmss#az-vmss-get-instance-view) command to view the *serviceState* for automatic instance repairs.
az vmss set-orchestration-service-state \
--name MyScaleSet \ --resource-group MyResourceGroup ```
-### Azure PowerShell
+### [Azure PowerShell](#tab/powershell-4)
Use the [Get-AzVmss](/powershell/module/az.compute/get-azvmss) cmdlet with the *InstanceView* parameter to view the *ServiceState* for automatic instance repairs.
Set-AzVmssOrchestrationServiceState `
-Action "Suspend" ``` ++ ## Troubleshoot **Failure to enable automatic repairs policy**
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Previously updated : 07/25/2023 Last updated : 10/17/2023 # Azure Virtual Machine Scale Set automatic OS image upgrades
Use [az vmss rolling-upgrade start](/cli/azure/vmss/rolling-upgrade#az-vmss-roll
az vmss rolling-upgrade start --resource-group "myResourceGroup" --name "myScaleSet" --subscription "subscriptionId" ```
+## Investigate and resolve automatic upgrade errors
+
+The platform can return errors on VMs while performing an automatic image upgrade with a rolling upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message you can use to investigate and resolve an error. The [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) API can provide more details on rolling upgrade configuration and status. The [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get) API provides details on the last image upgrade operation on the scale set. The following are the most common errors that can occur during rolling upgrades.
+
+**RollingUpgradeInProgressWithFailedUpgradedVMs**
+- This error is triggered when a VM fails to upgrade.
+- The detailed error message states whether the rollout will continue or pause, based on the configured threshold.
+
+**MaxUnhealthyUpgradedInstancePercentExceededInRollingUpgrade**
+- This error is triggered when the percentage of unhealthy upgraded VMs exceeds the maximum threshold allowed for unhealthy upgraded VMs.
+- The detailed error message aggregates the most common error contributing to the unhealthy VMs. See [MaxUnhealthyUpgradedInstancePercent](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#rollingupgradepolicy).
+
+**MaxUnhealthyInstancePercentExceededInRollingUpgrade**
+- This error is triggered when the percentage of unhealthy VMs exceeds the maximum threshold allowed for unhealthy VMs during an upgrade.
+- The detailed error message displays the current unhealthy percent and the configured allowable unhealthy VM percentage. See [maxUnhealthyInstancePercent](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#rollingupgradepolicy).
+
+**MaxUnhealthyInstancePercentExceededBeforeRollingUpgrade**
+- This error is triggered when the percentage of unhealthy VMs exceeds the maximum threshold allowed for unhealthy VMs before an upgrade takes place.
+- The detailed error message displays the current unhealthy percent and the configured allowable unhealthy VM percentage. See [maxUnhealthyInstancePercent](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#rollingupgradepolicy).
+
+**InternalExecutionError**
+- This error is triggered when an unhandled, unformatted, or unexpected error occurs during execution.
+- The detailed error message displays the cause of the error.
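The same details can be pulled from the CLI; a minimal sketch with placeholder names:

```azurecli-interactive
# Minimal sketch: inspect rolling upgrade status and per-VM error details.
# Resource names and the instance ID are placeholders.

# Latest rolling upgrade, including policy thresholds and error summaries.
az vmss rolling-upgrade get-latest \
    --resource-group myResourceGroup \
    --name myScaleSet

# Instance view of one VM, containing the detailed upgrade error message.
az vmss get-instance-view \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --instance-id 0
```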
+ ## Next steps > [!div class="nextstepaction"]
-> [Learn about the Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md)
+> [Learn about the Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md)
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
Capacity Reservations are priced at the same rate as the underlying VM size. For
If you then deploy a D2s_v3 VM and specify the reservation property, the Capacity Reservation gets used. Once in use, you pay only for the VM and not for the Capacity Reservation. Let's say you deploy six D2s_v3 VMs against the previously mentioned Capacity Reservation. You see a bill for six D2s_v3 VMs and four unused Capacity Reservations, both charged at the same rate as a D2s_v3 VM.
-Both used and unused Capacity Reservation and Saving Plan are eligible for Reserved Instances term commitment discounts. In the previous example, if you have Reserved Instances for two D2s_v3 VMs in the same Azure region, the billing for two resources (either VM or unused Capacity Reservation) will be zeroed out. The remaining eight D2s_v3 is billed normally. The term commitment discounts could be applied on either the VM or the unused Capacity Reservation.
+Both used and unused Capacity Reservations are eligible for Savings Plan and Reserved Instances term commitment discounts. In the previous example, if you have Reserved Instances for two D2s_v3 VMs in the same Azure region, the billing for two resources (either VM or unused Capacity Reservation) is zeroed out. The remaining eight D2s_v3 resources are billed normally. The term commitment discounts can be applied to either the VMs or the unused Capacity Reservations.
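To make the example concrete, here's a rough sketch of reserving capacity for 10 D2s_v3 instances and deploying a VM against the reservation group. All names are placeholders, and the `az capacity reservation` commands assume a recent CLI version:

```azurecli-interactive
# Minimal sketch: reserve capacity for 10 D2s_v3 VMs, then deploy one VM
# against the reservation group. All names are placeholders.
az capacity reservation group create \
    --resource-group myResourceGroup \
    -n myReservationGroup

az capacity reservation create \
    --resource-group myResourceGroup \
    -c myReservationGroup \
    -n myReservation \
    --sku Standard_D2s_v3 \
    --capacity 10

az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Ubuntu2204 \
    --size Standard_D2s_v3 \
    --capacity-reservation-group myReservationGroup
```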
## Difference between On-demand Capacity Reservation and Reserved Instances
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Testing has confirmed that the following systems work with the Azure Linux VM Ag
| Azure Linux | 2.x | 2.x | | openSUSE | 12.3+ | *Not supported* | | Oracle Linux | 6.4+, 7.x+, 8.x+ | *Not supported* |
-| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+, 9.x+ | 8.6+, 9.0+ |
| Rocky Linux | 9.x+ | 9.x+ | | SLES | 12.x+, 15.x+ | 15.x SP4+ | | Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 08/25/2023 Last updated : 10/17/2023
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md
Previously updated : 01/03/2023 Last updated : 10/17/2023 # Download a Linux VHD from Azure
virtual-machines Move Virtual Machines Regional Zonal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-powershell.md
Most move resources operations are the same whether using the Azure portal or Po
| Operation | Portal | PowerShell/CLI | | | | |
-| **Create a move collection** | A move collection (a list of all the regional VMs that you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You can use [PowerShell cmdlets](/powershell/module/az.resourcemover/?view=azps-10.3.0#resource-mover) or [CLI cmdlets](https://learn.microsoft.com/cli/azure/resource-mover?view=azure-cli-latest) to: <br> - Assign a managed identity to the collection. <br> - Add regional VMs to the collection. |
-| **Resource move operations** | Validate steps and validates the *User* setting changes. **Initiate move** starts the move process and creates a copy of source VM in the target zone. It also finalizes the move of the newly created VM in the target zone. | [PowerShell cmdlets](/powershell/module/az.resourcemover/?view=azps-10.3.0#resource-mover) or [CLI cmdlets](https://learn.microsoft.com/cli/azure/resource-mover?view=azure-cli-latest) to: <br> - Add regional VMs to the collection <br> - Resolve dependencies <br> - Perform the move. <br> - Commit the move. |
+| **Create a move collection** | A move collection (a list of all the regional VMs that you're moving) is created automatically. Required identity permissions are assigned in the backend by the portal. | You can use [PowerShell cmdlets](/powershell/module/az.resourcemover/#resource-mover) or [CLI cmdlets](/cli/azure/resource-mover) to: <br> - Assign a managed identity to the collection. <br> - Add regional VMs to the collection. |
+| **Resource move operations** | Validate steps and validates the *User* setting changes. **Initiate move** starts the move process and creates a copy of source VM in the target zone. It also finalizes the move of the newly created VM in the target zone. | [PowerShell cmdlets](/powershell/module/az.resourcemover/#resource-mover) or [CLI cmdlets](/cli/azure/resource-mover) to: <br> - Add regional VMs to the collection <br> - Resolve dependencies <br> - Perform the move. <br> - Commit the move. |
### Sample values
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 08/25/2023 Last updated : 10/17/2023 linux
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md
Previously updated : 01/03/2023 Last updated : 10/17/2023 # Download a Windows VHD from Azure
virtual-machines Ps Common Ref https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/ps-common-ref.md
Previously updated : 06/01/2018 Last updated : 09/07/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-This article covers some of the Azure PowerShell commands that you can use to create and manage virtual machines in your Azure subscription. For more detailed help with specific command-line switches and options, you can use the **Get-Help** *command*.
+This article covers some of the basic Azure PowerShell commands that you can use to create and manage virtual machines in your Azure subscription. For more detailed help with specific command-line switches and options, you can use the **Get-Help** *command*.
-
-
-These variables might be useful for you if running more than one of the commands in this article:
+These variables might be useful if running more than one of the commands in this article:
- $location - The location of the virtual machine. You can use [Get-AzLocation](/powershell/module/az.resources/get-azlocation) to find a [geographical region](https://azure.microsoft.com/regions/) that works for you. - $myResourceGroup - The name of the resource group that contains the virtual machine.
These variables might be useful for you if running more than one of the commands
-## Create a VM configuration
+## Create a VM - advanced
| Task | Command | | - | - |
These variables might be useful for you if running more than one of the commands
| Create a VM |[New-AzVM](/powershell/module/az.compute/new-azvm) -ResourceGroupName $myResourceGroup -Location $location -VM $vm<BR></BR><BR></BR>All resources are created in a [resource group](../../azure-resource-manager/management/manage-resource-groups-powershell.md). Before you run this command, run New-AzVMConfig, Set-AzVMOperatingSystem, Set-AzVMSourceImage, Add-AzVMNetworkInterface, and Set-AzVMOSDisk. | | Update a VM |[Update-AzVM](/powershell/module/az.compute/update-azvm) -ResourceGroupName $myResourceGroup -VM $vm<BR></BR><BR></BR>Get the current VM configuration using Get-AzVM, change configuration settings on the VM object, and then run this command. |
-## Get information about VMs
+## Get information about your VMs
| Task | Command | | - | - |
These variables might be useful for you if running more than one of the commands
| List VMs in a resource group |Get-AzVM -ResourceGroupName $myResourceGroup<BR></BR><BR></BR>To get a list of resource groups in your subscription, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup). | | Get information about a VM |Get-AzVM -ResourceGroupName $myResourceGroup -Name $myVM |
-## Manage VMs
+## Manage your VMs
| Task | Command | | | | | Start a VM |[Start-AzVM](/powershell/module/az.compute/start-azvm) -ResourceGroupName $myResourceGroup -Name $myVM |
virtual-machines Deploy Application Oracle Database Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/deploy-application-oracle-database-azure.md
As Oracle applications move on Azure IaaS, there are common design consideration
The provided network settings for Oracle Applications on Azure cover various aspects of network and security considerations. Here's a breakdown of the recommended network settings: -- Single sign-on (SSO) with Microsoft Entra ID and SAML: Use [Microsoft Entra ID for single sign-on (SSO)](https://learn.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) using the Security Assertions Markup Language (SAML) protocol. This SSO allows users to authenticate once and access multiple services seamlessly.-- Microsoft Entra application proxy: Consider using [Microsoft Entra application proxy](https://learn.microsoft.com/azure/active-directory/app-proxy/application-proxy), especially for remote users. This proxy allows you to securely access on-premises applications from outside your network.-- Routing Internal Users through [ExpressRoute](https://learn.microsoft.com/azure/expressroute/expressroute-introduction): For internal users, route traffic through Azure ExpressRoute for a dedicated, private connection to Azure services, ensuring low-latency and secure communication.-- Azure Firewall: If necessary, you can configure [Azure Firewall](https://learn.microsoft.com/azure/architecture/example-scenario/gateway/application-gateway-before-azure-firewall) in front of your application for added security. Azure Firewall helps protect your resources from unauthorized access and threats.-- Application Gateway for External Users: When external users need to access your application, consider using [Azure Application Gateway](https://learn.microsoft.com/azure/application-gateway/overview). It supplies Web Application Firewall (WAF) capabilities for protecting your web applications and Layer 7 load balancing to distribute traffic.-- Network Security Groups (NSG): Secure your subnets by using [Network Security Groups](https://learn.microsoft.com/azure/virtual-network/network-security-groups-overview) (NSG). NSGs allow you to control inbound and outbound traffic to network interfaces, Virtual Machines, and subnets by defining security rules.-- Role-Based Access Control (RBAC): To grant access to specific individuals or roles, use Azure Role-Based Access Control (RBAC). [RBAC](https://learn.microsoft.com/azure/role-based-access-control/overview) provides fine-grained access control to Azure resources based on roles and permissions.-- Bastion Host for SSH Access: Use a [Bastion host](https://learn.microsoft.com/azure/bastion/bastion-overview) as a jump box to enhance security for SSH access. A Bastion host acts as a secure gateway for administrators to access Virtual Machines in the virtual network. This host provides an added layer of security.
+- Single sign-on (SSO) with Microsoft Entra ID and SAML: Use [Microsoft Entra ID for single sign-on (SSO)](../../../active-directory/manage-apps/what-is-single-sign-on.md) using the Security Assertions Markup Language (SAML) protocol. This SSO allows users to authenticate once and access multiple services seamlessly.
+- Microsoft Entra application proxy: Consider using [Microsoft Entra application proxy](../../../active-directory/app-proxy/application-proxy.md), especially for remote users. This proxy allows you to securely access on-premises applications from outside your network.
+- Routing Internal Users through [ExpressRoute](../../../expressroute/expressroute-introduction.md): For internal users, route traffic through Azure ExpressRoute for a dedicated, private connection to Azure services, ensuring low-latency and secure communication.
+- Azure Firewall: If necessary, you can configure [Azure Firewall](/azure/architecture/example-scenario/gateway/application-gateway-before-azure-firewall) in front of your application for added security. Azure Firewall helps protect your resources from unauthorized access and threats.
+- Application Gateway for External Users: When external users need to access your application, consider using [Azure Application Gateway](../../../application-gateway/overview.md). It supplies Web Application Firewall (WAF) capabilities for protecting your web applications and Layer 7 load balancing to distribute traffic.
+- Network Security Groups (NSG): Secure your subnets by using [Network Security Groups](../../../virtual-network/network-security-groups-overview.md) (NSG). NSGs allow you to control inbound and outbound traffic to network interfaces, Virtual Machines, and subnets by defining security rules. A sketch of such a rule appears after this list.
+- Role-Based Access Control (RBAC): To grant access to specific individuals or roles, use Azure Role-Based Access Control (RBAC). [RBAC](../../../role-based-access-control/overview.md) provides fine-grained access control to Azure resources based on roles and permissions.
+- Bastion Host for SSH Access: Use a [Bastion host](../../../bastion/bastion-overview.md) as a jump box to enhance security for SSH access. A Bastion host acts as a secure gateway for administrators to access Virtual Machines in the virtual network. This host provides an added layer of security.
- More considerations:
  - Data Encryption: Ensure that data at rest and in transit is encrypted. Azure provides tools like Azure Disk Encryption and SSL/TLS for this purpose.
  - Patch Management: Regularly update and patch your EBS environment to protect against known vulnerabilities.
- - Monitoring and Logging: Implement [Azure Monitor](https://learn.microsoft.com/azure/azure-monitor/overview) and [Azure Defender](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-cloud-introduction) for security to continuously check your environment for security threats and anomalies. Set up logging for auditing and forensic analysis.
+ - Monitoring and Logging: Implement [Azure Monitor](../../../azure-monitor/overview.md) and [Azure Defender](../../../defender-for-cloud/defender-for-cloud-introduction.md) for security to continuously check your environment for security threats and anomalies. Set up logging for auditing and forensic analysis.
- In summary, these network and security settings aim to provide a robust and secure environment for hosting Oracle applications on Azure IaaS. They incorporate best practices for authentication, access control, and network security, both for internal and external users. They also consider the need for SSH access to Application servers. These recommendations can help you set up a mature security posture for your Oracle applications deployment on Azure IaaS.
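To illustrate the NSG recommendation in the list above, here's a minimal sketch that assumes the Az PowerShell module, hypothetical resource names and address ranges, and the default Oracle listener port (1521):

```powershell
# Allow the application-tier subnet to reach the database subnet on the Oracle listener port.
# All names and address ranges below are hypothetical placeholders.
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-App-To-DB" `
    -Description "Allow app-tier subnet to reach the Oracle listener" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange 1521

# Create the NSG with the rule, then associate it with the database subnet.
New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Location "eastus" `
    -Name "db-subnet-nsg" -SecurityRules $rule
```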
The provided network settings for Oracle Applications on Azure cover various asp
**Application tier:** The application tier typically involves application servers and shared file systems.
-For autoscaling, [Virtual Machine Scale Sets](https://learn.microsoft.com/azure/virtual-machine-scale-sets/overview) can be a great choice for scale-out multiple Virtual Machines based on demand with custom scaling rules to adapt to your workload.
+For autoscaling, [Virtual Machine Scale Sets](../../../virtual-machine-scale-sets/overview.md) can be a great choice for scaling out multiple Virtual Machines based on demand, with custom scaling rules to adapt to your workload.
Collaborate with Azure Subject Matter Experts (SMEs) to perform a thorough assessment of your architecture. They can help you determine the most suitable Azure services based on your specific requirements, including performance, availability, and scalability. Remember to consider factors like cost, data security, compliance, and disaster recovery when designing your architecture.
Load Balancing and Throughput: It's important to evaluate the workload character
Database Tier: HA architectures are recommended with Oracle Data Guard for Oracle on Azure IaaS. Each application requires a specific type of HA setup, which is listed under each application.
-Backup - [Backups](https://learn.microsoft.com/azure/backup/backup-azure-vms-introduction) are sent from the application tier and the database tier. It's just one of many reasons why those two tiers shouldn't be separated into two different vendors. Backups of the database are performed by [Azure Backup Volume Snapshot](https://techcommunity.microsoft.com/t5/data-architecture-blog/azure-backup-volume-snapshots-for-oracle-is-now-ga/ba-p/2820032) on Premium Files to the secondary region.
+Backup - [Backups](../../../backup/backup-azure-vms-introduction.md) are sent from the application tier and the database tier. It's just one of many reasons why those two tiers shouldn't be separated into two different vendors. Backups of the database are performed by [Azure Backup Volume Snapshot](https://techcommunity.microsoft.com/t5/data-architecture-blog/azure-backup-volume-snapshots-for-oracle-is-now-ga/ba-p/2820032) on Premium Files to the secondary region.
-Disaster Recovery - There are different solutions you can choose from. It very much depends on your requirements. The architecture is built to be highly available. For replicating the application tier, you can use [Azure Site Recovery](https://learn.microsoft.com/azure/site-recovery/site-recovery-overview). Another solution you can choose is [Redundancy options for managed disks.](https://learn.microsoft.com/azure/virtual-machines/disks-redundancy) Both solutions replicate your data. Redundancy options for managed disks are a solution that can simplify the architecture but also comes with a few limitations.
+Disaster Recovery - There are different solutions you can choose from. It very much depends on your requirements. The architecture is built to be highly available. For replicating the application tier, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md). Another solution you can choose is [Redundancy options for managed disks](../../../virtual-machines/disks-redundancy.md). Both solutions replicate your data. Redundancy options for managed disks are a solution that can simplify the architecture, but they also come with a few limitations.
## Siebel on Azure
virtual-network Routing Preference Unmetered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-unmetered.md
Your network traffic egressing from origin in Azure destined to CDN provider ben
## Configuring Routing Preference Unmetered
-To take advantage of routing preference unmetered, your CDN providers need to be part of this program. If your CDN provider isn't part of the program, contact your CDN provider.
+To take advantage of routing preference unmetered, your CDN provider needs to be part of this program. If your CDN provider isn't part of the program, contact your CDN provider. Also, contact your CDN provider for the CDN services they support using routing preference unmetered. For a list of Azure services supported by routing preferences, see [What is routing preference - Supported services](routing-preference-overview.md#supported-services).
Next, configure routing preference for your resources, and set the Routing Preference type to **Internet**. You can configure routing preference while creating a public IP address, and then associate the public IP to resources such as virtual machines, internet facing load balancers, and more. [Learn how to configure routing preference for a public IP address using the Azure portal](./routing-preference-portal.md).
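As a sketch, assuming the Az PowerShell module and hypothetical names, creating a public IP with routing preference **Internet** might look like the following:

```powershell
# Tag the public IP with the Internet routing preference.
$tag = New-AzPublicIpTag -IpTagType "RoutingPreference" -Tag "Internet"

# Create a Standard SKU public IP whose egress traffic uses the routing preference above.
New-AzPublicIpAddress -Name "myPublicIP" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Sku Standard -AllocationMethod Static -IpTags $tag
```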
virtual-network Virtual Network Tcpip Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tcpip-performance-tuning.md
Fragmentation occurs when a packet is sent that exceeds the MTU of a network int
Network devices in the path between a source and destination can either drop packets that exceed the MTU or fragment the packet into smaller pieces.
+IP fragmentation isn't supported in Azure. There are some scenarios where IP fragmentation may work, but it shouldn't be relied upon.
+
#### The Don't Fragment bit in an IP packet

The Don't Fragment (DF) bit is a flag in the IP protocol header. The DF bit indicates that network devices on the path between the sender and receiver must not fragment the packet. This bit could be set for many reasons. (See the "Path MTU Discovery" section of this article for one example.) When a network device receives a packet with the Don't Fragment bit set, and that packet exceeds the device's interface MTU, the standard behavior is for the device to drop the packet. The device sends an ICMP Fragmentation Needed message back to the original source of the packet.
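To see the DF bit in action, here's a minimal sketch using the Windows ping utility against a hypothetical destination address:

```powershell
# -f sets the Don't Fragment bit; -l sets the ICMP payload size (Windows ping).
# 1472 bytes of payload + 28 bytes of IP/ICMP headers = 1500 bytes, a typical Ethernet MTU.
# 10.0.0.4 is a hypothetical destination; substitute your own.
ping 10.0.0.4 -f -l 1472
# If a hop's MTU is smaller, the reply is "Packet needs to be fragmented but DF set."
```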
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
If you provide your own DNS solution, it needs to:
> * For best performance, when you're using Azure VMs as DNS servers, IPv6 should be disabled.
> * NSGs act as firewalls for your DNS resolver endpoints. You should modify or override your NSG security rules to allow access for UDP Port 53 (and optionally TCP Port 53) to your DNS listener endpoints. Once custom DNS servers are set on a network, traffic through port 53 bypasses the NSGs of the subnet.
+> [!IMPORTANT]
+> If you're using Windows DNS Servers as Custom DNS Servers forwarding DNS requests to Azure DNS Servers, make sure you increase the Forwarding Timeout value to more than 4 seconds so that the Azure Recursive DNS Servers can perform proper recursion operations.
+>
+> For more information about this issue, see [Forwarders and conditional forwarders resolution timeouts](/troubleshoot/windows-server/networking/forwarders-resolution-timeouts).
+>
+> This recommendation may also apply to other DNS Server platforms with a forwarding timeout value of 3 seconds or less.
+>
+> Failing to do so may result in Private DNS Zone records being resolved with public IP addresses.
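For example, on a Windows DNS server, a minimal sketch assuming the DnsServer PowerShell module (the timeout is in seconds; 10 is an illustrative choice):

```powershell
# Raise the forwarder timeout above the 4-second threshold recommended above.
Set-DnsServerForwarder -Timeout 10

# Verify the new forwarder settings.
Get-DnsServerForwarder
```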
+
### Web apps

Suppose you need to perform name resolution from your web app built by using App Service, linked to a virtual network, to VMs in the same virtual network. In addition to setting up a custom DNS server that has a DNS forwarder that forwards queries to Azure (virtual IP 168.63.129.16), perform the following steps:
virtual-wan How To Palo Alto Cloud Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-palo-alto-cloud-ngfw.md
To create a new virtual WAN, use the steps in the following article:
## Known limitations
-* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West US, West US 3, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other Azure regions are on the roadmap.
+* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other Azure regions are on the roadmap.
* Palo Alto Networks Cloud NGFW can't be deployed with Network Virtual Appliances in the Virtual WAN hub.
* For routing between Virtual WAN and Palo Alto Networks Cloud NGFW to work properly, your entire network (on-premises and Virtual Networks) must be within RFC-1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). For example, you may not use a subnet such as 40.0.0.0/24 within your Virtual Network or on-premises. Traffic to 40.0.0.0/24 may not be routed properly.
* All other limitations in the [Routing Intent and Routing policies documentation limitations section](how-to-routing-policies.md) apply to Palo Alto Networks Cloud NGFW deployments in Virtual WAN.
The following steps describe how to deploy a Virtual Hub that can be used with P
1. Navigate to your Virtual WAN resource.
1. On the left hand menu, select **Hubs** under **Connectivity**.
1. Click on **New Hub**.
-1. Under **Basics** specify a region for your Virtual Hub. Make sure the region is Central US, East US, East US 2, West US, West US 3, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central or East Asia. Additionally, specify a name, address space, Virtual hub capacity and Hub routing preference for your hub.
+1. Under **Basics** specify a region for your Virtual Hub. Make sure the region is Central US, East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central or East Asia. Additionally, specify a name, address space, Virtual hub capacity and Hub routing preference for your hub.
   :::image type="content" source="./media/how-to-palo-alto-cloudngfw/create-hub.png" alt-text="Screenshot showing hub creation page. Region selector box is highlighted." lightbox="./media/how-to-palo-alto-cloudngfw/create-hub.png":::
1. Select and configure the Gateways (Site-to-site VPN, Point-to-site VPN, ExpressRoute) you want to deploy in the Virtual Hub. You can deploy Gateways later if you wish.
1. Click **Review + create**.
The following section describes common issues seen when using Palo Alto Networks
### Troubleshooting Cloud NGFW creation
-* Ensure your Virtual Hubs are deployed in one of the following regions: Central US, East US, East US 2, West US, West US 3, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other regions are in the roadmap.
+* Ensure your Virtual Hubs are deployed in one of the following regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other regions are in the roadmap.
* Ensure the Routing status of the Virtual Hub is "Provisioned." Attempts to create Cloud NGFW prior to routing being provisioned will fail.
* Ensure registration to the **PaloAltoNetworks.Cloudngfw** resource provider is successful.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN is a networking-as-a-service platform that has a 99.95% SLA. However
The SLA for each component is calculated individually. For example, if ExpressRoute has 10 minutes of downtime, the availability of ExpressRoute would be calculated as (Maximum Available Minutes - downtime) / Maximum Available Minutes * 100. For instance, with 43,200 maximum available minutes in a 30-day month, 10 minutes of downtime yields (43,200 - 10) / 43,200 * 100, or about 99.98% availability.

### Can you change the VNet address space in a spoke VNet connected to the hub?
-Yes, this can be done automatically with no update or reset required on the peering connection. You can find more information on how to change the VNet address space [here](https://learn.microsoft.com/azure/virtual-network/manage-virtual-network ).
+Yes, this can be done automatically with no update or reset required on the peering connection. You can find more information on how to change the VNet address space [here](../virtual-network/manage-virtual-network.md).
## Next steps
vpn-gateway Howto Point To Site Multi Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/howto-point-to-site-multi-auth.md
description: Learn how to connect to a VNet via P2S using multiple authenticatio
Previously updated : 07/29/2022 Last updated : 10/17/2023
The client address pool is a range of private IP addresses that you specify. The
In this section, you configure authentication type and tunnel type. On the **Point-to-site configuration** page, if you don't see **Tunnel type** or **Authentication type**, your gateway is using the Basic SKU. The Basic SKU does not support IKEv2 or RADIUS authentication. If you want to use these settings, you need to delete and recreate the gateway using a different gateway SKU.
+> [!IMPORTANT]
+> [!INCLUDE [Entra ID note for portal pages](../../includes/vpn-gateway-entra-portal-note.md)]
+
:::image type="content" source="./media/howto-point-to-site-multi-auth/authentication-types.png" alt-text="Screenshot of authentication types and tunnel type.":::

### <a name="tunneltype"></a>Tunnel type
vpn-gateway Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client-mac.md
Previously updated : 04/07/2023 Last updated : 10/17/2023
This article helps you configure a VPN client for a computer running macOS 10.15
> * The Azure VPN client for macOS is currently not available in France and China due to local regulations and requirements.
>
-For every computer that you want to connect to a VNet using a Point-to-Site VPN connection, you need to do the following:
+For every computer that you want to connect to a virtual network using a Point-to-Site VPN connection, you need to do the following:
* Download the Azure VPN Client to the computer.
* Configure a client profile that contains the VPN settings.
Before you can connect and authenticate using Microsoft Entra ID, you must first
## Import VPN client profile configuration files
+> [!NOTE]
+> [!INCLUDE [Entra VPN client note](../../includes/vpn-gateway-entra-vpn-client-note.md)]
+
1. On the Azure VPN Client page, select **Import**.

   :::image type="content" source="media/openvpn-azure-ad-client-mac/import-1.png" alt-text="Screenshot of Azure VPN Client import selection.":::
Before you can connect and authenticate using Microsoft Entra ID, you must first
1. In the VPN connections pane, select the connection profile that you saved. Then, click **Connect**.

   :::image type="content" source="media/openvpn-azure-ad-client-mac/import-4.png" alt-text="Screenshot of Azure VPN Client clicking Connect.":::
-1. Once connected, the status will change to **Connected**. To disconnect from the session, click **Disconnect**.
+1. Once connected, the status changes to **Connected**. To disconnect from the session, click **Disconnect**.
:::image type="content" source="media/openvpn-azure-ad-client-mac/import-5.png" alt-text="Screenshot of Azure VPN Client connected status and disconnect button.":::
Before you can connect and authenticate using Microsoft Entra ID, you must first
Configure the following settings:

* **Connection Name:** The name by which you want to refer to the connection profile.
- * **VPN Server:** This name is the name that you want to use to refer to the server. The name you choose here does not need to be the formal name of a server.
+ * **VPN Server:** This is the name that you want to use to refer to the server. The name you choose here doesn't need to be the formal name of a server.
* **Server Validation**
  * **Certificate Information:** The certificate CA.
  * **Server Secret:** The server secret.
Before you can connect and authenticate using Microsoft Entra ID, you must first
1. Using your credentials, sign in to connect.

   :::image type="content" source="media/openvpn-azure-ad-client-mac/add-4.png" alt-text="Screenshot of Azure VPN Client sign in to connect.":::
-1. Once connected, you will see the **Connected** status. When you want to disconnect, click **Disconnect** to disconnect the connection.
+1. Once connected, you'll see the **Connected** status. When you want to disconnect, click **Disconnect** to disconnect the connection.
:::image type="content" source="media/openvpn-azure-ad-client-mac/add-5.png" alt-text="Screenshot of Azure VPN Client connected and disconnect button.":::
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Previously updated : 11/22/2022 Last updated : 10/17/2023
After your Azure VPN Gateway P2S configuration is complete, your next steps are
## <a name="import"></a>Import VPN client profile configuration files
+> [!NOTE]
+> [!INCLUDE [Entra VPN client note](../../includes/vpn-gateway-entra-vpn-client-note.md)]
+
For Microsoft Entra authentication configurations, the **azurevpnconfig.xml** file is used. The file is located in the **AzureVPN** folder of the VPN client profile configuration package.

1. On the page, select **Import**.
vpn-gateway Openvpn Azure Ad Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-mfa.md
description: Learn how to enable multifactor authentication (MFA) for VPN users.
Previously updated : 07/28/2023 Last updated : 10/17/2023
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
description: Learn how to set up a Microsoft Entra tenant for P2S OpenVPN authen
Previously updated : 08/18/2023 Last updated : 10/17/2023
Assign the users to your applications.
## Configure authentication for the gateway
+> [!IMPORTANT]
+> [!INCLUDE [Entra ID note for portal pages](../../includes/vpn-gateway-entra-portal-note.md)]
+
In this step, you configure P2S Microsoft Entra authentication for the virtual network gateway.

1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
description: Learn how to set up a Microsoft Entra tenant for P2S Microsoft Entr
Previously updated : 09/07/2023 Last updated : 10/17/2023
The steps in this article require a Microsoft Entra tenant. If you don't have a
## <a name="enable-authentication"></a>Configure authentication for the gateway
-1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Microsoft Entra tenant ID](../active-directory/fundamentals/how-to-find-tenant.md).
+> [!IMPORTANT]
+> [!INCLUDE [Entra ID note for portal pages](../../includes/vpn-gateway-entra-portal-note.md)]
-1. If you don't already have a functioning point-to-site environment, follow the instruction to create one. See [Create a point-to-site VPN](vpn-gateway-howto-point-to-site-resource-manager-portal.md) to create and configure a point-to-site VPN gateway.
+1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Microsoft Entra tenant ID](../active-directory/fundamentals/how-to-find-tenant.md).
- > [!IMPORTANT]
- > The Basic SKU is not supported for OpenVPN.
+1. If you don't already have a functioning point-to-site environment, follow the instructions to create one. See [Create a point-to-site VPN](vpn-gateway-howto-point-to-site-resource-manager-portal.md) to create and configure a point-to-site VPN gateway. When you create the VPN gateway, note that the Basic SKU isn't supported for OpenVPN.
1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
The steps in this article require a Microsoft Entra tenant. If you don't have a
For **Microsoft Entra ID** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values. Replace {AzureAD TenantID} with your tenant ID, taking care to remove **{}** from the examples when you replace this value.
- * **Tenant:** TenantID for the Microsoft Entra tenant. Enter the tenant ID that corresponds to your configuration. Make sure the Tenant URL does not have a `\` at the end.
+ * **Tenant:** TenantID for the Microsoft Entra tenant. Enter the tenant ID that corresponds to your configuration. Make sure the Tenant URL doesn't have a `\` at the end.
  * Azure Public AD: `https://login.microsoftonline.com/{AzureAD TenantID}`
  * Azure Government AD: `https://login.microsoftonline.us/{AzureAD TenantID}`
The steps in this article require a Microsoft Entra tenant. If you don't have a
  * Azure Germany: `538ee9e6-310a-468d-afef-ea97365856a9`
  * Microsoft Azure operated by 21Vianet: `49f817b6-84ae-4cc0-928c-73f27289b3aa`
- * **Issuer**: URL of the Secure Token Service. Include a trailing slash at the end of the **Issuer** value. Otherwise, the connection may fail.
+ * **Issuer**: URL of the Secure Token Service. Include a trailing slash at the end of the **Issuer** value. Otherwise, the connection might fail.
* `https://sts.windows.net/{AzureAD TenantID}/`
vpn-gateway Vpn Gateway Validate Throughput To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-validate-throughput-to-vnet.md
The following diagram shows the logical connectivity of an on-premises network t
1. Determine your Azure VPN gateway throughput limits. For help, see the "Gateway SKUs" section of [About VPN Gateway](vpn-gateway-about-vpngateways.md#gwsku).
1. Determine the [Azure VM throughput guidance](../virtual-machines/sizes.md) for your VM size.
1. Determine your Internet Service Provider (ISP) bandwidth.
-1. Calculate your expected throughput by taking the least bandwidth of either the VM, VPN Gateway, or ISP; which is measured in Megabits-per-second (/) divided by eight (8).
+1. Calculate your expected throughput by taking the least bandwidth of the VM, VPN Gateway, or ISP, which is measured in Megabits-per-second (Mbps), and dividing it by eight (8). This calculation gives you Megabytes-per-second (MBps). For example, if the least bandwidth is 1,000 Mbps, the expected throughput is 1,000 / 8 = 125 MBps.
If your calculated throughput does not meet your application's baseline throughput requirements, you must increase the bandwidth of the resource that you identified as the bottleneck. To resize an Azure VPN Gateway, see [Changing a gateway SKU](vpn-gateway-about-vpn-gateway-settings.md#gwsku). To resize a virtual machine, see [Resize a VM](../virtual-machines/resize-vm.md). If you are not experiencing the expected Internet bandwidth, you may also contact your ISP.
If your calculated throughput does not meet your application's baseline throughp
## Validate network throughput by using performance tools
-This validation should be performed during non-peak hours, as VPN tunnel throughput saturation during testing does not give accurate results.
+This validation should be performed during nonpeak hours, as VPN tunnel throughput saturation during testing does not give accurate results.
-The tool we use for this test is iPerf, which works on both Windows and Linux and has both client and server modes. It is limited to 3Gbps for Windows VMs.
+The tool we use for this test is iPerf, which works on both Windows and Linux and has both client and server modes. It is limited to 3 Gbps for Windows VMs.
This tool does not perform any read/write operations to disk. It solely produces self-generated TCP traffic from one end to the other. It generates statistics based on experimentation that measures the bandwidth available between client and server nodes. When testing between two nodes, one node acts as the server, and the other node acts as a client. Once this test is completed, we recommend that you reverse the roles of the nodes to test both upload and download throughput on both nodes.
Download [iPerf](https://iperf.fr/download/iperf_3.1/iperf-3.1.2-win64.zip). For
netsh advfirewall firewall delete rule name="Open Port 5001" protocol=TCP localport=5001
```
- **Azure Linux:** Azure Linux images have permissive firewalls. If there is an application listening on a port, the traffic is allowed through. Custom images that are secured may need ports opened explicitly. Common Linux OS-layer firewalls include `iptables`, `ufw`, or `firewalld`.
+ **Azure Linux:** Azure Linux images have permissive firewalls. If there's an application listening on a port, the traffic is allowed through. Custom images that are secured may need ports opened explicitly. Common Linux OS-layer firewalls include `iptables`, `ufw`, or `firewalld`.
1. On the server node, change to the directory where iperf3.exe is extracted. Then run iPerf in server mode, and set it to listen on port 5001, as in the following command:
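   A minimal sketch of the server-mode invocation, assuming iPerf 3.x:

   ```
   iperf3.exe -s -p 5001
   ```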
Download [iPerf](https://iperf.fr/download/iperf_3.1/iperf-3.1.2-win64.zip). For
iperf3.exe -c <IP of the iperf Server> -t 30 -p 5001 -P 32
```
- The client is directing thirty seconds of traffic on port 5001, to the server. The flag '-P ' indicates that we are making 32 simultaneous connections to the server node.
+ The client is directing 30 seconds of traffic on port 5001 to the server. The flag '-P' indicates that we're making 32 simultaneous connections to the server node.
The following screen shows the output from this example:
Make install is fast
> Make sure there are no intermediate hops (e.g. Virtual Appliance) during the throughput testing in between the VM and Gateway.
> If there are poor results (in terms of overall throughput) coming from the iPERF/NTTTCP tests above, please refer to [this article](../virtual-network/virtual-network-tcpip-performance-tuning.md) to understand the key factors behind the possible root causes of the problem:
-In particular, analysis of packet capture traces (Wireshark/Network Monitor) collected in parallel from client and server during those tests will help in the assessments of bad performance. These traces can include packet loss, high latency, MTU size. fragmentation, TCP 0 Window, Out of Order fragments, and so on.
+In particular, analysis of packet capture traces (Wireshark/Network Monitor) collected in parallel from client and server during those tests helps in the assessment of bad performance. These traces can include packet loss, high latency, MTU size, fragmentation, TCP 0 Window, Out of Order fragments, and so on.
## Address slow file copy issues

Even if the overall throughput assessed with the previous steps (iPerf/NTTTCP/etc.) was good, you may experience slow file copying when either using Windows Explorer, or dragging and dropping through an RDP session. This problem is normally due to one or both of the following factors:
-* File copy applications, such as Windows Explorer and RDP, do not use multiple threads when copying files. For better performance, use a multi-threaded file copy application such as [Richcopy](/previous-versions/technet-magazine/dd547088(v=msdn.10)) to copy files by using 16 or 32 threads. To change the thread number for file copy in Richcopy, click **Action** > **Copy options** > **File copy**.
+* File copy applications, such as Windows Explorer and RDP, don't use multiple threads when copying files. For better performance, use a multi-threaded file copy application such as [Richcopy](/previous-versions/technet-magazine/dd547088(v=msdn.10)) to copy files by using 16 or 32 threads. To change the thread number for file copy in Richcopy, click **Action** > **Copy options** > **File copy**.
![Slow file copy issues](./media/vpn-gateway-validate-throughput-to-vnet/Richcopy.png)<br>
Mentioned the subnets of on-premises ranges that you would like Azure to reach v
* **Policy Based Gateway**: Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the combinations of address prefixes between your on-premises network and the Azure VNet. The policy (or Traffic Selector) is usually defined as an access list in the VPN configuration.
-* **UsePolicyBasedTrafficSelector** connections: ("UsePolicyBasedTrafficSelectors" to $True on a connection will configure the Azure VPN gateway to connect to policy-based VPN firewall on premises. If you enable PolicyBasedTrafficSelectors, you need to ensure your VPN device has the matching traffic selectors defined with all combinations of your on-premises network (local network gateway) prefixes to and from the Azure virtual network prefixes, instead of any-to-any.
+* **UsePolicyBasedTrafficSelector** connections: Setting "UsePolicyBasedTrafficSelectors" to $True on a connection configures the Azure VPN gateway to connect to a policy-based VPN firewall on premises. If you enable PolicyBasedTrafficSelectors, you need to ensure your VPN device has the matching traffic selectors defined with all combinations of your on-premises network (local network gateway) prefixes to and from the Azure virtual network prefixes, instead of any-to-any. A sketch of enabling this setting follows.
Inappropriate configuration may lead to frequent disconnects within the tunnel, packet drops, bad throughput, and latency.
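A minimal sketch of enabling the setting with the Az PowerShell module, assuming a hypothetical connection name and resource group:

```powershell
# Retrieve the existing gateway connection (hypothetical names; substitute your own).
$connection = Get-AzVirtualNetworkGatewayConnection -Name "VNet1toSite1" `
    -ResourceGroupName "myResourceGroup"

# Configure the gateway to use policy-based traffic selectors on this connection.
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection `
    -UsePolicyBasedTrafficSelectors $True
```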