Updates from: 04/08/2023 01:07:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0]
- Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
- A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services, and Azure domains. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy.
-## Deploying Azure AD provisioning agent
-The Azure AD Provisioning agent can be deployed on the same server hosting a SCIM enabled application, or a separate server, providing it has line of sight to the application's SCIM endpoint. A single agent also supports provision to multiple applications hosted locally on the same server or separate hosts, again as long as each SCIM endpoint is reachable by the agent.
-
- 1. [Download](https://aka.ms/OnPremProvisioningAgent) the provisioning agent and copy it onto the virtual machine or server that your SCIM application endpoint is hosted on.
- 2. Run the provisioning agent installer, agree to the terms of service, and select **Install**.
- 3. Once installed, locate and launch the **AAD Connect Provisioning Agent wizard**, and when prompted for an extensions select **On-premises provisioning**
- 4. For the agent to register itself with your tenant, provide credentials for an Azure AD admin with Hybrid administrator or global administrator permissions.
- 5. Select **Confirm** to confirm the installation was successful.
+## Download, install, and configure the Azure AD Connect Provisioning Agent Package
+
+If you've already downloaded the provisioning agent and configured it for another on-premises application, skip to the next section.
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="../../../includes/media/active-directory-cloud-sync-how-to-install/new-ux-1.png":::
+
+ 4. On the left, select **Agent**.
+ 5. Select **Download on-premises agent**, and select **Accept terms & download**.
+
+ >[!NOTE]
 + >Use separate provisioning agents for on-premises application provisioning and for Azure AD Connect Cloud Sync / HR-driven provisioning. Don't manage all three scenarios with the same agent.
+
 + 1. Open the provisioning agent installer, agree to the terms of service, and select **Next**.
+ 1. When the provisioning agent wizard opens, continue to the **Select Extension** tab and select **On-premises application provisioning** when prompted for the extension you want to enable.
 + 1. The provisioning agent uses the operating system's web browser to display a pop-up window for you to authenticate to Azure AD, and potentially also to your organization's identity provider. If you're using Internet Explorer as the browser on Windows Server, you may need to add Microsoft websites to your browser's trusted site list so that JavaScript runs correctly.
+ 1. Provide credentials for an Azure AD administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role.
+ 1. Select **Confirm** to confirm the setting. Once installation is successful, you can select **Exit**, and also close the Provisioning Agent Package installer.
## Provisioning to SCIM-enabled application
-Once the agent is installed, no further configuration is necesary on-prem, and all provisioning configurations are then managed from the portal. Repeat the below steps for every on-premises application being provisioned via SCIM.
+Once the agent is installed, no further configuration is necessary on-premises, and all provisioning configurations are then managed from the portal. Repeat the steps below for every on-premises application being provisioned via SCIM.
1. In the Azure portal, navigate to **Enterprise applications** and add the **On-premises SCIM app** from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
2. From the left-hand menu, navigate to the **Provisioning** option and select **Get started**.
3. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option.
4. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**.
5. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step and testing the connection.
- 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolveable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
+ 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim ![Screenshot that shows assigning an agent.](./media/on-premises-scim-provisioning/scim-2.png)
7. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests; otherwise, the test will fail. Use the steps [here](on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues.

>[!NOTE]
> If the test connection fails, you will see the request made. While the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above.
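For context on what the agent exchanges with the endpoint configured above: a SCIM 2.0 endpoint accepts user resources as JSON over HTTP. The following is a minimal sketch, not taken from the article, of the kind of SCIM 2.0 user payload (core schema per RFC 7643) that would be POSTed to an endpoint such as https://localhost:8585/scim/Users; all attribute values are hypothetical.

```python
import json

# Hypothetical SCIM 2.0 user resource of the kind a provisioning service
# POSTs to the application's /Users endpoint (core schema per RFC 7643).
# All attribute values below are made-up examples.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "externalId": "jdoe@contoso.com",  # assumption: UPN mapped to externalId
    "userName": "jdoe",
    "active": True,
    "name": {"givenName": "Jane", "familyName": "Doe"},
}

# The request body would be sent with Content-Type: application/scim+json.
body = json.dumps(scim_user)
print(body)
```

If the test connection fails, comparing the payload and URL your application expects against what the logs show being sent is a reasonable first check.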
Once the agent is installed, no further configuration is necesary on-prem, and a
12. Go to the **Provisioning** pane, and select **Start provisioning**.
13. Monitor using the [provisioning logs](../../active-directory/reports-monitoring/concept-provisioning-logs.md).
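When monitoring provisioning, it helps to triage log entries by outcome so failures stand out. A minimal sketch, assuming a simplified, hypothetical record shape with only `action` and `status` fields (real provisioning log entries carry far more detail, such as cycle and identity information):

```python
from collections import Counter

# Simplified, hypothetical provisioning-log records; real entries
# contain many more fields than action and status.
records = [
    {"action": "Create", "status": "Success"},
    {"action": "Update", "status": "Success"},
    {"action": "Create", "status": "Failure"},
]

def summarize(entries):
    """Tally entries per (action, status) pair so failures stand out."""
    return Counter((e["action"], e["status"]) for e in entries)

summary = summarize(records)
failures = [e for e in records if e["status"] == "Failure"]
print(summary, len(failures))
```

A recurring (action, "Failure") pair is usually the place to start troubleshooting.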
-The following video provides an overview of on-premises provisoning.
+The following video provides an overview of on-premises provisioning.
> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys]

## Additional requirements
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/30/2023 Last updated : 04/07/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on March 30th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on April 7th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Advanced Communications | ADV_COMMS | e4654015-5daf-4a48-9b37-4f309dddd88b | TEAMS_ADVCOMMS (604ec28a-ae18-4bc6-91b0-11da94504ba9) | Microsoft 365 Advanced Communications (604ec28a-ae18-4bc6-91b0-11da94504ba9) | | AI Builder Capacity add-on | CDSAICAPACITY | d2dea78b-507c-4e56-b400-39447f4738f8 | CDSAICAPACITY (a7c70a41-5e02-4271-93e6-d9b4184d83f5)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | AI Builder capacity add-on (a7c70a41-5e02-4271-93e6-d9b4184d83f5)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | App Connect IW | SPZA_IW | 8f0c5670-4e56-4892-b06d-91c085d7004f | SPZA (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | APP CONNECT (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| App governance add-on to Microsoft Defender for Cloud Apps | Microsoft_Cloud_App_Security_App_Governance_Add_On | 9706eed9-966f-4f1b-94f6-bb2b4af99a5b | M365_AUDIT_PLATFORM (f6de4823-28fa-440b-b886-4783fa86ddba)<br/>MICROSOFT_APPLICATION_PROTECTION_AND_GOVERNANCE_A (5f3b1ded-75c0-4b31-8e6e-9b077eaadfd5)<br/>MICROSOFT_APPLICATION_PROTECTION_AND_GOVERNANCE_D (2e6ffd72-52d1-4541-8f6c-938f9a8d4cdc) | Microsoft 365 Audit Platform (f6de4823-28fa-440b-b886-4783fa86ddba)<br/>Microsoft Application Protection and Governance (A) (5f3b1ded-75c0-4b31-8e6e-9b077eaadfd5)<br/>Microsoft Application Protection and Governance (D) (2e6ffd72-52d1-4541-8f6c-938f9a8d4cdc) |
| Microsoft 365 Audio Conferencing | MCOMEETADV | 0c266dff-15dd-4b49-8397-2bb16070ed52 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | | Azure Active Directory Basic | AAD_BASIC | 2b9c8e7c-319c-43a2-a2a0-48c5c6161de7 | AAD_BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | MICROSOFT AZURE ACTIVE DIRECTORY BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | | Azure Active Directory Premium P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Dynamics AX7 User Trial | AX7_USER_TRIAL | fcecd1f9-a91e-488d-a918-a96cdb6ce2b0 | ERP_TRIAL_INSTANCE (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Operations Trial Environment (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) | | Microsoft Defender for Office 365 (Plan 2) | THREAT_INTELLIGENCE | 3dd6cf57-d688-4eed-ba52-9e40b5468c3e | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) |
+| Microsoft Defender Vulnerability Management Add-on | TVM_Premium_Add_on | ad7a56e0-6903-4d13-94f3-5ad491e78960 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) |
+| Microsoft Intune Suite | Microsoft_Intune_Suite | a929cd4d-8672-47c9-8664-159c1f322ba8 | Intune-MAMTunnel (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>INTUNE_P2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Intune-EPM (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>REMOTE_HELP (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e)<br/>Intune_AdvancedEA (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1) | Microsoft Tunnel for Mobile Application Management (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>Intune Plan 2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Intune Endpoint Privilege Management (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>Remote Help (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e)<br/>Intune Advanced endpoint analytics (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1) |
| Microsoft 365 A1 | M365EDU_A1 | b17653a4-2443-4e8c-a550-18249dda78bb | AAD_EDU (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | Azure Active Directory for Education (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Windows Store Service (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Microsoft 365 A3 for faculty | M365EDU_A3_FACULTY | 4b590615-0888-425a-a965-b3bf7789848d | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>ADALLOM_S_O365 
(8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics ΓÇô Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information 
Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active 
Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 A3 for students | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION 
(4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics ΓÇô Standard 
(2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 
(41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) | | Microsoft Defender for Office 365 (Plan 1) GCC | ATP_ENTERPRISE_GOV | d0d1ca43-b81a-4f51-81e5-a5b1ad7bb005 | ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516) | Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516) | | Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) |
+| Microsoft Defender Vulnerability Management Add-on | TVM_Premium_Add_on | ad7a56e0-6903-4d13-94f3-5ad491e78960 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) |
| Microsoft Dynamics CRM Online | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | Microsoft Imagine Academy | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | | Microsoft Intune Device | INTUNE_A_D | 2b317a4a-77a6-4188-9437-b68a77b4e2c6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Intune Device for Government | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| Microsoft Intune Suite | Microsoft_Intune_Suite | a929cd4d-8672-47c9-8664-159c1f322ba8 | Intune_AdvancedEA (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1)<br/>Intune-EPM (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>INTUNE_P2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Intune-MAMTunnel (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>REMOTE_HELP (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e) | Intune Advanced endpoint analytics (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1)<br/>Intune Endpoint Privilege Management (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>Intune Plan 2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Microsoft Tunnel for Mobile Application Management (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>Remote help (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e) |
| Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) | | Microsoft Power Automate Plan 2 | FLOW_P2 | 4755df59-3f73-41ab-a249-596ad72b5504 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | | Microsoft Intune SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Phone Resource Account | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | Microsoft 365 Phone Standard Resource Account (f47330e9-c134-43b3-9993-e7f004506889)| | Microsoft Teams Phone Resource Account for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone Standard Resource Account for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) | | Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
+| Microsoft Teams Premium Introductory Pricing | Microsoft_Teams_Premium | 36a0f3b3-adb5-49ea-bf66-762134cf063a | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend 
(ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
+| Microsoft Viva Suite | VIVA | 61902246-d7cb-453e-85cd-53ee28eec138 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>VIVAENGAGE_COMMUNITIES_AND_COMMUNICATIONS (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>VIVAENGAGE_KNOWLEDGE (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva_Goals_Premium (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>VIVA_LEARNING_PREMIUM (7162bd38-edae-4022-83a7-c5837f951759) | Graph Connectors Search with Index (Microsoft Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Microsoft Viva Insights (b622badb-1b45-48d5-920f-4b27a2c0996c)<br/>Microsoft Viva Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83)<br/>Viva Engage Communities and Communications (43304c6a-1d4e-4e0b-9b06-5b2a2ff58a90)<br/>Viva Engage Knowledge (c244cc9e-622f-4576-92ea-82e233e44e36)<br/>Viva Goals (b44c6eaf-5c9f-478c-8f16-8cea26353bfb)<br/>Viva Learning (7162bd38-edae-4022-83a7-c5837f951759) |
| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) | | Nonprofit Portal | NONPROFIT_PORTAL | aa2695c9-8d59-4800-9dc8-12e01f1735af | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>NONPROFIT_PORTAL (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Nonprofit Portal (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2)| | Office 365 A1 for Faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 
(76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) 
(c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
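The GUID pairs in the tables above can be resolved programmatically. As a minimal illustrative sketch (not part of any Microsoft SDK), the snippet below builds a small lookup table from a few service plan GUIDs taken from this article and resolves a SKU's included plans to their friendly names; in practice you would retrieve a tenant's SKUs and plans from the Microsoft Graph `subscribedSkus` endpoint rather than hard-coding them:

```python
# Illustrative sketch: map service plan GUIDs to friendly names.
# The GUID-to-name pairs below are copied from the tables in this article;
# extend the dictionary as needed for the SKUs you manage.
SERVICE_PLANS = {
    "57ff2da0-773e-42df-b2af-ffb7a2317929": "Microsoft Teams",
    "113feb6c-3fe4-4440-bddc-54d774bf0318": "Exchange Foundation",
    "3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40": "Microsoft 365 Audio Conferencing",
    "4a51bca5-1eff-43f5-878c-177680f191af": "Whiteboard (Plan 3)",
}

def resolve_plans(plan_ids):
    """Return the friendly name for each service plan GUID.

    Unknown GUIDs are returned unchanged so callers can spot gaps
    in the lookup table.
    """
    return [SERVICE_PLANS.get(pid.lower(), pid) for pid in plan_ids]

# Example: the service plans inside the Microsoft Teams Rooms Basic SKU
teams_rooms_basic = [
    "3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40",
    "57ff2da0-773e-42df-b2af-ffb7a2317929",
    "4a51bca5-1eff-43f5-878c-177680f191af",
]
print(resolve_plans(teams_rooms_basic))
```

Lowercasing the GUID before lookup makes the resolution case-insensitive, since GUIDs appear in both cases across Microsoft tooling.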
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following example will walk you through setting up a custom synchronization
18. Enable the scheduler again by running `Set-ADSyncScheduler -SyncCycleEnabled $true`.

> [!NOTE]
-> **msDS-cloudExtensionAttribute1** is an example source.
+>- **msDS-cloudExtensionAttribute1** is an example source.
+>- **Starting with [Azure AD Connect 2.0.3.0](../hybrid/reference-connect-version-history.md#functional-changes-10), `employeeHireDate` is added to the default 'Out to Azure AD' rule, so steps 10-16 are not required.**
For more information, see [How to customize a synchronization rule](../hybrid/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration.](../hybrid/how-to-connect-sync-change-the-configuration.md)
active-directory Ddc Web Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ddc-web-tutorial.md
+
+ Title: Azure Active Directory SSO integration with DDC Web
+description: Learn how to configure single sign-on between Azure Active Directory and DDC Web.
++++++++ Last updated : 04/06/2023++++
+# Azure Active Directory SSO integration with DDC Web
+
+In this article, you learn how to integrate DDC Web with Azure Active Directory (Azure AD). Engage and mobilize your advocates and PAC eligible class with ease using the flexible DDC Web platform with personalized content, simple activation, and PAC fundraising tools. When you integrate DDC Web with Azure AD, you can:
+
+* Control in Azure AD who has access to DDC Web.
+* Enable your users to be automatically signed in to DDC Web with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for DDC Web in a test environment. DDC Web supports **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with DDC Web, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DDC Web single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the DDC Web application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add DDC Web from the Azure AD gallery
+
+Add DDC Web from the Azure AD application gallery to configure single sign-on with DDC Web. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **DDC Web** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<yourwebsite>.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<yourwebsite>.com/sso/`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<yourwebsite>.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [DDC Web Client support team](mailto:ondemand@ddcpublicaffairs.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up DDC Web** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure DDC Web SSO
+
+To configure single sign-on on the **DDC Web** side, send the downloaded **Federation Metadata XML** and the appropriate URLs you copied from the Azure portal to the [DDC Web support team](mailto:ondemand@ddcpublicaffairs.com). The team uses these values to configure the SAML SSO connection properly on both sides.
+
+### Create DDC Web test user
+
+In this section, you create a user called Britta Simon at DDC Web. Work with [DDC Web support team](mailto:ondemand@ddcpublicaffairs.com) to add the users in the DDC Web platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the DDC Web Sign-on URL, where you can initiate the login flow.
+
+* Go to the DDC Web Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the DDC Web instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the DDC Web tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the DDC Web instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure DDC Web, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Dozuki Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dozuki-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Dozuki
+description: Learn how to configure single sign-on between Azure Active Directory and Dozuki.
++++++++ Last updated : 04/06/2023++++
+# Azure Active Directory SSO integration with Dozuki
+
+In this article, you learn how to integrate Dozuki with Azure Active Directory (Azure AD). Dozuki is standard work instruction software that empowers manufacturers to implement standardized procedures in support of continuous improvement and training efforts. When you integrate Dozuki with Azure AD, you can:
+
+* Control in Azure AD who has access to Dozuki.
+* Enable your users to be automatically signed in to Dozuki with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You need to configure and test Azure AD single sign-on for Dozuki in a test environment. Dozuki supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Dozuki, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Dozuki single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Dozuki application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Dozuki from the Azure AD gallery
+
+Add Dozuki from the Azure AD application gallery to configure single sign-on with Dozuki. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Dozuki** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<dozukiSubdomain>.dozuki.com/`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<dozukiSubdomain>.dozuki.com/Guide/User/remote_login`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<dozukiSubdomain>.dozuki.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Dozuki Client support team](mailto:support@dozuki.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Dozuki application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the attributes above, the Dozuki application expects a few more attributes to be passed back in the SAML response; these are shown in the following table. These attributes are also prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | userid | user.objectid |
+ | username | user.displayname |
+
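With the mappings above in place, the attribute statement in the issued SAML assertion would look roughly like the following. This is a hedged, hypothetical sketch for orientation only, not an exact Dozuki payload; the attribute values shown are placeholders:

```xml
<!-- Hypothetical AttributeStatement produced by the mappings above; values are placeholders -->
<AttributeStatement xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Attribute Name="email">
    <AttributeValue>B.Simon@contoso.com</AttributeValue>
  </Attribute>
  <Attribute Name="userid">
    <AttributeValue>00000000-0000-0000-0000-000000000000</AttributeValue>
  </Attribute>
  <Attribute Name="username">
    <AttributeValue>B.Simon</AttributeValue>
  </Attribute>
</AttributeStatement>
```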
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Dozuki** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Dozuki SSO
+
+To configure single sign-on on the **Dozuki** side, send the downloaded **Certificate (Base64)** and the appropriate URLs you copied from the Azure portal to the [Dozuki support team](mailto:support@dozuki.com). The team uses these values to configure the SAML SSO connection properly on both sides.
+
+### Create Dozuki test user
+
+In this section, a user called B.Simon is created in Dozuki. Dozuki supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Dozuki, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Dozuki Sign-on URL, where you can initiate the login flow.
+
+* Go to the Dozuki Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Dozuki instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Dozuki tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Dozuki instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Dozuki, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fountain Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fountain-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Fountain
+description: Learn how to configure single sign-on between Azure Active Directory and Fountain.
++++++++ Last updated : 04/06/2023++++
+# Azure Active Directory SSO integration with Fountain
+
+In this article, you learn how to integrate Fountain with Azure Active Directory (Azure AD). Fountain's all-in-one high volume hiring platform empowers the world's leading enterprises to find the right people through smart, fast, and seamless recruiting. When you integrate Fountain with Azure AD, you can:
+
+* Control in Azure AD who has access to Fountain.
+* Enable your users to be automatically signed in to Fountain with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You need to configure and test Azure AD single sign-on for Fountain in a test environment. Fountain supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Fountain, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Fountain single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Fountain application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Fountain from the Azure AD gallery
+
+Add Fountain from the Azure AD application gallery to configure single sign-on with Fountain. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Fountain** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<CustomerUniqueId>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://fountain.okta.com/sso/saml2/<CustomerUniqueId>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://fountain.okta.com/sso/saml2/<CustomerUniqueId>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Fountain Client support team](mailto:support@fountain.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Fountain application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the attributes above, the Fountain application expects a few more attributes to be passed back in the SAML response; these are shown in the following table. These attributes are also prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Fountain** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Fountain SSO
+
+To configure single sign-on on the **Fountain** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Fountain support team](mailto:support@fountain.com). They configure this setting so that the SAML SSO connection is established properly on both sides.
+
+### Create Fountain test user
+
+In this section, a user called B.Simon is created in Fountain. Fountain supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Fountain, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Fountain sign-on URL, where you can initiate the login flow.
+
+* Go to the Fountain sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Fountain instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Fountain tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Fountain instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Fountain, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Gofluent Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gofluent-tutorial.md
+
+ Title: Azure Active Directory SSO integration with goFLUENT
+description: Learn how to configure single sign-on between Azure Active Directory and goFLUENT.
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with goFLUENT
+
+In this article, you learn how to integrate goFLUENT with Azure Active Directory (Azure AD). goFLUENT, the world's leading language training provider, delivers a hyper-personalized learning experience that builds confidence, empowers career growth, and establishes an inclusive global culture. When you integrate goFLUENT with Azure AD, you can:
+
+* Control in Azure AD who has access to goFLUENT.
+* Enable your users to be automatically signed-in to goFLUENT with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for goFLUENT in a test environment. goFLUENT supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with goFLUENT, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* goFLUENT single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the goFLUENT application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add goFLUENT from the Azure AD gallery
+
+Add goFLUENT from the Azure AD application gallery to configure single sign-on with goFLUENT. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **goFLUENT** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.gofluent.com/samlsso/metadata.jsp?pid=<CustomerName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.gofluent.com/samlsso/acs.jsp?pid=<CustomerName>`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.gofluent.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [goFLUENT Client support team](mailto:presales-team@gofluent.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the federation metadata and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up goFLUENT** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
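
All three goFLUENT values in the Basic SAML Configuration are derived from a single `<CustomerName>` placeholder. A quick illustrative sketch of filling in the patterns, where `contoso` is a made-up placeholder rather than a real goFLUENT customer name:

```python
# Sketch: derive the goFLUENT Basic SAML Configuration values from one customer name.
# "contoso" is a hypothetical placeholder; use the real value from goFLUENT support.
customer_name = "contoso"

identifier = f"https://{customer_name}.gofluent.com/samlsso/metadata.jsp?pid={customer_name}"
reply_url = f"https://{customer_name}.gofluent.com/samlsso/acs.jsp?pid={customer_name}"
sign_on_url = f"https://{customer_name}.gofluent.com"

print(identifier)
print(reply_url)
print(sign_on_url)
```
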
+
+## Configure goFLUENT SSO
+
+To configure single sign-on on the **goFLUENT** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [goFLUENT support team](mailto:presales-team@gofluent.com). They configure this setting so that the SAML SSO connection is established properly on both sides.
+
+### Create goFLUENT test user
+
+In this section, a user called B.Simon is created in goFLUENT. goFLUENT supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in goFLUENT, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the goFLUENT sign-on URL, where you can initiate the login flow.
+
+* Go to the goFLUENT sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the goFLUENT tile in My Apps, you're redirected to the goFLUENT sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure goFLUENT, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Hashicorp Cloud Platform Hcp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hashicorp-cloud-platform-hcp-tutorial.md
+
+ Title: Azure Active Directory SSO integration with HashiCorp Cloud Platform (HCP)
+description: Learn how to configure single sign-on between Azure Active Directory and HashiCorp Cloud Platform (HCP).
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with HashiCorp Cloud Platform (HCP)
+
+In this article, you learn how to integrate HashiCorp Cloud Platform (HCP) with Azure Active Directory (Azure AD). HashiCorp Cloud Platform hosts managed services of the developer tools created by HashiCorp, such as Terraform, Vault, Boundary, and Consul. When you integrate HashiCorp Cloud Platform (HCP) with Azure AD, you can:
+
+* Control in Azure AD who has access to HashiCorp Cloud Platform (HCP).
+* Enable your users to be automatically signed-in to HashiCorp Cloud Platform (HCP) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for HashiCorp Cloud Platform (HCP) in a test environment. HashiCorp Cloud Platform (HCP) supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with HashiCorp Cloud Platform (HCP), you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* HashiCorp Cloud Platform (HCP) single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the HashiCorp Cloud Platform (HCP) application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add HashiCorp Cloud Platform (HCP) from the Azure AD gallery
+
+Add HashiCorp Cloud Platform (HCP) from the Azure AD application gallery to configure single sign-on with HashiCorp Cloud Platform (HCP). For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **HashiCorp Cloud Platform (HCP)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:hashicorp:HCP-SSO-<HCP_ORG_ID>-samlp`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://auth.hashicorp.com/login/callback?connection=HCP-SSO-<ORG_ID>-samlp`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://portal.cloud.hashicorp.com/sign-in?conn-id=HCP-SSO-<HCP_ORG_ID>-samlp`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [HashiCorp Cloud Platform (HCP) Client support team](mailto:support@hashicorp.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up HashiCorp Cloud Platform (HCP)** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
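
After downloading the Base64 (PEM) certificate, it can be useful to record its thumbprint before sending it to the support team. A minimal standard-library sketch; the PEM content below is dummy bytes, not a real certificate:

```python
import base64
import hashlib
import re

def pem_sha256_thumbprint(pem_text: str) -> str:
    """Return the SHA-256 thumbprint of the DER bytes inside a PEM block."""
    # Strip the BEGIN/END lines and all whitespace, then base64-decode to DER.
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----", "", pem_text)
    der = base64.b64decode("".join(body.split()))
    return hashlib.sha256(der).hexdigest().upper()

# Dummy stand-in for the downloaded Certificate (Base64) file content.
dummy_der = b"not-a-real-certificate"
dummy_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(dummy_der).decode()
    + "\n-----END CERTIFICATE-----\n"
)
print(pem_sha256_thumbprint(dummy_pem))
```
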
+
+## Configure HashiCorp Cloud Platform (HCP) SSO
+
+To configure single sign-on on the **HashiCorp Cloud Platform (HCP)** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com). They configure this setting so that the SAML SSO connection is established properly on both sides.
+
+### Create HashiCorp Cloud Platform (HCP) test user
+
+In this section, you create a user called Britta Simon at HashiCorp Cloud Platform (HCP). Work with [HashiCorp Cloud Platform (HCP) support team](mailto:support@hashicorp.com) to add the users in the HashiCorp Cloud Platform (HCP) platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the HashiCorp Cloud Platform (HCP) sign-on URL, where you can initiate the login flow.
+
+* Go to the HashiCorp Cloud Platform (HCP) sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the HashiCorp Cloud Platform (HCP) tile in My Apps, you're redirected to the HashiCorp Cloud Platform (HCP) sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure HashiCorp Cloud Platform (HCP), you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items:

- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- JIRA Core and Software 6.4 to 9.4.0 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
+- JIRA Core and Software 6.4 to 9.7.0 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
- JIRA server is HTTPS enabled.
- Note the supported versions for the JIRA Plugin are mentioned in the section below.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD.
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 9.4.0.
+* JIRA Core and Software: 6.4 to 9.7.0.
* JIRA Service Desk 3.0 to 4.22.1.
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md).
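
A supported-range check like the one above can be sketched as a numeric tuple comparison. The helper below is illustrative only, not part of the plug-in:

```python
def parse_version(text: str) -> tuple:
    """Turn a dotted version string like '9.4.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in text.split("."))

def is_supported_jira_core(version: str) -> bool:
    # Supported range for JIRA Core and Software per this article: 6.4 to 9.7.0.
    return parse_version("6.4") <= parse_version(version) <= parse_version("9.7.0")

print(is_supported_jira_core("9.4.0"))  # True: within the 6.4 to 9.7.0 range
print(is_supported_jira_core("9.8.1"))  # False: newer than 9.7.0
```
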
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
Note the following information before you install the plug-in:
The plug-in supports the following versions of Jira and Confluence:
-* Jira Core and Software: 6.0 to 9.1.0
+* Jira Core and Software: 6.0 to 9.7.0
* Jira Service Desk: 3.0.0 to 4.22.1.
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md).
* Confluence: 5.0 to 5.10.
JIRA:
|Plugin Version | Release Notes | Supported JIRA versions |
|--|-|-|
| 1.0.20 | Bug Fixes: | Jira Core and Software: |
-| | JIRA SAML SSO add-on redirects to incorrect URL from mobile browser. | 7.0.0 to 9.5.0 |
+| | JIRA SAML SSO add-on redirects to incorrect URL from mobile browser. | 7.0.0 to 9.7.0 |
| | The mark log section after enabling the JIRA plugin. | |
| | The last login date for a user doesn't update when user signs in via SSO. | |
| | | |
active-directory Oreilly Learning Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oreilly-learning-platform-tutorial.md
+
+ Title: Azure Active Directory SSO integration with O'Reilly learning platform
+description: Learn how to configure single sign-on between Azure Active Directory and O'Reilly learning platform.
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with O'Reilly learning platform
+
+In this article, you learn how to integrate O'Reilly learning platform with Azure Active Directory (Azure AD). Azure AD's integration with the O'Reilly learning platform allows you to enable single sign-on (SSO) with SAML. This creates a seamless login experience for end users. When you integrate O'Reilly learning platform with Azure AD, you can:
+
+* Control in Azure AD who has access to O'Reilly learning platform.
+* Enable your users to be automatically signed-in to O'Reilly learning platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You need to configure and test Azure AD single sign-on for O'Reilly learning platform in a test environment. O'Reilly learning platform supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with O'Reilly learning platform, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* O'Reilly learning platform single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the O'Reilly learning platform application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add O'Reilly learning platform from the Azure AD gallery
+
+Add O'Reilly learning platform from the Azure AD application gallery to configure single sign-on with O'Reilly learning platform. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **O'Reilly learning platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:learning:<CONNECTION-NAME>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://sso.oreilly.com/login/callback?connection=<CONNECTION-NAME>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://go.oreilly.com/<CONNECTION-NAME>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [O'Reilly learning platform Client support team](mailto:platform-integration@oreilly.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
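
The Reply URL in this section carries the Auth0 connection name as a query parameter. A sketch of assembling the three values safely with the standard library, where the connection name is a hypothetical placeholder provided in reality by O'Reilly support:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical connection name; O'Reilly support provides the real value.
connection_name = "example-tenant-sso"

identifier = f"urn:auth0:learning:{connection_name}"
reply_url = "https://sso.oreilly.com/login/callback?" + urlencode({"connection": connection_name})
sign_on_url = f"https://go.oreilly.com/{connection_name}"

# Round-trip check: the connection parameter survives URL encoding.
parsed = parse_qs(urlsplit(reply_url).query)
print(parsed["connection"][0])
```
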
+
+## Configure O'Reilly learning platform SSO
+
+To configure single sign-on on the **O'Reilly learning platform** side, you need to send the **App Federation Metadata Url** to the [O'Reilly learning platform support team](mailto:platform-integration@oreilly.com). They configure this setting so that the SAML SSO connection is established properly on both sides.
+
+### Create O'Reilly learning platform test user
+
+In this section, a user called B.Simon is created in O'Reilly learning platform. O'Reilly learning platform supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in O'Reilly learning platform, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the O'Reilly learning platform sign-on URL, where you can initiate the login flow.
+
+* Go to the O'Reilly learning platform sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the O'Reilly learning platform instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the O'Reilly learning platform tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the O'Reilly learning platform instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure O'Reilly learning platform, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Predict360 Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predict360-sso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Predict360 SSO
+description: Learn how to configure single sign-on between Azure Active Directory and Predict360 SSO.
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with Predict360 SSO
+
+In this article, you learn how to integrate Predict360 SSO with Azure Active Directory (Azure AD). Predict360 is a Governance, Risk and Compliance solution for mid-sized banks and other Financial Institutions. When you integrate Predict360 SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Predict360 SSO.
+* Enable your users to be automatically signed-in to Predict360 SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Predict360 SSO in a test environment. Predict360 SSO supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Predict360 SSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Predict360 SSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Predict360 SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Predict360 SSO from the Azure AD gallery
+
+Add Predict360 SSO from the Azure AD application gallery to configure single sign-on with Predict360 SSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Predict360 SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows how to upload metadata file.](common/upload-metadata.png "File")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows to choose metadata file in folder.](common/browse-upload-metadata.png "Browse")
+
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
+
+ d. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://paadt.360factors.com/predict360/login.do`.
+
+ > [!Note]
+ > You will get the **Service Provider metadata file** from the [Predict360 SSO support team](mailto:support@360factors.com). If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the federation metadata and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Predict360 SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
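
If the **Identifier** and **Reply URL** values don't auto-populate from the uploaded metadata file, they can also be read out of the service provider metadata XML directly. A minimal standard-library sketch; the inline XML below is a fabricated stand-in for the file you receive from support:

```python
import xml.etree.ElementTree as ET

SAML_MD = "urn:oasis:names:tc:SAML:2.0:metadata"

def read_sp_metadata(xml_text: str) -> dict:
    """Extract the entityID (Identifier) and ACS URL (Reply URL) from SP metadata."""
    root = ET.fromstring(xml_text)
    acs = root.find(f".//{{{SAML_MD}}}AssertionConsumerService")
    return {"identifier": root.get("entityID"), "reply_url": acs.get("Location")}

# Fabricated example metadata, shaped like a typical SP metadata file.
sample = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://example.360factors.com/sp">
  <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <AssertionConsumerService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://example.360factors.com/saml/acs" index="0"/>
  </SPSSODescriptor>
</EntityDescriptor>"""

print(read_sp_metadata(sample))
```
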
+
+## Configure Predict360 SSO SSO
+
+To configure single sign-on on the **Predict360 SSO** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Predict360 SSO support team](mailto:support@360factors.com). They configure this setting so that the SAML SSO connection is established properly on both sides.
+
+### Create Predict360 SSO test user
+
+In this section, you create a user called Britta Simon at Predict360 SSO. Work with [Predict360 SSO support team](mailto:support@360factors.com) to add the users in the Predict360 SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. You're redirected to the Predict360 SSO Sign-on URL, where you can initiate the login flow.
+
+* Go to the Predict360 SSO Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Predict360 SSO instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Predict360 SSO tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Predict360 SSO instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
+
+## Next steps
+
+Once you configure Predict360 SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Proactis Rego Source To Contract Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proactis-rego-source-to-contract-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Proactis Rego Source-to-Contract
+description: Learn how to configure single sign-on between Azure Active Directory and Proactis Rego Source-to-Contract.
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with Proactis Rego Source-to-Contract
+
+In this article, you learn how to integrate Proactis Rego Source-to-Contract with Azure Active Directory (Azure AD). Proactis Rego is a powerful Source-to-Contract software platform designed for mid-market organizations. It's easy to use and integrate, giving you control over your spend and supply-chain risks. When you integrate Proactis Rego Source-to-Contract with Azure AD, you can:
+
+* Control in Azure AD who has access to Proactis Rego Source-to-Contract.
+* Enable your users to be automatically signed-in to Proactis Rego Source-to-Contract with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Proactis Rego Source-to-Contract in a test environment. Proactis Rego Source-to-Contract supports **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Proactis Rego Source-to-Contract, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Proactis Rego Source-to-Contract single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Proactis Rego Source-to-Contract application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Proactis Rego Source-to-Contract from the Azure AD gallery
+
+Add Proactis Rego Source-to-Contract from the Azure AD application gallery to configure single sign-on with Proactis Rego Source-to-Contract. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Proactis Rego Source-to-Contract** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.proactisplaza.com/authentication/saml/<CustomerName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://www.proactisplaza.com/authentication/saml/<CustomerName>/consume`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://www.proactisplaza.com/authentication/saml/<CustomerName>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Proactis Rego Source-to-Contract Client support team](mailto:helpdesk@proactis.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
+1. On the **Set up Proactis Rego Source-to-Contract** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
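All three Basic SAML Configuration values derive from one customer-specific base URL. A minimal sketch of the documented patterns, assuming a hypothetical customer name `contoso` (the real values come from the Proactis support team):

```python
def proactis_saml_urls(customer_name: str) -> dict:
    """Build the tenant-specific SAML values from the documented URL patterns."""
    base = f"https://www.proactisplaza.com/authentication/saml/{customer_name}"
    return {
        "identifier": base,               # Entity ID
        "reply_url": f"{base}/consume",   # Assertion Consumer Service URL
        "sign_on_url": base,              # SP-initiated sign-on entry point
    }

urls = proactis_saml_urls("contoso")
print(urls["reply_url"])
```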
+
+## Configure Proactis Rego Source-to-Contract SSO
+
+To configure single sign-on on the **Proactis Rego Source-to-Contract** side, send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Proactis Rego Source-to-Contract support team](mailto:helpdesk@proactis.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Proactis Rego Source-to-Contract test user
+
+In this section, you create a user called Britta Simon in Proactis Rego Source-to-Contract. Work with the [Proactis Rego Source-to-Contract support team](mailto:helpdesk@proactis.com) to add the users to the Proactis Rego Source-to-Contract platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. You're redirected to the Proactis Rego Source-to-Contract Sign-on URL, where you can initiate the login flow.
+
+* Go to the Proactis Rego Source-to-Contract Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Proactis Rego Source-to-Contract tile in My Apps, you're redirected to the Proactis Rego Source-to-Contract Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
+
+## Next steps
+
+Once you configure Proactis Rego Source-to-Contract, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Theom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/theom-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Theom
+description: Learn how to configure single sign-on between Azure Active Directory and Theom.
+Last updated: 04/06/2023
+# Azure Active Directory SSO integration with Theom
+
+In this article, you learn how to integrate Theom with Azure Active Directory (Azure AD). Theom detects active attacks on data clouds and data lakehouses and prevents breaches. Customers can seamlessly use Theom's AI threat intelligence while using their trusted environment for remediation. When you integrate Theom with Azure AD, you can:
+
+* Control in Azure AD who has access to Theom.
+* Enable your users to be automatically signed-in to Theom with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Theom in a test environment. Theom supports **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Theom, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Theom single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Theom application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Theom from the Azure AD gallery
+
+Add Theom from the Azure AD application gallery to configure single sign-on with Theom. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Theom** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:theom:<connection-name>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://theom.us.auth0.com/login/callback?connection=<connection-name>`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.theom.ai`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Theom Client support team](mailto:help@theom.ai) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Theom** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
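The Reply URL in the steps above is an Auth0 callback that carries the connection name as a query parameter, so the same `<connection-name>` must appear in both the Identifier and the Reply URL. A hedged sketch with hypothetical values (`contoso-saml`, `contoso`), not part of the official steps:

```python
from urllib.parse import parse_qs, urlencode, urlparse

def theom_saml_values(connection_name: str, customer_subdomain: str) -> dict:
    """Assemble the three Basic SAML Configuration values from the patterns."""
    return {
        "identifier": f"urn:auth0:theom:{connection_name}",
        "reply_url": "https://theom.us.auth0.com/login/callback?"
                     + urlencode({"connection": connection_name}),
        "sign_on_url": f"https://{customer_subdomain}.theom.ai",
    }

values = theom_saml_values("contoso-saml", "contoso")
# The connection name must survive the round trip through the query string.
qs = parse_qs(urlparse(values["reply_url"]).query)
print(qs["connection"][0])
```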
+
+## Configure Theom SSO
+
+To configure single sign-on on the **Theom** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Theom support team](mailto:help@theom.ai). They use these values to configure the SAML SSO connection properly on both sides.
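The certificate downloads as a single Base64 blob, while some service providers expect it wrapped in PEM headers with 64-character lines. A small sketch of that conversion (the input below is a dummy string, not a real certificate; confirm with the support team which form they need):

```python
import textwrap

def to_pem(base64_blob: str) -> str:
    """Wrap a single-line Base64 certificate body into PEM format."""
    body = "\n".join(textwrap.wrap(base64_blob.replace("\n", ""), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

pem = to_pem("MIIB" + "A" * 100)  # dummy Base64 body for illustration
print(pem.splitlines()[0])
```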
+
+### Create Theom test user
+
+In this section, you create a user called Britta Simon in Theom. Work with the [Theom support team](mailto:help@theom.ai) to add the users to the Theom platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. You're redirected to the Theom Sign-on URL, where you can initiate the login flow.
+
+* Go to the Theom Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Theom tile in My Apps, you're redirected to the Theom Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
+
+## Next steps
+
+Once you configure Theom, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Azure CNI powered by Cilium currently has the following limitations:
## Prerequisites
-* Azure CLI version 2.41.0 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see [Install Azure CLI][/cli/azure/install-azure-cli].
+* Azure CLI version 2.41.0 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
* Azure CLI with aks-preview extension 0.5.135 or later. * If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --na
az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name podsubnet --address-prefixes <address prefix, example: 10.241.0.0/16> -o none ```
-Create the cluster using `--network-dataplane=cilium`:
+Create the cluster using `--network-dataplane cilium`:
```azurecli-interactive az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
--network-plugin azure \ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \ --pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
- --network-dataplane=cilium
+ --network-dataplane cilium
``` > [!NOTE]
-> The `--network-dataplane=cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.
+> The `--network-dataplane cilium` flag replaces the deprecated `--enable-ebpf-dataplane` flag used in earlier versions of the aks-preview CLI extension.
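After the cluster is created, you can confirm which dataplane was provisioned. The `networkProfile.networkDataplane` query path below is an assumption based on the preview API surface; verify it against your CLI and API version:

```azurecli-interactive
az aks show -n <clusterName> -g <resourceGroupName> \
  --query "networkProfile.networkDataplane" -o tsv
```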
### Option 2: Assign IP addresses from an overlay network
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
--network-plugin azure \ --network-plugin-mode overlay \ --pod-cidr 192.168.0.0/16 \
- --network-dataplane=cilium
+ --network-dataplane cilium
``` ## Frequently asked questions
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long running operation (preview)
+ Title: Abort an Azure Kubernetes Service (AKS) long running operation
description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Last updated 3/23/2023
Last updated 3/23/2023
Sometimes deployment or other processes running within pods on nodes in a cluster can run for periods of time longer than expected due to various reasons. While it's important to allow those processes to gracefully terminate when they're no longer needed, there are circumstances where you need to release control of node pools and clusters with long running operations using an *abort* command.
-AKS now supports aborting a long running operation, which is currently in public preview. This feature allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
+AKS supports aborting a long running operation, a feature that's now generally available. It allows you to take back control and run another operation seamlessly. This design is supported using the [Azure REST API](/rest/api/azure/) or the [Azure CLI](/cli/azure/).
The abort operation supports the following scenarios:
The abort operation supports the following scenarios:
## Before you begin -- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].--- The `aks-preview` extension version 0.5.102 or later.-
+- The Azure CLI version 2.47.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Abort a long running operation
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
Title: Open Service Mesh
-description: Open Service Mesh (OSM) in Azure Kubernetes Service (AKS)
+ Title: Open Service Mesh in Azure Kubernetes Service (AKS)
+description: Learn about the Open Service Mesh (OSM) add-on in Azure Kubernetes Service (AKS).
Previously updated : 12/20/2021 Last updated : 04/06/2023
-# Open Service Mesh AKS add-on
+# Open Service Mesh (OSM) add-on in Azure Kubernetes Service (AKS)
-[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
+[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows you to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-OSM runs an Envoy-based control plane on Kubernetes and can be configured with [SMI](https://smi-spec.io/) APIs. OSM works by injecting an Envoy proxy as a sidecar container with each instance of your application. The Envoy proxy contains and executes rules around access control policies, implements routing configuration, and captures metrics. The control plane continually configures the Envoy proxies to ensure policies and routing rules are up to date and ensures proxies are healthy.
+OSM runs an Envoy-based control plane on Kubernetes and can be configured with [SMI](https://smi-spec.io/) APIs. OSM works by injecting an Envoy proxy as a sidecar container with each instance of your application. The Envoy proxy contains and executes rules around access control policies, implements routing configuration, and captures metrics. The control plane continually configures the Envoy proxies to ensure policies and routing rules are up to date and proxies are healthy.
-The OSM project was originated by Microsoft and has since been donated and is governed by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/).
+Microsoft started the OSM project, but it's now governed by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/).
-## Installation and version
+## Enable the OSM add-on
-OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that is integrated with AKS.
+OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that's integrated with AKS.
> [!IMPORTANT]
-> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
-> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
-> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM.
+>
+> |Kubernetes version | OSM version installed |
+> ||--|
+> | 1.24.0 or greater | 1.2.3 |
+> | Between 1.23.5 and 1.24.0 | 1.1.3 |
+> | Below 1.23.5 | 1.0.0 |
## Capabilities and features OSM provides the following capabilities and features: -- Secure service to service communication by enabling mutual TLS (mTLS).
+- Secure service-to-service communication by enabling mutual TLS (mTLS).
- Onboard applications onto the OSM mesh using automatic sidecar injection of Envoy proxy. - Transparently configure traffic shifting on deployments.-- Define and execute fine grained access control policies for services.
+- Define and execute fine-grained access control policies for services.
- Monitor and debug services using observability and insights into application metrics.-- Integrate with external certificate management.-- Integrates with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Web Application Routing][web-app-routing]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx]. For more details on using Web Application Routing, which automatically integrates with OSM, see [Web Application Routing][web-app-routing].-
-## Example scenarios
-
-OSM can be used to help your AKS deployments in many different ways. For example:
- - Encrypt communications between service endpoints deployed in the cluster. - Enable traffic authorization of both HTTP/HTTPS and TCP traffic. - Configure weighted traffic controls between two or more services for A/B testing or canary deployments. - Collect and view KPIs from application traffic.
+- Integrate with external certificate management.
+- Integrate with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Web Application Routing][web-app-routing].
+
+For more information on ingress and OSM, see [Using ingress to manage external access to services within the cluster][osm-ingress] and [Integrate OSM with Contour for ingress][osm-contour]. For an example of how to integrate OSM with ingress controllers using the `networking.k8s.io/v1` API, see [Ingress with Kubernetes Nginx ingress controller][osm-nginx]. For more information on using Web Application Routing, which automatically integrates with OSM, see [Web Application Routing][web-app-routing].
-## Add-on limitations
+## Limitations
The OSM AKS add-on has the following limitations:
-* [Iptables redirection][ip-tables-redirection] for port IP address and port range exclusion must be enabled using `kubectl patch` after installation. For more details, see [iptables redirection][ip-tables-redirection].
-* Pods that are onboarded to the mesh that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
-* At this time, OSM does not support Windows Server containers.
+- After installation, you must enable Iptables redirection for port IP address and port range exclusion using `kubectl patch`. For more information, see [iptables redirection][ip-tables-redirection].
+- Any pods that need access to IMDS, Azure DNS, or the Kubernetes API server must have their IP addresses added to the global list of excluded outbound IP ranges using [Global outbound IP range exclusions][global-exclusion].
+- OSM doesn't support Windows Server containers.
## Next steps After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep], you can:
-* [Deploy a sample application][osm-deploy-sample-app]
-* [Onboard an existing application][osm-onboard-app]
+
+- [Deploy a sample application][osm-deploy-sample-app]
+- [Onboard an existing application][osm-onboard-app]
[ip-tables-redirection]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/ [global-exclusion]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-bicep]: open-service-mesh-deploy-addon-bicep.md [osm-deploy-sample-app]: https://release-v1-2.docs.openservicemesh.io/docs/getting_started/install_apps/ [osm-onboard-app]: https://release-v1-2.docs.openservicemesh.io/docs/guides/app_onboarding/
-[ip-tables-redirection]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
-[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
[nginx]: https://github.com/kubernetes/ingress-nginx [contour]: https://projectcontour.io/ [osm-ingress]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/servicemesh-about.md
Title: About service meshes
description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore. Previously updated : 01/04/2022 Last updated : 04/06/2023 # About service meshes
-A service mesh provides capabilities like traffic management, resiliency, policy, security, strong identity, and observability to your workloads. Your application is decoupled from these operational capabilities and the service mesh moves them out of the application layer, and down to the infrastructure layer.
+A service mesh is an infrastructure layer in your application that facilitates communication between services. Service meshes provide capabilities like traffic management, resiliency, policy, security, strong identity, and observability to your workloads. Your application is decoupled from these operational capabilities, while the service mesh moves them out of the application layer and down to the infrastructure layer.
## Scenarios
-These are some of the scenarios that can be enabled for your workloads when you use a service mesh:
+When you use a service mesh, you can enable scenarios such as:
-- **Encrypt all traffic in cluster** - Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter, and provides a secure by default option with no changes needed for application code and infrastructure.
+- **Encrypting all traffic in cluster**: Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter and provides a secure-by-default option with no changes needed for application code and infrastructure.
-- **Canary and phased rollouts** - Specify conditions for a subset of traffic to be routed to a set of new services in the cluster. On successful test of canary release, remove conditional routing and phase gradually increasing % of all traffic to new service. Eventually all traffic will be directed to new service.
+- **Canary and phased rollouts**: Specify conditions for a subset of traffic to be routed to a set of new services in the cluster. On successful test of canary release, remove conditional routing and phase gradually increasing % of all traffic to a new service. Eventually, all traffic will be directed to the new service.
-- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin, or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
+- **Traffic management and manipulation**: Create a policy on a service that rate limits all traffic to a version of a service from a specific origin, or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
-- **Observability** - Gain insight into how your services are connected and the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, including ingress/egress. Add distributed tracing abilities to your applications.
+- **Observability**: Gain insight into how your services are connected and the traffic that flows between them. Gather metrics, logs, and traces for all traffic in the cluster, including ingress/egress. Add distributed tracing abilities to applications.
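For the canary scenario above, SMI-compatible meshes (such as the OSM add-on) express the split declaratively. A sketch with illustrative service names and weights:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: storefront-canary
spec:
  service: storefront        # root service that clients address
  backends:
  - service: storefront-v1   # current version keeps 90% of traffic
    weight: 90
  - service: storefront-v2   # canary receives the remaining 10%
    weight: 10
```

Shifting the weights toward the canary and finally removing the split completes the phased rollout.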
## Selection criteria
-Before you select a service mesh, ensure that you understand your requirements and the reasons for installing a service mesh. Ask the following questions:
+Before you select a service mesh, make sure you understand your requirements and reasoning for installing a service mesh. Ask the following questions:
-- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like A/B testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
+- **Is an ingress controller sufficient for my needs?**: Sometimes having a capability like A/B testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
-- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like CPU and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then re-consider.
+- **Can my workloads and environment tolerate the additional overheads?**: All the components required to support the service mesh require resources like CPU and memory. All the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or can't provide extra resources to cover service mesh components, you should reconsider using a service mesh.
-- **Is this adding additional complexity unnecessarily?** - If the reason for installing a service mesh is to gain a capability that is not necessarily critical to the business or operational teams, then consider whether the additional complexity of installation, maintenance, and configuration is worth it.
+- **Is this adding unnecessary complexity?**: If you want to install a service mesh to use a capability that isn't critical to the business or operational teams, then consider whether the added complexity of installation, maintenance, and configuration is worth it.
-- **Can this be adopted in an incremental approach?** - Some of the service meshes that provide a lot of capabilities can be adopted in a more incremental approach. Install just the components you need to ensure your success. Once you are more confident and additional capabilities are required, then explore those. Resist the urge to install *everything* from the start.
+- **Can this be adopted in an incremental approach?**: Some service meshes that provide a lot of capabilities can be adopted incrementally. Install just the components you need to ensure your success. If you later find that more capabilities are required, explore them then. Resist the urge to install *everything* from the start.
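To make the overhead question above concrete, a rough estimate of proxy-added latency can be sketched; the two-proxies-per-hop assumption and the millisecond figure are illustrative, not measured values:

```python
def added_latency_ms(service_hops: int, per_proxy_ms: float) -> float:
    """Estimate latency added by sidecar proxies: each service-to-service
    hop typically traverses two proxies (sender's and receiver's)."""
    return service_hops * 2 * per_proxy_ms

# A 4-hop call chain at a hypothetical 2 ms per proxy adds about 16 ms.
```

If your end-to-end latency budget is tight, an estimate like this helps you decide whether a mesh is tolerable before you install one.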
## Next steps

Open Service Mesh (OSM) is a supported service mesh that runs on Azure Kubernetes Service (AKS):

> [!div class="nextstepaction"]
-> [Learn more about OSM ...][osm-about]
+> [Learn more about OSM][osm-about]
-There are also service meshes provided by open-source projects and third parties that are commonly used with AKS. These open-source and third-party service meshes are not covered by the [AKS support policy][aks-support-policy].
+There are also service meshes provided by open-source projects and third parties that are commonly used with AKS. These service meshes aren't covered by the [AKS support policy][aks-support-policy].
- [Istio][istio]
- [Linkerd][linkerd]
There are also service meshes provided by open-source projects and third parties
For more details on the service mesh landscape, see [Layer 5's Service Mesh Landscape][service-mesh-landscape].
-For more details service mesh standardization efforts:
+For more details on service mesh standardization efforts, see:
- [Service Mesh Interface (SMI)][smi]
- [Service Mesh Federation][smf]
- [Service Mesh Performance (SMP)][smp]

<!-- LINKS - external -->
[istio]: https://istio.io/latest/docs/setup/install/
[linkerd]: https://linkerd.io/getting-started/
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023 |
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 |
-| 1.26 | Dec 2022 | Feb 2023 | Mar 2023 | Mar 2024
+| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024
| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024 |

## Alias minor version
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
You now have a Durable Functions app that can be run locally and deployed to Azu
::: zone pivot="python-mode-decorators"
-> [!NOTE]
-> Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually.
-> To do this, remove the `extensionBundle` section of your `host.json` as described [here](../functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience.
+## Requirements
+
+Version 2 of the Python programming model requires the following minimum versions:
+
+- [Azure Functions Runtime](../functions-versions.md) v4.16+
+- [Azure Functions Core Tools](../functions-run-local.md) v4.0.5095+ (if running locally)
+
+## Enable v2 programming model
+
+The following application setting is required to run the v2 programming model while it is in preview:
+- Name: `AzureWebJobsFeatureFlags`
+- Value: `EnableWorkerIndexing`
+
+If you're running locally using [Azure Functions Core Tools](../functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice:
+
+# [Azure CLI](#tab/azure-cli-set-indexing-flag)
+
+Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
+
+```azurecli
+az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
+```
+
+# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag)
+
+Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
+
+```azurepowershell
+Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"}
+```
+
+# [VS Code](#tab/vs-code-set-indexing-flag)
+
+1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.
+1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
+1. Choose your subscription and function app when prompted.
+1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>.
+1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
++ To create a basic Durable Functions app using these 3 function types, replace the contents of `function_app.py` with the following Python code.
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
Title: Event-driven scaling in Azure Functions description: Explains the scaling behaviors of Consumption plan and Premium plan function apps. Previously updated : 10/29/2020 Last updated : 04/04/2023
In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding more instances of the Functions host. The number of instances is determined by the number of events that trigger a function.
-Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resource within an instance and scale at the same time. Function apps that share the same Consumption plan scale independently. In the Premium plan, the plan size determines the available memory and CPU for all apps in that plan on that instance.
+Each instance of the Functions host in the Consumption plan is limited, typically to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resources within an instance and scale at the same time. Function apps that share the same Consumption plan scale independently. In the Premium plan, the plan size determines the available memory and CPU for all apps in that plan on that instance.
Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and can't be recovered.

## Runtime scaling
-Azure Functions uses a component called the *scale controller* to monitor the rate of events and determine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest queue message.
+Azure Functions uses a component called the *scale controller* to monitor the rate of events and determine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when you're using an Azure Queue storage trigger, it uses [target-based scaling](functions-target-based-scaling.md).
-The unit of scale for Azure Functions is the function app. When the function app is scaled out, more resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in" to zero when no functions are running within a function app.
+The unit of scale for Azure Functions is the function app. When the function app is scaled out, more resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in" when no functions are running within a function app.
![Scale controller monitoring events and creating instances](./media/functions-scale/central-listener.png)

## Cold Start
-After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can affect the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a Dedicated plan with the **Always on** setting enabled.
+After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can affect the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a [Premium plan](functions-premium-plan.md#eliminate-cold-starts) or in a Dedicated plan with the **Always on** setting enabled.
## Understanding scaling behaviors
-Scaling can vary based on several factors, and apps scale differently based on the trigger and language selected. There are a few intricacies of scaling behaviors to be aware of:
+Scaling can vary based on several factors, and apps scale differently based on the triggers and language selected. There are a few intricacies of scaling behaviors to be aware of:
-* **Maximum instances:** A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
+* **Maximum instances:** A single function app only scales out to a [maximum allowed by the plan](functions-scale.md#scale). A single instance may process more than one message or request at a time though, so there isn't a set limit on number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
* **New instance rate:** For HTTP triggers, new instances are allocated, at most, once per second. For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](functions-premium-plan.md).
-* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hubs triggers, see the [this scaling guidance](#event-hubs-triggers).
+* **Target-based scaling:** Target-based scaling provides a fast and intuitive scaling model for customers and is currently supported for the Service Bus Queues and Topics, Storage Queues, Event Hubs, and Azure Cosmos DB extensions. Make sure to review target-based scaling to understand its scaling behavior.
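The target-based model can be illustrated with a simplified calculation: the scaler divides the event backlog by a per-instance target and caps the result at the plan's instance limit. This is a sketch of the idea, not the actual scale controller logic:

```python
import math

def desired_instances(backlog: int, target_per_instance: int, max_instances: int) -> int:
    """Illustrative target-based scaling: instance count grows with the
    unprocessed-event backlog, bounded by the plan's maximum."""
    if backlog <= 0:
        return 0  # nothing to process; allow scale-in
    return min(max_instances, math.ceil(backlog / target_per_instance))
```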
## Limit scale-out
-You may wish to restrict the maximum number of instances an app used to scale out. This is most common for cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions will scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
+You may wish to restrict the maximum number of instances an app uses to scale out. This is most common for cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions will scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
```azurecli
az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT>
```
The following considerations apply for scale-in behaviors:
* For Consumption plan function apps running on Windows, only apps created after May 2021 have drain mode behaviors enabled by default.
* To enable graceful shutdown for functions using the Service Bus trigger, use version 4.2.0 or a later version of the [Service Bus Extension](functions-bindings-service-bus.md).
-## Event Hubs triggers
-
-This section describes how scaling behaves when your function uses an [Event Hubs trigger](functions-bindings-event-hubs-trigger.md) or an [IoT Hub trigger](functions-bindings-event-iot-trigger.md). In these cases, each instance of an event triggered function is backed by a single [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance. The trigger (powered by Event Hubs) ensures that only one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance can get a lease on a given partition.
-
-For example, consider an event hub as follows:
-
-* 10 partitions
-* 1,000 events distributed evenly across all partitions, with 100 messages in each partition
-
-When your function is first enabled, there's only one instance of the function. Let's call the first function instance `Function_0`. The `Function_0` function has a single instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) that holds a lease on all 10 partitions. This instance is reading events from partitions 0-9. From this point forward, one of the following happens:
-
-* **New function instances are not needed**: `Function_0` is able to process all 1,000 events before the Functions scaling logic take effect. In this case, all 1,000 messages are processed by `Function_0`.
-
-* **An additional function instance is added**: If the Functions scaling logic determines that `Function_0` has more messages than it can process, a new function app instance (`Function_1`) is created. This new function also has an associated instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor). As the underlying event hub detects that a new host instance is trying read messages, it load balances the partitions across the host instances. For example, partitions 0-4 may be assigned to `Function_0` and partitions 5-9 to `Function_1`.
-
-* **N more function instances are added**: If the Functions scaling logic determines that both `Function_0` and `Function_1` have more messages than they can process, new `Functions_N` function app instances are created. Apps are created to the point where `N` is greater than the number of event hub partitions. In our example, Event Hubs again load balances the partitions, in this case across the instances `Function_0`...`Functions_9`.
-
-As scaling occurs, `N` instances is a number greater than the number of event hub partitions. This pattern is used to ensure [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instances are available to obtain locks on partitions as they become available from other instances. You're only charged for the resources used when the function instance executes. In other words, you aren't charged for this over-provisioning.
-
-When all function execution completes (with or without errors), checkpoints are added to the associated storage account. When check-pointing succeeds, all 1,000 messages are never retrieved again.
## Best practices and patterns for scalable apps

There are many aspects of a function app that impact how it scales, including host configuration, runtime footprint, and resource efficiency. For more information, see the [scalability section of the performance considerations article](performance-reliability.md#scalability-best-practices). You should also be aware of how connections behave as your function app scales. For more information, see [How to manage connections in Azure Functions](manage-connections.md).
Useful queries and information on how to understand your consumption bill can be
## Next steps
+To learn more, see the following articles:
+
++ [Improve the performance and reliability of Azure Functions](./performance-reliability.md)
++ [Azure Functions reliable event processing](./functions-reliable-event-processing.md)
+ [Azure Functions hosting options](functions-scale.md)
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Previously updated : 03/03/2023 Last updated : 04/04/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The Azure Cosmos DB Trigger uses the [Azure Cosmos DB change feed](../cosmos-db/
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
+Cosmos DB scaling decisions for the Consumption and Premium plans are done via target-based scaling. For more information, see [Target-based scaling](functions-target-based-scaling.md).
+
::: zone pivot="programming-language-python"

Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
For Python v2 functions defined using a decorator, the following properties on t
|-|--|
|`arg_name` | The variable name used in function code that represents the list of documents with changes. |
|`database_name` | The name of the Azure Cosmos DB database with the collection being monitored. |
-|`collection_name` | The name of the Azure CosmosDB collection being monitored. |
+|`collection_name` | The name of the Azure Cosmos DB collection being monitored. |
|`connection` | The connection string of the Azure Cosmos DB being monitored. |

For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
Title: Azure Service Bus trigger for Azure Functions
description: Learn to run an Azure Function as Azure Service Bus messages are created. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/06/2023 Last updated : 04/04/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Starting with extension version 3.1.0, you can trigger on a session-enabled queu
For information on setup and configuration details, see the [overview](functions-bindings-service-bus.md).
+Service Bus scaling decisions for the Consumption and Premium plans are made based on target-based scaling. For more information, see [Target-based scaling](functions-target-based-scaling.md).
+
::: zone pivot="programming-language-python"

Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Title: Azure Queue storage trigger for Azure Functions description: Learn to run an Azure Function as Azure Queue storage data changes. Previously updated : 02/27/2023 Last updated : 04/04/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
The queue storage trigger runs a function as messages are added to Azure Queue storage.
+Azure Queue storage scaling decisions for the Consumption and Premium plans are done via target-based scaling. For more information, see [Target-based scaling](functions-target-based-scaling.md).
+
::: zone pivot="programming-language-python"

Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Next, create a function in the new function app.
1. Under **Template details** use `HttpExample` for **New Function**, select **Anonymous** from the **[Authorization level](functions-bindings-http-webhook-trigger.md#authorization-keys)** drop-down list, and then select **Create**. Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
-
- >[!NOTE]
- > When your function app has [private endpoints](functions-create-vnet.md) enabled, you must add the following [CORS origins](security-concepts.md?#restrict-cors-access).
- >
- >- `https://functions-next.azure.com`
- >- `https://functions-staging.azure.com`
- >- `https://functions.azure.com`
- >- `https://portal.azure.com`
- ## Test the function
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
Title: Azure Functions networking options description: An overview of all networking options available in Azure Functions.-+ Previously updated : 3/28/2022 Last updated : 4/6/2023
The following APIs let you programmatically manage regional virtual network inte
+ **Azure CLI**: Use the [`az functionapp vnet-integration`](/cli/azure/functionapp/vnet-integration) commands to add, list, or remove a regional virtual network integration.
+ **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/function-premium-vnet-integration/).
-## Testing
+## Testing considerations
When testing functions in a function app with private endpoints, you must do your testing from within the same virtual network, such as on a virtual machine (VM) in that network. To use the **Code + Test** option in the portal from that VM, you need to add following [CORS origins](./functions-how-to-use-azure-function-app-settings.md?tabs=portal#cors) to your function app:
-* https://functions-next.azure.com
-* https://functions-staging.azure.com
-* https://functions.azure.com
-* https://portal.azure.com
+* `https://functions-next.azure.com`
+* `https://functions-staging.azure.com`
+* `https://functions.azure.com`
+* `https://portal.azure.com`
## Troubleshooting
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
You can also explicitly declare the attribute types and return type in the funct
```python
import azure.functions

def main(req: azure.functions.HttpRequest) -> str:
    user = req.params.get('user')
    return f'Hello, {user}!'
```
Triggers and bindings can be declared and used in a function in a decorator base
```python
@app.function_name(name="HttpTrigger1")
@app.route(route="req")
def main(req):
    user = req.params.get('user')
    return f'Hello, {user}!'
```
You can also explicitly declare the attribute types and return type in the funct
```python
import azure.functions

+app = func.FunctionApp()
+
@app.function_name(name="HttpTrigger1")
@app.route(route="req")
def main(req: azure.functions.HttpRequest) -> str:
    user = req.params.get('user')
    return f'Hello, {user}!'
```
For example, the following code demonstrates the difference between the two inpu
      "direction": "in",
      "type": "blob",
      "path": "samples/{id}",
- "connection": "AzureWebJobsStorage"
+ "connection": "STORAGE_CONNECTION_STRING"
    }
  ]
}
For example, the following code demonstrates the difference between the two inpu
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
+ "STORAGE_CONNECTION_STRING": "<AZURE_STORAGE_CONNECTION_STRING>",
    "AzureWebJobsStorage": "<azure-storage-connection-string>"
  }
}
For example, the following code demonstrates the difference between the two inpu
import azure.functions as func
import logging
-def main(req: func.HttpRequest,
- obj: func.InputStream):
-
+def main(req: func.HttpRequest, obj: func.InputStream):
logging.info(f'Python HTTP-triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the `AzureWebJobsStorage` app setting, which is the same storage account that's used by the function app.
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the `STORAGE_CONNECTION_STRING` app setting.
::: zone-end ::: zone pivot="python-mode-decorators" Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're defined using different decorators, their usage is similar in Python code. Connection strings or secrets for trigger and input sources map to values in the *local.settings.json* file when they're running locally, and they map to the application settings when they're running in Azure.
As an example, the following code demonstrates how to define a Blob Storage inpu
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
+ "STORAGE_CONNECTION_STRING": "<AZURE_STORAGE_CONNECTION_STRING>",
    "AzureWebJobsStorage": "<azure-storage-connection-string>",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
import logging
app = func.FunctionApp()

@app.route(route="req")
-@app.read_blob(arg_name="obj", path="samples/{id}", connection="AzureWebJobsStorage")
-
-def main(req: func.HttpRequest,
- obj: func.InputStream):
+@app.read_blob(arg_name="obj", path="samples/{id}",
+ connection="STORAGE_CONNECTION_STRING")
+def main(req: func.HttpRequest, obj: func.InputStream):
logging.info(f'Python HTTP-triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the AzureWebJobsStorage app setting, which is the same storage account that's used by the function app.
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the `STORAGE_CONNECTION_STRING` app setting.
::: zone-end For data intensive binding operations, you may want to use a separate storage account. For more information, see [Storage account guidance](storage-considerations.md#storage-account-guidance).
To produce multiple outputs, use the `set()` method provided by the [`azure.func
      "direction": "out",
      "type": "queue",
      "queueName": "outqueue",
- "connection": "AzureWebJobsStorage"
+ "connection": "STORAGE_CONNECTION_STRING"
    },
    {
      "name": "$return",
To produce multiple outputs, use the `set()` method provided by the [`azure.func
```python
import azure.functions as func

def main(req: func.HttpRequest,
         msg: func.Out[func.QueueMessage]) -> str:
To produce multiple outputs, use the `set()` method provided by the [`azure.func
# function_app.py
import azure.functions as func
+app = func.FunctionApp()
@app.write_blob(arg_name="msg", path="output-container/{name}",
- connection="AzureWebJobsStorage")
-
+ connection="CONNECTION_STRING")
def test_function(req: func.HttpRequest, msg: func.Out[str]) -> str:
The following example logs an info message when the function is invoked via an H
```python
import logging

def main(req):
    logging.info('Python HTTP trigger function processed a request.')
```
The following example is from the HTTP trigger template for the Python v2 progra
```python
@app.function_name(name="HttpTrigger1")
@app.route(route="hello")
def test_function(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
The following example uses `os.environ["myAppSetting"]` to get the [application
```python
import logging
import os
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Get the setting named 'myAppSetting'
    my_app_setting_value = os.environ["myAppSetting"]
    logging.info(f'My app setting value:{my_app_setting_value}')
```

For local development, application settings are [maintained in the *local.settings.json* file](functions-develop-local.md#local-settings-file).
The following example uses `os.environ["myAppSetting"]` to get the [application
```python
import logging
import os
import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="req")
def main(req: func.HttpRequest) -> func.HttpResponse:
    # Get the setting named 'myAppSetting'
    my_app_setting_value = os.environ["myAppSetting"]
    logging.info(f'My app setting value:{my_app_setting_value}')
```

For local development, application settings are [maintained in the *local.settings.json* file](functions-develop-local.md#local-settings-file).
from shared_code import my_second_helper_function
# Define an HTTP trigger that accepts the ?value=<int> query parameter
# Double the value and return the result in HttpResponse
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Executing my_second_function.')

    initial_value: int = int(req.params.get('value'))
    doubled_value: int = my_second_helper_function.double(initial_value)

    return func.HttpResponse(
        body=f"{initial_value} * 2 = {doubled_value}",
        status_code=200
    )
```
import azure.functions as func
from my_second_function import main

class TestFunction(unittest.TestCase):
    def test_my_second_function(self):
        # Construct a mock HTTP request.
        req = func.HttpRequest(method='GET',
                               body=None,
                               url='/api/my_second_function',
                               params={'value': '21'})
        # Call the function.
        resp = main(req)

        # Check the output.
        self.assertEqual(resp.get_body(), b'21 * 2 = 42')
```

Inside your *.venv* Python virtual environment folder, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
from shared_code import my_second_helper_function
app = func.FunctionApp()

# Define the HTTP trigger that accepts the ?value=<int> query parameter
# Double the value and return the result in HttpResponse
@app.function_name(name="my_second_function")
You can start writing test cases for your HTTP trigger.
# <project_root>/tests/test_my_second_function.py
import unittest
import azure.functions as func

from function_app import main

class TestFunction(unittest.TestCase):
    def test_my_second_function(self):
        # Construct a mock HTTP request.
        req = func.HttpRequest(method='GET',
                               body=None,
                               url='/api/my_second_function',
                               params={'value': '21'})
        # Call the function.
        func_call = main.build().get_user_function()
        resp = func_call(req)
        # Check the output.
        self.assertEqual(
            resp.get_body(),
            b'21 * 2 = 42',
        )
```

Inside your *.venv* Python virtual environment folder, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
The following example creates a named temporary file in the temporary directory
import logging
import azure.functions as func
import tempfile
from os import listdir

#
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
+
+ Title: Target-based scaling in Azure Functions
+description: Explains target-based scaling behaviors of Consumption plan and Premium plan function apps.
Last updated: 04/04/2023
+# Target-based scaling
+
+Target-based scaling provides a fast and intuitive scaling model for customers and is currently supported for the following extensions:
+
+- Service Bus queues and topics
+- Storage Queues
+- Event Hubs
+- Azure Cosmos DB
+
+Target-based scaling replaces the previous Azure Functions incremental scaling model as the default for these extension types. Incremental scaling added or removed a maximum of one worker at [each new instance rate](event-driven-scaling.md#understanding-scaling-behaviors), with complex decisions for when to scale. In contrast, target-based scaling can scale up by four instances at a time, and the scaling decision is based on a simple target-based equation:
+
+![Illustration of the equation: desired instances = event source length / target executions per instance.](./media/functions-target-based-scaling/target-based-scaling-formula.png)
+
+The default _target executions per instance_ values come from the SDKs used by the Azure Functions extensions. You don't need to make any changes for target-based scaling to work.
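As a sketch, the scaling equation above can be expressed in a few lines of Python. This is illustrative only: the function name and the exact clamping behavior are assumptions for the example, not the scale controller's actual implementation. It captures the two facts stated here, namely that the desired instance count is the event source length divided by _target executions per instance_, and that target-based scaling adds at most four instances per scaling decision.

```python
import math

def desired_instances(event_source_length: int,
                      target_executions_per_instance: int,
                      current_instances: int,
                      scale_out_step: int = 4) -> int:
    # desired instances = event source length / target executions per instance
    target = math.ceil(event_source_length / target_executions_per_instance)
    if target > current_instances:
        # Target-based scaling can add up to four instances per decision.
        return min(target, current_instances + scale_out_step)
    return target

# 10,000 queued messages at 1,000 per instance targets 10 instances,
# but a single decision moves from 2 instances to at most 6.
print(desired_instances(10_000, 1_000, current_instances=2))
```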
+
+> [!NOTE]
+> In order to achieve the most accurate scaling based on metrics, we recommend one target-based triggered function per function app.
+
+## Prerequisites
+
+Target-based scaling is supported for the [Consumption](consumption-plan.md) and [Premium](functions-premium-plan.md) plans. Your function app runtime must be 4.3.0 or higher.
+
+## Opting out
+
+Target-based scaling is enabled by default for function apps on the Consumption plan or Premium plans without runtime scale monitoring. If you wish to disable target-based scaling and revert to incremental scaling, add the following app setting to your function app:
+
+| App Setting | Value |
+| -- | -- |
+|`TARGET_BASED_SCALING_ENABLED` | 0 |
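For example, you can add this setting with the Azure CLI. This is a sketch: `<APP_NAME>` and `<RESOURCE_GROUP>` are placeholders for your own function app and resource group names.

```shell
az functionapp config appsettings set \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings TARGET_BASED_SCALING_ENABLED=0
```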
+
+## Customizing target-based scaling
+
+You can make the scaling behavior more or less aggressive based on your app's workload by adjusting _target executions per instance_. Each extension has different settings that you can use to set _target executions per instance_.
+
+This table summarizes the `host.json` values that are used for the _target executions per instance_ values and the defaults:
+
+| Extension | host.json values | Default Value |
+| -- | -- | - |
+| Service Bus (Extension v5.x+, Single Dispatch) | extensions.serviceBus.maxConcurrentCalls | 16 |
+| Service Bus (Extension v5.x+, Single Dispatch Sessions Based) | extensions.serviceBus.maxConcurrentSessions | 8 |
+| Service Bus (Extension v5.x+, Batch Processing) | extensions.serviceBus.maxMessageBatchSize | 1000 |
+| Service Bus (Functions v2.x+, Single Dispatch) | extensions.serviceBus.messageHandlerOptions.maxConcurrentCalls | 16 |
+| Service Bus (Functions v2.x+, Single Dispatch Sessions Based) | extensions.serviceBus.sessionHandlerOptions.maxConcurrentSessions | 2000 |
+| Service Bus (Functions v2.x+, Batch Processing) | extensions.serviceBus.batchOptions.maxMessageCount | 1000 |
+| Event Hubs (Extension v5.x+) | extensions.eventHubs.maxEventBatchSize | 10 |
+| Event Hubs (Extension v3.x+) | extensions.eventHubs.eventProcessorOptions.maxBatchSize | 10 |
+| Event Hubs (if defined) | extensions.eventHubs.targetUnprocessedEventThreshold | n/a |
+| Storage Queue | extensions.queues.batchSize | 16 |
+
+For Azure Cosmos DB, _target executions per instance_ is set in the function attribute:
+
+| Extension | Function trigger setting | Default Value |
+| -- | | - |
+| Azure Cosmos DB | maxItemsPerInvocation | 100 |
+
+To learn more, see the [example configurations for the supported extensions](#supported-extensions).
+
+## Premium plan with runtime scale monitoring enabled
+
+In [runtime scale monitoring](functions-networking-options.md?tabs=azure-cli#premium-plan-with-virtual-network-triggers), the extensions handle target-based scaling. Hence, in addition to the function app runtime version requirement, your extension packages must meet the following minimum versions:
+
+| Extension Name | Minimum Version Needed |
+| -- | - |
+| Storage Queue | 5.1.0 |
+| Event Hubs | 5.2.0 |
+| Service Bus | 5.9.0 |
+| Azure Cosmos DB | 4.1.0 |
+
+Additionally, target-based scaling is currently an **opt-in** feature with runtime scale monitoring. In order to use target-based scaling with the Premium plan when runtime scale monitoring is enabled, add the following app setting to your function app:
+
+| App Setting | Value |
+| -- | -- |
+|`TARGET_BASED_SCALING_ENABLED` | 1 |
+
+## Dynamic concurrency support
+
+Target-based scaling introduces faster scaling, and uses defaults for _target executions per instance_. When using Service Bus or Storage queues, you can also enable [dynamic concurrency](functions-concurrency.md#dynamic-concurrency). In this configuration, the _target executions per instance_ value is determined automatically by the dynamic concurrency feature. It starts with limited concurrency and identifies the best setting over time.
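Dynamic concurrency itself is enabled in `host.json`. The following fragment is a sketch; verify the setting names against the dynamic concurrency article before relying on them:

```json
{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```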
+
+## Supported extensions
+
+The way in which you configure target-based scaling in your host.json file depends on the specific extension type. This section provides the configuration details for the extensions that currently support target-based scaling.
+
+### Service Bus queues and topics
+
+The Service Bus extension supports three execution models, determined by the `IsBatched` and `IsSessionsEnabled` attributes of your Service Bus trigger. The default value for `IsBatched` and `IsSessionsEnabled` is `false`.
+
+| Execution Model | IsBatched | IsSessionsEnabled | Setting Used for _target executions per instance_ |
+| | | -- | - |
+| Single dispatch processing | false | false | maxConcurrentCalls |
+| Single dispatch processing (session-based) | false | true | maxConcurrentSessions |
+| Batch processing | true | false | maxMessageBatchSize or maxMessageCount |
+
+> [!NOTE]
+> **Scale efficiency:** For the Service Bus extension, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling reverts to incremental scaling because the queue or topic length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies).
++
+#### Single dispatch processing
+
+In this model, each invocation of your function processes a single message. The `maxConcurrentCalls` setting governs _target executions per instance_. The specific setting depends on the version of the Service Bus extension.
+
+# [v5.x+](#tab/v5)
+
+Modify the `host.json` setting `maxConcurrentCalls`, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "maxConcurrentCalls": 16
+ }
+ }
+}
+```
+
+# [v2.x+](#tab/v2)
+
+Modify the `host.json` setting `maxConcurrentCalls` in `messageHandlerOptions`, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "messageHandlerOptions": {
+ "maxConcurrentCalls": 16
+ }
+ }
+ }
+}
+```
++
+#### Single dispatch processing (session-based)
+
+In this model, each invocation of your function processes a single message. However, depending on the number of active sessions for your Service Bus topic or queue, each instance leases one or more sessions. The specific setting depends on the version of the Service Bus extension.
+
+# [v5.x+](#tab/v5)
+
+Modify the `host.json` setting `maxConcurrentSessions` to set _target executions per instance_, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "maxConcurrentSessions": 8
+ }
+ }
+}
+```
+
+# [v2.x+](#tab/v2)
+
+Modify the `host.json` setting `maxConcurrentSessions` in `sessionHandlerOptions` to set _target executions per instance_, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "sessionHandlerOptions": {
+ "maxConcurrentSessions": 2000
+ }
+ }
+ }
+}
+```
++
+#### Batch processing
+
+In this model, each invocation of your function processes a batch of messages. The specific setting depends on the version of the Service Bus extension.
+
+# [v5.x+](#tab/v5)
+
+Modify the `host.json` setting `maxMessageBatchSize` to set _target executions per instance_, as in the following example:
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "maxMessageBatchSize": 1000
+ }
+ }
+}
+```
+
+# [v2.x+](#tab/v2)
+
+Modify the `host.json` setting `maxMessageCount` in `batchOptions` to set _target executions per instance_, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "serviceBus": {
+ "batchOptions": {
+ "maxMessageCount": 1000
+ }
+ }
+ }
+}
+```
++
+### Event Hubs
+
+For Azure Event Hubs, Azure Functions scales based on the number of unprocessed events distributed across all the partitions in the event hub. By default, the `host.json` attributes used for _target executions per instance_ are `maxEventBatchSize` and `maxBatchSize`. However, if you choose to fine-tune target-based scaling, you can define a separate parameter, `targetUnprocessedEventThreshold`, that overrides the batch settings to set _target executions per instance_. If `targetUnprocessedEventThreshold` is set, the total unprocessed event count is divided by this value to determine the number of instances, which is then rounded up to a worker instance count that creates a balanced partition distribution.
+
+> [!NOTE]
+> Since Event Hubs is a partitioned workload, the target instance count for Event Hubs is capped by the number of partitions in your event hub.
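As an illustrative sketch (not the actual scale controller code, and simplified in that the real service also rounds toward a balanced partition distribution), the threshold-based instance count can be thought of as:

```python
import math

def event_hubs_instances(unprocessed_events: int,
                         target_threshold: int,
                         partition_count: int) -> int:
    # Total unprocessed events divided by targetUnprocessedEventThreshold,
    # rounded up...
    desired = math.ceil(unprocessed_events / target_threshold)
    # ...and capped by the partition count, since the target instance
    # count can't exceed the number of partitions in the event hub.
    return min(desired, partition_count)

# 460 unprocessed events with a threshold of 23 suggests 20 instances,
# but an 8-partition event hub is capped at 8.
print(event_hubs_instances(460, 23, 8))
```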
+
+The specific setting depends on the version of the Event Hubs extension.
+
+# [v5.x+](#tab/v5)
+
+Modify the `host.json` setting `maxEventBatchSize` to set _target executions per instance_, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "eventHubs": {
+ "maxEventBatchSize" : 10
+ }
+ }
+}
+```
+
+When defined in `host.json`, `targetUnprocessedEventThreshold` is used as _target executions per instance_ instead of `maxEventBatchSize`, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "eventHubs": {
+ "targetUnprocessedEventThreshold": 23
+ }
+ }
+}
+```
+
+# [v3.x+](#tab/v2)
+
+For **v3.x+** of the Event Hubs extension, modify the `host.json` setting `maxBatchSize` under `eventProcessorOptions` to set _target executions per instance_:
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "eventHubs": {
+ "eventProcessorOptions": {
+ "maxBatchSize": 10
+ }
+ }
+ }
+}
+```
+
+When defined in `host.json`, `targetUnprocessedEventThreshold` is used as _target executions per instance_ instead of `maxBatchSize`, as in the following example:
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "eventHubs": {
+ "targetUnprocessedEventThreshold": 23
+ }
+ }
+}
+```
++
+### Storage Queues
+
+For **v2.x+** of the Storage extension, modify the `host.json` setting `batchSize` to set _target executions per instance_:
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "queues": {
+ "batchSize": 16
+ }
+ }
+}
+```
+
+### Azure Cosmos DB
+
+Azure Cosmos DB uses a function-level attribute, `MaxItemsPerInvocation`. The way you set this function-level attribute depends on your function language.
+
+# [C#](#tab/csharp)
+
+For a compiled C# function, set `MaxItemsPerInvocation` in your trigger definition, as shown in the following examples for an in-process C# function:
+
+```C#
+namespace CosmosDBSamplesV2
+{
+ public static class CosmosTrigger
+ {
+ [FunctionName("CosmosTrigger")]
+ public static void Run([CosmosDBTrigger(
+ databaseName: "ToDoItems",
+ collectionName: "Items",
+ MaxItemsPerInvocation: 100,
+ ConnectionStringSetting = "CosmosDBConnection",
+ LeaseCollectionName = "leases",
+ CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
+ ILogger log)
+ {
+ if (documents != null && documents.Count > 0)
+ {
+ log.LogInformation($"Documents modified: {documents.Count}");
+ log.LogInformation($"First document Id: {documents[0].Id}");
+ }
+ }
+ }
+}
+
+```
+
+# [Java](#tab/java)
+
+Java example pending.
+
+# [JavaScript/PowerShell/Python](#tab/node+powershell+python)
+
+For Functions languages that use `function.json`, the `MaxItemsPerInvocation` parameter is defined in the specific binding, as in this Azure Cosmos DB trigger example:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "cosmosDBTrigger",
+ "maxItemsPerInvocation": 100,
+ "connection": "MyCosmosDb",
+ "leaseContainerName": "leases",
+ "containerName": "collectionName",
+ "databaseName": "databaseName",
+ "leaseDatabaseName": "databaseName",
+ "createLeaseContainerIfNotExists": false,
+ "startFromBeginning": false,
+ "name": "input"
+ }
+ ]
+}
+```
+
+Examples for the Python v2 programming model and the JavaScript v4 programming model aren't yet available.
+++
+> [!NOTE]
+> Since Azure Cosmos DB is a partitioned workload, the target instance count for the database is capped by the number of physical partitions in your container. To learn more about Azure Cosmos DB scaling, see [physical partitions](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling) and [lease ownership](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling).
+
+## Next steps
+
+To learn more, see the following articles:
+- [Improve the performance and reliability of Azure Functions](./performance-reliability.md)
+- [Azure Functions reliable event processing](./functions-reliable-event-processing.md)
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md
source.add(feature)
::: zone-end
-See the [Create a data source](create-data-source-android-sdk.md) documentation for ways to create and add data to the map.
+For more information on how to create and add data to the map, see [Create a data source].
-When a user interacts with a feature on the map, events can be used to react to those actions. A common scenario is to display a message made of the metadata properties of a feature the user interacted with. The `OnFeatureClick` event is the main event used to detect when the user tapped a feature on the map. There's also an `OnLongFeatureClick` event. When adding the `OnFeatureClick` event to the map, it can be limited to a single layer by passing in the ID of a layer to limit it to. If no layer ID is passed in, tapping any feature on the map, regardless of which layer it is in, would fire this event. The following code creates a symbol layer to render point data on the map, then adds an `OnFeatureClick` event and limits it to this symbol layer.
+When a user interacts with a feature on the map, events can be used to react to those actions. A common scenario is to display a message made of the metadata properties of a feature the user interacted with. The `OnFeatureClick` event is the main event used to detect when the user tapped a feature on the map. There's also an `OnLongFeatureClick` event. When the `OnFeatureClick` event is added to the map, it can be limited to a single layer by passing in the ID of a layer to limit it to. If no layer ID is passed in, tapping any feature on the map, regardless of which layer it is in, would fire this event. The following code creates a symbol layer to render point data on the map, then adds an `OnFeatureClick` event and limits it to this symbol layer.
::: zone pivot="programming-language-java-android"
map.events.add(OnFeatureClick { features: List<Feature> ->
In addition to toast messages, There are many other ways to present the metadata properties of a feature, such as: -- [Snackbar widget](https://developer.android.com/training/snackbar/showing.html) - `Snackbars` provide lightweight feedback about an operation. They show a brief message at the bottom of the screen on mobile and lower left on larger devices. `Snackbars` appear above all other elements on screen and only one can be displayed at a time.-- [Dialogs](https://developer.android.com/guide/topics/ui/dialogs) - A dialog is a small window that prompts the user to make a decision or enter additional information. A dialog doesn't fill the screen and is normally used for modal events that require users to take an action before they can continue.-- Add a [Fragment](https://developer.android.com/guide/components/fragments) to the current activity.
+- [Snackbar widget] - `Snackbars` provide lightweight feedback about an operation. They show a brief message at the bottom of the screen on mobile and lower left on larger devices. `Snackbars` appear above all other elements on screen and only one can be displayed at a time.
+- [Dialogs] - A dialog is a small window that prompts the user to make a decision or enter additional information. A dialog doesn't fill the screen and is normally used for modal events that require users to take an action before they can continue.
+- Add a [Fragment] to the current activity.
- Navigate to another activity or view.

## Display a popup
The following screen capture shows popups appearing when features are clicked an
To add more data to your map: > [!div class="nextstepaction"]
-> [React to map events](android-map-events.md)
+> [React to map events]
> [!div class="nextstepaction"]
-> [Create a data source](create-data-source-android-sdk.md)
+> [Create a data source]
> [!div class="nextstepaction"]
-> [Add a symbol layer](how-to-add-symbol-to-android-map.md)
+> [Add a symbol layer]
> [!div class="nextstepaction"]
-> [Add a bubble layer](map-add-bubble-layer-android.md)
+> [Add a bubble layer]
> [!div class="nextstepaction"]
-> [Add a line layer](android-map-add-line-layer.md)
+> [Add a line layer]
> [!div class="nextstepaction"]
-> [Add a polygon layer](how-to-add-shapes-to-android-map.md)
+> [Add a polygon layer]
+
+[Create a data source]: create-data-source-android-sdk.md
+[Snackbar widget]: https://developer.android.com/training/snackbar/showing.html
+[Dialogs]: https://developer.android.com/guide/topics/ui/dialogs
+[Fragment]: https://developer.android.com/guide/components/fragments
+[React to map events]: android-map-events.md
+[Add a symbol layer]: how-to-add-symbol-to-android-map.md
+[Add a bubble layer]: map-add-bubble-layer-android.md
+[Add a line layer]: android-map-add-line-layer.md
+[Add a polygon layer]: how-to-add-shapes-to-android-map.md
azure-maps Display Feature Information Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-ios-sdk.md
feature.addProperty("title", value: "Hello World!")
source.add(feature: feature)
```
-See the [Create a data source](create-data-source-ios-sdk.md) documentation for ways to create and add data to the map.
+For more information on how to create and add data to the map, see [Create a data source].
-When a user interacts with a feature on the map, events can be used to react to those actions. A common scenario is to display a message made of the metadata properties of a feature the user interacted with. The `azureMap(_:didTapOn:)` event is the main event used to detect when the user tapped a feature on the map. There's also an `azureMap(_:didLongPressOn:)` event. When adding a delegate to the map, it can be limited to a single layer by passing in the ID of a layer to limit it to. If no layer ID is passed in, tapping any feature on the map, regardless of which layer it is in, would fire this event. The following code creates a symbol layer to render point data on the map, then adds a delegate, limited to this symbol layer, which handles the `azureMap(_:didTapOn:)` event.
+When a user interacts with a feature on the map, events can be used to react to those actions. A common scenario is to display a message made of the metadata properties of a feature the user interacted with. The `azureMap(_:didTapOn:)` event is the main event used to detect when the user tapped a feature on the map. There's also an `azureMap(_:didLongPressOn:)` event. When a delegate is added to the map, it can be limited to a single layer by passing in the ID of a layer to limit it to. If no layer ID is passed in, tapping any feature on the map, regardless of which layer it is in, would fire this event. The following code creates a symbol layer to render point data on the map, then adds a delegate, limited to this symbol layer, which handles the `azureMap(_:didTapOn:)` event.
```swift
// Create a symbol and add it to the map.
func azureMap(_ map: AzureMap, didTapOn features: [Feature]) {
The following screen capture shows popups appearing when features are tapped and staying anchored to their specified location on the map as it moves.

## Additional information
To add more data to your map:
- [Add a bubble layer](add-bubble-layer-map-ios.md)
- [Add a line layer](add-line-layer-map-ios.md)
- [Add a polygon layer](add-polygon-layer-map-ios.md)
+
+[Create a data source]: create-data-source-ios-sdk.md
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
# Drawing conversion errors and warnings
-The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) lets you convert uploaded drawing packages into map data. Drawing packages must adhere to the [Drawing package requirements](drawing-requirements.md). If one or more requirements aren't met, then the Conversion service will return errors or warnings. This article lists the conversion error and warning codes, with recommendations on how to resolve them. It also provides some examples of drawings that can cause the Conversion service to return these codes.
+The Azure Maps [Conversion service] lets you convert uploaded drawing packages into map data. Drawing packages must adhere to the [Drawing package requirements]. If one or more requirements aren't met, then the Conversion service returns errors or warnings. This article lists the conversion error and warning codes, with recommendations on how to resolve them. It also provides some examples of drawings that can cause the Conversion service to return these codes.
-The Conversion service will succeed if there are any conversion warnings. However, it's recommended that you review and resolve all warnings. A warning means part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in latter processes.
+The Conversion service succeeds even if there are conversion warnings. However, it's recommended that you review and resolve all warnings. A warning means part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes.
## General Warnings
The Conversion service will succeed if there are any conversion warnings. Howeve
#### *Description for geometryWarning*
-A **geometryWarning** occurs when the drawing contains an invalid entity. An invalid entity is an entity that doesn't conform to geometric constraints. Examples of an invalid entity are a self-intersecting polygon or a non-closed PolyLine in a layer that only supports closed geometry.
+A **geometryWarning** occurs when the drawing contains an invalid entity. An invalid entity is an entity that doesn't conform to geometric constraints. Examples of an invalid entity are a self-intersecting polygon or an open PolyLine in a layer that only supports closed geometry.
The Conversion service is unable to create a map feature from an invalid entity and instead ignores it. #### *Examples for geometryWarning*
-* The two images below show examples of self-intersecting polygons.
+* The following two images show examples of self-intersecting polygons.
![Example of a self-intersecting polygon, example one.](./media/drawing-conversion-error-codes/geometry-warning-1.png) ![Example of a self-intersecting polygon, example two.](./media/drawing-conversion-error-codes/geometry-warning-2.png)
-* Below is an image that shows a non-closed PolyLine. Assume that the layer only supports closed geometry.
+* The following image shows an open PolyLine. Assume that the layer only supports closed geometry.
- ![Example of a non-closed PolyLine](./media/drawing-conversion-error-codes/geometry-warning-3.png)
+ ![Example of an open PolyLine](./media/drawing-conversion-error-codes/geometry-warning-3.png)
#### *How to fix geometryWarning*
Inspect the **geometryWarning** for each entity to verify that it follows geomet
#### *Description for unexpectedGeometryInLayer*
-An **unexpectedGeometryInLayer** warning occurs when the drawing contains geometry that is incompatible with the expected geometry type for a given layer. When the Conversion service returns an **unexpectedGeometryInLayer** warning, it will ignore that geometry.
+An **unexpectedGeometryInLayer** warning occurs when the drawing contains geometry that is incompatible with the expected geometry type for a given layer. When the Conversion service returns an **unexpectedGeometryInLayer** warning, it ignores that geometry.
#### *Example for unexpectedGeometryInLayer*
-The image below shows a non-closed PolyLine. Assume that the layer only supports closed geometry.
+The following image shows an open PolyLine. Assume that the layer only supports closed geometry.
-![Example of a non-closed PolyLine](./media/drawing-conversion-error-codes/geometry-warning-3.png)
+![Example of an open PolyLine](./media/drawing-conversion-error-codes/geometry-warning-3.png)
#### *How to fix unexpectedGeometryInLayer*
The **unsupportedFeatureRepresentation** warning occurs when the drawing contain
#### *Example for unsupportedFeatureRepresentation*
-The image below shows an unsupported entity type as a multi-line text object on a label layer.
+The following image shows an unsupported entity type as a multi-line text object on a label layer.
![Example of a multi-line text object on label layer](./media/drawing-conversion-error-codes/multi-line.png) #### *How to fix unsupportedFeatureRepresentation*
-Ensure that your DWG files contain only the supported entity types. Supported types are listed under the [Drawing files requirements](drawing-requirements.md#drawing-package-requirements) section in the drawing package requirements article.
+Ensure that your DWG files contain only the supported entity types. Supported types are listed under the [Drawing files requirements] section in the drawing package requirements article.
### **automaticRepairPerformed**
The **automaticRepairPerformed** warning occurs when the Conversion service auto
![Example of a self-intersecting polygon repaired](./media/drawing-conversion-error-codes/automatic-repair-1.png)
-* The image below shows how the Conversion service snapped the first and last vertex of a non-closed PolyLine to create a closed PolyLine, where the first and last vertex were less than 1 mm apart.
+* The following image shows the Conversion service snapping the first and last vertex of an open PolyLine to create a closed PolyLine, where the first and last vertex were less than 1 mm apart.
![Example of a snapped PolyLine](./media/drawing-conversion-error-codes/automatic-repair-2.png)
-* The image below shows how, in a layer that supports only closed PolyLines, the Conversion service repaired multiple non-closed PolyLines. To avoid discarding the non-closed PolyLines, the service combined them into a single closed PolyLine.
+* The following image shows how, in a layer that supports only closed PolyLines, the Conversion service repaired multiple open PolyLines. To avoid discarding the open PolyLines, the service combined them into a single closed PolyLine.
- ![Example of non-closed Polylines combined into a single closed PolyLine](./media/drawing-conversion-error-codes/automatic-repair-3.png)
+ ![Example of open Polylines combined into a single closed PolyLine](./media/drawing-conversion-error-codes/automatic-repair-3.png)
#### *How to fix automaticRepairPerformed*
The **redundantAttribution** warning occurs when the manifest contains redundant
#### *Examples for redundantAttribution*
-* The JSON snippet below contains two or more `unitProperties` objects with the same `name`.
+* The following JSON example contains two or more `unitProperties` objects with the same `name`.
```json
"unitProperties": [
The **redundantAttribution** warning occurs when the manifest contains redundant
]
```
-* In the JSON snippet below, two or more `zoneProperties` objects have the same `name`.
+* In the following JSON snippet, two or more `zoneProperties` objects have the same `name`.
```json
"zoneProperties": [
The **wallOutsideLevel** warning occurs when the drawing contains a Wall geometr
#### *Example for wallOutsideLevel*
-* The image below shows an interior wall, in red, outside the yellow level boundary.
+* The following image shows an interior wall, in red, outside the yellow level boundary.
![Example of interior wall outside the level boundary](./media/drawing-conversion-error-codes/wall-outside-level.png)
To fix a **labelWarning**, ensure that:
An **invalidArchiveFormat** error occurs when the drawing package is in an invalid archive format such as GZIP or 7-Zip. Only the ZIP archive format is supported.
-An **invalidArchiveFormat** error will also occur if the ZIP archive is empty.
+An **invalidArchiveFormat** error also occurs if the ZIP archive is empty.
#### *How to fix invalidArchiveFormat*
To fix a **dwgError**, inspect your _manifest.json_ file to confirm that:
An **invalidJsonFormat** error occurs when the _manifest.json_ file can't be read.
-The _manifest.json_file can't be read because of JSON formatting or syntax errors. To learn more about how JSON format and syntax, see [The JavaScript Object Notation (JSON) Data Interchange Format](https://tools.ietf.org/html/rfc7159)
+The _manifest.json_ file can't be read because of JSON formatting or syntax errors. To learn more about JSON format and syntax, see [The JavaScript Object Notation (JSON) Data Interchange Format].
#### *How to fix invalidJsonFormat*
A **missingRequiredField** error occurs when the _manifest.json_ file is missing
#### *How to fix missingRequiredField*
-To fix a **missingRequiredField** error, verify that the manifest contains all required properties. For a full list of required manifest object, see the [manifest section in the Drawing package requirements](drawing-requirements.md#manifest-file-requirements)
+To fix a **missingRequiredField** error, verify that the manifest contains all required properties. For a full list of required manifest objects, see the [manifest section in the Drawing package requirements].
### **missingManifest**
The **conflict** error occurs when the _manifest.json_ file contains conflicting
#### *Example scenario for conflict*
-The Conversion service will return a **conflict** error when more than one level is defined with the same level ordinal. The following JSON snippet shows two levels defined with the same ordinal.
+The Conversion service returns a **conflict** error when more than one level is defined with the same level ordinal. The following JSON snippet shows two levels defined with the same ordinal.
```JSON
"buildingLevels":
The **invalidGeoreference** error occurs because of one or more of the following
#### *Example scenario for invalidGeoreference*
-In the JSON snippet below, the latitude is above the upper limit.
+In the following JSON snippet, the latitude is above the upper limit.
```json
"georeference"
In the JSON snippet below, the latitude is above the upper limit.
To fix an **invalidGeoreference** error, verify that the georeferenced values are within range.
->[!IMPORTANT]
->In GeoJSON, the coordinates order is longitude and latitude. If you don't use the correct order, you may accidentally refer a latitude or longitude value that is out of range.
+> [!IMPORTANT]
+> In GeoJSON, the coordinate order is longitude and latitude. If you don't use the correct order, you may accidentally refer to a latitude or longitude value that is out of range.
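The coordinate-order rule lends itself to a quick pre-upload check. The following Python sketch uses a hypothetical `validate_lon_lat` helper (not part of the Conversion service) to flag a swapped pair:

```python
# Hypothetical helper: validate a GeoJSON-style [longitude, latitude] pair.
# GeoJSON orders coordinates as [lon, lat]; swapping them can push a
# latitude past its +/-90 limit and trigger an invalidGeoreference error.

def validate_lon_lat(coordinates):
    """Return a list of problems for a [longitude, latitude] pair."""
    lon, lat = coordinates
    problems = []
    if not -180 <= lon <= 180:
        problems.append(f"longitude {lon} out of range [-180, 180]")
    if not -90 <= lat <= 90:
        problems.append(f"latitude {lat} out of range [-90, 90]")
    return problems

print(validate_lon_lat([-122.3, 47.6]))   # correct [lon, lat] order -> []
print(validate_lon_lat([47.6, -122.3]))   # swapped order -> latitude out of range
```

Swapping the pair puts a longitude value where a latitude belongs, which is how an out-of-range **invalidGeoreference** error commonly arises.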
## Wall errors
-### **wallError**
+### **wallError**s
#### *Description for wallError*
The **verticalPenetrationError** occurs because of one or more of the following
#### *Example scenario for verticalPenetrationError*
-The image below shows a vertical penetration area with no overlapping vertical penetration areas on levels above or below it.
+The following image shows a vertical penetration area with no overlapping vertical penetration areas on levels above or below it.
![Example of a vertical penetration 1](./media/drawing-conversion-error-codes/vrt-2.png)
The following image shows a vertical penetration area that overlaps more than on
#### How to fix verticalPenetrationError
-To fix a **verticalPenetrationError** error, read about how to use a vertical penetration feature in the [Drawing package requirements](drawing-requirements.md) article.
+To fix a **verticalPenetrationError** error, read about how to use a vertical penetration feature in the [Drawing package requirements] article.
## Next steps
> [!div class="nextstepaction"]
-> [How to use Azure Maps Drawing error visualizer](drawing-error-visualizer.md)
+> [How to use Azure Maps Drawing error visualizer]
> [!div class="nextstepaction"]
-> [Drawing Package Guide](drawing-package-guide.md)
+> [Drawing Package Guide]
> [!div class="nextstepaction"]
-> [Creator for indoor mapping](creator-indoor-maps.md)
+> [Creator for indoor mapping]
+
+[Conversion service]: /rest/api/maps/v2/conversion
+[Drawing package requirements]: drawing-requirements.md
+[Drawing files requirements]: drawing-requirements.md#drawing-package-requirements
+[The JavaScript Object Notation (JSON) Data Interchange Format]: https://tools.ietf.org/html/rfc7159
+[manifest section in the Drawing package requirements]: drawing-requirements.md#manifest-file-requirements
+[How to use Azure Maps Drawing error visualizer]: drawing-error-visualizer.md
+[Drawing Package Guide]: drawing-package-guide.md
+[Creator for indoor mapping]: creator-indoor-maps.md
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Create and run custom availability tests by using Azure Functions
-description: This article explains how to create an Azure function with TrackAvailability() that will run periodically according to the configuration given in a TimerTrigger function.
+ Title: Review TrackAvailability() test results
+description: This article explains how to review data logged by TrackAvailability() tests.
Previously updated : 03/22/2023 Last updated : 04/06/2023
-# Create and run custom availability tests by using Azure Functions
+# Review TrackAvailability() test results
-This article explains how to create an Azure function with `TrackAvailability()` that will run periodically according to the configuration given in the `TimerTrigger` function with your own business logic. The results of this test will be sent to your Application Insights resource, where you can query for and alert on the availability results data. Then you can create customized tests similar to what you can do via [availability monitoring](./availability-overview.md) in the Azure portal. By using customized tests, you can:
+This article explains how to review TrackAvailability() test results in the Azure portal and query the data using Log Analytics.
+## Prerequisites
-- Write more complex availability tests than is possible by using the portal UI.
-- Monitor an app inside of your Azure virtual network.
-- Change the endpoint address.
-- Create an availability test even if this feature isn't available in your region.
-
-> [!NOTE]
-> This example is designed solely to show you the mechanics of how the `TrackAvailability()` API call works within an Azure function. It doesn't show you how to write the underlying HTTP test code or business logic that's required to turn this example into a fully functional availability test. By default, if you walk through this example, you'll be creating a basic availability HTTP GET test.
->
-> To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor.
-
-## Create a timer trigger function
-
-1. Create an Azure Functions resource.
- - If you already have an Application Insights resource:
-
- - By default, Azure Functions creates an Application Insights resource. But if you want to use a resource you created previously, you must specify that during creation.
- - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
-
- On the **Monitoring** tab, select the **Application Insights** dropdown box and then enter or select the name of your resource.
-
- :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="Screenshot that shows selecting your existing Application Insights resource on the Monitoring tab.":::
-
- - If you don't have an Application Insights resource created yet for your timer-triggered function:
- - By default, when you're creating your Azure Functions application, it will create an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
-
- > [!NOTE]
- > You can host your functions on a Consumption, Premium, or App Service plan. If you're testing behind a virtual network or testing nonpublic endpoints, you'll need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab. Ensure the latest .NET version is selected when you create the function app.
-1. Create a timer trigger function.
- 1. In your function app, select the **Functions** tab.
- 1. Select **Add**. On the **Add function** pane, select the following configurations:
- 1. **Development environment**: **Develop in portal**
- 1. **Select a template**: **Timer trigger**
- 1. Select **Add** to create the timer trigger function.
-
- :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot that shows how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
-
-## Add and edit code in the App Service Editor
-
-Go to your deployed function app, and under **Development Tools**, select the **App Service Editor** tab.
-
-To create a new file, right-click under your timer trigger function (for example, **TimerTrigger1**) and select **New File**. Then enter the name of the file and select **Enter**.
-
-1. Create a new file called **function.proj** and paste the following code:
-
- ```xml
- <Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>netstandard2.0</TargetFramework>
- </PropertyGroup>
- <ItemGroup>
-        <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you're using the latest version -->
- </ItemGroup>
- </Project>
- ```
-
- :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot that shows function.proj in the App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
-
-1. Create a new file called **runAvailabilityTest.csx** and paste the following code:
-
- ```csharp
- using System.Net.Http;
-
- public async static Task RunAvailabilityTestAsync(ILogger log)
- {
- using (var httpClient = new HttpClient())
- {
- // TODO: Replace with your business logic
- await httpClient.GetStringAsync("https://www.bing.com/");
- }
- }
- ```
-
-1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
-
- Run the following command in the [Azure CLI](https://learn.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
-
- ```azurecli
- az account list-locations -o table
- ```
-
-1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
-
- ```csharp
- #load "runAvailabilityTest.csx"
-
- using System;
-
- using System.Diagnostics;
-
- using Microsoft.ApplicationInsights;
-
- using Microsoft.ApplicationInsights.Channel;
-
- using Microsoft.ApplicationInsights.DataContracts;
-
- using Microsoft.ApplicationInsights.Extensibility;
-
- private static TelemetryClient telemetryClient;
-
- // =============================================================
-
- // ****************** DO NOT MODIFY THIS FILE ******************
-
- // Business logic must be implemented in RunAvailabilityTestAsync function in runAvailabilityTest.csx
-
- // If this file does not exist, please add it first
-
- // =============================================================
-
- public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext)
-
- {
- if (telemetryClient == null)
- {
- // Initializing a telemetry configuration for Application Insights based on connection string
-
- var telemetryConfiguration = new TelemetryConfiguration();
- telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
- telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
- telemetryClient = new TelemetryClient(telemetryConfiguration);
- }
-
- string testName = executionContext.FunctionName;
- string location = Environment.GetEnvironmentVariable("REGION_NAME");
- var availability = new AvailabilityTelemetry
- {
- Name = testName,
-
- RunLocation = location,
-
- Success = false,
- };
-
- availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
- availability.Context.Operation.Id = Activity.Current.RootId;
- var stopwatch = new Stopwatch();
- stopwatch.Start();
-
- try
- {
- using (var activity = new Activity("AvailabilityContext"))
- {
- activity.Start();
- availability.Id = Activity.Current.SpanId.ToString();
- // Run business logic
- await RunAvailabilityTestAsync(log);
- }
- availability.Success = true;
- }
-
- catch (Exception ex)
- {
- availability.Message = ex.Message;
- throw;
- }
-
- finally
- {
- stopwatch.Stop();
- availability.Duration = stopwatch.Elapsed;
- availability.Timestamp = DateTimeOffset.UtcNow;
- telemetryClient.TrackAvailability(availability);
- telemetryClient.Flush();
- }
- }
-
- ```
+> [!div class="checklist"]
+> - [Azure subscription](https://azure.microsoft.com/free) and user account with the ability to create and delete resources
+> - [Workspace-based Application Insights resource](create-workspace-resource.md)
+> - Custom [Azure Functions app](../../azure-functions/functions-overview.md#introduction-to-azure-functions) running [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) with your own business logic
## Check availability
-To make sure everything is working, look at the graph on the **Availability** tab of your Application Insights resource.
+Start by reviewing the graph on the **Availability** tab of your Application Insights resource.
> [!NOTE]
> Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
You can use Log Analytics to view your availability results, dependencies, and m
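As a sketch of that Log Analytics workflow, the following Python fragment assumes the `azure-monitor-query` and `azure-identity` packages and a caller-supplied workspace ID; the KQL targets the `AppAvailabilityResults` table that workspace-based resources write to:

```python
# Sketch: query TrackAvailability() results from the Log Analytics workspace
# backing a workspace-based Application Insights resource.
# Assumes the azure-monitor-query and azure-identity packages; the workspace
# ID passed by the caller is a placeholder for your own.
from datetime import timedelta

# Success rate per test name and run location over the last day.
AVAILABILITY_QUERY = """
AppAvailabilityResults
| where TimeGenerated > ago(1d)
| summarize successRate = avg(toint(Success) * 100.0) by Name, Location
| order by successRate asc
"""

def query_availability(workspace_id):
    # Imported lazily so the query text can be reused without the SDK installed.
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())
    return client.query_workspace(
        workspace_id, AVAILABILITY_QUERY, timespan=timedelta(days=1)
    )
```

Call `query_availability("<your-workspace-id>")` with valid credentials and iterate over the returned `tables` to inspect rows.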
## Next steps
-- [Application Map](./app-map.md)
-- [Transaction diagnostics](./transaction-diagnostics.md)
+* [Standard tests](availability-standard-tests.md)
+* [Availability alerts](availability-alerts.md)
+* [Application Map](./app-map.md)
+* [Transaction diagnostics](./transaction-diagnostics.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), a [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files. + ## <a name="requirements-for-active-directory-connections"></a>Requirements and considerations for Active Directory connections > [!IMPORTANT]
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 03/13/2023 Last updated : 04/06/2023 # SMB FAQs for Azure NetApp Files
Both [Azure Active Directory Domain Services (Azure AD DS)](../active-directory-
If you're using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
+## How do the Netlogon protocol changes in the April 2023 Windows Update affect Azure NetApp Files?
+
+The Windows April 2023 update will include a patch for Netlogon protocol changes; however, these changes aren't enforced at this time.
+
+You shouldn't set the `RequireSeal` value to 2 at this time. Azure NetApp Files adds support for setting `RequireSeal` to 2 in May 2023.
+
+Enforcement of the `RequireSeal` value of 2 occurs by default with the June 2023 Azure update.
+
+For more information, see [KB5021130: How to manage the Netlogon protocol changes related to CVE-2022-38023](https://support.microsoft.com/topic/kb5021130-how-to-manage-the-netlogon-protocol-changes-related-to-cve-2022-38023-46ea3067-3989-4d40-963c-680fd9e8ee25#timing5021130).
+ ## What versions of Windows Server Active Directory are supported? Azure NetApp Files supports Windows Server 2008r2SP1-2019 versions of Active Directory Domain Services.
Yes, Azure NetApp Files supports [Alternate Data Streams (ADS)](/openspecs/windo
SMB/CIFS oplocks (opportunistic locks) enable the redirector on a SMB/CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic. SMB/CIFS oplocks are enabled on Azure NetApp Files SMB and dual-protocol volumes. + ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Once you've [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. When you're modifying an Active Directory connection, not all configurations are modifiable. + ## Modify Active Directory connections 1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Proper Active Directory Domain Services (AD DS) design and planning are key to s
This article provides recommendations to help you develop an AD DS deployment strategy for Azure NetApp Files. Before reading this article, you need to have a good understanding about how AD DS works on a functional level. + ## <a name="ad-ds-requirements"></a> Identify AD DS requirements for Azure NetApp Files Before you deploy Azure NetApp Files volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. _Incorrect or incomplete AD DS integration with Azure NetApp Files might cause client access interruptions or outages for SMB, dual-protocol, or Kerberos NFSv4.1 volumes_.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Bicep provides concise syntax, reliable type safety, and support for code reuse.
Bicep provides the following advantages:
-- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resources types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.
+- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resource types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.
- **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages. Bicep syntax is declarative and specifies which resources and resource properties you want to deploy. The following examples show the difference between a Bicep file and the equivalent JSON template. Both examples deploy a storage account.
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 12/12/2022 Last updated : 04/06/2023
As an administrator, you can lock an Azure subscription, resource group, or resource to protect them from accidental user deletions and modifications. The lock overrides any user permissions. + You can set locks that prevent either deletions or modifications. In the portal, these locks are called **Delete** and **Read-only**. In the command line, these locks are called **CanNotDelete** and **ReadOnly**. - **CanNotDelete** means authorized users can read and modify a resource, but they can't delete it.
lockid=$(az lock show --name LockSite --resource-group exampleresourcegroup --o
az lock delete --ids $lockid ```
+### Python
+
+You lock deployed resources with Python by using the [ManagementLockClient.management_locks.create_or_update_at_resource_group_level](/python/api/azure-mgmt-resource/azure.mgmt.resource.locks.v2016_09_01.operations.managementlocksoperations#azure-mgmt-resource-locks-v2016-09-01-operations-managementlocksoperations-create-or-update-at-resource-group-level) command for resource groups, and the corresponding `create_or_update_at_resource_level` command for individual resources.
+
+To lock a resource, provide the name of the resource, its resource type, and its resource group name.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.create_or_update_at_resource_level(
+ "exampleGroup",
+ "Microsoft.Web",
+ "",
+ "sites",
+ "examplesite",
+ "lockSite",
+ {
+ "level": "CanNotDelete"
+ }
+)
+```
+
+To lock a resource group, provide the name of the resource group.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.create_or_update_at_resource_group_level(
+ "exampleGroup",
+ "lockGroup",
+ {
+ "level": "CanNotDelete"
+ }
+)
+```
+
+To list all the locks in your subscription, use [ManagementLockClient.management_locks.list_at_subscription_level](/python/api/azure-mgmt-resource/azure.mgmt.resource.locks.v2016_09_01.operations.managementlocksoperations#azure-mgmt-resource-locks-v2016-09-01-operations-managementlocksoperations-list-at-subscription-level):
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.list_at_subscription_level()
+
+for lock in lock_result:
+ print(f"Lock name: {lock.name}")
+ print(f"Lock level: {lock.level}")
+ print(f"Lock notes: {lock.notes}")
+```
+
+To get a lock for a resource, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.get_at_resource_level(
+ "exampleGroup",
+ "Microsoft.Web",
+ "",
+ "sites",
+ "examplesite",
+ "lockSite"
+)
+
+print(f"Lock ID: {lock_result.id}")
+print(f"Lock Name: {lock_result.name}")
+print(f"Lock Level: {lock_result.level}")
+```
+
+To get a lock for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_result = lock_client.management_locks.get_at_resource_group_level(
+ "exampleGroup",
+ "lockGroup"
+)
+
+print(f"Lock ID: {lock_result.id}")
+print(f"Lock Level: {lock_result.level}")
+```
+
+To delete a lock for a resource, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_client.management_locks.delete_at_resource_level(
+ "exampleGroup",
+ "Microsoft.Web",
+ "",
+ "sites",
+ "examplesite",
+ "lockSite"
+)
+```
+
+To delete a lock for a resource group, use:
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ManagementLockClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+lock_client = ManagementLockClient(credential, subscription_id)
+
+lock_client.management_locks.delete_at_resource_group_level("exampleGroup", "lockGroup")
+```
+ ### REST API You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API lets you create and delete locks and retrieve information about existing locks.
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
Last updated 05/12/2022
# Enable Managed SNAT for Azure VMware Solution workloads
-In this article, you'll learn how to enable Azure VMware SolutionΓÇÖs Managed Source NAT (SNAT) to connect to the Internet outbound. A SNAT service translates from RFC1918 space to the public Internet for simple outbound Internet access. The SNAT service won't work when you have a default route from Azure.
+In this article, you'll learn how to enable Azure VMware Solution's Managed Source NAT (SNAT) to connect to the Internet outbound. A SNAT service translates from RFC1918 space to the public Internet for simple outbound Internet access. Note that ICMP (ping) is disabled by design; you cannot ping an Internet host. The SNAT service won't work when you have a default route from Azure.
With this capability, you:
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
Previously updated : 04/04/2023 Last updated : 04/06/2023 recommendations: false
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--|--|
| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, a list of strings, or a list of token lists. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model will generate as if from the beginning of a new document. |
| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
-| ```temperature``` | number | Optional | 1 | What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
+| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
| ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse |
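The temperature and top_p guidance above can be made concrete with a small request-body builder. This Python sketch is illustrative only; the helper name and the mutual-exclusion check reflect the table's recommendation, not a constraint enforced by the service:

```python
# Sketch: build a request body for the completions endpoint, following the
# parameter table above. The helper name and validation are illustrative.
import json

def build_completion_request(prompt, temperature=1.0, top_p=None, max_tokens=16):
    """Build a completions request body; alter temperature or top_p, not both."""
    if top_p is not None and temperature != 1.0:
        raise ValueError("Alter temperature or top_p, not both")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    body = {"prompt": prompt, "max_tokens": max_tokens, "temperature": temperature}
    if top_p is not None:
        body["top_p"] = top_p
    return body

# A creative-writing request per the table's guidance (temperature ~0.9).
print(json.dumps(build_completion_request("Write a tagline:", temperature=0.9)))
```

POST the serialized body to `https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions` with your API key; both values are placeholders for your own resource.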
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md
This quickstart gets you started with Bring your own Azure storage for Call Recording. To start using Bring your own Azure Storage functionality, make sure you're familiar with the [Call Recording APIs](../../voice-video-calling/get-started-call-recording.md).
+You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign up using https://aka.ms/acs-tap-invite. Bring your own Azure Storage uses Managed Identities. To access this functionality for Call Recording, submit your Azure Communication Services Resource IDs by filling out this [Registration form](https://forms.office.com/r/njact5SiVJ). You need to fill out the form every time you need a new resource ID allow-listed.
+ ## Pre-requisite: Setting up Managed Identity and RBAC role assignments

+ ### 1. Enable system assigned managed identity for Azure Communication Services
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As a Container Apps environment is created, you provide resource IDs for a singl
If you're using the CLI, the parameter to define the subnet resource ID is `infrastructure-subnet-resource-id`. The subnet hosts infrastructure components and user app containers.
+In addition, if you're using the Azure CLI with the Consumption only architecture and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
+ ### Subnet Address Range Restrictions

Subnet address ranges can't overlap with the following ranges reserved by AKS:
In addition, Container Apps on the workload profiles architecture reserve the fo
- 100.100.160.0/19 - 100.100.192.0/19
-If you're using the Azure CLI and the [platformReservedCidr](vnet-custom-internal.md#networking-parameters) range is defined, both subnets must not overlap with the IP range defined in `platformReservedCidr`.
-
## Routes

User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles architecture, which is in preview. In the Consumption only architecture, these features aren't supported.
In addition to the [Azure Container Apps billing](./billing.md), you're billed f
- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.

#### Workload profiles architecture
-The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `me_` by default, and the resource group name *can* be customized during container app environment creation. For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a Load Balancer.
+The name of the resource group created in the Azure subscription where your environment is hosted is prefixed with `ME_` by default, and the resource group name *can* be customized during container app environment creation. For external environments, the resource group contains a public IP address used specifically for inbound connectivity to your external environment and a load balancer. For internal environments, the resource group only contains a Load Balancer.
In addition to the [Azure Container Apps billing](./billing.md), you're billed for:

- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress in external environments and one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
The following example shows you how to create a Container Apps environment in an
<!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
+> [!NOTE]
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
+ 7. Select the **Networking** tab to create a VNET.
+ 8. Select **Yes** next to *Use your own virtual network*.
+ 9. Next to the *Virtual network* box, select the **Create new** link and enter the following value.
$VnetName = 'my-custom-vnet'
Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container app instance.

> [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
+> Network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
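The difference between the two minimum subnet sizes can be illustrated with Python's standard `ipaddress` module. This is an illustrative aside, not part of the deployment steps; the example address ranges are arbitrary.

```python
import ipaddress

# Illustrative sketch: comparing the address capacity of the /23 minimum
# (Consumption only architecture) with the /27 minimum (Workload Profiles
# architecture). The specific ranges here are arbitrary examples.
consumption_min = ipaddress.ip_network("10.0.0.0/23")
workload_profiles_min = ipaddress.ip_network("10.0.8.0/27")

print(consumption_min.num_addresses)        # 512
print(workload_profiles_min.num_addresses)  # 32
```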
# [Bash](#tab/bash)
az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name infrastructure-subnet \
- --address-prefixes 10.0.0.0/21
+ --address-prefixes 10.0.0.0/23
```

# [Azure PowerShell](#tab/azure-powershell)
az network vnet subnet create \
```azurepowershell
$SubnetArgs = @{
  Name = 'infrastructure-subnet'
- AddressPrefix = '10.0.0.0/21'
+ AddressPrefix = '10.0.0.0/23'
}
$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
```
$vnet = New-AzVirtualNetwork @VnetArgs
-> [!NOTE]
-> Network subnet address prefix requires a minimum CIDR range of `/23`.
-
With the VNET established, you can now query for the infrastructure subnet ID.

# [Bash](#tab/bash)
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md). |
| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
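The `platform-reserved-dns-ip` rule above can be sketched in a few lines of Python. The helper `first_usable_dns_ip` is hypothetical and for illustration only; it simply encodes the "skip the network address and the first reserved host" rule.

```python
import ipaddress

# Sketch of the rule above: given platform-reserved-cidr, the DNS IP
# can't be the network address or the first host (reserved by the
# infrastructure), so the first usable value is the second host address.
# first_usable_dns_ip is a hypothetical illustrative helper.
def first_usable_dns_ip(platform_reserved_cidr: str) -> str:
    net = ipaddress.ip_network(platform_reserved_cidr)
    # Skip two addresses past the network address.
    return str(net.network_address + 2)

print(first_usable_dns_ip("10.2.0.0/16"))  # 10.2.0.2
```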
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md). |
| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]

> [!NOTE]
-> Network address prefixes requires a CIDR range of `/23` or larger.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
7. Select the **Networking** tab to create a VNET.
8. Select **Yes** next to *Use your own virtual network*.
$VnetName = 'my-custom-vnet'
Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment.

> [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
+> Network subnet address prefix requires a minimum CIDR range of `/23` for use with Container Apps when using the Consumption only Architecture. When using the Workload Profiles Architecture, a `/27` or larger is required. To learn more about subnet sizing, see the [networking architecture overview](./networking.md#subnet).
# [Bash](#tab/bash)
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md). |
| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|||
-| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12`. |
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12` when using the [Consumption only architecture](./networking.md). |
| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
To get started using partition merge, navigate to the **Features** page in your
Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#preview-eligibility-criteria). Once you've enabled the feature, it takes 15-20 minutes to take effect.

> [!CAUTION]
-> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 or Java SDK >= 4.42.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, older Java SDK, any JavaScript SDK, any Python SDK, any Go SDK) or unsupported connectors (Azure Data Factory, Azure Search, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
+> When merge is enabled on an account, only requests from .NET SDK version >= 3.27.0 or Java SDK >= 4.42.0 or Azure Cosmos DB Spark connector >= 4.18.0 will be allowed on the account, regardless of whether merges are ongoing or not. Requests from other SDKs (older .NET SDK, older Java SDK, any JavaScript SDK, any Python SDK, any Go SDK) or unsupported connectors (Azure Data Factory, Azure Search, Azure Functions, Azure Stream Analytics, and others) will be blocked and fail. Ensure you have upgraded to a supported SDK version before enabling the feature. After the feature is enabled or disabled, it may take 15-20 minutes to fully propagate to the account. If you plan to disable the feature after you've completed using it, it may take 15-20 minutes before requests from SDKs and connectors that are not supported for merge are allowed.
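The minimum-version requirement in the caution above can be expressed as a small pre-flight check. This is an illustrative sketch only; the helper and version table are assumptions, not part of any Cosmos DB SDK.

```python
# Hedged sketch: check that a client SDK meets the minimum versions the
# merge feature requires. The version numbers come from the caution
# above; supports_merge is a hypothetical helper for illustration.

MIN_VERSIONS = {
    ".NET": (3, 27, 0),
    "Java": (4, 42, 0),
    "Spark connector": (4, 18, 0),
}

def supports_merge(sdk: str, version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    return parts >= MIN_VERSIONS[sdk]

print(supports_merge("Java", "4.42.0"))  # True
```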
:::image type="content" source="media/merge/merge-feature-blade.png" alt-text="Screenshot of Features pane and Partition merge feature.":::
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Previously updated : 04/04/2023 Last updated : 04/07/2023
> [!NOTE]
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-For enhanced workflows and ease of use, you can use the MedTech service to receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template creates an IoT hub to create and manage devices, and then routes device messages to an event hub in Azure Event Hubs for the MedTech service to pick up.
+For enhanced workflows and ease of use, you can use the MedTech service to receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template deploys an IoT hub to create and manage devices, and then routes device messages to an event hub in Azure Event Hubs for the MedTech service to pick up and process.
> [!TIP]
-> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observation resources, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+> To learn how the MedTech service transforms and persists device message data into the FHIR service as FHIR Observations, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
In this tutorial, you learn how to:
To begin deployment in the Azure portal, select the **Deploy to Azure** button:
- **Resource group**: An existing resource group, or you can create a new resource group.
- - **Region**: The Azure region of the resource group that's used for the deployment. **Region** auto-fills by using the resource group region.
+ - **Region**: The Azure region of the resource group that's used for the deployment. **Region** autofills by using the resource group region.
- **Basename**: A value that's appended to the name of the Azure resources and services that are deployed. The examples in this tutorial use the basename *azuredocsdemo*. You can choose your own basename value.
When deployment is completed, the following resources and access roles are creat
> [!IMPORTANT]
> In this tutorial, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service.
>
-> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties).
+> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties).
## Create a device and send a test message
You complete the steps by using Visual Studio Code with the Azure IoT Hub extens
## Review metrics from the test message
-Now that you've successfully sent a test message to your IoT hub, review your MedTech service metrics. You review metrics to verify that your MedTech service received, grouped, transformed, and persisted the test message to your FHIR service. To learn more, see [How to display the MedTech service monitoring tab metrics](how-to-use-monitoring-tab.md).
+Now that you have successfully sent a test message to your IoT hub, review your MedTech service metrics. You review metrics to verify that your MedTech service received, grouped, transformed, and persisted the test message to your FHIR service. To learn more, see [How to display the MedTech service monitoring tab metrics](how-to-use-monitoring-tab.md).
For your MedTech service metrics, you can see that your MedTech service completed the following steps for the test message:
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
In this article, learn how to use the MedTech service Mapping debugger. The Mapp
> [!TIP]
> To learn about how the MedTech service transforms and persists device message data into the FHIR service, see [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
-The following video presents an overview of Mapping debugger:
+The following video presents an overview of the Mapping debugger:
> [!VIDEO https://youtube.com/embed/OEGuCSGnECY]
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
This article provides an introductory overview of the MedTech service. The MedTe
The MedTech service is important because data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If this information isn't easy to access, it may have a negative effect on gaining key insights and capturing trends. The ability to transform many types of device data into a unified FHIR format enables the MedTech service to successfully link device data with other datasets to support the end user. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
-The following video presents an overview of MedTech service:
+The following video presents an overview of the MedTech service:
> [!VIDEO https://youtube.com/embed/_nMirYYU0pg]
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
The model we are going to work with was built using the popular library transfor
* It can work with sequences up to 1024 tokens. * It is trained for summarization of text in English.
-* We are going to use TensorFlow as a backend.
+* We are going to use Torch as a backend.
The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/huggingface-text-summarization` if you are using our SDK for Python.
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. **Note:** jobs in a pipeline don't support `local` as `compute`. | | `local` |
| `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` |
| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernetes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
+| `resources.shm_size` | string | The size of the Docker container's shared memory block. This should be in the format of `<number><unit>` where number has to be greater than 0 and the unit can be one of `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes). | | `2g` |
| `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached, the system will cancel the job. | | |
| `inputs` | object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `command` using the `${{ inputs.<input_name> }}` expression. | | |
| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
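The `shm_size` format described in the table above can be sketched as a small parser. This is an illustrative helper under the stated format rules, not part of the Azure ML CLI.

```python
import re

# Illustrative validator for the shm_size format described above:
# <number><unit>, number > 0, unit one of b, k, m, g.
# parse_shm_size is a hypothetical helper for illustration only.
_SHM_RE = re.compile(r"^(\d+)([bkmg])$")

def parse_shm_size(value: str) -> int:
    """Return the size in bytes, or raise ValueError for bad input."""
    match = _SHM_RE.match(value)
    if not match:
        raise ValueError(f"invalid shm_size: {value!r}")
    number, unit = int(match.group(1)), match.group(2)
    if number <= 0:
        raise ValueError("number must be greater than 0")
    return number * {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}[unit]

print(parse_shm_size("2g"))  # 2147483648
```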
networking Microsoft Global Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/microsoft-global-network.md
Title: 'Microsoft global network - Azure'
-description: Learn how Microsoft builds and operates one of the largest backbone networks in the world, and why it is central to delivering a great cloud experience.
---
+description: Learn how Microsoft builds and operates one of the largest backbone networks in the world, and why it's central to delivering a great cloud experience.
- Previously updated : 01/05/2020
+ Last updated : 04/06/2023

# Microsoft global network
Microsoft owns and operates one of the largest backbone networks in the world. This global and sophisticated architecture, spanning more than 165,000 miles, connects our datacenters and customers. Every day, customers around the world connect and pass trillions of requests to Microsoft Azure, Bing, Dynamics 365, Microsoft 365, Xbox, and many others. Regardless of type, customers expect instant reliability and responsiveness from our services.
-
-The [Microsoft global network](https://azure.microsoft.com/global-infrastructure/global-network/) (WAN) is a central part of delivering a great cloud experience. Connecting our Microsoft [data centers](https://azure.microsoft.com/global-infrastructure/) across 61 Azure regions and large mesh of edge-nodes strategically placed around the world, our global network offers both the availability, capacity, and the flexibility to meet any demand.
-![Microsoft global network](./media/microsoft-global-network/microsoft-global-wan.png)
-
+The [Microsoft global network](https://azure.microsoft.com/global-infrastructure/global-network/) (WAN) is a central part of delivering a great cloud experience. The global network connects our Microsoft [data centers](https://azure.microsoft.com/global-infrastructure/) across 61 Azure regions with a large mesh of edge-nodes strategically placed around the world. The Microsoft global network offers the availability, capacity, and flexibility to meet any demand.
+
## Get the premium cloud network
-Opting for the [best possible experience](https://www.sdxcentral.com/articles/news/azure-tops-aws-gcp-in-cloud-performance-says-thousandeyes/2018/11/) is easy when you use Microsoft cloud. From the moment when customer traffic enters our global network through our strategically placed edge-nodes, your data travels through optimized routes at near the speed of light. This ensures optimal latency for best performance. These edge-nodes, all interconnected to more than 4000 unique Internet partners (peers) through thousands of connections in more than 175 locations, provide the foundation of our interconnection strategy.
+Opting for the [best possible experience](https://www.sdxcentral.com/articles/news/azure-tops-aws-gcp-in-cloud-performance-says-thousandeyes/2018/11/) is easy when you use Microsoft cloud. From the moment when customer traffic enters our global network through our strategically placed edge-nodes, your data travels through optimized routes at near the speed of light. These edge-nodes, all interconnected to more than 4000 unique Internet partners (peers) through thousands of connections in more than 175 locations, provide the foundation of our interconnection strategy.
-Whether connecting from London to Tokyo, or from Washington DC to Los Angeles, network performance is quantified and impacted by things such as latency, jitter, packet loss, and throughput. At Microsoft, we prefer and use direct interconnects as opposed to transit-links, this keeps response traffic symmetric and helps keep hops, peering parties and paths as short and simple as possible.
+Whether connecting from London to Tokyo, or from Washington DC to Los Angeles, latency, jitter, packet loss, and throughput affect network performance. At Microsoft, we choose and utilize direct interconnects instead of transit-links. This approach ensures symmetric response traffic and helps to minimize hops, peering parties, and paths to keep them as short and simple as possible.
-For example, if a user in London attempts to access a service in Tokyo, then the Internet traffic enters one of our edges in London, goes over Microsoft WAN through France, our Trans-Arabia paths between Europe and India, and then to Japan where the service is hosted. Response traffic is symmetric. This is sometimes referred as [cold-potato routing](https://en.wikipedia.org/wiki/Hot-potato_and_cold-potato_routing) which means that the traffic stays on Microsoft network as long as possible before we hand it off.
+For example, if a user in London accesses a service in Tokyo, the Internet traffic enters one of our edges in London, travels over the Microsoft WAN through France, our Trans-Arabia paths between Europe and India, and then to Japan where the service resides. Response traffic is symmetric. This routing pattern is referred to as [cold-potato routing](https://en.wikipedia.org/wiki/Hot-potato_and_cold-potato_routing): traffic stays on the Microsoft network as long as possible before it's handed off.
-So, does that mean any and all traffic when using Microsoft services? Yes, any traffic between data centers, within Microsoft Azure or between Microsoft services such as Virtual Machines, Microsoft 365, XBox, SQL DBs, Storage, and virtual networks are routed within our global network and never over the public Internet, to ensure optimal performance and integrity.
-
-Massive investments in fiber capacity and diversity across metro, terrestrial, and submarine paths are crucial for us to keep consistent and high service-level while fueling the extreme growth of our cloud and online services. Recent additions to our global network are our [MAREA](https://www.submarinecablemap.com/#/submarine-cable/marea) submarine cable, the industry's first Open Line System (OLS) over subsea, between Bilbao, Spain and Virginia Beach, Virginia, USA, as well as the [AEC](https://www.submarinecablemap.com/#/submarine-cable/aeconnect-1) between New York, USA and Dublin, Ireland and [New Cross Pacific (NCP)](https://www.submarinecablemap.com/#/submarine-cable/new-cross-pacific-ncp-cable-system) between Tokyo, Japan, and Portland, Oregon, USA.
+So, does that apply to all traffic when using Microsoft services? Yes, any traffic between data centers, within Microsoft Azure or between Microsoft services such as Virtual Machines, Microsoft 365, Xbox, SQL DBs, Storage, and virtual networks routes within our global network and never over the public Internet. This routing ensures optimal performance and integrity.
+Massive investments in fiber capacity and diversity across metro, terrestrial, and submarine paths are crucial for us to keep consistent and high service-level while fueling the extreme growth of our cloud and online services.
+
+Recent additions to our global network are:
+
+* [MAREA](https://www.submarinecablemap.com/#/submarine-cable/marea) submarine cable. The industry's first Open Line System (OLS) over subsea, between Bilbao, Spain and Virginia Beach, Virginia, USA.
+
+* [AEC](https://www.submarinecablemap.com/#/submarine-cable/aeconnect-1) between New York, USA and Dublin, Ireland.
+
+* [New Cross Pacific (NCP)](https://www.submarinecablemap.com/#/submarine-cable/new-cross-pacific-ncp-cable-system) between Tokyo, Japan, and Portland, Oregon, USA.
## Our network is your network
-We have put two decades of experience, along with massive investments into the network, to ensure optimal performance at all times. Businesses can take full advantage of our network assets and build advanced overlay architectures on top.
+We have put two decades of experience, along with massive investments into the network, to always ensure optimal performance. Businesses can take full advantage of our network assets and build advanced overlay architectures on top.
-Microsoft Azure offers the richest portfolio of services and capabilities, allowing customers to quickly and easily build, expand, and meet networking requirements anywhere. Our family of connectivity services span virtual network peering between regions, hybrid, and in-cloud point-to-site and site-to-site architectures as well as global IP transit scenarios. For enterprises looking to connect their own datacenter or network to Azure, or customers with massive data ingestion or transit needs, [ExpressRoute](../expressroute/expressroute-introduction.md), and [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md) provide options up to 100 Gbps of bandwidth, directly into Microsoft's global network at peering locations across the world.
-
-[ExpressRoute Global Reach](../expressroute/expressroute-global-reach.md) is designed to complement your service provider's WAN implementation and connect your on-premises sites across the world. If you run a global operation, you can use ExpressRoute Global Reach in conjunction with your preferred and local service providers to connect all your global sites using the Microsoft global network. Expanding your new network in the cloud (WAN) to encompass large numbers of branch-sites can be accomplished through Azure Virtual WAN, which brings the ability to seamlessly connect your branches to Microsoft global network with SDWAN & VPN devices (that is, Customer Premises Equipment or CPE) with built-in ease of use and automated connectivity and configuration management.
-
-[Global VNet peering](../virtual-network/virtual-network-peering-overview.md) enables customers to connect two or more Azure virtual networks across regions seamlessly. Once peered, the virtual networks appear as one. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network - through private IP addresses only.
+Microsoft Azure offers the richest portfolio of services and capabilities, allowing customers to quickly and easily build, expand, and meet networking requirements anywhere. Our family of connectivity services spans virtual network peering between regions, hybrid, and in-cloud point-to-site and site-to-site architectures as well as global IP transit scenarios. Enterprises seeking to connect their datacenter or network to Azure or customers with significant data ingestion or transit needs can choose from options such as [ExpressRoute](../expressroute/expressroute-introduction.md) and [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). These options provide bandwidth of up to 100 Gbps directly into Microsoft's global network at peering locations worldwide.
+
+* [**ExpressRoute Global Reach**](../expressroute/expressroute-global-reach.md) is designed to complement your service provider's WAN implementation and connect your on-premises sites across the world. If you run a global operation, you can use ExpressRoute Global Reach with your preferred and local service providers to connect all your global sites using the Microsoft global network. You can expand your cloud-based WAN to include a significant number of branch-sites using Azure Virtual WAN. This service enables you to connect your branches to Microsoft's global network seamlessly, using SDWAN and VPN devices (Customer Premises Equipment or CPE) with built-in ease of use and automated connectivity and configuration management.
+* [**Global virtual network peering**](../virtual-network/virtual-network-peering-overview.md) enables customers to connect two or more Azure virtual networks across regions seamlessly. Once peered, the virtual networks appear as one. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network - through private IP addresses only.
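Purely as an illustrative sketch (not part of the article), creating a global peering between two virtual networks with the Azure CLI might look like the following. The resource group and network names are hypothetical, and the peering must be created in both directions:

```shell
# Hypothetical names; replace with your own resource group and virtual networks.
RG="my-rg"
VNET1="vnet-eastus"
VNET2="vnet-westeurope"

# Peer vnet1 -> vnet2.
az network vnet peering create \
  --name "${VNET1}-to-${VNET2}" \
  --resource-group "$RG" \
  --vnet-name "$VNET1" \
  --remote-vnet "$VNET2" \
  --allow-vnet-access

# Peer vnet2 -> vnet1 to complete the link; traffic then flows over the
# Microsoft backbone through private IP addresses only.
az network vnet peering create \
  --name "${VNET2}-to-${VNET1}" \
  --resource-group "$RG" \
  --vnet-name "$VNET2" \
  --remote-vnet "$VNET1" \
  --allow-vnet-access
```

Running this requires an Azure subscription and the Azure CLI; it's a sketch of the shape of the calls, not a tested deployment script.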
## Well managed using software-defined innovation
-Running one of the leading clouds in the world, Microsoft has gained a lot of insight and experience in building and managing high-performance global infrastructure.
+As one of the world's top cloud providers, Microsoft has acquired substantial insight and expertise in constructing and managing high-performance global infrastructure.
We adhere to a robust set of operational principles:
- Use best-of-breed switching hardware across the various tiers of the network.
-- Deploy new features with zero impact to end users.
+- Deploy new features with no effect on end users.
- Roll out updates securely and reliably across the fleet, as fast as possible. Hours instead of weeks.
-- Utilize cloud-scale deep telemetry and fully automated fault mitigation.
+- Make use of comprehensive cloud-based monitoring and fully automated fault mitigation.
-- Use unified and software-defined Networking technology to control all hardware elements in the network. Eliminating duplication and reduce failures.
+- Use unified and software-defined networking technology to control all hardware elements in the network, eliminating duplication and reducing failures.
-These principles apply to all layers of the network: from the host Network Interface, switching platform, network functions in the data center such as Load Balancers, all the way up to the WAN with our traffic engineering platform and our optical networks.
+These principles apply to all layers of the network: from the host network interface, switching platform, network functions in the data center such as load balancers, all the way up to the WAN with our traffic engineering platform and our optical networks.
-The exponential growth of Azure and its network has reached a point where we eventually realized that human intuition could no longer be relied on to manage the global network operations. To fulfill the need to validate long, medium, and short-term changes on the network, we developed a platform to mirror and emulate our production network synthetically. The ability to create mirrored environments and run millions of simulations, allows us to test software and hardware changes and their impact, before committing them to our production platform and network.
+The exponential growth of Azure and its network has reached a point where we eventually realized that human intuition could no longer be relied on to manage the global network operations. To fulfill the need to validate long, medium, and short-term changes on the network, we developed a platform to mirror and emulate our production network synthetically. The ability to create mirrored environments and run millions of simulations, allows us to test software and hardware changes and their effect, before committing them to our production platform and network.
## Next steps

- [Learn about how Microsoft is advancing global network reliability through intelligent software](https://azure.microsoft.com/blog/advancing-global-network-reliability-through-intelligent-software-part-1-of-2/)
- [Learn more about the networking services provided in Azure](https://azure.microsoft.com/product-categories/networking/)
public-multi-access-edge-compute-mec Considerations For Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/considerations-for-deployment.md
A trade-off exists between availability and latency. Although failing over the a
Architect your edge applications by utilizing the Azure Region for the components that are less latency-sensitive, need to be persistent, or need to be shared between public MEC sites. This makes the applications more resilient and cost effective. The public MEC can host the latency-sensitive components.
+## Data residency
+> [!IMPORTANT]
+> Azure public MEC doesn't store or process customer data outside the region you deploy the service instance in.
## Next steps

To deploy a virtual machine in Azure public MEC using an Azure Resource Manager (ARM) template, advance to the following article:

> [!div class="nextstepaction"]
-> [Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template](quickstart-create-vm-azure-resource-manager-template.md)
+> [Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template](quickstart-create-vm-azure-resource-manager-template.md)
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-asset-details.md
Last updated 07/25/2022
# Asset details page in the Microsoft Purview Data Catalog
-This article discusses how assets are displayed in the Microsoft Purview Data Catalog. It describes how you can view relevant information or take action on assets in your catalog.
+This article discusses how assets are displayed in the Microsoft Purview Data Catalog, and all the features and details available to them. It describes how you can view relevant information or take action on assets in your catalog.
## Prerequisites

- Set up your data sources and scan the assets into your catalog.
- *Or* use the Microsoft Purview Atlas APIs to ingest assets into the catalog.
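For the second prerequisite, a minimal sketch of pushing an asset into the catalog with the Atlas entity endpoint might look like the following. The account name, token, `typeName`, and `qualifiedName` are all placeholder assumptions; check the Atlas V2 REST reference for your source type:

```shell
# Placeholders (not real values): replace ACCOUNT with your Purview account
# name and TOKEN with an Azure AD bearer token for https://purview.azure.net.
ACCOUNT="contoso-purview"
TOKEN="<bearer-token>"

# Minimal Atlas entity payload; the typeName and qualifiedName are illustrative.
PAYLOAD='{
  "entity": {
    "typeName": "azure_sql_table",
    "attributes": {
      "qualifiedName": "mssql://contoso.database.windows.net/salesdb/dbo/orders",
      "name": "orders"
    }
  }
}'

# POST the entity to the Atlas V2 endpoint of the Purview catalog.
curl -s -X POST "https://${ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/entity" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```

This is a shape-of-the-call sketch, not a supported ingestion script; the full API supports richer entity definitions, relationships, and bulk operations.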
## Open an asset details page
Once you find the asset you're looking for, you can view all of the asset inform
- **Related** - This tab lets you navigate through the technical hierarchy of assets that are related to the current asset you're viewing.

## Asset overview

The overview section of the asset details gives a summarized view of an asset. The sections that follow explain the different parts of the overview page.

:::image type="content" source="media/catalog-asset-details/asset-detail-overview.png" alt-text="Screenshot that shows the asset details overview page.":::
You can navigate to the contact tab of the edit screen to update owners and expe
Both column-level and asset-level updates such as adding a description, glossary term, or classification don't affect scan updates. Scans will update new columns and classifications regardless of whether these changes are made. If you update the **name** or **data type** of a column, subsequent scans **won't** update the asset schema. New columns and classifications **won't** be detected.

### Request access to data

If a [self-service data access workflow](how-to-workflow-self-service-data-access-hybrid.md) has been created, you can request access to a desired asset directly from the asset details page. To learn more about Microsoft Purview's data policy applications, see [how to enable data use management](how-to-enable-data-use-management.md).
Any asset you delete using the delete button is permanently deleted in Microsoft
If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset won't get re-ingested** into the catalog unless the asset is modified by an end user since the previous run of the scan. For example, say you manually delete a SQL table from the Microsoft Purview Data Map. Later, a data engineer adds a new column to the source table. When Microsoft Purview scans the database, the table will be reingested into the data map and be discoverable in the data catalog.
+## Ratings
+
+Assets can be rated by any user with at least read access to that asset in Microsoft Purview.
+Ratings allow users to give an asset a rating from 1 to 5 stars, and leave a comment about the asset.
+
+These ratings can be seen by all users with read access, and rating can be [added as a facet](how-to-search-catalog.md#use-the-facets) when [searching the data catalog](how-to-search-catalog.md) so users can find assets with a certain rating.
+
+### View ratings
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) for your asset in Microsoft Purview and select it.
+1. In the header of the asset you can see a rating, which will show an aggregate star rating of the asset, and the number of reviews.
+ :::image type="content" source="media/catalog-asset-details/view-rating-aggregate.png" alt-text="Screenshot of a rating in the header of an asset.":::
+1. To see a percentage breakdown of the ratings, select the rating.
+1. To see specific ratings and their comments, or to add your own rating, select **Open ratings**.
+ :::image type="content" source="media/catalog-asset-details/open-rating-details.png" alt-text="Screenshot of a selected rating in the header of an asset showing the percentage breakdown.":::
+
+### Add a rating to an asset
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) for your asset in Microsoft Purview and select it.
+1. Select the ratings button in the asset's header.
+1. Select the **Open ratings** button.
+ :::image type="content" source="media/catalog-asset-details/open-ratings.png" alt-text="Screenshot that shows the ratings button selected, and the open ratings button highlighted.":::
+1. Choose a star rating, add a comment, and select **Submit**.
+    :::image type="content" source="media/catalog-asset-details/rate-asset.png" alt-text="Screenshot of a rating, showing five stars selected and a comment about the quality of the data.":::
+
+### Edit or delete your rating
+
+1. Select the ratings button in the asset's header.
+1. Select the **Open ratings** button.
+1. Under **My rating** select the ellipsis button in your rating.
+ :::image type="content" source="media/catalog-asset-details/edit-rating.png" alt-text="Screenshot of the user's rating, shown under the My rating menu, with the ellipsis button selected.":::
+1. To delete your rating, select **Delete rating**.
+1. To edit your rating, select **Edit rating**, then update your score and comment and select **Submit**.
+
+## Tags
+
+Assets can be tagged by users with data curator permissions or better. Any user with reader permissions on these assets in Microsoft Purview can see these tags.
+You can add tags [as a filter](how-to-search-catalog.md#refine-results) when [searching the data catalog](how-to-search-catalog.md) to see all assets with certain tags.
+
+>[!NOTE]
+>Tag limitations:
+>
+> - An asset can only have up to 50 tags
+> - Tags can only be 50 characters
+> - Allowed characters: numbers, letters, -, and _
+
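As an illustrative check only (not part of the product), the tag constraints in the note above can be expressed as a simple pattern: 1 to 50 characters drawn from letters, numbers, hyphens, and underscores:

```shell
# Returns success (exit 0) if the tag satisfies the documented constraints:
# 1-50 characters, only letters, numbers, '-' and '_'.
is_valid_tag() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]{1,50}$'
}

is_valid_tag "finance_2023" && echo "valid"    # prints "valid"
is_valid_tag "bad tag!" || echo "invalid"      # prints "invalid" (space and '!' are not allowed)
```

A check like this can be useful when generating tags programmatically before applying them through an API.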
+### Add a tag to an asset
+
+If you have [data curator](catalog-permissions.md) permissions in Microsoft Purview, you can add a tag to an asset:
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) for your asset in Microsoft Purview and select it.
+1. Select the **+ Add Tag** button under the asset's name.
+ :::image type="content" source="media/catalog-asset-details/add-new-tag.png" alt-text="Screenshot that shows the new tag button highlighted on an asset detail page.":::
+1. Select an existing available tag, or input a new tag.
+
+### Remove a tag from an asset
+
+If you have [data curator](catalog-permissions.md) permissions in Microsoft Purview, you can remove a tag from an asset:
+
+1. [Search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) for your asset in Microsoft Purview and select it.
+1. Select the **X** button next to an existing tag under the asset's name.
+    :::image type="content" source="media/catalog-asset-details/remove-tag.png" alt-text="Screenshot that shows the remove tag button highlighted next to an existing tag.":::
+1. Confirm the removal of the tag.
## Next steps
purview How To Browse Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-browse-catalog.md
Browse by collection allows you to explore the different collections you're a da
Once a collection is selected, you'll get a list of assets in that collection with the facets and filters available in search. As a collection can have thousands of assets, browse uses the Microsoft Purview search relevance engine to boost the most important assets to the top.
+You can also refine the list of assets using facets and filters:
-For certain annotations, you can select the ellipses to choose between an AND condition or an OR condition.
+- [Use the facets](how-to-search-catalog.md#use-the-facets) on the left hand side to narrow results by business metadata like glossary terms or classifications.
+- [Use the filters](how-to-search-catalog.md#use-the-filters) at the top to narrow results by source type, [managed attributes](how-to-managed-attributes.md), or activity.
If the selected collection doesn't contain the data you're looking for, you can easily navigate to related collections, or go back and view the entire collections tree.
A native browsing experience with hierarchical namespace is provided for each co
> [!NOTE]
> After a successful scoped scan, there may be a delay before newly scanned assets appear in the browse experience.

1. On the **Browse by source types** page, tiles are categorized by data sources. To further explore assets in each data source, select the corresponding tile.

   :::image type="content" source="media/how-to-browse-catalog/browse-asset-types.png" alt-text="Browse asset types page" border="true":::

   > [!TIP]
- > Certain tiles are groupings of a collection of data sources. For example, the Azure Storage Account tile contains all Azure Blob Storage and Azure Data Lake Storage Gen2 accounts. The Azure SQL Server tile will display the Azure SQL Server assets that contain Azure SQL Database and Azure Dedicated SQL Pool instances ingested into the catalog.
+ > Certain tiles are groupings of a collection of data sources. For example, the Azure Storage Account tile contains all Azure Blob Storage and Azure Data Lake Storage Gen2 accounts. The Azure SQL Server tile will display the Azure SQL Server assets that contain Azure SQL Database and Azure Dedicated SQL Pool instances ingested into the catalog.
1. On the next page, top-level assets under your chosen data type are listed. Pick one of the assets to further explore its contents. For example, after selecting "Azure SQL Database", you'll see a list of databases with assets in the data catalog.
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
Managed attributes are user-defined attributes that provide a business or organi
## Terminology

**Managed attribute:** A set of user-defined attributes that provide a business or organization level context to an asset. A managed attribute has a name and a value. For example, "Department" is an attribute name and "Finance" is its value.
-**Attribute group:** A grouping of managed attributes that allow for easier organization and consumption.
+**Attribute group:** A grouping of managed attributes that allow for easier organization and consumption.
## Create managed attributes in Microsoft Purview Studio
In Microsoft Purview Studio, an organization's managed attributes are managed in
:::image type="content" source="media/how-to-managed-attributes/create-new-managed-attribute.png" alt-text="Screenshot that shows how to create a new managed attribute or attribute group."::: 1. To create an attribute group, enter a name and a description. :::image type="content" source="media/how-to-managed-attributes/create-attribute-group.png" alt-text="Screenshot that shows how to create an attribute group.":::
-1. Managed attributes have a name, attribute group, data type, and associated asset types. Attribute groups can be created in-line during the managed attribute creation process. Associated asset types are the asset types you can apply the attribute to. For example, if you select "Azure SQL Table" for an attribute, you can apply it to Azure SQL Table assets, but not Azure Synapse Dedicated Table assets.
+1. Managed attributes have a name, attribute group, data type, and associated asset types. They also have a required flag that can only be enabled when the attribute is created as part of a new attribute group. Associated asset types are the data asset types you can apply the attribute to. For example, if you select "Azure SQL Table" for an attribute, you can apply it to Azure SQL Table assets, but not Azure Synapse Dedicated Table assets.
:::image type="content" source="media/how-to-managed-attributes/create-managed-attribute.png" alt-text="Screenshot that shows how to create a managed attribute."::: 1. Select **Create** to save your attribute.
+### Required managed attributes
+
+When you create a managed attribute as part of a managed attribute group, you can add the **required** flag. The required flag means that a value must be provided for this managed attribute. When a data asset is edited, the required attribute must be filled out before you can close the editor.
+
+>[!NOTE]
+> - You can't add the **required** flag by editing an existing attribute.
+> - You can't add the **required** flag while creating a new attribute outside of an attribute group.
+> You can only add this flag while creating an attribute group.
+
+1. Open the data map application and navigate to **Managed attributes** in the **Annotation management** section.
+1. Select **New** and select **Attribute group**.
+1. Select **New attribute**.
+1. Fill out your attribute details, and select the **Mark as required** flag.
+ :::image type="content" source="media/how-to-managed-attributes/mark-as-required.png" alt-text="Screenshot of the mark as required flag on a new attribute being created as a part of a new attribute group.":::
+1. Select **Apply** and finish adding other attributes to complete your attribute group.
+ ### Expiring managed attributes In the managed attribute management experience, managed attributes can't be deleted, only expired. Expired attributes can't be applied to any assets and are, by default, hidden in the user experience. By default, expired managed attributes aren't removed from an asset. If an asset has an expired managed attribute applied, it can only be removed, not edited.
Below are the known limitations of the managed attribute feature as it currently
- Managed attributes get matched to search keywords, but there's no user-facing filter in the search results. Managed attributes can be filtered using the Search APIs. - Managed attributes can't be applied via the bulk edit experience. - After creating an attribute group, you can't edit the name of the attribute group.-- After creating a managed attribute, you can't update the attribute name, attribute group or the field type.
+- After creating a managed attribute, you can't update the attribute name, attribute group or the field type.
+- A managed attribute can only be marked as required during the creation of an attribute group.
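The limitations above note that managed attributes can be filtered through the Search APIs even though there's no user-facing filter. Purely as a hedged sketch — the account name, token, API version, and exact filter shape are assumptions to verify against the current Purview discovery query REST reference — such a query might look like:

```shell
# Placeholders (not real values): replace ACCOUNT with your Purview account
# name and TOKEN with an Azure AD bearer token for https://purview.azure.net.
ACCOUNT="contoso-purview"
TOKEN="<bearer-token>"

# Illustrative query: match everything, filtered on a hypothetical
# managed attribute "Department" with value "Finance".
QUERY='{
  "keywords": "*",
  "limit": 10,
  "filter": {
    "attributeName": "Department",
    "operator": "eq",
    "attributeValue": "Finance"
  }
}'

curl -s -X POST "https://${ACCOUNT}.purview.azure.com/catalog/api/search/query?api-version=2022-08-01-preview" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "$QUERY"
```

Treat the filter body as a starting point only; the supported operators and attribute addressing for managed attributes should be confirmed in the REST reference before use.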
## Next steps
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-search-catalog.md
Your keyword will be highlighted in the return results, so you can see where the
The Microsoft Purview relevance engine sorts through all the matches and ranks them based on how useful it believes they are to a user. For example, a data consumer is likely more interested in a table curated by a data steward that matches on multiple keywords than an unannotated folder. Many factors determine an asset's relevance score, and the Microsoft Purview search team is constantly tuning the relevance engine to ensure the top search results have value to you.
-## Filtering results
+## Refine results
If the top results don't include the assets you're looking for, there are two ways you can filter results:
Then select any facet you would like to narrow your results by.
:::image type="content" source="./media/how-to-search-catalog/facet-menu.png" alt-text="Screenshot showing the search menu on the left side with Folder and Report selected." border="true":::
-For certain annotations, you can select the ellipses to choose between an AND condition or an OR condition.
+For certain annotations, you can select the ellipses to choose between an AND condition or an OR condition.
:::image type="content" source="./media/how-to-search-catalog/search-and-or-choice.png" alt-text="Screenshot showing how to choose between an AND or OR condition." border="true":::
For certain annotations, you can select the ellipses to choose between an AND co
> > :::image type="content" source="./media/how-to-search-catalog/filter-facets.png" alt-text="Screenshot showing the facet filter at the top of the menu, with a search parameter entered, and the facets filtered below." border="true":::
+#### Available facets
+
+- **Assigned term** - refines your search to assets with the selected terms applied.
+- **Classification** - refines your search to assets with certain classifications.
+- **Collection** - refines your search by assets in a specific collection.
+- **Contact** - refines your search to assets that have selected users listed as a contact.
+- **Data** - refines your search to specific data types. For example: pipelines, data shares, tables, or reports.
+- **Endorsement** - refines your search to assets with specified endorsements, like **Certified** or **Promoted**.
+- **Label** - refines your search to assets with specific security labels.
+- **Metamodel facets** - if you've created a [metamodel](concept-metamodel.md) in your Microsoft Purview Data Map, you can also refine your search to metamodel assets like Business or Organization.
+- **Rating** - refines your search to only data assets with a specified rating.
+ ### Use the filters To narrow results by asset type, [managed attributes](how-to-managed-attributes.md), or activity you'll use the filters at the top of the page of search results.
To remove any filters, select the **x** in the filter button, or clear all filte
:::image type="content" source="./media/how-to-search-catalog/remove-filters.png" alt-text="Screenshot showing the remove filter buttons in the top menu." border="true":::
+#### Available filters
+
+- **Activity** - allows you to refine your search to assets created or updated within a certain timeframe.
+- **Managed attributes** - refines your search to assets with specified [managed attributes](how-to-managed-attributes.md). Attributes will be listed under their attribute group, and use operators to help search for specific values. For example: Contains any, or Doesn't contain.
+- **Source type** - refines your search to assets from specified source types. For example: Azure Blob Storage or Power BI.
+- **Tags** - refines your search to assets with selected tags.
+ ## View assets From the search results page, you can select an asset to view details such as schema, lineage, and classifications. To learn more about the asset details page, see [Manage catalog assets](catalog-asset-details.md).
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 09/22/2022 Last updated : 04/03/2022
[2191498]:https://launchpad.support.sap.com/#/notes/2191498 [2243692]:https://launchpad.support.sap.com/#/notes/2243692 [1999351]:https://launchpad.support.sap.com/#/notes/1999351
+[3108316]:https://launchpad.support.sap.com/#/notes/3108316
+[3108302]:https://launchpad.support.sap.com/#/notes/3108302
[virtual-machines-linux-maintenance]:../../virtual-machines/maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot
-The article describes how to configure basic Pacemaker cluster on Red Hat Enterprise Server(RHEL). The instructions cover both RHEL 7 and RHEL 8.
+This article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise Linux (RHEL). The instructions cover RHEL 7, RHEL 8, and RHEL 9.
## Prerequisites Read the following SAP Notes and papers first:
Read the following SAP Notes and papers first:
* The supported SAP software, and operating system (OS) and database combinations. * The required SAP kernel version for Windows and Linux on Microsoft Azure. * SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux
+* SAP Note [2002167] recommends OS settings for Red Hat Enterprise Linux
+* SAP Note [3108316] recommends OS settings for Red Hat Enterprise Linux 9.x
* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux
+* SAP Note [3108302] has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x
* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure. * SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure. * SAP Note [2243692] has information about SAP licensing on Linux in Azure.
Read the following SAP Notes and papers first:
> Red Hat doesn't support software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For details see [Support Policies for RHEL High Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691).
> The only supported fencing mechanism for Pacemaker Red Hat Enterprise Linux clusters on Azure is the Azure fence agent.
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2. Differences in the commands or the configuration between RHEL 7 and RHEL 8 are marked in the document.
+The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2. Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9 are marked in the document.
-1. **[A]** Register - optional step. This step is not required, if using RHEL SAP HA-enabled images.
+1. **[A]** Register - optional step. This step isn't required if using RHEL SAP HA-enabled images.
- Register your virtual machines and attach it to a pool that contains repositories for RHEL 7.
+ For example, if deploying on RHEL 7, register your virtual machine and attach it to a pool that contains repositories for RHEL 7.
<pre><code>sudo subscription-manager register # List the available pools
The following items are prefixed with either **[A]** - applicable to all nodes,
By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To mitigate this situation, Azure now provides BYOS RHEL images. For more information, see [Red Hat Enterprise Linux bring-your-own-subscription Azure images](../../virtual-machines/workloads/redhat/byos.md).
-1. **[A]** Enable RHEL for SAP repos - optional step. This step is not required, if using RHEL SAP HA-enabled images.
+1. **[A]** Enable RHEL for SAP repos - optional step. This step isn't required if using RHEL SAP HA-enabled images.
In order to install the required packages on RHEL 7, enable the following repositories.
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Install RHEL HA Add-On
- <pre><code>sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
- </code></pre>
-
+   ```
+   sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
+   ```
+
> [!IMPORTANT]
> We recommend the following versions of Azure Fence agent (or later) for customers to benefit from a faster failover time, if a resource stop fails or the cluster nodes cannot communicate with each other anymore:
> RHEL 7.7 or higher use the latest available version of fence-agents package
The following items are prefixed with either **[A]** - applicable to all nodes,
> RHEL 8.1: fence-agents-4.2.1-30.el8_1.4 > RHEL 7.9: fence-agents-4.2.1-41.el7_9.4.
- Check the version of the Azure fence agent. If necessary, update it to a version equal to or later than the stated above.
+ > [!IMPORTANT]
+ > On RHEL 9, we recommend the following package versions (or later) to avoid issues with Azure Fence agent:
+ > fence-agents-4.10.0-20.el9_0.7
+ > fence-agents-common-4.10.0-20.el9_0.6
+ > ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm
+
+ Check the version of the Azure fence agent. If necessary, update it to the minimum required version or later.
<pre><code># Check the version of the Azure Fence Agent sudo yum info fence-agents-azure-arm
The following items are prefixed with either **[A]** - applicable to all nodes,
> [!IMPORTANT] > If you need to update the Azure Fence agent, and if using custom role, make sure to update the custom role to include action **powerOff**. For details see [Create a custom role for the fence agent](#1-create-a-custom-role-for-the-fence-agent).
+1. If deploying on RHEL 9, also install the resource agents for cloud deployment:
+
+ ```
+ sudo yum install -y resource-agents-cloud
+ ```
+ 1. **[A]** Setup host name resolution You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands. >[!IMPORTANT]
- > If using host names in the cluster configuration, it is vital to have reliable host name resolution. The cluster communication will fail, if the names are not available and that can lead to cluster failover delays.
+    > If using host names in the cluster configuration, it's vital to have reliable host name resolution. The cluster communication will fail if the names aren't available, and that can lead to cluster failover delays.
> The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failures too. <pre><code>sudo vi /etc/hosts
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo pcs cluster start --all </code></pre>
- If building a cluster on **RHEL 8.x**, use the following commands:
+ If building a cluster on **RHEL 8.x/RHEL 9.x**, use the following commands:
<pre><code>sudo pcs host auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster
sudo pcs cluster setup <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> totem token=30000
sudo pcs cluster start --all
The following items are prefixed with either **[A]** - applicable to all nodes,
The fencing device uses either a managed identity for Azure resources or a service principal to authorize against Microsoft Azure.

### Using Managed Identity
-To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time. Fence device, based on managed identity is supported on RHEL 7.9 and RHEL 8.x.
+To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User-assigned managed identities shouldn't be used with Pacemaker at this time. A fence device based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x.
### Using Service Principal

Follow these steps to create a service principal, if you're not using managed identity.
1. Click New Registration.
1. Enter a Name, and select "Accounts in this organization directory only".
1. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost), and click Add.
- The sign-on URL is not used and can be any valid URL
+ The sign-on URL isn't used and can be any valid URL
1. Select Certificates and Secrets, then click New client secret.
1. Enter a description for a new key, select "Never expires", and click Add.
1. Make a note of the Value. It is used as the **password** for the service principal.
-1. Select Overview. Make a note the Application ID. It is used as the username (**login ID** in the steps below) of the service principal
+1. Select Overview. Make a note of the Application ID. It's used as the username (**login ID** in the steps below) of the service principal.
### **[1]** Create a custom role for the fence agent
-Neither managed identity nor service principal has permissions to access your Azure resources by default. You need to give the managed identity or service principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../role-based-access-control/custom-roles-powershell.md) or [Azure CLI](../../role-based-access-control/custom-roles-cli.md)
+Neither managed identity nor service principal has permissions to access your Azure resources by default. You need to give the managed identity or service principal permissions to start and stop (power-off) all virtual machines of the cluster. If you didn't already create the custom role, you can create it using [PowerShell](../../role-based-access-control/custom-roles-powershell.md) or [Azure CLI](../../role-based-access-control/custom-roles-cli.md).
Use the following content for the input file. You need to adapt the content to your subscriptions; that is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with the IDs of your subscriptions. If you only have one subscription, remove the second entry in AssignableScopes.
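As a sketch, the input file can be generated and validated like this. The subscription IDs are the placeholders described above, and the action list is an assumption based on the power-off/start permissions the role needs; take the authoritative content from the article's own sample:

```shell
# Write a sketch of the custom role definition for the fence agent.
# Subscription IDs are placeholders; the action list is an assumption
# based on the power-off/start permissions described above.
ROLE_FILE=$(mktemp)
cat > "$ROLE_FILE" <<'EOF'
{
  "Name": "Linux Fence Agent Role",
  "description": "Allows to power-off and start virtual machines",
  "assignableScopes": [
    "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
  ],
  "actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "notActions": [],
  "dataActions": [],
  "notDataActions": []
}
EOF
# Sanity-check that the file parses as JSON before passing it to the CLI.
python3 -m json.tool "$ROLE_FILE" > /dev/null && echo "role definition is valid JSON"
```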
Assign the custom role "Linux Fence Agent Role" that was created in the last cha
#### Using Service Principal
-Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Don't use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
Make sure to assign the role for both cluster nodes.

### **[1]** Create the fencing devices
sudo pcs property set stonith-timeout=900
#### [Managed Identity](#tab/msi)
-For RHEL **7.X**, use the following command to configure the fence device:
+For RHEL **7.x**, use the following command to configure the fence device:
<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \
subscriptionId="<b>subscription id</b>" <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
op monitor interval=3600 </code></pre>
-For RHEL **8.X**, use the following command to configure the fence device:
+For RHEL **8.x/9.x**, use the following command to configure the fence device:
<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \
subscriptionId="<b>subscription id</b>" <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \
power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_
op monitor interval=3600 </code></pre>
-For RHEL **8.x**, use the following command to configure the fence device:
+For RHEL **8.x/9.x**, use the following command to configure the fence device:
<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm username="<b>login ID</b>" password="<b>password</b>" \
resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \
<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \
op monitor interval=3600
-If you are using fencing device, based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration.
+If you're using a fencing device based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration.
> [!TIP]
> Only configure the `pcmk_delay_max` attribute in two node Pacemaker clusters. For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829).

> [!IMPORTANT]
-> The monitoring and fencing operations are de-serialized. As a result, if there is a longer running monitoring operation and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring operation.
+> The monitoring and fencing operations are deserialized. As a result, if there's a longer running monitoring operation and a simultaneous fencing event, there's no delay to the cluster failover due to the already running monitoring operation.
### **[1]** Enable the use of a fencing device
If you are using fencing device, based on service principal configuration, read
> [!TIP]
> This section is only applicable if you want to configure the special fencing device `fence_kdump`.
-If there is a need to collect diagnostic information within the VM, it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` is not a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs.
+If there is a need to collect diagnostic information within the VM, it may be useful to configure an additional fencing device based on the fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent, when using Azure VMs.
> [!IMPORTANT]
> Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations, and correspondingly delays in the application resources failover.
The following Red Hat KBs contain important information about configuring `fence
* [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971)
* [How to configure/manage fencing levels in RHEL cluster with Pacemaker](https://access.redhat.com/solutions/891323)
-* [fence_kdump fails with "timeout after X seconds" in a RHEL 6 0r 7 HA cluster with kexec-tools older than 2.0.14](https://access.redhat.com/solutions/2388711)
-* For information how to change change the default timeout see [How do I configure kdump for use with the RHEL 6,7,8 HA Add-On](https://access.redhat.com/articles/67570)
+* [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 HA cluster with kexec-tools older than 2.0.14](https://access.redhat.com/solutions/2388711)
+* For information on how to change the default timeout, see [How do I configure kdump for use with the RHEL 6,7,8 HA Add-On](https://access.redhat.com/articles/67570)
* For information on how to reduce failover delay when using `fence_kdump`, see [Can I reduce the expected delay of failover when adding fence_kdump configuration](https://access.redhat.com/solutions/5512331)

Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration.
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
Title: Integrating Azure with SAP RISE managed workloads| Microsoft Docs
description: Describes integrating SAP RISE managed virtual network with customer's own Azure environment documentationcenter: ''-+ editor: '' tags: azure-resource-manager
vm-linux Previously updated : 12/21/2022 Last updated : 04/07/2022
For more information on Microsoft Sentinel and SAP, including a deployment guide
SAP RISE/ECS doesn't support any integration with Azure Monitoring for SAP. RISE/ECSΓÇÖs own monitoring and reporting is provided to the customer as defined by your service description with SAP.
+## Azure Center for SAP Solutions
+
+Just as with Azure Monitoring for SAP, SAP RISE/ECS doesn't support any integration with [Azure Center for SAP Solutions](../center-sap-solutions/overview.md) in any capacity. All SAP RISE workloads are deployed by SAP and run in SAP's Azure tenant and subscription, without any customer access to the Azure resources.
+ ## Next steps

Check out the documentation:
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 12/07/2022 Last updated : 04/06/2023
[2455582]:https://launchpad.support.sap.com/#/notes/2455582
[2593824]:https://launchpad.support.sap.com/#/notes/2593824
[2009879]:https://launchpad.support.sap.com/#/notes/2009879
+[3108302]:https://launchpad.support.sap.com/#/notes/3108302
[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
Read the following SAP Notes and papers first:
- SAP Note [405827](https://launchpad.support.sap.com/#/notes/405827) lists the recommended file systems for HANA environments.
- SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux.
- SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA Guidelines for Red Hat Enterprise Linux.
+- SAP Note [3108302](https://launchpad.support.sap.com/#/notes/3108302) has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
- SAP Note [2178632](https://launchpad.support.sap.com/#/notes/2178632) has detailed information about all monitoring metrics reported for SAP in Azure.
- SAP Note [2191498](https://launchpad.support.sap.com/#/notes/2191498) has the required SAP Host Agent version for Linux in Azure.
- SAP Note [2243692](https://launchpad.support.sap.com/#/notes/2243692) has information about SAP licensing on Linux in Azure.
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
vm-linux Previously updated : 03/01/2023 Last updated : 04/06/2023
[2455582]:https://launchpad.support.sap.com/#/notes/2455582
[2002167]:https://launchpad.support.sap.com/#/notes/2002167
[2009879]:https://launchpad.support.sap.com/#/notes/2009879
+[3108302]:https://launchpad.support.sap.com/#/notes/3108302
[sap-swcenter]:https://launchpad.support.sap.com/#/softwarecenter
[template-multisid-db]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json
Read the following SAP Notes and papers first:
* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux.
* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
+* SAP Note [3108302] has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x
* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
* SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure.
* SAP Note [2243692] has information about SAP licensing on Linux in Azure.
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 11/14/2022 Last updated : 04/06/2022
[2455582]:https://launchpad.support.sap.com/#/notes/2455582
[2593824]:https://launchpad.support.sap.com/#/notes/2593824
[2009879]:https://launchpad.support.sap.com/#/notes/2009879
+[3108302]:https://launchpad.support.sap.com/#/notes/3108302
[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
Some readers will benefit from consulting a variety of SAP notes and resources b
* SAP note [2015553]: Lists prerequisites for SAP-supported SAP software deployments in Azure.
* SAP note [2002167]: Has recommended operating system settings for RHEL.
* SAP note [2009879]: Has SAP HANA guidelines for RHEL.
+* SAP Note [3108302] has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x.
* SAP note [2178632]: Contains detailed information about all monitoring metrics reported for SAP in Azure.
* SAP note [2191498]: Contains the required SAP host agent version for Linux in Azure.
* SAP note [2243692]: Contains information about SAP licensing on Linux in Azure.
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
vm-windows Previously updated : 11/15/2022 Last updated : 04/06/2023
[2455582]: https://launchpad.support.sap.com/#/notes/2455582
[2593824]: https://launchpad.support.sap.com/#/notes/2593824
[2009879]: https://launchpad.support.sap.com/#/notes/2009879
+[3108302]:https://launchpad.support.sap.com/#/notes/3108302
[sap-swcenter]: https://support.sap.com/en/my-support/software-downloads.html
[2447641]: https://access.redhat.com/solutions/2447641
Before you begin, refer to the following SAP notes and papers:
* SAP Note [2015553]: Lists prerequisites for SAP-supported SAP software deployments in Azure.
* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux.
* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux.
+* SAP Note [3108302] has SAP HANA Guidelines for Red Hat Enterprise Linux 9.x
* SAP Note [2178632]: Contains detailed information about all monitoring metrics reported for SAP in Azure.
* SAP Note [2191498]: Contains the required SAP Host Agent version for Linux in Azure.
* SAP Note [2243692]: Contains information about SAP licensing on Linux in Azure.
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
#Customer intent: As a security-engineer, I want to get syslog data into Microsoft Sentinel so that I can use the data with other data to do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get syslog data into my Log Analytics workspace to monitor my linux-based devices.
-# Forward syslog data to a Log Analytics workspace by using the Azure Monitor agent
+# Tutorial: Forward syslog data to a Log Analytics workspace by using the Azure Monitor agent
-In this article, we'll describe how to configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent like a firewall network device.
+In this tutorial, you'll configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent like a firewall network device.
Configure your Linux-based device to send data to a Linux VM. The Azure Monitor agent on the VM forwards the syslog data to the Log Analytics workspace. Then use Microsoft Sentinel or Azure Monitor to monitor the device from the data stored in the Log Analytics workspace.
-In this article, you learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a data collection rule
In this article, you learn how to:
## Prerequisites
-To complete the steps in this article, you must have the following resources and roles.
+To complete the steps in this tutorial, you must have the following resources and roles.
- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure account with the following roles to deploy the agent and create the data collection rules:
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
+
+ Title: Quickstart - Deploy your first web application to Azure Spring Apps
+description: Describes how to deploy a web application to Azure Spring Apps.
+ Last updated : 04/06/2023
+# Quickstart: Deploy your first web application to Azure Spring Apps
+
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
+
+This quickstart shows how to deploy a Spring Boot web application to Azure Spring Apps. The sample project is a simple ToDo application to add tasks, mark when they're complete, and then delete them. The following screenshot shows the application:
++
+This application is a typical three-layer web application with the following layers:
+
+- A frontend [React](https://reactjs.org/) application.
+- A backend Spring web application that uses Spring Data JPA to access a relational database.
+- A relational database. For localhost, the application uses [H2 Database Engine](https://www.h2database.com/html/main.html). For Azure Spring Apps, the application uses Azure Database for PostgreSQL. For more information about Azure Database for PostgreSQL, see [Flexible Server documentation](../postgresql/flexible-server/overview.md).
+
+The following diagram shows the architecture of the system:
++
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli). Version 2.45.0 or greater.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+
+## Clone and run the sample project locally
+
+Use the following steps to clone and run the app locally.
+
+1. The sample project is available on GitHub. Use the following command to clone the sample project:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/ASA-Samples-Web-Application.git
+ ```
+
+1. Use the following command to build the sample project:
+
+ ```bash
+ cd ASA-Samples-Web-Application
+ ./mvnw clean package -DskipTests
+ ```
+
+1. Use the following command to run the sample application:
+
+ ```bash
+ java -jar web/target/simple-todo-web-0.0.1-SNAPSHOT.jar
+ ```
+
+1. Go to `http://localhost:8080` in your browser to access the application.
+
+## Prepare the cloud environment
+
+The main resources required to run this sample are an Azure Spring Apps instance and an Azure Database for PostgreSQL instance. This section provides the steps to create these resources.
+
+### Provide names for each resource
+
+Create variables to hold the resource names. Be sure to replace the placeholders with your own values.
+
+```azurecli
+RESOURCE_GROUP=<resource-group-name>
+LOCATION=<location>
+POSTGRESQL_SERVER=<server-name>
+POSTGRESQL_DB=<database-name>
+AZURE_SPRING_APPS_NAME=<Azure-Spring-Apps-service-instance-name>
+APP_NAME=<web-app-name>
+CONNECTION=<connection-name>
+```
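Several of these names (for example, the Azure Spring Apps instance and the PostgreSQL server) must be unique. One way to sketch unique values is to derive them from a random suffix; the name prefixes below are made up for illustration, not required conventions:

```shell
# Derive illustrative resource names from a random suffix.
# The prefixes are arbitrary examples, not required naming conventions.
SUFFIX=${RANDOM:-0}
RESOURCE_GROUP="rg-todo-${SUFFIX}"
POSTGRESQL_SERVER="psql-todo-${SUFFIX}"
AZURE_SPRING_APPS_NAME="asa-todo-${SUFFIX}"
echo "${RESOURCE_GROUP} ${POSTGRESQL_SERVER} ${AZURE_SPRING_APPS_NAME}"
```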
+
+### Create a new resource group
+
+Use the following steps to create a new resource group.
+
+1. Use the following command to sign in to Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. Use the following command to set the default location.
+
+ ```azurecli
+ az configure --defaults location=${LOCATION}
+ ```
+
+1. Set the default subscription. Use the following command to first list all available subscriptions:
+
+ ```azurecli
+ az account list --output table
+ ```
+
+1. Choose a subscription and set it as the default subscription with the following command:
+
+ ```azurecli
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Use the following command to create a resource group.
+
+ ```azurecli
+ az group create --resource-group ${RESOURCE_GROUP}
+ ```
+
+1. Use the following command to set the newly created resource group as the default resource group.
+
+ ```azurecli
+ az configure --defaults group=${RESOURCE_GROUP}
+ ```
+
+### Create an Azure Spring Apps instance
+
+Azure Spring Apps is used to host the Spring web app. Create an Azure Spring Apps instance and an application inside it.
+
+1. Use the following command to create an Azure Spring Apps service instance.
+
+ ```azurecli
+ az spring create --name ${AZURE_SPRING_APPS_NAME}
+ ```
+
+1. Use the following command to create an application in the Azure Spring Apps instance.
+
+ ```azurecli
+ az spring app create \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${APP_NAME} \
+ --runtime-version Java_17 \
+ --assign-endpoint true
+ ```
+
+### Prepare the PostgreSQL instance
+
+The Spring web app uses H2 for the database in localhost, and Azure Database for PostgreSQL for the database in Azure.
+
+Use the following command to create a PostgreSQL instance:
+
+```azurecli
+az postgres flexible-server create \
+ --name ${POSTGRESQL_SERVER} \
+ --database-name ${POSTGRESQL_DB} \
+ --active-directory-auth Enabled
+```
+
+To ensure that the PostgreSQL instance is accessible only by applications running in Azure Spring Apps, enter `n` to the prompts to enable access to a specific IP address and to enable access for all IP addresses.
+
+```output
+Do you want to enable access to client xxx.xxx.xxx.xxx (y/n): n
+Do you want to enable access for all IPs (y/n): n
+```
+
+### Connect app instance to PostgreSQL instance
+
+After the application instance and the PostgreSQL instance are created, the application instance can't access the PostgreSQL instance directly. The following steps use Service Connector to configure the needed network settings and connection information. For more information about Service Connector, see [What is Service Connector?](../service-connector/overview.md).
+
+1. If you're using Service Connector for the first time, use the following command to register the Service Connector resource provider.
+
+ ```azurecli
+ az provider register --namespace Microsoft.ServiceLinker
+ ```
+
+1. Use the following command to achieve a passwordless connection:
+
+ ```azurecli
+ az extension add --name serviceconnector-passwordless --upgrade
+ ```
+
+1. Use the following command to create a service connection between the application and the PostgreSQL database:
+
+ ```azurecli
+ az spring connection create postgres-flexible \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --app ${APP_NAME} \
+ --client-type springBoot \
+ --target-resource-group ${RESOURCE_GROUP} \
+ --server ${POSTGRESQL_SERVER} \
+ --database ${POSTGRESQL_DB} \
+ --system-identity \
+ --connection ${CONNECTION}
+ ```
+
+ The `--system-identity` parameter is required for the passwordless connection. For more information, see [Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps](how-to-bind-postgres.md).
+
+1. After the connection is created, use the following command to validate the connection:
+
+ ```azurecli
+ az spring connection validate \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --app ${APP_NAME} \
+ --connection ${CONNECTION}
+ ```
+
+ The output should appear similar to the following JSON code:
+
+ ```json
+ [
+ {
+ "additionalProperties": {},
+ "description": null,
+ "errorCode": null,
+ "errorMessage": null,
+ "name": "The target existence is validated",
+ "result": "success"
+ },
+ {
+ "additionalProperties": {},
+ "description": null,
+ "errorCode": null,
+ "errorMessage": null,
+ "name": "The target service firewall is validated",
+ "result": "success"
+ },
+ {
+ "additionalProperties": {},
+ "description": null,
+ "errorCode": null,
+ "errorMessage": null,
+ "name": "The configured values (except username/password) is validated",
+ "result": "success"
+ },
+ {
+ "additionalProperties": {},
+ "description": null,
+ "errorCode": null,
+ "errorMessage": null,
+ "name": "The identity existence is validated",
+ "result": "success"
+ }
+ ]
+ ```
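To script that success check instead of eyeballing the JSON, one sketch parses the validation output and counts non-success entries. The sample JSON below is an abbreviated stand-in; in practice you would pipe in the `az spring connection validate` output:

```shell
# Count validation entries that did not report "result": "success".
# VALIDATION_JSON is an abbreviated stand-in for the real CLI output.
VALIDATION_JSON='[
  {"name": "The target existence is validated", "result": "success"},
  {"name": "The target service firewall is validated", "result": "success"}
]'
FAILED=$(printf '%s' "$VALIDATION_JSON" | python3 -c '
import json, sys
items = json.load(sys.stdin)
print(sum(1 for item in items if item.get("result") != "success"))
')
if [ "$FAILED" -eq 0 ]; then
  echo "connection validated"
else
  echo "$FAILED validation check(s) failed"
fi
```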
+
+## Deploy the app to Azure Spring Apps
+
+Now that the cloud environment is prepared, the application is ready to deploy.
+
+1. Use the following command to deploy the app:
+
+ ```azurecli
+ az spring app deploy \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${APP_NAME} \
+ --artifact-path web/target/simple-todo-web-0.0.1-SNAPSHOT.jar
+ ```
+
+1. After the deployment has completed, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
+
+1. If there's a problem when you deploy the app, check the app's log to investigate by using the following command:
+
+ ```azurecli
+ az spring app logs \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${APP_NAME}
+ ```
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
+
+```azurecli
+az group delete --name ${RESOURCE_GROUP}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
+
+For more information, see the following articles:
+
+- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+- [Spring on Azure](/azure/developer/java/spring/)
+- [Spring Cloud Azure](/azure/developer/java/spring-framework/)
storage-mover Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/billing.md
Previously updated : 09/07/2022 Last updated : 03/22/2023 <!-- !######################################################## STATUS: IN REVIEW
-CONTENT: final
+CONTENT: final (85/100)
REVIEW Stephen/Fabian: not reviewed REVIEW Engineering: not reviewed EDIT PASS: started
+STATUS: GA-ready
+
+Initial doc score: 83
+Current doc score: 96 (100, 783, 0)
!######################################################## -->
Azure Storage Mover facilitates the migration of unstructured data into Azure. T
## Billing components
-In a migration to Azure, there are several components involved that can affect on your bill:
+In a migration to Azure, there are several components involved that can affect your bill:
1. Storage Mover service usage 1. Target storage usage
In a migration to Azure, there are several components involved that can affect o
### 1. Storage Mover service usage
-All current features of the Azure Storage Mover service are provided free of charge during the public preview. However, service enhancements and other features may be included in future releases. It's possible that the use of these features may incur a charge.
+All current features of the Azure Storage Mover service are provided free of charge. However, service enhancements and other features may be included in future releases. It's possible that the use of these features may incur a charge.
### 2. Target Azure storage usage
-As you begin your migration into Azure, the service will copy your files and folders into your target Azure Storage locations. Depending on the configuration of these storage targets, usage charges may apply.
+As you begin your migration into Azure, the service copies your files and folders into your target Azure Storage locations. Depending on the configuration of these storage targets, usage charges may apply.
-Any storage usage charges incurred will be the result of the following factors:
+Any storage usage charges incurred are the result of the following factors:
- Storage transactions - Billed capacity
-The billing model for each Azure Storage target will determine how these charges apply. There are two different billing models in Azure Storage:
+The billing model for each Azure Storage target determines how these charges apply. There are two different billing models in Azure Storage:
#### [Consumption-based billing](#tab/consumption)
-* Storage transactions caused by the Storage Mover service will be billed. Review your specific storage product's pricing pages for details on transaction charges. The [estimating storage transaction charges](#estimating-storage-transaction-charges) section of this article explains why it can be difficult to estimate a migrationΓÇÖs effect on your storage transaction charges.
-* One advantage of the consumption-based model is that capacity charges are progressively applied. Charges are incurred only as files are migrated and increasingly more storage capacity is consumed. This model helps prevent storage pre-provisioning or over-provisioning ahead of a migration.
+* Storage transactions caused by the Storage Mover service are billed. Review your specific storage product's pricing pages for details on transaction charges. The [estimating storage transaction charges](#estimating-storage-transaction-charges) section of this article explains why it can be difficult to estimate a migration's effect on your storage transaction charges.
+* One advantage of the consumption-based model is that capacity charges are progressively applied. Charges are incurred only as files are migrated and increasingly more storage capacity is consumed. This model helps prevent storage preprovisioning or over-provisioning ahead of a migration.
#### [Provisioned billing](#tab/provisioned)
* Storage transaction charges typically don't apply to targets covered by this billing model. Review your specific storage product's pricing pages for details and to confirm the previous statement actually applies to your product.
-* The capacity of your Azure target storage is pre-provisioned and is billed regardless of utilization. The progress of your cloud migration doesn't affect your bill.
+* The capacity of your Azure target storage is preprovisioned and is billed regardless of utilization. The progress of your cloud migration doesn't affect your bill.
> [!CAUTION]
> Always ensure that there is enough provisioned capacity to store all your source content. Copy jobs will fail if your target lacks sufficient storage capacity.
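To make the consumption-based factors above concrete, here's a rough back-of-the-envelope sketch in Python. All rates and the per-file transaction count below are hypothetical placeholders, not real Azure prices — always check your storage product's pricing page for actual numbers.

```python
# Back-of-the-envelope estimate for consumption-based billing.
# All rates below are hypothetical placeholders, NOT real Azure prices.
price_per_10k_transactions = 0.065   # illustrative rate per 10,000 transactions
price_per_gib_month = 0.0208         # illustrative capacity rate per GiB/month

migrated_files = 1_000_000           # number of namespace items to migrate
transactions_per_file = 10           # assumption; varies with metadata and settings
migrated_gib = 500                   # total capacity landed in the target

# One-time cost driven by the item count, not by total size
transaction_cost = migrated_files * transactions_per_file / 10_000 * price_per_10k_transactions
# Recurring cost driven by consumed capacity
capacity_cost_per_month = migrated_gib * price_per_gib_month

print(round(transaction_cost, 2))
print(round(capacity_cost_per_month, 2))
```

Note how the transaction term depends only on the number of namespace items, which is why a migration of 1 GiB of small files generates more transaction charges than 1 GiB of large files.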
Storage transactions aren't billable for every Azure Storage type. Review the pr
If you've determined that your Azure storage product charges for transactions, it may be difficult to estimate the number generated by your migration.
-- It's not possible to estimate the number of transactions based on the utilized storage capacity of the source. The number of transactions scales with the number of namespace items (files and folder) and their properties that are migrated, not their size. For example, more transactions are required to migrate one GiB of small files than one GiB of larger files.
-- An empty Azure target requires fewer resources than a target which already contains items. To comply with your migration's settings, the Storage Mover agent will often need to enumerate a target's existing namespace. This enumeration increases the number of transactions.
+- It's not possible to estimate the number of transactions based on the utilized storage capacity of the source. The number of transactions scales with the number of namespace items (files and folder) and their properties that are migrated, not their size. For example, more transactions are required to migrate 1 GiB of small files than 1 GiB of larger files.
+- An empty Azure target requires fewer resources than a target that already contains items. To comply with your migration's settings, the Storage Mover agent often needs to enumerate a target's existing namespace. This enumeration increases the number of transactions.
- In order to minimize downtime, you may need to run copy operations several times between a source and its target. All source and target items are processed during each copy operation, though subsequent runs finish faster. After the initial operations, only the differences introduced between copy runs are transported over the network. It's important to understand that although less data is being transported, the number of transactions required may remain the same.
- Copying the same file twice might not result in the same number of transactions. Processing an item migrated in a previous copy run may result in only a few read transactions. In contrast, changes to metadata or content between copy runs may require a larger number of transactions to update the target. Each file in your namespace may have unique requirements, resulting in a different number of transactions.
### 3. Network usage
-Upload bandwidth is another factor that could affect overall cost. The bandwidth utilized by your migration carries the same charge as any other Azure-bound traffic. There's no Storage Mover-specific premium. Your specific network connection and provider agreements will determine whether upload charges are incurred.
+Upload bandwidth is another factor that could affect overall cost. The bandwidth utilized by your migration carries the same charge as any other Azure-bound traffic. There's no Storage Mover-specific premium. Your specific network connection and provider agreements determine whether upload charges are incurred.
## Next steps
-After understanding the billing implications of your cloud migration, it's a good idea to get more familiar with the Storage Mover service. Select an article below to learn more.
+After understanding the billing implications of your cloud migration, it's a good idea to get more familiar with the Storage Mover service. Select an article to learn more.
- [Understand file and folder cloud migration basics](migration-basics.md)
- [Learn about the Azure Storage Mover resource hierarchy](resource-hierarchy.md)
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
The following example renews a lease for a container:
You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can obtain a lease. You can release a lease by using the following method:
- [BlobLeaseClient.release](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-release)
-s
+ The following example releases the lease on a container: :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-containers.py" id="Snippet_release_container_lease":::
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Next, create an Azure VM running Linux to represent the on-premises server. When
1. Select **+ Create** and then **+ Azure virtual machine**.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose the default Ubuntu Server version for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing are dependent on your region and subscription.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose your Linux distribution for your **Image**. Leave the other defaults. The default size and pricing are only shown as an example. Size availability and pricing depend on your region and subscription.
:::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" alt-text="Screenshot showing how to enter the project and instance details to create a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" border="true":::
Now that you've created an NFS share, to use it you have to mount it on your Lin
:::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an N F S file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
-1. Select your Linux distribution (Ubuntu).
+1. Select your Linux distribution.
1. Using the ssh connection you created to your VM, enter the sample commands to use NFS and mount the file share.
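The portal generates the exact mounting script for your share; as a hedged sketch, the commands it produces typically look like the following — the storage account and share names here are placeholders, so substitute your own values from the portal.

```bash
# Placeholder names — use the script shown in the portal for your actual share
sudo mkdir -p /mount/mystorageaccount/myshare
sudo mount -t nfs mystorageaccount.file.core.windows.net:/mystorageaccount/myshare \
    /mount/mystorageaccount/myshare -o vers=4,minorversion=1,sec=sys
```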
When you're done, delete the resource group. Deleting the resource group deletes
## Next steps
> [!div class="nextstepaction"]
-> [Learn about using NFS Azure file shares](files-nfs-protocol.md)
+> [Learn about using NFS Azure file shares](files-nfs-protocol.md)
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
uname -r
* <a id="install-cifs-utils"></a>**Ensure the cifs-utils package is installed.** The cifs-utils package can be installed using the package manager on the Linux distribution of your choice.
- On **Ubuntu** and **Debian**, use the `apt` package
- ```bash
- sudo apt update
- sudo apt install cifs-utils
- ```
+# [Ubuntu](#tab/Ubuntu)
- On **Red Hat Enterprise Linux 8+** use the `dnf` package
+On Ubuntu and Debian, use the `apt` package
- ```bash
- sudo dnf install cifs-utils
- ```
+```bash
+sudo apt update
+sudo apt install cifs-utils
+```
+# [RHEL](#tab/RHEL)
- On older versions of **Red Hat Enterprise Linux** use the `yum` package
+The same applies to CentOS and Oracle Linux.
- ```bash
- sudo yum install cifs-utils
- ```
+On Red Hat Enterprise Linux 8+ use the `dnf` package
- On **SUSE Linux Enterprise Server**, use the `zypper` package
+```bash
+sudo dnf install cifs-utils
+```
- ```bash
- sudo zypper install cifs-utils
- ```
+On older versions of Red Hat Enterprise Linux use the `yum` package
+
+```bash
+sudo yum install cifs-utils
+```
+# [SLES](#tab/SLES)
+
+On SUSE Linux Enterprise Server, use the `zypper` package
+
+```bash
+sudo zypper install cifs-utils
+```
+
- On other distributions, use the appropriate package manager or [compile from source](https://wiki.samba.org/index.php/LinuxCIFS_utils#Download).
+On other distributions, use the appropriate package manager or [compile from source](https://wiki.samba.org/index.php/LinuxCIFS_utils#Download).
* **The most recent version of the Azure Command Line Interface (CLI).** For more information on how to install the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli) and select your operating system. If you prefer to use the Azure PowerShell module in PowerShell 6+, you may; however, the instructions in this article are for the Azure CLI.
sudo mount -a
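For a persistent mount picked up by `sudo mount -a`, an `/etc/fstab` entry typically looks like the following — the storage account, share, and credential-file paths are placeholders to replace with your own:

```bash
# /etc/fstab — illustrative entry; replace the account, share, and credential paths
//mystorageaccount.file.core.windows.net/myshare /mnt/myshare cifs nofail,credentials=/etc/smbcredentials/mystorageaccount.cred,serverino,nosharesock,actimeo=30 0 0
```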
### Dynamically mount with autofs To dynamically mount a file share with the `autofs` utility, install it using the package manager on the Linux distribution of your choice.
-On **Ubuntu** and **Debian** distributions, use the `apt` package
+# [Ubuntu](#tab/Ubuntu)
+
+On Ubuntu and Debian distributions, use the `apt` package
```bash
sudo apt update
sudo apt install autofs
```
+# [RHEL](#tab/RHEL)
+
+The same applies to CentOS and Oracle Linux.
-On **Red Hat Enterprise Linux 8+**, use the `dnf` package
+On Red Hat Enterprise Linux 8+, use the `dnf` package
```bash
sudo dnf install autofs
```
-On older versions of **Red Hat Enterprise Linux**, use the `yum` package
+On older versions of Red Hat Enterprise Linux, use the `yum` package
```bash
sudo yum install autofs
```
-On **SUSE Linux Enterprise Server**, use the `zypper` package
+# [SLES](#tab/SLES)
+
+On SUSE Linux Enterprise Server, use the `zypper` package
```bash
sudo zypper install autofs
```

Next, update the `autofs` configuration files.
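An `autofs` setup usually consists of a master map entry plus a map file. As an illustrative sketch — the storage account, share, mount path, and credential file below are all placeholders:

```bash
# /etc/auto.master.d/fileshares.autofs — illustrative master map entry
/fileshares /etc/auto.fileshares --timeout=60

# /etc/auto.fileshares — one line per share; all names are placeholders
myshare -fstype=cifs,credentials=/etc/smbcredentials/mystorageaccount.cred ://mystorageaccount.file.core.windows.net/myshare
```

After editing the maps, restart the service (for example, `sudo systemctl restart autofs`) so they're reloaded.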
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Incorrect network configuration is often the cause of this behavior. Make sure t
Finally, make sure the appropriate roles are granted and have not been revoked.
+### Unable to create new database as the request will use the old/expired key
+
+This error is caused by changing the workspace customer-managed key used for encryption. You can choose to re-encrypt all the data in the workspace with the latest version of the active key. To re-encrypt, change the key in the Azure portal to a temporary key and then switch back to the key you wish to use for encryption. Learn how to [manage the workspace keys](../security/workspaces-encryption.md#manage-the-workspace-customer-managed-key).
+ ### Synapse serverless SQL pool is unavailable after transferring a subscription to a different Azure AD tenant
If you moved a subscription to another Azure AD tenant, you might experience some issues with serverless SQL pool. Create a support ticket and Azure support will contact you to resolve the issue.
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
# Set up Start VM on Connect
-Start VM On Connect lets you reduce costs by enabling end users to turn on their session host virtual machines (VMs) only when they need them. You can them turn off VMs when they're not needed.
+Start VM On Connect lets you reduce costs by enabling end users to turn on their session host virtual machines (VMs) only when they need them. You can then turn off VMs when they're not needed.
You can configure Start VM on Connect for personal or pooled host pools using the Azure portal or PowerShell. Start VM on Connect is a host pool setting.
virtual-network Subnet Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/subnet-extension.md
Title: Subnet extension in Azure+ description: Learn about subnet extension in Azure.- -
-tags: azure-resource-manager
Previously updated : 10/31/2019
Last updated : 04/06/2023
# Subnet extension
-Workload migration to the public cloud requires careful planning and coordination. One of the key considerations can be the ability to retain your IP addresses. Which can be important especially if your applications have IP address dependency or you have compliance requirements to use specific IP addresses. Azure Virtual Network solves this problem for you by allowing you to create VNet and Subnets using an IP address range of your choice.
-Migrations can get a bit challenging when the above requirement is coupled with an additional requirement to keep some applications on-premises. In such as a situation, you'll have to split the applications between Azure and on-premises, without renumbering the IP addresses on either side. Additionally, you'll have to allow the applications to communicate as if they are in the same network.
+Workload migration to the public cloud requires careful planning and coordination. One of the key considerations is the ability to retain your IP addresses, which can be especially important if your applications have IP address dependencies or you have compliance requirements to use specific IP addresses. Azure Virtual Network solves this problem for you by allowing you to create virtual networks and subnets using an IP address range of your choice.
+
+Migrations can get a bit challenging when the above requirement is coupled with an extra requirement to keep some applications on-premises. In such a situation, you have to split the applications between Azure and on-premises, without renumbering the IP addresses on either side. Additionally, you have to allow the applications to communicate as if they are in the same network.
One solution to the above problem is subnet extension. Extending a network allows applications to talk over the same broadcast domain when they exist at different physical locations, removing the need to rearchitect your network topology.
-While extending your network isn't a good practice in general, below use cases can make it necessary.
+While extending your network isn't a good practice in general, the following use cases can make it necessary.
- **Phased Migration**: The most common scenario is that you want to phase your migration. You want to bring a few applications first and over time migrate the rest of the applications to Azure.
- **Latency**: Low latency requirements can be another reason for you to keep some applications on-premises to ensure that they're as close as possible to your datacenter.
- **Compliance**: Another use case is that you might have compliance requirements to keep some of your applications on-premises.
> [!NOTE]
While extending your network isn't a good practice in general, below use cases c
In the next section, we'll discuss how you can extend your subnets into Azure.
## Extend your subnet to Azure
- You can extend your on-premises subnets to Azure using a layer-3 overlay network based solution. Most solutions use an overlay technology such as VXLAN to extend the layer-2 network using an layer-3 overlay network. The diagram below shows a generalized solution. In this solution, the same subnet exists on both sides that is, Azure and on-premises.
-![Subnet Extension Example](./media/subnet-extension/subnet-extension.png)
+ You can extend your on-premises subnets to Azure using a layer-3 overlay network based solution. Most solutions use an overlay technology such as VXLAN to extend the layer-2 network using a layer-3 overlay network. The following diagram shows a generalized solution. In this solution, the same subnet exists on both sides, that is, Azure and on-premises.
+ The IP addresses from the subnet are assigned to VMs on Azure and on-premises. Both Azure and on-premises have an NVA inserted in their networks. When a VM in Azure tries to talk to a VM in on-premises network, the Azure NVA captures the packet, encapsulates it, and sends it over VPN/Express Route to the on-premises network. The on-premises NVA receives the packet, decapsulates it and forwards it to the intended recipient in its network. The return traffic uses a similar path and logic.
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
This section explains the route selection algorithm in a virtual hub along with
**Things to note:** * When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
+* For a given set of destination route-prefixes, if the ExpressRoute routes are preferred and the ExpressRoute connection subsequently goes down, then routes from S2S VPN or SD-WAN NVA connections will be preferred for traffic destined to the same route-prefixes. When the ExpressRoute connection is restored, traffic destined for these route-prefixes will continue to prefer the S2S VPN or SD-WAN NVA connections.
## Routing scenarios
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the
You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
-If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+There are several limitations with the virtual hub router upgrade:
+
+* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+
+* If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a different region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks. To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (for example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement.
+
+* Your Virtual WAN hub router cannot currently be upgraded if you have a network virtual appliance in the virtual hub. We are actively working on removing this limitation.
+
+* If your Virtual WAN hub is connected to more than 100 spoke virtual networks, then the upgrade may fail.
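If you need to look up the hub router's current IP addresses mentioned in the limitations above, one way is to query the hub's resource JSON with the Azure CLI — a hedged sketch, assuming the `virtual-wan` CLI extension is installed and using placeholder resource names:

```bash
# Placeholder names; prints the "virtualRouterIps" field from the hub's resource JSON
az network vhub show --resource-group myResourceGroup --name myVirtualHub \
    --query virtualRouterIps --output tsv
```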
If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
The following features are currently in gated public preview. After working with
|#|Issue|Description |Date first reported|Mitigation|
||||||
-|1|Virtual hub router upgrade: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to Virtual Machine Scale Sets.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to Virtual Machine Scale Sets, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
-|2|Virtual hub router upgrade: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.|
+|1|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to Virtual Machine Scale Sets.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to Virtual Machine Scale Sets, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
+|2|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.|
+|3|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with spoke VNets in different regions |If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a different region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks after upgrading your hub router to VMSS-based infrastructure.|March 2023|To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (for example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement. |
+|4|Virtual hub upgrade to VMSS-based infrastructure: Compatibility with more than 100 spoke VNets |If your Virtual WAN hub is connected to more than 100 spoke VNets, then the upgrade may time out, causing your virtual hub to remain on Cloud Services-based infrastructure.|March 2023|The Virtual WAN team is working on a fix to support upgrades when there are more than 100 spoke VNets connected.|
## Next steps
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-howto.md
Configure a local network gateway with BGP settings.
* Name: Site5
* IP address: The IP address of the gateway endpoint you want to connect to. Example: 128.9.9.9
- * Address spaces: the address spaces on the on-premises site to which you want to route.
+ * Address spaces: If BGP is enabled, no address space is required.
1. To configure BGP settings, go to the **Advanced** page. Use the following example values (shown in Diagram 3). Modify any values necessary to match your environment.
   * Configure BGP settings: Yes
   * Autonomous system number (ASN): 65050
- * BGP peer IP address: The address that you noted in previous steps.
+ * BGP peer IP address: The address of the on-premises VPN device. Example: 10.51.255.254
1. Click **Review + create** to create the local network gateway.
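The same local network gateway can also be created with the Azure CLI. A hedged sketch using the example values above — the resource group name is a placeholder:

```bash
# Values match the portal example above; the resource group name is a placeholder
az network local-gateway create --resource-group myResourceGroup --name Site5 \
    --gateway-ip-address 128.9.9.9 --asn 65050 --bgp-peering-address 10.51.255.254
```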
For context, referring to **Diagram 4**, if BGP were to be disabled between Test
## Next steps
-For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
+For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Here's an example JSON description of the custom rule:
### Size constraint
-Front Door's WAF enables you to build custom rules that apply a length or size constraint on a part of an incoming request.
+Front Door's WAF enables you to build custom rules that apply a length or size constraint on a part of an incoming request. This size constraint is measured in bytes.
Suppose you need to block requests where the URL is longer than 100 characters.
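Because the constraint is evaluated in bytes, a URL's character count and its byte count can differ when multi-byte characters are present. A quick illustration in plain Python (not WAF configuration):

```python
# A URL containing multi-byte UTF-8 characters ('é' encodes to 2 bytes)
url = "/search?q=" + "café" * 25

print(len(url))                   # characters: 10 + 4 * 25 = 110
print(len(url.encode("utf-8")))   # bytes: 10 + 5 * 25 = 135
```

A size constraint of 100 configured against this URL would therefore match on the byte count (135), even though a character-based check would report a different length (110).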