Updates from: 08/21/2021 03:09:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-overview.md
Last updated 04/08/2021 + # Azure AD B2C custom policy overview
Azure AD B2C custom policy [starter pack](tutorial-create-user-flows.md?pivots=b
- **SocialAndLocalAccounts** - Enables the use of both local and social accounts. Most of our samples refer to this policy.
- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.
+In the [Azure AD B2C samples GitHub repository](https://github.com/azure-ad-b2c/samples), you'll find samples for several enhanced Azure AD B2C custom CIAM user journeys, such as local account policy enhancements, social account policy enhancements, MFA enhancements, user interface enhancements, generic enhancements, app migration, user migration, conditional access, web test, and CI/CD.
+
## Understanding the basics

### Claims
After you set up and test your Azure AD B2C policy, you can start customizing yo
- [Localize the user interface](./language-customization.md) of your application using a custom policy. Learn how to set up the list of supported languages, and provide language-specific labels, by adding the localized resources element.
- During your policy development and testing, you can [disable email verification](./disable-email-verification.md). Learn how to overwrite technical profile metadata.
- [Set up sign-in with a Google account](./identity-provider-google.md) using custom policies. Learn how to create a new claims provider with an OAuth2 technical profile. Then customize the user journey to include the Google sign-in option.
-- To diagnose problems with your custom policies you can [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md). Learn how to add new technical profiles, and configure your relying party policy.
+- To diagnose problems with your custom policies you can [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md). Learn how to add new technical profiles, and configure your relying party policy.
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
The following prerequisites are required to use these cmdlets.
|PasswordHashSync|See [PasswordHashSync](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-password-hash-synchronization) permissions for Azure AD Connect|
|PasswordWriteBack|See [PasswordWriteBack](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-password-writeback) permissions for Azure AD Connect|
|HybridExchangePermissions|See [HybridExchangePermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-hybrid-deployment) permissions for Azure AD Connect|
-|ExchangeMailPublicFolderPermissions| See [ExchangeMailPublicFolderPermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-mail-public-folders-preview) permissions for Azure AD Connect|
+|ExchangeMailPublicFolderPermissions| See [ExchangeMailPublicFolderPermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-mail-public-folders) permissions for Azure AD Connect|
|CloudHR| Applies 'Full control' on 'Descendant User objects' and 'Create/delete User objects' on 'This object and all descendant objects'|
|All|Adds all the above permissions.|
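As a minimal sketch of applying one of these permission sets with the cmdlets this article covers (the `-PermissionType` and `-TargetDomain` parameter names and the domain value here are illustrative assumptions; check the cmdlet reference in this article):

``` powershell
# Sketch: grant a permission set from the table above to the cloud sync gMSA.
# -PermissionType takes the values listed in the table (for example,
# PasswordHashSync, CloudHR, or All); the domain name is a placeholder.
Set-AADCloudSyncPermissions -PermissionType "All" -TargetDomain "contoso.com"
```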
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
For customers who are using previous version of Azure AD login for Linux that wa
## Using Azure Policy to ensure standards and assess compliance
-Use Azure policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that do not have Azure AD login enabled. You can also use Azure policy to deploy the Azure AD extension on new Linux VMs that do not have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure policy](https://www.aka.ms/AzurePolicy).
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that do not have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that do not have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
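One way to remediate flagged machines is to push the Azure AD login VM extension with Az PowerShell. The following is a minimal sketch: the resource names are placeholders, and the extension name and version shown (`AADSSHLoginForLinux`, publisher `Microsoft.Azure.ActiveDirectory`) should be verified against your scenario.

``` powershell
# Sketch: deploy the Azure AD SSH login extension to an existing Linux VM.
# Resource group and VM names are placeholders.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myLinuxVM" `
    -Name "AADSSHLoginForLinux" `
    -Publisher "Microsoft.Azure.ActiveDirectory" `
    -ExtensionType "AADSSHLoginForLinux" `
    -TypeHandlerVersion "1.0"
```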
## Troubleshoot sign-in issues
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
You are now signed in to the Windows Server 2019 Azure virtual machine with the
## Using Azure Policy to ensure standards and assess compliance
-Use Azure policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines and assess compliance of your environment at scale on your Azure policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that do not have Azure AD login enabled. You can also use Azure policy to deploy the Azure AD extension on new Windows VMs that do not have Azure AD login enabled, as well as remediate existing Windows VMs to the same standard. In addition to these capabilities, you can also use policy to detect and flag Windows VMs that have non-approved local accounts created on their machines. To learn more, review [Azure policy](https://www.aka.ms/AzurePolicy).
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that do not have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that do not have Azure AD login enabled, as well as remediate existing Windows VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Windows VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
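On the assessment side, here is a minimal sketch that pulls non-compliant resources for a policy assignment, assuming the Az.PolicyInsights module; the assignment name is a placeholder for one you created from the built-in definitions.

``` powershell
# Sketch: list resources that a policy assignment has flagged as non-compliant.
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Where-Object PolicyAssignmentName -eq "require-aad-login-windows" |
    Select-Object ResourceId, PolicyDefinitionName, ComplianceState
```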
## Troubleshoot
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 8/19/2021 Last updated : 8/20/2021
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on August 19th, 2021.
+>This information last updated on August 20th, 2021.
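To cross-check this table against the SKUs actually present in a tenant, here is a minimal sketch using Microsoft Graph PowerShell; the SKU string ID used in the filter is just an example taken from the table below.

``` powershell
# Sketch: list the string IDs and GUIDs of the SKUs in your tenant,
# then expand the service plans for one product from the table below.
Connect-MgGraph -Scopes "Organization.Read.All"
Get-MgSubscribedSku | Select-Object SkuPartNumber, SkuId
(Get-MgSubscribedSku | Where-Object SkuPartNumber -eq "POWERAPPS_VIRAL").ServicePlans
```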
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL(f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | | MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | | MICROSOFT INTUNE DEVICE FOR GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
-| MICROSOFT POWER APPS PLAN 2 TRIAL | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | COMMON DATA SERVICE – VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW P2 VIRAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS TRIAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
+| Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service – VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
| MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/> EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/> MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Plan 2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d)<br/>PowerApps Plan 2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81) | | Microsoft Power Automate Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | Common Data Service ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | | Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Office 365 A3 for faculty | ENTERPRISEPACKPLUS_FACULTY | e578b273-6db4-4691-bba0-8d691f4da603 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2(94a54592-cd8b-425e-87c6-97868b000b91)<br/> YAMMER_EDU(2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 
(041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing 
(3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A5 for students | ENTERPRISEPREMIUM_STUDENT | ee656612-49fa-43e5-b67e-cb1fdf7699df | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 
(96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 
365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 Advanced Compliance | EQUIVIO_ANALYTICS | 1b1b1f7a-8355-43b6-829f-336cfccb744c | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) | | Microsoft Defender for Office 365 (Plan 1) | ATP_ENTERPRISE | 4ef96642-f096-40de-a3e9-d83fb2f90211 | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | | Office 365 Extra File Storage for GCC | SHAREPOINTSTORAGE_GOV | e5788282-6381-469f-84f0-3d7d4021d34d | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTSTORAGE_GOV (e5bb877f-6ac9-4461-9e43-ca581543ab16) | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTSTORAGE_GOV (e5bb877f-6ac9-4461-9e43-ca581543ab16) |
+| Microsoft Teams Commercial Cloud | TEAMS_COMMERCIAL_TRIAL | 29a2f828-8f39-4837-b8ff-c957e86abe3c | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service for Teams_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Office 365 Extra File Storage | SHAREPOINTSTORAGE | 99049c9c-6011-4908-bf17-15f496e6519d | SHAREPOINTSTORAGE (be5a7ed5-c598-4fcd-a061-5e6724c68a58) | Office 365 Extra File Storage (be5a7ed5-c598-4fcd-a061-5e6724c68a58) | | OFFICE 365 E1 | STANDARDPACK | 18181a46-0d4e-45cd-891e-60aabd171b4e | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)) | | OFFICE 365 E2 | STANDARDWOFFPACK | 6634e0ce-1a9f-428c-a498-f84ec7b8aa2e | BPOS_S_TODO_1(5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD 
(c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Previously updated : 07/07/2021 Last updated : 08/20/2021
Managing security can be difficult with common identity-related attacks like password spray, replay, and phishing becoming more popular. Security defaults make it easier to help protect your organization from these attacks with preconfigured security settings:

- Requiring all users to register for Azure AD Multi-Factor Authentication.
-- Requiring administrators to perform multi-factor authentication.
+- Requiring administrators to do multi-factor authentication.
- Blocking legacy authentication protocols.
-- Requiring users to perform multi-factor authentication when necessary.
+- Requiring users to do multi-factor authentication when necessary.
- Protecting privileged activities like access to the Azure portal.

![Screenshot of the Azure portal with the toggle to enable security defaults](./media/concept-fundamentals-security-defaults/security-defaults-azure-ad-portal.png)
More details on why security defaults are being made available can be found in A
## Availability
-Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal. If your tenant was created on or after October 22, 2019, it is possible security defaults are already enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants created.
+Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal. If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to new tenants at creation.
### Who's it for?

-- If you are an organization that wants to increase your security posture but you don't know how or where to start, security defaults are for you.
-- If you are an organization utilizing the free tier of Azure Active Directory licensing, security defaults are for you.
+- If you're an organization that wants to increase your security posture but you don't know how or where to start, security defaults are for you.
+- If you're an organization using the free tier of Azure Active Directory licensing, security defaults are for you.
### Who should use Conditional Access?

-- If you are an organization currently using Conditional Access policies to bring signals together, to make decisions, and enforce organizational policies, security defaults are probably not right for you.
-- If you are an organization with Azure Active Directory Premium licenses, security defaults are probably not right for you.
+- If you're an organization currently using Conditional Access policies to bring signals together, to make decisions, and enforce organizational policies, security defaults are probably not right for you.
+- If you're an organization with Azure Active Directory Premium licenses, security defaults are probably not right for you.
- If your organization has complex security requirements, you should consider Conditional Access.

## Policies enforced

### Unified Multi-Factor Authentication registration
-All users in your tenant must register for multi-factor authentication (MFA) in the form of the Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app. After the 14 days have passed, the user won't be able to sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
+All users in your tenant must register for multi-factor authentication (MFA) in the form of Azure AD Multi-Factor Authentication. Users have 14 days to register for Azure AD Multi-Factor Authentication by using the Microsoft Authenticator app. After the 14 days have passed, the user can't sign in until registration is completed. A user's 14-day period begins after their first successful interactive sign-in after enabling security defaults.
### Protecting administrators
-Users with privileged access have increased access to your environment. Due to the power these accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification for sign-in. In Azure AD, you can get a stronger account verification by requiring multi-factor authentication.
+Users with privileged access have increased access to your environment. Because of the power these accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification for sign-in. In Azure AD, you can get a stronger account verification by requiring multi-factor authentication.
-After registration with Azure AD Multi-Factor Authentication is finished, the following nine Azure AD administrator roles will be required to perform additional authentication every time they sign in:
+After registration with Azure AD Multi-Factor Authentication is finished, the following nine Azure AD administrator roles will be required to do extra authentication every time they sign in:
- Global administrator
- SharePoint administrator
After registration with Azure AD Multi-Factor Authentication is finished, the fo
We tend to think that administrator accounts are the only accounts that need extra layers of authentication. Administrators have broad access to sensitive information and can make changes to subscription-wide settings. But attackers frequently target end users.
-After these attackers gain access, they can request access to privileged information on behalf of the original account holder. They can even download the entire directory to perform a phishing attack on your whole organization.
+After these attackers gain access, they can request access to privileged information for the original account holder. They can even download the entire directory to do a phishing attack on your whole organization.
-One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for additional authentication whenever necessary. Users will be prompted primarily when they authenticate using a new device or application, or when performing critical roles and tasks. This functionality protects all applications registered with Azure AD including SaaS applications.
+One common method to improve protection for all users is to require a stronger form of account verification, such as Multi-Factor Authentication, for everyone. After users complete Multi-Factor Authentication registration, they'll be prompted for another authentication whenever necessary. Users will be prompted primarily when they authenticate using a new device or application, or when doing critical roles and tasks. This functionality protects all applications registered with Azure AD including SaaS applications.
### Blocking legacy authentication
To give your users easy access to your cloud apps, Azure AD supports various aut
- Clients that don't use modern authentication (for example, an Office 2010 client).
- Any client that uses older mail protocols such as IMAP, SMTP, or POP3.
-Today, most compromising sign-in attempts come from legacy authentication. Legacy authentication does not support Multi-Factor Authentication. Even if you have a Multi-Factor Authentication policy enabled on your directory, an attacker can authenticate by using an older protocol and bypass Multi-Factor Authentication.
+Today, most compromising sign-in attempts come from legacy authentication. Legacy authentication doesn't support Multi-Factor Authentication. Even if you have a Multi-Factor Authentication policy enabled on your directory, an attacker can authenticate by using an older protocol and bypass Multi-Factor Authentication.
After security defaults are enabled in your tenant, all authentication requests made by an older protocol will be blocked. Security defaults block Exchange ActiveSync basic authentication.
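Before relying on the block, it can help to see which sign-ins still use legacy clients. Here is a minimal sketch using Microsoft Graph PowerShell sign-in logs; the client type in the filter is one example value.

``` powershell
# Sketch: find recent sign-ins that used a legacy authentication client.
Connect-MgGraph -Scopes "AuditLog.Read.All"
Get-MgAuditLogSignIn -Filter "clientAppUsed eq 'Exchange ActiveSync'" -Top 25 |
    Select-Object UserPrincipalName, AppDisplayName, ClientAppUsed
```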
Organizations use various Azure services managed through the Azure Resource Mana
Using Azure Resource Manager to manage your services is a highly privileged action. Azure Resource Manager can alter tenant-wide configurations, such as service settings and subscription billing. Single-factor authentication is vulnerable to various attacks like phishing and password spray.
-It's important to verify the identity of users who want to access Azure Resource Manager and update configurations. You verify their identity by requiring additional authentication before you allow access.
+It's important to verify the identity of users who want to access Azure Resource Manager and update configurations. You verify their identity by requiring more authentication before you allow access.
-After you enable security defaults in your tenant, any user who's accessing the Azure portal, Azure PowerShell, or the Azure CLI will need to complete additional authentication. This policy applies to all users who are accessing Azure Resource Manager, whether they're an administrator or a user.
+After you enable security defaults in your tenant, any user who's accessing the Azure portal, Azure PowerShell, or the Azure CLI will need to complete more authentication. This policy applies to all users who are accessing Azure Resource Manager, whether they're an administrator or a user.
> [!NOTE]
> Pre-2017 Exchange Online tenants have modern authentication disabled by default. In order to avoid the possibility of a login loop while authenticating through these tenants, you must [enable modern authentication](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online).
After you enable security defaults in your tenant, any user who's accessing the
## Deployment considerations
-The following additional considerations are related to deployment of security defaults.
+The following extra considerations are related to deployment of security defaults.
### Authentication methods
These free security defaults allow registration and use of Azure AD Multi-Factor
### Disabled MFA status
-If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, do not be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.
+If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, don't be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.
### Conditional Access
-You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which are not available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. If you have a license that provides Conditional Access but don't have any Conditional Access policies enabled in your environment, you are welcome to use security defaults until you enable Conditional Access policies. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access and have Conditional Access policies enabled in your environment, security defaults won't be available to you. If you have a license that provides Conditional Access but don't have any Conditional Access policies enabled in your environment, you're welcome to use security defaults until you enable Conditional Access policies. More information about Azure AD licensing can be found on the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
![Warning message that you can have security defaults or Conditional Access not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png)
-Here are step-by-step guides on how you can use Conditional Access to configure equivalent policies to those policies enabled by security defaults:
+Here are step-by-step guides on how you can use Conditional Access to configure a set of policies, which form a good starting point for protecting your identities:
- [Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
- [Require MFA for Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
- [Block legacy authentication](../conditional-access/howto-conditional-access-policy-block-legacy.md)
- [Require MFA for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
-- [Require Azure AD MFA registration](../identity-protection/howto-identity-protection-configure-mfa-policy.md) - Requires Azure AD Identity Protection, part of Azure AD Premium P2.

## Enabling security defaults
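Outside the portal, the same setting can be read or changed through the Microsoft Graph `identitySecurityDefaultsEnforcementPolicy` resource. A minimal sketch, assuming the Microsoft Graph PowerShell SDK and the Policy.ReadWrite.ConditionalAccess permission:

``` powershell
# Sketch: enable security defaults via Microsoft Graph.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy" `
    -Body (@{ isEnabled = $true } | ConvertTo-Json) `
    -ContentType "application/json"
```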
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Previously updated : 04/21/2021 Last updated : 08/20/2021
This cmdlet will set the following permissions:
|Allow |AD DS Connector Account |Read/Write all properties |Descendant Group objects|
|Allow |AD DS Connector Account |Read/Write all properties |Descendant Contact objects|
-### Permissions for Exchange Mail Public Folders (Preview)
+### Permissions for Exchange Mail Public Folders
To set permissions for the AD DS Connector account when using the Exchange Mail Public Folders feature, run:

``` powershell
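# A sketch of the call this section documents (ADSyncConfig module); the
# parameter names are assumptions patterned on the other permission cmdlets
# in this article, with placeholder values.
Set-ADSyncExchangeMailPublicFolderPermissions -ADConnectorAccountName "ADConnectorAccount" -ADConnectorAccountDomain "contoso.com"
```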
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Topic | Details
--- | ---
Steps to upgrade from Azure AD Connect | Different methods to [upgrade from a previous version to the latest](how-to-upgrade-previous-version.md) Azure AD Connect release.
Required permissions | For permissions required to apply an update, see [accounts and permissions](reference-connect-accounts-permissions.md#upgrade).
-Download| [Download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771).
>[!NOTE]
>Releasing a new version of Azure AD Connect is a process that requires several quality control steps to ensure the operational functionality of the service. While we go through this process, the version number of a new release, as well as the release status, will be updated to reflect the most recent state.
Please follow this link to read more about [auto upgrade](how-to-connect-install
>
>For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
+## Download links
+If you are using Windows Server 2016 or newer, you should use Azure AD Connect V2.0. You can download the latest version of Azure AD Connect 2.0 using [this link](https://www.microsoft.com/en-us/download/details.aspx?id=47594).
+If you are still using an older version of Windows Server, you should use Azure AD Connect V1.6. You can download the latest version of Azure AD Connect 1.6 using [this link](https://www.microsoft.com/download/details.aspx?id=103336).
+
## 2.0.10.0
>[!NOTE]
There are no functional changes in this release
>[!NOTE]
>This is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you are using an older version of Windows Server, please use [version 1.6.11.3](#16113).
>This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, please refer to the CVE.
->You can download this release using [this link](https://www.microsoft.com/en-us/download/details.aspx?id=47594).
+>You can download the latest version of Azure AD Connect 2.0 using [this link](https://www.microsoft.com/en-us/download/details.aspx?id=47594).
### Release status
8/10/2021: Released for download only, not available for auto upgrade.
There are no functional changes in this release
>[!NOTE]
>This is a security update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
>This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, please refer to the CVE.
->You can download this release using [this link](https://www.microsoft.com/download/details.aspx?id=103336)
+>You can download the latest version of Azure AD Connect 1.6 using [this link](https://www.microsoft.com/download/details.aspx?id=103336)
### Release status
8/10/2021: Released for download only, not available for auto upgrade.
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following list to configure managed identity for Azure Digital Twin
Managed identity type |All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet | | | :-: | :-: | :-: | :-: | | System assigned | Preview | Preview | Not available | Preview |
-| User assigned | Not available | Not available | Not available | Not available |
+| User assigned | Preview | Preview | Not available | Preview |
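+As a quick sketch of enabling the system-assigned option above with the Azure CLI: the instance and resource group names are placeholders, and the identity flag assumes a recent version of the `az dt` (Azure Digital Twins) extension.
+
+```azurecli-interactive
+# Illustrative: create a Digital Twins instance with a system-assigned managed identity.
+az dt create \
+  --dt-name MyDigitalTwinsInstance \
+  --resource-group MyResourceGroup \
+  --mi-system-assigned
+```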
### Azure Firewall Policy
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
> You can use Managed Identities to authenticate an [Azure Stream analytics job to Power BI](../../stream-analytics/powerbi-output-managed-identity.md).
-[check]: media/services-support-managed-identities/check.png "Available"
+[check]: media/services-support-managed-identities/check.png "Available"
active-directory Maptician Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maptician-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Maptician for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Maptician.
++
+writer: twimmers
+
+ms.assetid: 15ae5ceb-2113-40f8-8d3f-bf8895ef8f42
++++ Last updated : 08/11/2021+++
+# Tutorial: Configure Maptician for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Maptician and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Maptician](https://maptician.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Maptician
+> * Remove users in Maptician when they no longer require access
+> * Keep user attributes synchronized between Azure AD and Maptician
+> * [Single sign-on](maptician-tutorial.md) to Maptician (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A [Maptician](https://maptician.com/) tenant.
+* A user account in Maptician with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Maptician](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Maptician to support provisioning with Azure AD
++
+You can begin the process of connecting your Maptician environment to Azure AD provisioning and single sign-on (SSO) by contacting the Maptician support team at <support@maptician.com> or by working directly with your Maptician account manager. You will be provided with a document that contains your **Tenant URL** and **Secret Token**. Maptician support team members can assist you with setting up this integration and are available to answer any questions about its configuration or use.
+
+## Step 3. Add Maptician from the Azure AD application gallery
+
+Add Maptician from the Azure AD application gallery to start managing provisioning to Maptician. If you have previously set up Maptician for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Maptician, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Maptician
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Maptician based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Maptician in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Maptician**.
+
+ ![The Maptician link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Maptician **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Maptician. If the connection fails, ensure your Maptician account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Maptician**.
+
+1. Review the user attributes that are synchronized from Azure AD to Maptician in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Maptician for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Maptician API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |emails[type eq "work"].value|String|
+ |active|Boolean|
+ |title|String|
+ |userType|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |addresses[type eq "work"].locality|String|
+ |addresses[type eq "work"].region|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Maptician, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Maptician by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully (a query sketch follows this list)
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
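+As a hedged sketch of pulling the provisioning logs programmatically: Microsoft Graph exposes them on a beta endpoint that can be called with `az rest`; the job ID filter value is a placeholder.
+
+```azurecli-interactive
+# Illustrative: query Azure AD provisioning logs through Microsoft Graph (beta).
+az rest --method get \
+  --url "https://graph.microsoft.com/beta/auditLogs/provisioning?\$filter=jobId eq '<provisioning-job-id>'"
+```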
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-create-probe-portal.md
Probes are configured in a two-step process through the portal. The first step i
||||
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.|
|**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. |
- |**Host**|i.e contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\>|
+ |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\>. This can also be the private IP address of the server, the public IP address, or the DNS entry of the public IP address. When used with a file-based path entry, the probe attempts to access the server and validates that a specific file exists on the server as a health check.|
|**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated to. Specially required in case of multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)| |**Pick port from backend HTTP settings**| Yes or No|Sets the *port* of the health probe to the port from HTTP settings to which this probe is associated to. If you choose no, you can enter a custom destination port to use | |**Port**| 1-65535 | Custom port to be used for the health probes |
- |**Path**|/ or any valid path|The remainder of the full url for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com just use '/' |
+ |**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also enter a server path to a file for a static health check instead of a web-based one. Use file paths when the host entry is a public or private IP address, or the DNS entry of a public IP address.|
|**Interval (secs)**|30|How often the probe is run to check for health. It is not recommended to set this lower than 30 seconds.|
|**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response is not received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. Note that the time-out value should not be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting which will be associated with this probe.|
|**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 1 or more.|
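As a sketch, an equivalent custom probe can also be created outside the portal with the Azure CLI; the gateway and resource group names are placeholders, and the values mirror the table above.

```azurecli-interactive
# Illustrative: create a custom health probe with the settings from the table above.
az network application-gateway probe create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name customProbe \
  --protocol Http \
  --host contoso.com \
  --path / \
  --interval 30 \
  --timeout 30 \
  --threshold 3
```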
Probes are configured in a two-step process through the portal. The first step i
||||
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.|
|**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. |
- |**Host**|i.e contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to (protocol)://(host name):(port from httpsetting)/urlPath. This is applicable when multi-site is configured on Application Gateway. If the Application Gateway is configured for a single site, then enter '127.0.0.1'.|
+ |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to (protocol)://(host name):(port from httpsetting)/urlPath. This is applicable when multi-site is configured on Application Gateway. If the Application Gateway is configured for a single site, then enter '127.0.0.1'. You can also enter a server path to a file for a static health check instead of a web-based one. Use file paths when the host entry is a public or private IP address, or the DNS entry of a public IP address.|
|**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the back-end resource in the back-end pool associated with the HTTP Setting to which this probe is associated to. Specially required in case of multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
- |**Path**|/ or any valid path|The remainder of the full url for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com just use '/' |
+ |**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also enter a server path to a file for a static health check instead of a web-based one. Use file paths when the host entry is a public or private IP address, or the DNS entry of a public IP address.|
|**Interval (secs)**|30|How often the probe is run to check for health. It is not recommended to set this lower than 30 seconds.|
|**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response is not received within this time-out period, the probe is marked as failed. The timeout interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. Note that the time-out value should not be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting which will be associated with this probe.|
|**Unhealthy threshold**|3|Number of consecutive failed attempts to be considered unhealthy. The threshold can be set to 1 or more.|
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
To install and configure a Linux Hybrid Runbook Worker, perform the following st
2. Deploy the Log Analytics agent to the target machine.
- * For Azure VMs, install the Log Analytics agent for Linux using the [virtual machine extension for Linux](../virtual-machines/extensions/oms-linux.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, the Azure CLI, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
+ * For Azure VMs, install the Log Analytics agent for Linux using the [virtual machine extension for Linux](../virtual-machines/extensions/oms-linux.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, the Azure CLI, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
* For non-Azure machines, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md). Arc enabled servers support deploying the Log Analytics agent using the following methods:
To install and configure a Linux Hybrid Runbook Worker, perform the following st
- Using Azure Policy.
- Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
+ Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
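As a hedged sketch of that recommendation, an assignment might look like the following with the Azure CLI; the scope, location, and display-name lookup are illustrative, and a DeployIfNotExists definition also needs a managed identity and typically a workspace parameter supplied through `--params`.

```azurecli-interactive
# Illustrative: look up the built-in definition by display name and assign it.
definition=$(az policy definition list \
  --query "[?displayName=='Deploy Log Analytics agent for Linux VMs'].name" -o tsv)

az policy assignment create \
  --name deploy-la-agent-linux \
  --policy "$definition" \
  --resource-group MyHybridWorkersRG \
  --assign-identity \
  --location eastus
```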
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
To install and configure a Windows Hybrid Runbook Worker, perform the following
1. Deploy the Log Analytics agent to the target machine.
- * For Azure VMs, install the Log Analytics agent for Windows using the [virtual machine extension for Windows](../virtual-machines/extensions/oms-windows.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, PowerShell, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
+ * For Azure VMs, install the Log Analytics agent for Windows using the [virtual machine extension for Windows](../virtual-machines/extensions/oms-windows.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, PowerShell, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
* For non-Azure machines, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md). Arc enabled servers support deploying the Log Analytics agent using the following methods:
To install and configure a Windows Hybrid Runbook Worker, perform the following
- Using Azure Policy.
- Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
+ Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
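For machines that existed before the assignment was created, the remediation task mentioned above can be started manually; a sketch with placeholder names follows.

```azurecli-interactive
# Illustrative: run a remediation task against an existing policy assignment.
az policy remediation create \
  --name remediate-la-agent \
  --resource-group MyHybridWorkersRG \
  --policy-assignment deploy-la-agent-windows
```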
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
You can enable Change Tracking and Inventory in the following ways:
- From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines. -- Manually for non-Azure machines, including machines or servers registered with [Azure Arc enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+- Manually for non-Azure machines, including machines or servers registered with [Azure Arc enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
- For a single Azure VM from the [Virtual machine page](enable-from-vm.md) in the Azure portal. This scenario is available for Linux and Windows VMs.
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
automation Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/onboarding.md
You can delete the query for the feature and then enable the feature again, whic
#### Issue
-This error code indicates that the deployment failed due to violation of one or more policies.
+This error code indicates that the deployment failed due to violation of one or more Azure Policy assignments.
#### Cause
-A policy is blocking the operation from completing.
+An Azure Policy assignment is blocking the operation from completing.
#### Resolution
-To successfully deploy the feature, you must consider altering the indicated policy. Because there are many different types of policies that can be defined, the changes required depend on the policy that's violated. For example, if a policy is defined on a resource group that denies permission to change the contents of some contained resources, you might choose one of these fixes:
+To successfully deploy the feature, consider altering the indicated policy definition. Because many different types of policy definitions exist, the required changes depend on the policy definition that's violated. For example, if a policy definition that denies permission to change the contents of some contained resources is assigned to a resource group, you might choose one of these fixes:
-* Remove the policy altogether.
+* Remove the policy assignment altogether.
* Try to enable the feature for a different resource group.
-* Retarget the policy to a specific resource, for example, an Automation account.
-* Revise the set of resources that the policy is configured to deny.
+* Retarget the policy assignment to a specific resource, for example, an Automation account.
+* Revise the set of resources that the policy definition is configured to deny.
Check the notifications in the upper-right corner of the Azure portal, or go to the resource group that contains your Automation account and select **Deployments** under **Settings** to view the failed deployment. To learn more about Azure Policy, see [Overview of Azure Policy](../../governance/policy/overview.md?toc=%2fazure%2fautomation%2ftoc.json).
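To identify which assignment is blocking the deployment, a quick sketch with the Azure CLI (the resource group name is a placeholder):

```azurecli-interactive
# Illustrative: list policy assignments that apply at the resource group scope.
az policy assignment list \
  --resource-group MyAutomationRG \
  --query "[].{name:name, displayName:displayName, policy:policyDefinitionId}" \
  --output table
```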
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
Software Requirements:
- Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).) - The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-windows-hrw-install.md#prerequisites).
-Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/agents/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment.
Software Requirements:
> [!NOTE] > Update assessment of Linux machines is only supported in certain regions. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
-For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
## Next steps
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/plan-deployment.md
The [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for
On Azure VMs, if the Log Analytics agent isn't already installed, when you enable Update Management for the VM it is automatically installed using the Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md). The agent is configured to report to the Log Analytics workspace linked to the Automation account Update Management is enabled in.
-Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with [VM insights](../../azure-monitor/vm/vminsights-overview.md), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](../../azure-monitor/vm/vminsights-overview.md), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
If you're enabling a machine that's currently managed by Operations Manager, a new agent isn't required. The workspace information is added to the agents configuration when you connect the management group to the Log Analytics workspace.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
Azure Automation now supports [System Assigned Managed Identities](./automation-
## March 2021
-### New Azure Automation built-in policies
+### New Azure Automation built-in policy definitions
**Type:** New feature
-Azure Automation has added five new built-in policies:
+Azure Automation has added five new built-in policy definitions:
- Automation accounts should disable public network access
- Azure Automation accounts should use customer-managed keys to encrypt data at rest
Azure Automation has added five new built-in policies:
- Configure private endpoint connections on Azure Automation accounts
- Private endpoint connections on Automation Accounts should be enabled
-For more information, see [policy reference](./policy-reference.md).
+For more information, see [Azure Policy reference](./policy-reference.md).
### Support for Automation and State Configuration declared GA in South India
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/overview.md
Azure Arc enabled Kubernetes supports the following scenarios:
* Enforce threat protection using Azure Defender for Kubernetes.
-* Apply policies using Azure Policy for Kubernetes.
+* Apply policy definitions using Azure Policy for Kubernetes.
* Create [custom locations](./custom-locations.md) as target locations for deploying Azure Arc enabled Data Services, [App Services on Azure Arc](../../app-service/overview-arc-integration.md) (including web, function, and logic apps) and [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021 #
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI
### Support limitations for Arc enabled Open Service Mesh - Only one instance of Open Service Mesh can be deployed on an Arc connected Kubernetes cluster-- Public preview is available for Open Service Mesh version v0.8.4 and above. Find out the latest version of the release [here](https://github.com/Azure/osm-azure/releases).
+- Public preview is available for Open Service Mesh version v0.8.4 and above. Find the latest release version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are annotated with release notes; ignore the tags associated with intermediate releases.
- Following Kubernetes distributions are currently supported - AKS Engine
+ - AKS on HCI
- Cluster API Azure - Google Kubernetes Engine - Canonical Kubernetes Distribution
Both Azure Monitor and Azure Application Insights helps you maximize the availab
Arc enabled Open Service Mesh will have deep integrations into both of these Azure services and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. Follow the steps below to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics.
-1. Ensure that prometheus_scraping is set to true in the `osm-mesh-config`.
+1. Ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. Follow the guidance [available here](#onboard-namespaces-to-the-service-mesh).
-2. Ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. Follow the guidance [available here](#onboard-namespaces-to-the-service-mesh).
-
-3. Expose the prometheus endpoints for application namespaces.
+2. Expose the Prometheus endpoints for application namespaces.
```azurecli-interactive
osm metrics enable --namespace <namespace1>
osm metrics enable --namespace <namespace2>
```
+ For v0.8.4, ensure that `prometheus_scraping` is set to `true` in the `osm-config` ConfigMap (a `kubectl` sketch follows these steps).
-4. Install the Azure Monitor extension using the guidance available [here](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
+3. Install the Azure Monitor extension using the guidance available [here](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
-5. Add the namespaces you want to monitor in container-azm-ms-osmconfig ConfigMap. Download the ConfigMap from [here](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-osmconfig.yaml).
+4. Add the namespaces you want to monitor in container-azm-ms-osmconfig ConfigMap. Download the ConfigMap from [here](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-osmconfig.yaml).
```azurecli-interactive
monitor_namespaces = ["namespace1", "namespace2"]
```
-6. Run the following kubectl command
+5. Run the following kubectl command
```azurecli-interactive
kubectl apply -f container-azm-ms-osmconfig.yaml
```
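For the v0.8.4 note in step 2, a sketch of flipping the flag with `kubectl`; the `arc-osm-system` namespace is an assumption about where the extension places the `osm-config` ConfigMap.

```azurecli-interactive
# Illustrative: enable Prometheus scraping in the v0.8.4 osm-config ConfigMap.
kubectl patch configmap osm-config -n arc-osm-system \
  --type merge \
  -p '{"data":{"prometheus_scraping":"true"}}'
```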
azure-arc Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-azure-policy.md
Once the assignment is created, the Azure Policy engine identifies all Azure Arc
To enable separation of concerns, you can create multiple policy assignments, each with a different GitOps configuration pointing to a different Git repo. For example, one repo may be used by cluster admins and other repositories may be used by application teams.
->[!TIP]
-> There are built-in policies for these scenarios:
+> [!TIP]
+> There are built-in policy definitions for these scenarios:
> * Public repo or private repo with SSH keys created by Flux: `Configure Kubernetes clusters with specified GitOps configuration using no secrets`
> * Private repo with user-provided SSH keys: `Configure Kubernetes clusters with specified GitOps configuration using SSH secrets`
> * Private repo with user-provided HTTPS keys: `Configure Kubernetes clusters with specified GitOps configuration using HTTPS secrets`
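As a hedged sketch of assigning the first of these definitions with the Azure CLI: the scope is illustrative, and the `--params` payload is a placeholder; check the definition itself for its actual parameter names (repository URL, operator namespace, and so on).

```azurecli-interactive
# Illustrative: find and assign the "no secrets" GitOps built-in definition.
definition=$(az policy definition list \
  --query "[?displayName=='Configure Kubernetes clusters with specified GitOps configuration using no secrets'].name" -o tsv)

az policy assignment create \
  --name gitops-no-secrets \
  --policy "$definition" \
  --resource-group MyArcClustersRG \
  --assign-identity \
  --location eastus \
  --params '{"repositoryUrl": {"value": "https://github.com/example/cluster-config"}}'
```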
Verify you have `Microsoft.Authorization/policyAssignments/write` permissions on
1. In the Azure portal, navigate to **Policy**. 1. In the **Authoring** section of the sidebar, select **Definitions**.
-1. In the "Kubernetes" category, choose the "Configure Kubernetes clusters with specified GitOps configuration using no secrets" built-in policy.
+1. In the "Kubernetes" category, choose the "Configure Kubernetes clusters with specified GitOps configuration using no secrets" built-in policy definition.
1. Click on **Assign**. 1. Set the **Scope** to the management group, subscription, or resource group to which the policy assignment will apply.
- * If you want to exclude any resources from the policy scope, set **Exclusions**.
+ * If you want to exclude any resources from the policy assignment scope, set **Exclusions**.
1. Give the policy assignment an easily identifiable **Name** and **Description**. 1. Ensure **Policy enforcement** is set to **Enabled**. 1. Select **Next**.
For existing clusters, you may need to manually run a remediation task. This tas
1. In the Azure portal, navigate to one of your Azure Arc enabled Kubernetes clusters. 1. In the **Settings** section of the sidebar, select **Policies**.
- * In the policies list, you should see the policy assignment that you created earlier with the **Compliance state** set as *Compliant*.
+ * In the list, you should see the policy assignment that you created earlier with the **Compliance state** set as *Compliant*.
1. In the **Settings** section of the sidebar, select **GitOps**. * In the configurations list, you should see the configuration created by the policy assignment. 1. Use `kubectl` to interrogate the cluster.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-agent.md
For Arc-enabled servers, before you rename the machine, it is necessary to remov
5. Re-register the Connected Machine agent with Arc-enabled servers. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to complete this step (a sketch follows these steps).
-6. Redeploy the VM extensions that were originally deployed to the machine from Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure policy, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
+6. Redeploy the VM extensions that were originally deployed to the machine from Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
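+A sketch of the re-registration in step 5, run on the renamed machine; the service principal and region values are placeholders, and flag availability depends on the agent version.
+
+```azurecli-interactive
+# Illustrative: disconnect the old resource, then reconnect under the new name.
+azcmagent disconnect
+azcmagent connect \
+  --service-principal-id "<appId>" \
+  --service-principal-secret "<secret>" \
+  --tenant-id "<tenantId>" \
+  --subscription-id "<subscriptionId>" \
+  --resource-group "MyArcServersRG" \
+  --location "eastus"
+```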
## Upgrading agent
azure-arc Manage Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-howto-migrate.md
To migrate an Azure Arc-enabled server from one Azure region to another, you hav
3. Re-register the Connected Machine agent with Arc-enabled servers in the other region. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to complete this step.
-4. Redeploy the VM extensions that were originally deployed to the machine from Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure policy, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
+4. Redeploy the VM extensions that were originally deployed to the machine from Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
## Next steps
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-arc Scenario Onboard Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/scenario-onboard-azure-sentinel.md
Azure Sentinel comes with a number of connectors for Microsoft solutions, availa
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
-After your Arc-enabled servers are connected, your data starts streaming into Azure Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](/azure/azure-arc/servers/articles/sentinel/get-visibility.md) and start building queries in Log Analytics to [investigate the data](/azure/azure-arc/servers/articles/sentinel/investigate-cases.md).
+After your Arc-enabled servers are connected, your data starts streaming into Azure Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](/azure/sentinel/get-visibility) and start building queries in Log Analytics to [investigate the data](/azure/sentinel/investigate-cases).
## Next steps
-Get started [detecting threats with Azure Sentinel](/azure/azure-arc/servers/articles/sentinel/detect-threats-built-in.md).
+Get started [detecting threats with Azure Sentinel](/azure/sentinel/detect-threats-built-in).
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
tracer = Tracer(
After Azure AD authentication is enabled, you can choose to disable local authentication. This allows you to ingest telemetry authenticated exclusively by Azure AD, and it impacts data access (for example, through API keys).
-You can disable local authentication by using the Azure portal, Azure policy, or programmatically.
+You can disable local authentication by using the Azure portal, Azure Policy, or programmatically.
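+As a programmatic sketch using the generic `az resource update` command; the resource and group names are placeholders, and the property name follows the 'DisableLocalAuth' setting described below.
+
+```azurecli-interactive
+# Illustrative: disable local (instrumentation key) authentication on a component.
+az resource update \
+  --resource-group MyResourceGroup \
+  --name MyAppInsightsComponent \
+  --resource-type "Microsoft.Insights/components" \
+  --set properties.DisableLocalAuth=true
+```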
### Azure portal
You can disable local authentication by using the Azure portal, Azure policy, or
:::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled(click to change) highlighted.":::
-### Azure policy
+### Azure Policy
-Azure policy for 'DisableLocalAuth' will deny from users to create a new Application Insights resource without this property setting to 'true'. The policy name is 'Application Insights components should block non-AAD auth ingestion'.
+Azure Policy for 'DisableLocalAuth' prevents users from creating a new Application Insights resource without setting this property to 'true'. The policy name is 'Application Insights components should block non-AAD auth ingestion'.
-To apply this policy to your subscription, [create a new policy assignment and assign the policy](../..//governance/policy/assign-policy-portal.md).
+To apply this policy definition to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
Below is the policy template definition: ```JSON
azure-monitor Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/deploy.md
Resources in Azure automatically generate [resource logs](essentials/platform-lo
There is a cost for this collection so refer to [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before implementing across a significant number of resources. Also see [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md) for details on optimizing the cost of your log collection.
-See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-in-azure-portal) to create a diagnostic setting for an Azure resource. Since a diagnostic setting needs to be created for each Azure resource, see [Deploy Azure Monitor at scale](deploy-scale.md) for details on using [Azure policy](../governance/policy/overview.md) to have settings automatically created each time an Azure resource is created.
+See [Create diagnostic setting to collect resource logs and metrics in Azure](essentials/diagnostic-settings.md#create-in-azure-portal) to create a diagnostic setting for an Azure resource. Since a diagnostic setting needs to be created for each Azure resource, see [Deploy Azure Monitor at scale](deploy-scale.md) for details on using [Azure Policy](../governance/policy/overview.md) to have settings automatically created each time an Azure resource is created.
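+As a sketch of creating one such setting with the Azure CLI; the resource ID, workspace ID, and log category are placeholders that vary by resource type.
+
+```azurecli-interactive
+# Illustrative: route a resource's logs and metrics to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+  --name send-to-workspace \
+  --resource "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>" \
+  --workspace "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<ws>" \
+  --logs '[{"category": "AuditEvent", "enabled": true}]' \
+  --metrics '[{"category": "AllMetrics", "enabled": true}]'
+```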
### Enable insights and solutions Insights and solutions provide specialized monitoring for a particular service or solution. Insights use more recent features of Azure Monitor such as workbooks, so you should use an insight if it's available for your service. They are automatically available in every Azure subscription but may require some configuration for full functionality. They will typically use platform metrics and resources logs that you previously configured and could collect additional data.
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-enable.md
Log Analytics workspace must be located in one of the following regions:
- Southeast Asia - Switzerland North - Switzerland West-- UAe North
+- UAE North
- UK South - West Europe region - West US
azure-portal Azure Portal Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards.md
Title: Create a dashboard in the Azure portal description: This article describes how to create and customize a dashboard in the Azure portal. Previously updated : 05/12/2021 Last updated : 08/19/2021 # Create a dashboard in the Azure portal
Dashboards are a focused and organized view of your cloud resources in the Azure
The Azure portal provides a default dashboard as a starting point. You can edit the default dashboard and create and customize additional dashboards. > [!NOTE]
-> Each user can create up to 100 private dashboards. If you [publish and share the dashboard](azure-portal-dashboard-share-access.md), it will be implemented as an Azure resource in your subscription and wonΓÇÖt count towards this limit.
+> Each user can create up to 100 private dashboards. If you [publish and share the dashboard](azure-portal-dashboard-share-access.md), it will be implemented as an Azure resource in your subscription and won't count towards this limit.
This article describes how to create a new dashboard and customize it. For information on sharing dashboards, see [Share Azure dashboards by using Azure role-based access control](azure-portal-dashboard-share-access.md).
This example shows how to create a new private dashboard with an assigned name.
1. From the Azure portal menu, select **Dashboard**. Your default view might already be set to dashboard.
- ![Screenshot of the Azure portal with Dashboard selected.](./media/azure-portal-dashboards/portal-menu-dashboard.png)
+ :::image type="content" source="media/azure-portal-dashboards/portal-menu-dashboard.png" alt-text="Screenshot of the Azure portal with Dashboard selected.":::
1. Select **New dashboard** then **Blank dashboard**.
- ![Screenshot of the New dashboard options.](./media/azure-portal-dashboards/create-new-dashboard.png)
+ :::image type="content" source="media/azure-portal-dashboards/create-new-dashboard.png" alt-text="Screenshot of the New dashboard options.":::
This action opens the **Tile Gallery**, from which you can select tiles, and an empty grid where you'll arrange the tiles.
To add tiles to a dashboard, follow these steps:
1. Select ![edit icon](./media/azure-portal-dashboards/dashboard-edit-icon.png) **Edit** from the dashboard's page header.
- ![Screenshot of dashboard highlighting the Edit option.](./media/azure-portal-dashboards/dashboard-edit.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-edit.png" alt-text="Screenshot of dashboard highlighting the Edit option.":::
1. Browse the **Tile Gallery** or use the search field to find a certain tile. Select the tile you want to add to your dashboard.
If you set filters for a particular tile, the left corner of that tile displays
Some tiles might require more configuration to show the information you want. For example, the **Metrics chart** tile has to be set up to display a metric from Azure Monitor. You can also customize tile data to override the dashboard's default time settings and filters.
-## Complete tile configuration
+### Complete tile configuration
Any tile that needs to be set up displays a banner until you customize the tile. For example, in the **Metrics chart**, the banner reads **Edit in Metrics**. Other banners may use different text, such as **Configure tile**.
To customize the tile:
1. Select the banner, then do the required setup.
- ![Screenshot of tile that requires configuration.](./media/azure-portal-dashboards/dashboard-configure-tile.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-configure-tile.png" alt-text="Screenshot of tile that requires configuration.":::
### Customize time span for a tile
Data on the dashboard shows activity and refreshes based on the global filters.
1. Select **Customize tile data** from the context menu or from the ![filter icon](./media/azure-portal-dashboards/dashboard-filter.png) in the upper left corner of the tile.
- ![Screenshot of tile context menu.](./media/azure-portal-dashboards/dashboard-customize-tile-data.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-customize-tile-data.png" alt-text="Screenshot of tile context menu.":::
1. Select the checkbox to **Override the dashboard time settings at the tile level**.
- ![Screenshot of dialog to configure tile time settings.](./media/azure-portal-dashboards/dashboard-override-time-settings.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-override-time-settings.png" alt-text="Screenshot of dialog to configure tile time settings.":::
1. Choose the time span to show for this tile. You can choose from the past 30 minutes to the past 30 days or define a custom range.
Data on the dashboard shows activity and refreshes based on the global filters.
1. Select **Apply**.
+### Change the title and subtitle of a tile
+
+Some tiles allow you to edit their title and subtitle. To do so, select **Configure tile settings** from the context menu.
++
+Make any changes to the tile's title and/or subtitle, then select **Apply**.
+
+
## Delete a tile To remove a tile from a dashboard, do one of the following:
To remove a tile from a dashboard, do one of the following:
- Select ![edit icon](./media/azure-portal-dashboards/dashboard-edit-icon.png) **Edit** to enter customization mode. Hover in the upper right corner of the tile, then select the ![delete icon](./media/azure-portal-dashboards/dashboard-delete-icon.png) delete icon to remove the tile from the dashboard.
- ![Screenshot showing how to remove tile from dashboard.](./media/azure-portal-dashboards/dashboard-delete-tile.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-delete-tile.png" alt-text="Screenshot showing how to remove tile from dashboard.":::
## Clone a dashboard
To find and open a shared dashboard, follow these steps:
1. Select **Browse all dashboards**.
- ![Screenshot of dashboard selection menu](./media/azure-portal-dashboards/dashboard-browse.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-browse.png" alt-text="Screenshot of dashboard selection menu.":::
1. In the **Type** field, select **Shared dashboards**.
- ![Screenshot of all dashboards selection menu](./media/azure-portal-dashboards/dashboard-browse-all.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-browse-all.png" alt-text="Screenshot of all dashboards selection menu.":::
1. Select one or more subscriptions. You can also enter text to filter dashboards by name.
To permanently delete a private or shared dashboard, follow these steps:
1. For a private dashboard, select **OK** on the confirmation dialog to remove the dashboard. For a shared dashboard, on the confirmation dialog, select the checkbox to confirm that the published dashboard will no longer be viewable by others. Then, select **OK**.
- ![Screenshot of delete confirmation.](./media/azure-portal-dashboards/dashboard-delete-dash.png)
+ :::image type="content" source="media/azure-portal-dashboards/dashboard-delete-dash.png" alt-text="Screenshot of delete confirmation.":::
## Recover a deleted dashboard
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-cli.md
az deployment group create \
--resource-group testgroup \ --template-file <path-to-bicep> \ --parameters $params
-```
+```
However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
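Put together, a sketch of the PowerShell-hosted variant (the template file name is a placeholder):

```azurepowershell
$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'
az deployment group create `
  --resource-group testgroup `
  --template-file main.bicep `
  --parameters $params
```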
Before deploying your Bicep file, you can preview the changes the Bicep file wil
## Deploy template specs
-Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here's an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
+Currently, Azure CLI doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here's an [example](https://github.com/Azure/azure-docs-bicep-samples/blob/main/create-template-spec-using-bicep/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
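A minimal sketch of that pattern follows; the API version, names, and inline template are assumptions based on the ARM resource reference, so check the linked example for the authoritative shape:

```bicep
resource exampleSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
  name: 'exampleSpec'
  location: 'westus2'
}

resource exampleSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = {
  parent: exampleSpec
  name: '1.0'
  location: 'westus2'
  properties: {
    // An empty ARM template as a stand-in for your real content
    mainTemplate: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      resources: []
    }
  }
}
```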
## Deployment name
To avoid conflicts with concurrent deployments and to ensure unique entries in t
* To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](../templates/rollback-on-error.md). - To understand how to define parameters in your file, see [Understand the structure and syntax of Bicep files](file.md).
-* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../templates/common-deployment-errors.md).
+* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../templates/common-deployment-errors.md).
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-github-actions.md
You need to create secrets for your Azure credentials, resource group, and subsc
Add a Bicep file to your GitHub repository. The following Bicep file creates a storage account: ```url
-https://raw.githubusercontent.com/mumian/azure-docs-json-samples/master/get-started-with-templates/add-variable/azuredeploy.bicep
+https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/main/get-started-with-bicep-files/add-variable/azuredeploy.bicep
``` The Bicep file takes one parameter called **storagePrefix** with 3 to 11 characters.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-powershell.md
If you're deploying to a resource group that doesn't exist, create the resource
New-AzResourceGroup -Name ExampleGroup -Location "Central US" ```
-To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command.
+To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command.
```azurepowershell
New-AzResourceGroupDeployment `
  -ResourceGroupName ExampleGroup `
  -TemplateFile <path-to-bicep>
```
Before deploying your Bicep file, you can preview the changes the Bicep file wil
## Deploy template specs
-Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-json-samples/blob/master/create-template-spec-using-template/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
+Currently, Azure PowerShell doesn't support creating template specs by providing Bicep files. However you can create a Bicep file with the [Microsoft.Resources/templateSpecs](/azure/templates/microsoft.resources/templatespecs) resource to deploy a template spec. Here is an [example](https://github.com/Azure/azure-docs-bicep-samples/blob/main/create-template-spec-using-bicep/azuredeploy.bicep). You can also build your Bicep file into an ARM template JSON by using the Bicep CLI, and then create a template spec with the JSON template.
## Deployment name
To avoid conflicts with concurrent deployments and to ensure unique entries in t
- To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](../templates/rollback-on-error.md). - To understand how to define parameters in your file, see [Understand the structure and syntax of Bicep files](file.md).-- For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](../templates/secure-template-with-sas-token.md).
+- For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](../templates/secure-template-with-sas-token.md).
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-resource-group.md
To deploy resources to the target resource group, add those resources to the Bic
// resource deployed to target resource group resource exampleResource 'Microsoft.Storage/storageAccounts@2019-06-01' = { ...
-}
+}
``` For an example template, see [Deploy to target resource group](#deploy-to-target-resource-group). ### Scope to different resource group
-To deploy resources to a resource group that isn't the target resource group, add a [module](modules.md). Use the [resourceGroup function](bicep-functions-scope.md#resourcegroup) to set the `scope` property for that module.
+To deploy resources to a resource group that isn't the target resource group, add a [module](modules.md). Use the [resourceGroup function](bicep-functions-scope.md#resourcegroup) to set the `scope` property for that module.
-If the resource group is in a different subscription, provide the subscription ID and the name of the resource group. If the resource group is in the same subscription as the current deployment, provide only the name of the resource group. If you don't specify a subscription in the [resourceGroup function](bicep-functions-scope.md#resourcegroup), the current subscription is used.
+If the resource group is in a different subscription, provide the subscription ID and the name of the resource group. If the resource group is in the same subscription as the current deployment, provide only the name of the resource group. If you don't specify a subscription in the [resourceGroup function](bicep-functions-scope.md#resourcegroup), the current subscription is used.
The following example shows a module that targets a resource group in a different subscription.
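A sketch of that pattern (the subscription GUID, module path, and names are placeholders):

```bicep
module exampleModule 'module.bicep' = {
  name: 'deployToOtherResourceGroup'
  // Subscription ID first, then the resource group name
  scope: resourceGroup('11111111-1111-1111-1111-111111111111', 'demoResourceGroup')
}
```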
For an example template, see [Deploy to multiple resource groups](#deploy-to-mul
### Scope to subscription
-To deploy resources to a subscription, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set its `scope` property.
+To deploy resources to a subscription, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set its `scope` property.
-To deploy to the current subscription, use the subscription function without a parameter.
+To deploy to the current subscription, use the subscription function without a parameter.
```bicep
// Module name is illustrative
module exampleModule 'module.bicep' = {
  name: 'deployToSub'
  scope: subscription()
}
```
For more information, see [Management group](deploy-to-management-group.md#manag
To deploy resources in the target resource group, define those resources in the `resources` section of the template. The following template creates a storage account in the resource group that is specified in the deployment operation. ## Deploy to multiple resource groups
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-what-if.md
The following results show the two different output formats:
### Set up environment
-To see how what-if works, let's runs some tests. First, deploy a [Bicep file that creates a virtual network](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/what-if/what-if-before.bicep). You'll use this virtual network to test how changes are reported by what-if. Download a copy of the Bicep file.
+To see how what-if works, let's run some tests. First, deploy a [Bicep file that creates a virtual network](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/what-if/what-if-before.bicep). You'll use this virtual network to test how changes are reported by what-if. Download a copy of the Bicep file.
# [PowerShell](#tab/azure-powershell)
az deployment group create \
### Test modification
-After the deployment completes, you're ready to test the what-if operation. This time you deploy a [Bicep file that changes the virtual network](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/what-if/what-if-after.bicep). It's missing one the original tags, a subnet has been removed, and the address prefix has changed. Download a copy of the Bicep file.
+After the deployment completes, you're ready to test the what-if operation. This time you deploy a [Bicep file that changes the virtual network](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/what-if/what-if-after.bicep). It's missing one of the original tags, a subnet has been removed, and the address prefix has changed. Download a copy of the Bicep file.
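For reference, the what-if preview over the changed file can be run like this sketch with Azure CLI (the resource group name is a placeholder):

```azurecli
az deployment group what-if \
  --resource-group ExampleGroup \
  --template-file what-if-after.bicep
```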
# [PowerShell](#tab/azure-powershell)
azure-resource-manager Loop Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-resources.md
The following examples show common scenarios for creating more than one instance
|Template |Description | |||
-|[Loop storage](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/loopstorage.bicep) |Deploys more than one storage account with an index number in the name. |
-|[Serial loop storage](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/loopserialstorage.bicep) |Deploys several storage accounts one at time. The name includes the index number. |
-|[Loop storage with array](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/loopstoragewitharray.bicep) |Deploys several storage accounts. The name includes a value from an array. |
+|[Loop storage](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopstorage.bicep) |Deploys more than one storage account with an index number in the name. |
+|[Serial loop storage](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopserialstorage.bicep) |Deploys several storage accounts one at time. The name includes the index number. |
+|[Loop storage with array](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopstoragewitharray.bicep) |Deploys several storage accounts. The name includes a value from an array. |
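For orientation, a minimal copy-loop sketch in Bicep (the count and naming scheme are illustrative, not taken from the samples above):

```bicep
param storageCount int = 2

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = [for i in range(0, storageCount): {
  name: 'stg${i}${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```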
## Next steps
azure-resource-manager Loop Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-variables.md
The following examples show common scenarios for creating more than one value fo
|Template |Description | |||
-|[Loop variables](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/loopvariables.bicep) | Demonstrates how to iterate on variables. |
-|[Multiple security rules](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/multiplesecurityrules.bicep) |Deploys several security rules to a network security group. It constructs the security rules from a parameter. For the parameter, see [multiple NSG parameter file](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/multipleinstance/multiplesecurityrules.parameters.json). |
+|[Loop variables](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/loopvariables.bicep) | Demonstrates how to iterate on variables. |
+|[Multiple security rules](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/multiplesecurityrules.bicep) |Deploys several security rules to a network security group. It constructs the security rules from a parameter. For the parameter, see [multiple NSG parameter file](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/multiple-instance/multiplesecurityrules.parameters.json). |
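As a quick illustration (names are placeholders, not from the samples), a variable loop in Bicep looks like:

```bicep
param itemCount int = 3

// Produces ['item1', 'item2', 'item3']
var itemNames = [for i in range(0, itemCount): 'item${i + 1}']
```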
## Next steps
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/outputs.md
The following template doesn't deploy any resources. It shows some ways of retur
Bicep doesn't currently support loops. ## Get output values
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/parameters.md
The following examples demonstrate scenarios for using parameters.
|Template |Description | |||
-|[parameters with functions for default values](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/parameterswithfunctions.bicep) | Demonstrates how to use Bicep functions when defining default values for parameters. The Bicep file doesn't deploy any resources. It constructs parameter values and returns those values. |
-|[parameter object](https://github.com/Azure/azure-docs-json-samples/blob/master/azure-resource-manager/parameterobject.bicep) | Demonstrates using an object for a parameter. The Bicep file doesn't deploy any resources. It constructs parameter values and returns those values. |
+|[parameters with functions for default values](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/parameterswithfunctions.bicep) | Demonstrates how to use Bicep functions when defining default values for parameters. The Bicep file doesn't deploy any resources. It constructs parameter values and returns those values. |
+|[parameter object](https://github.com/Azure/azure-docs-bicep-samples/blob/main/bicep/parameterobject.bicep) | Demonstrates using an object for a parameter. The Bicep file doesn't deploy any resources. It constructs parameter values and returns those values. |
## Next steps
azure-resource-manager Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/variables.md
Because storage account names must use lowercase letters, the `storageName` vari
The following template doesn't deploy any resources. It shows some ways of declaring variables of different types. ## Configuration variables You can define variables that hold related values for configuring an environment. You define the variable as an object with the values. The following example shows an object that holds values for two environments - **test** and **prod**. Pass in one of these values during deployment. ## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
For a script example, see [Configure auditing and threat detection using PowerSh
**REST API**: -- [Create or Update Database Auditing Policy](/rest/api/sql/database%20auditing%20settings/createorupdate)
+- [Create or Update Database Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/create-or-update)
- [Create or Update Server Auditing Policy](/rest/api/sql/server%20auditing%20settings/createorupdate) - [Get Database Auditing Policy](/rest/api/sql/database%20auditing%20settings/get)-- [Get Server Auditing Policy](/rest/api/sql/server%20auditing%20settings/get)
+- [Get Server Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/get)
Extended policy with WHERE clause support for additional filtering:
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9. - [Auditing for SQL Managed Instance](../managed-instance/auditing-configure.md)-- [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
+- [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
Previously updated : 08/18/2021 Last updated : 08/20/2021 # Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance
The Azure infrastructure has the ability to dynamically reconfigure servers when
| Error code | Severity | Description | | :| :|: |
+| 926 |14 |Database 'replicatedmaster' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.<br/><br/>This error may be logged on SQL Managed Instance errorlog, for a short period of time, during the last stage of a reconfiguration, while the old primary is shutting down its log.<br/>Other, non-transient scenarios involving this error message are described in the [MSSQL Errors documentation](/sql/relational-databases/errors-events/mssqlserver-926-database-engine-error).|
| 4060 |16 |Cannot open database "%.&#x2a;ls" requested by the login. The login failed. For more information, see [Errors 4000 to 4999](/sql/relational-databases/errors-events/database-engine-events-and-errors#errors-4000-to-4999)| | 40197 |17 |The service has encountered an error processing your request. Please try again. Error code %d.<br/><br/>You receive this error when the service is down due to software or hardware upgrades, hardware failures, or any other failover problems. The error code (%d) embedded within the message of error 40197 provides additional information about the kind of failure or failover that occurred. Some examples of the error codes are embedded within the message of error 40197 are 40020, 40143, 40166, and 40540.<br/><br/>Reconnecting automatically connects you to a healthy copy of your database. Your application must catch error 40197, log the embedded error code (%d) within the message for troubleshooting, and try reconnecting to SQL Database until the resources are available, and your connection is established again. For more information, see [Transient errors](troubleshoot-common-connectivity-issues.md#transient-errors-transient-faults).| | 40501 |20 |The service is currently busy. Retry the request after 10 seconds. Incident ID: %ls. Code: %d. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md).|
For more information about how to enable logging, see [Enable diagnostics loggin
## See also - [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md)-- [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
+- [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
azure-sql Connect Application Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connect-application-instance.md
Previously updated : 07/08/2021 Last updated : 08/20/2021 # Connect your application to Azure SQL Managed Instance
You may choose to host application in the cloud by using Azure App Service or so
Whatever choice you make, you can connect it to Azure SQL Managed Instance.
+This article describes how to connect an application to Azure SQL Managed Instance in a number of different application scenarios from inside the virtual network.
+
+> [!IMPORTANT]
+> You can also enable data access to your managed instance from outside a virtual network. By using the public endpoint on a managed instance, you can reach it from multi-tenant Azure services like Power BI or Azure App Service, or from an on-premises network that isn't connected to a VPN. You will need to enable the public endpoint on the managed instance and allow public endpoint traffic on the network security group associated with the managed instance subnet. For more important details, see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
+ ![High availability](./media/connect-application-instance/application-deployment-topologies.png)
-This article describes how to connect an application to Azure SQL Managed Instance in a number of different application scenarios.
## Connect inside the same VNet
Peering is preferable because it uses the Microsoft backbone network, so from th
## Connect from on-premises
-You can also connect your on-premises application to SQL Managed Instance. SQL Managed Instance can only be accessed through a private IP address. In order to access it from on-premises, you need to make a site-to-site connection between the application and the SQL Managed Instance virtual network.
+You can also connect your on-premises application to SQL Managed Instance via the virtual network (private IP address). In order to access it from on-premises, you need to make a site-to-site connection between the application and the SQL Managed Instance virtual network. For data access to your managed instance from outside a virtual network, see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
There are two options for how to connect on-premises to an Azure virtual network:
If you've established an on-premises to Azure connection successfully and you ca
## Connect the developer box
-It is also possible to connect your developer box to SQL Managed Instance. SQL Managed Instance can be accessed only through a private IP address, so in order to access it from your developer box, you first need to make a connection between your developer box and the SQL Managed Instance virtual network. To do so, configure a point-to-site connection to a virtual network using native Azure certificate authentication. For more information, see [Configure a point-to-site connection to connect to Azure SQL Managed Instance from an on-premises computer](point-to-site-p2s-configure.md).
+It is also possible to connect your developer box to SQL Managed Instance. In order to access it from your developer box via virtual network, you first need to make a connection between your developer box and the SQL Managed Instance virtual network. To do so, configure a point-to-site connection to a virtual network using native Azure certificate authentication. For more information, see [Configure a point-to-site connection to connect to Azure SQL Managed Instance from an on-premises computer](point-to-site-p2s-configure.md).
+
+For data access to your managed instance from outside a virtual network, see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
## Connect with VNet peering
Once you have the basic infrastructure set up, you need to modify some settings
## Connect Azure App Service
-You can also connect an application that's hosted by Azure App Service. SQL Managed Instance can be accessed only through a private IP address, so in order to access it from Azure App Service, you first need to make a connection between the application and the SQL Managed Instance virtual network. See [Integrate your app with an Azure virtual network](../../app-service/web-sites-integrate-with-vnet.md).
+You can also connect an application that's hosted by Azure App Service. To access SQL Managed Instance from Azure App Service via the virtual network, you first need to make a connection between the application and the SQL Managed Instance virtual network. See [Integrate your app with an Azure virtual network](../../app-service/web-sites-integrate-with-vnet.md). For data access to your managed instance from outside a virtual network, see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
-For troubleshooting, see [Troubleshooting virtual networks and applications](../../app-service/web-sites-integrate-with-vnet.md#troubleshooting). If a connection cannot be established, try [syncing the networking configuration](azure-app-sync-network-configuration.md).
+For troubleshooting Azure App Service access via virtual network, see [Troubleshooting virtual networks and applications](../../app-service/web-sites-integrate-with-vnet.md#troubleshooting). If a connection cannot be established, try [syncing the networking configuration](azure-app-sync-network-configuration.md).
A special case of connecting Azure App Service to SQL Managed Instance is when you integrate Azure App Service to a network peered to a SQL Managed Instance virtual network. That case requires the following configuration to be set up:
azure-sql Azure Storage Sql Server Backup Restore Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/azure-storage-sql-server-backup-restore-use.md
The following Azure components are used when backing up to Azure Blob storage.
| Component | Description | | | |
-| **Storage account** |The storage account is the starting point for all storage services. To access Azure Blob storage, first create an Azure Storage account. For more information about Azure Blob storage, see [How to use Azure Blob storage](https://azure.microsoft.com/develop/net/how-to-guides/blob-storage/). |
+| **Storage account** |The storage account is the starting point for all storage services. To access Azure Blob storage, first create an Azure Storage account. SQL Server is agnostic to the type of storage redundancy used: backup to page blobs and block blobs is supported for every storage redundancy option (LRS/ZRS/GRS/RA-GRS/RA-GZRS, and so on). For more information about Azure Blob storage, see [How to use Azure Blob storage](https://azure.microsoft.com/develop/net/how-to-guides/blob-storage/). |
| **Container** |A container provides a grouping of a set of blobs, and can store an unlimited number of Blobs. To write a SQL Server backup to Azure Blob storage, you must have at least the root container created. | | **Blob** |A file of any type and size. Blobs are addressable using the following URL format: `https://<storageaccount>.blob.core.windows.net/<container>/<blob>`. For more information about page Blobs, see [Understanding Block and Page Blobs](/rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-Blobs) |
The following SQL Server components are used when backing up to Azure Blob stora
If you have any problems, review the topic [SQL Server Backup to URL Best Practices and Troubleshooting](/sql/relational-databases/backup-restore/sql-server-backup-to-url-best-practices-and-troubleshooting).
-For other SQL Server backup and restore options, see [Backup and Restore for SQL Server on Azure Virtual Machines](backup-restore.md).
+For other SQL Server backup and restore options, see [Backup and Restore for SQL Server on Azure Virtual Machines](backup-restore.md).
azure-sql Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/backup-restore.md
The following table provides information on various backup and restore options f
| Strategy | SQL versions | Description | ||||
-| [Automated Backup](#automated) | 2014<br/> 2016<br/> 2017 | Automated Backup allows you to schedule regular backups for all databases on a SQL Server VM. Backups are stored in Azure storage for up to 30 days. Beginning with SQL Server 2016, Automated Backup v2 offers additional options such as configuring manual scheduling and the frequency of full and log backups. |
+| [Automated Backup](#automated) | 2014<br/> 2016<br/> 2017<br/> 2019 | Automated Backup allows you to schedule regular backups for all databases on a SQL Server VM. Backups are stored in Azure storage for up to 30 days. Beginning with SQL Server 2016, Automated Backup v2 offers additional options such as configuring manual scheduling and the frequency of full and log backups. |
| [Azure Backup for SQL VMs](#azbackup) | 2008<br/> 2012<br/> 2014<br/> 2016<br/> 2017<br/> 2019 | Azure Backup provides an Enterprise class backup capability for SQL Server on Azure VMs. With this service, you can centrally manage backups for multiple servers and thousands of databases. Databases can be restored to a specific point in time in the portal. It offers a customizable retention policy that can maintain backups for years. | | [Manual backup](#manual) | All | Depending on your version of SQL Server, there are various techniques to manually backup and restore SQL Server on Azure VM. In this scenario, you are responsible for how your databases are backed up and the storage location and management of these backups. |
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
The following table provides a summary of the recommended caching policies based
||| | **Data disk** | Enable `Read-only` caching for the disks hosting SQL Server data files. <br/> Reads from cache will be faster than the uncached reads from the data disk. <br/> Uncached IOPS and throughput plus Cached IOPS and throughput will yield the total possible performance available from the virtual machine within the VMs limits, but actual performance will vary based on the workload's ability to use the cache (cache hit ratio). <br/>| |**Transaction log disk**|Set the caching policy to `None` for disks hosting the transaction log. There is no performance benefit to enabling caching for the Transaction log disk, and in fact having either `Read-only` or `Read/Write` caching enabled on the log drive can degrade performance of the writes against the drive and decrease the amount of cache available for reads on the data drive. |
-|**Operating OS disk** | The default caching policy could be `Read-only` or `Read/write` for the OS drive. <br/> It is not recommended to change the caching level of the OS drive. |
+|**Operating OS disk** | The default caching policy is `Read/write` for the OS drive. <br/> It is not recommended to change the caching level of the OS drive. |
| **tempdb**| If tempdb cannot be placed on the ephemeral drive `D:\` due to capacity reasons, either resize the virtual machine to get a larger ephemeral drive or place tempdb on a separate data drive with `Read-only` caching configured. <br/> The virtual machine cache and ephemeral drive both use the local SSD, so keep this in mind when sizing as tempdb I/O will count against the cached IOPS and throughput virtual machine limits when hosted on the ephemeral drive.| | | |
+> [!IMPORTANT]
+> Changing the cache setting of an Azure disk detaches and reattaches the target disk. When changing the cache setting for a disk that hosts SQL Server data, log, or application files, be sure to stop the SQL Server service along with any other related services to avoid data corruption.
To learn more, see [Disk caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
For security best practices, see [Security considerations for SQL Server on Azur
For detailed testing of SQL Server performance on Azure VMs with TPC-E and TPC_C benchmarks, refer to the blog [Optimize OLTP performance](https://techcommunity.microsoft.com/t5/sql-server/optimize-oltp-performance-with-sql-server-on-azure-vm/ba-p/916794).
-Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
+Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Scale limitations are per private cloud.
\* For information about Recovery Point Objective (RPO) lower than 15 minutes, see [How the 5 Minute Recovery Point Objective Works](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-9E17D567-A947-49CD-8A84-8EA2D676B55A.html) in the _vSphere Replication Administration guide_.
-For more information, see [Operational Limits of VMware Site Recovery](https://docs.vmware.com/en/VMware-Site-Recovery/services/com.vmware.srmaas.install_config.doc/GUID-D4EE4AE4-FF80-4355-977A-CF211EEC5E1F.html)
## SRM licenses
azure-web-pubsub Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/concept-performance.md
+
+ Title: Performance guide for Azure Web PubSub Service
+description: An overview of the performance and benchmark of Azure Web PubSub Service. Key metrics to consider when planning the capacity.
+ Last updated : 5/12/2021
+# Performance guide for Azure Web PubSub Service
+
+One of the key benefits of using Azure Web PubSub Service is the ease of scaling Web PubSub upstream applications. In a large-scale scenario, performance is an important factor.
+
+In this guide, we'll introduce the factors that affect Web PubSub upstream application performance. We'll describe typical performance in different use-case scenarios.
+
+## Term definitions
+
+*Inbound*: The incoming message to Azure Web PubSub Service.
+
+*Outbound*: The outgoing message from Azure Web PubSub Service.
+
+*Bandwidth*: The total size of all messages in 1 second.
+
+## Overview
+
+Azure Web PubSub Service defines seven Standard tiers for different performance capacities. This guide answers the following questions:
+
+- What is the typical Azure Web PubSub Service performance for each tier?
+
+- Does Azure Web PubSub Service meet my requirements for message throughput (for example, sending 100,000 messages per second)?
+
+- For my specific scenario, which tier is suitable for me? Or how can I select the proper tier?
+
+To answer these questions, this guide first gives a high-level explanation of the factors that affect performance. It then illustrates the maximum inbound and outbound messages for every tier for typical use cases: **send to groups through Web PubSub subprotocol**, **upstream**, and **REST API**.
+
+This guide can't cover all scenarios (and different use cases, message sizes, message sending patterns, and so on). But it provides some basic information to understand the performance limitation.
+
+## Performance insight
+
+This section describes the performance evaluation methodologies, and then lists all factors that affect performance. In the end, it provides methods to help you evaluate performance requirements.
+
+### Methodology
+
+*Throughput* and *latency* are two typical aspects of performance checking. For Azure Web PubSub Service, each SKU tier has its own throughput throttling policy. The policy defines *the maximum allowed throughput (inbound and outbound bandwidth)* as the maximum achieved throughput when 99 percent of messages have latency that's less than 1 second.
+
+### Performance factors
+
+Theoretically, Azure Web PubSub Service capacity is limited by computation resources: CPU, memory, and network. For example, more connections to Azure Web PubSub Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure Web PubSub Service needs to spend more CPU cycles to process traffic.
+
+The message routing cost also limits performance. Azure Web PubSub Service plays a role as a message broker, which routes the message among a set of clients. A different scenario or API requires a different routing policy.
+
+For **echo**, the client sends a message to the upstream, and upstream echoes the message back to the client. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure Web PubSub Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower.
+
+In summary, the following factors affect the inbound and outbound capacity:
+
+- SKU tier (CPU/memory)
+
+- Number of connections
+
+- Message size
+
+- Message send rate
+
+- Use-case scenario (routing cost)
++
+### Finding a proper SKU
+
+How can you evaluate the inbound/outbound capacity or find which tier is suitable for a specific use case?
+
+Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience isn't guaranteed after the inbound or outbound traffic exceeds the limit.
+
+```
+ inboundBandwidth = inboundConnections * messageSize / sendInterval
+ outboundBandwidth = outboundConnections * messageSize / sendInterval
+```
+
+- *inboundConnections*: The number of connections sending the message.
+- *outboundConnections*: The number of connections receiving the message.
+- *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
+- *sendInterval*: The interval for sending messages. For example, 1 second means sending one message every second. A smaller interval means sending more messages in a time period. For example, 0.5 second means sending two messages every second.
+- *Connections*: The committed maximum threshold for Azure Web PubSub Service for every tier. Connections that exceed the threshold get throttled.
+
+Assume that the upstream is powerful enough and isn't the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
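For example, plugging hypothetical numbers into the formulas above: 5,000 receiving connections, 2,048-byte messages, and one message per second give

```
 outboundBandwidth = 5,000 * 2,048 bytes / 1 second ≈ 10 MBps
```

Compare the result with the per-tier tables below to pick a unit size.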
+
+## Case study
+
+The following sections go through three typical use cases: **send to groups through Web PubSub subprotocol**, **triggering CloudEvent**, and **calling REST API**. For each scenario, the section lists the current inbound and outbound capacity for Azure Web PubSub Service. It also explains the main factors that affect performance.
+
+In all use cases, the default message size is 2,048 bytes, and the message send interval is 1 second.
+
+### Send to groups through Web PubSub subprotocol
+The service supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. This scenario is efficient as no server is involved and all traffic goes through the client-service WebSocket connection.
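For instance, a client on this subprotocol publishes to a group with a single WebSocket frame like the following sketch (group name and payload are placeholders):

```json
{
    "type": "sendToGroup",
    "group": "demogroup",
    "dataType": "text",
    "data": "Hello from a client"
}
```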
+
+![Diagram showing the send to group workflow.](./media/concept-performance/group.png)
+
+Group member and group count are two factors that affect performance. To simplify the analysis, we define two kinds of groups:
+
+- **Big group**: The group number is always 10. The group member count is equal to (max connection count) / 10. For example, for Unit1, if there are 1,000 connections, then every group has 1000 / 10 = 100 members.
+- **Small group**: Every group has 10 connections. The group number is equal to (max connection count) / 10. For example, for Unit1, if there are 1,000 connections, then we have 1000 / 10 = 100 groups.
+
+**Send to group** brings a routing cost to Azure Web PubSub Service because it has to find the target connections through a distributed data structure. As the sending connections increase, the cost increases.
+
+#### Big group
+
+For **send to big group**, the outbound bandwidth becomes the bottleneck before hitting the routing cost limit. The following table lists the maximum outbound bandwidth.
+
+| Send to big group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
+||-|-|--|--|- |--||
+| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000|
+| Group member count | 100 | 200 | 500 | 1,000 | 2,000 | 5,000 | 10,000 |
+| Group count | 10 | 10 | 10 | 10 | 10 | 10 | 10|
+| Inbound messages per second | 30 | 30 | 30 | 30 | 30 | 30 | 30 |
+| Inbound bandwidth | 60 KBps | 60 KBps | 60 KBps | 60 KBps | 60 KBps | 60 KBps | 60 KBps |
+| Outbound messages per second | 3,000 | 6,000 | 15,000 | 30,000 | 60,000 | 150,000 | 300,000 |
+| Outbound bandwidth | **6 MBps** | **12 MBps** | **30 MBps** | **60 MBps** | **120 MBps** | **300 MBps** | **600 MBps** |
+
+#### Small group
+
+The routing cost is significant for sending messages to many small groups. Currently, the Azure Web PubSub Service implementation hits the routing cost limit at Unit50. Adding more CPU and memory doesn't help, so Unit100 can't improve further by design. If you need more inbound bandwidth, contact customer support.
+
+| Send to small group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
+||-|-|--|--|--|--||
+| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
+| Group member count | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
+| Group count | 100 | 200 | 500 | 1,000 | 2,000 | 5,000 | 10,000 |
+| Inbound messages per second | 400 | 800 | 2,000 | 4,000 | 8,000 | 15,000 | 15,000 |
+| Inbound bandwidth | 800 KBps | 1.6 MBps | 4 MBps | 8 MBps | 16 MBps | 30 MBps | 30 MBps |
+| Outbound messages per second | 4,000 | 8,000 | 20,000 | 40,000 | 80,000 | 150,000 | 150,000 |
+| Outbound bandwidth | **8 MBps** | **16 MBps** | **40 MBps** | **80 MBps** | **160 MBps** | **300 MBps** | **300 MBps** |
+
+### Triggering Cloud Event
+The service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](./reference-cloud-events.md).
+
+![The Upstream Webhook](./media/concept-performance/upstream.png)
+
+For every event, it formulates an HTTP POST request to the registered upstream and expects an HTTP response.
+
+> [!NOTE]
+> Web PubSub also supports HTTP 2.0 for delivering upstream events. The results below were measured using HTTP 1.1. If your app server supports HTTP 2.0, the performance will be better.
+
+#### Echo
+
+In this case, the app server writes the original message back in the HTTP response. The behavior of **echo** determines that the maximum inbound bandwidth is equal to the maximum outbound bandwidth. For details, see the following table.
+
+| Echo | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
+|--|-|-|-|--|--|--||
+| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
+| Inbound/outbound messages per second | 500 | 1,000 | 2,500 | 5,000 | 10,000 | 25,000 | 50,000 |
+| Inbound/outbound bandwidth | **1 MBps** | **2 MBps** | **5 MBps** | **10 MBps** | **20 MBps** | **50 MBps** | **100 MBps** |
+++
+### REST API
+
+Azure Web PubSub provides powerful [APIs](/rest/api/webpubsub/) to manage clients and deliver real-time messages.
+
+![Diagram showing the Web PubSub service overall workflow using REST APIs.](./media/concept-performance/rest-api.png)
+
+#### Send to user through REST API
+The benchmark assigns usernames to all of the clients before they start connecting to Azure Web PubSub Service.
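Under the hood this is one data-plane call per message; a sketch of its shape (host, hub, user, API version, and token are placeholders taken from the REST reference, not from this benchmark):

```
POST https://<resource>.webpubsub.azure.com/api/hubs/<hub>/users/<userId>/:send?api-version=<api-version>
Authorization: Bearer <access-token>
Content-Type: text/plain

Hello user
```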
+
+| Send to user through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
+||-|-|--|--|--|||
+| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
+| Inbound/outbound messages per second | 180 | 360 | 900 | 1,800 | 3,600 | 9,000 | 18,000 |
+| Inbound/outbound bandwidth | **360 KBps** | **720 KBps** | **1.8 MBps** | **3.6 MBps** | **7.2 MBps** | **18 MBps** | **36 MBps** |
+
+#### Broadcast through REST API
+The bandwidth limit is the same as that for **send to big group**.
+
+| Broadcast through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
+||-|-|--|--|--|||
+| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
+| Inbound messages per second | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
+| Outbound messages per second | 3,000 | 6,000 | 15,000 | 30,000 | 60,000 | 150,000 | 300,000 |
+| Inbound bandwidth | 6 KBps | 6 KBps | 6 KBps | 6 KBps | 6 KBps | 6 KBps | 6 KBps |
+| Outbound bandwidth | **6 MBps** | **12 MBps** | **30 MBps** | **60 MBps** | **120 MBps** | **300 MBps** | **600 MBps** |
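+
+As a sketch of how these operations are typically invoked, the snippet below uses the `@azure/web-pubsub` server SDK, which wraps the same send-to-user and broadcast REST APIs measured above; the connection string, hub name, and user ID are assumptions:
+
+```js
+// Sketch: calling the send-to-user and broadcast REST APIs through the
+// @azure/web-pubsub server SDK. Connection string, hub, and userId are assumed.
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+const serviceClient = new WebPubSubServiceClient(
+  process.env.WebPubSubConnectionString, 'hub1');
+
+async function main() {
+  // Send to a single user; the benchmark above assigns usernames up front.
+  await serviceClient.sendToUser('user1', 'Hello user1!');
+
+  // Broadcast to every connection in the hub.
+  await serviceClient.sendToAll('Hello everyone!');
+}
+
+main().catch(console.error);
+```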
+
+## Next steps
+
azure-web-pubsub Concept Service Internals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/concept-service-internals.md
+
+ Title: Azure Web PubSub service internals
+description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
++++ Last updated : 08/18/2021++
+# Azure Web PubSub service internals
+
+Azure Web PubSub Service provides an easy way to publish/subscribe messages using simple [WebSocket](https://tools.ietf.org/html/rfc6455) connections.
+
+- Clients can be written in any language that has WebSocket support
+- Both text and binary messages are supported within one connection
+- A simple protocol lets clients publish messages directly to each other
+- The service manages the WebSocket connections for you
+
+## Terms
+* **Service**: Azure Web PubSub Service.
++
+* **Client Connection** and **ConnectionId**: A client connects to the `/client` endpoint of the service. When connected, the service generates a unique `connectionId` as the unique identity of the client connection. Users can then manage the client connection using this `connectionId`. Details are described in the [Client Protocol](#client_protocol) section.
+
+* **Client Events**: Events are created during the lifecycle of a client connection. For example, a simple WebSocket client connection creates a `connect` event when it tries to connect to the service, a `connected` event when it successfully connects to the service, a `message` event when it sends messages to the service, and a `disconnected` event when it disconnects from the service. Details about *client events* are illustrated in the [Client Protocol](#client_protocol) section.
+
+* **Event Handler**: The event handler contains the logic to handle the client events. Register and configure event handlers in the service through the portal or Azure CLI beforehand. Details are described in the [Event Handler](#event_handler) section. The place that hosts the event handler logic is considered the server side.
+
+* **Server**: The server can handle client events, manage client connections, and publish messages to groups. The server, compared to the client, is trustworthy. Details about the **server** are described in the [Server Protocol](#server_protocol) section.
+
+<a name="workflow"></a>
+
+## Workflow
+
+![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png)
+
+As illustrated by the above workflow graph:
+1. A *client* connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client_protocol).
+2. The service invokes the server using **CloudEvents HTTP protocol** on different client events. [**CloudEvents**](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md) is a standardized and protocol-agnostic definition of the structure and metadata description of events hosted by the Cloud Native Computing Foundation (CNCF). Details are described in [server protocol](#server_protocol).
+3. The server can invoke the service using the REST API to send messages to clients or to manage the connected clients. Details are described in [server protocol](#server_protocol).
+
+<a name="client_protocol"></a>
+
+## Client protocol
+
+A client connection connects to the `/client` endpoint of the service using [WebSocket protocol](https://tools.ietf.org/html/rfc6455). The WebSocket protocol provides full-duplex communication channels over a single TCP connection and was standardized by the IETF as RFC 6455 in 2011. Most languages have native support to start WebSocket connections.
+
+Our service supports two kinds of clients:
+- One is called [the simple WebSocket client](#simple_client)
+- The other is called [the PubSub WebSocket client](#pubsub_client)
+
+<a name="simple_client"></a>
+
+### The simple WebSocket client
+A simple WebSocket client, as the name indicates, is a simple WebSocket connection. It can also have its own custom subprotocol.
+
+For example, in JS, a simple WebSocket client can be created using:
+```js
+// simple WebSocket client1
+var client1 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1');
+
+// simple WebSocket client2 with some custom subprotocol
+var client2 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'custom.subprotocol');
+```
+
+A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows:
+![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png)
++
+1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler (the server) for the WebSocket handshake. Users can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
+2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Users can use this handler for data storage, and it can respond with messages sent to the client.
+3. When the client sends messages, the service triggers the `message` event on the event handler (the server) to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Users need to dispatch the messages on their own inside this event handler.
+4. When the client disconnects, the service tries to trigger the `disconnected` event on the event handler (the server) once it detects the disconnect.
+
+The events fall into two categories:
+* synchronous events (blocking)
+ Synchronous events block the client workflow. When such an event trigger fails, the service drops the client connection.
+ * `connect`
+ * `message`
+* asynchronous events (non-blocking)
+  Asynchronous events don't block the client workflow; they act as notifications to the upstream event handler. When such an event trigger fails, the service logs the error details.
+ * `connected`
+ * `disconnected`
+
+#### Scenarios:
+Such a connection can be used in a typical client-server architecture, where the client sends messages to the server and the server handles incoming messages using [Event Handlers](#event_handler). It can also be used when customers apply existing [subprotocols](https://www.iana.org/assignments/websocket/websocket.xml) in their application logic.
+
+<a name="pubsub_client"></a>
+
+### The PubSub WebSocket client
+The service also supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client.
+
+For example, in JS, a PubSub WebSocket client can be created using:
+```js
+// PubSub WebSocket client
+var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1');
+```
+
+A PubSub WebSocket client can:
+* Join a group, for example:
+ ```json
+ {
+ "type": "joinGroup",
+ "group": "<group_name>"
+ }
+ ```
+* Leave a group, for example:
+ ```json
+ {
+ "type": "leaveGroup",
+ "group": "<group_name>"
+ }
+ ```
+* Publish messages to a group, for example:
+ ```json
+ {
+ "type": "sendToGroup",
+ "group": "<group_name>",
+ "data": { "hello": "world" }
+ }
+ ```
+* Send custom events to the upstream server, for example:
+
+ ```json
+ {
+ "type": "event",
+ "event": "<event_name>",
+ "data": { "hello": "world" }
+ }
+ ```
+
+[PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol.
+
+You may have noticed that for a [simple WebSocket client](#simple_client), the *server* is a must-have role to handle the events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different upstreams (event handlers) by customizing the *event* the message belongs to.
+
+#### Scenarios:
+Such clients can be used when clients want to talk to each other. Messages are sent from `client1` to the service and the service delivers the message directly to `client2` if the clients are authorized to do so.
+
+Client1:
+
+```js
+var client1 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1");
+client1.onmessage = e => {
+ if (e.data) {
+ var message = JSON.parse(e.data);
+ if (message.type === "message"
+ && message.group === "Group1"){
+ // Only print messages from Group1
+ console.log(message.data);
+ }
+ }
+};
+
+client1.onopen = e => {
+ client1.send(JSON.stringify({
+ type: "joinGroup",
+ group: "Group1"
+ }));
+};
+```
+
+Client2:
+
+```js
+var client2 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1");
+client2.onopen = e => {
+ client2.send(JSON.stringify({
+ type: "sendToGroup",
+ group: "Group1",
+ data: "Hello Client1"
+  }));
+};
+```
+
+As the above example shows, `client2` sends data directly to `client1` by publishing messages to `Group1` which `client1` is in.
+
+<a name="client_message_limit"></a>
+
+### Client message limit
+The maximum allowed message size for one WebSocket frame is **1 MB**.
+
+<a name="client_auth"></a>
+
+### Client Auth
+
+#### Auth workflow
+
+Clients use a signed JWT token to connect to the service. The upstream can also reject the client in the `connect` event handler when the client is connecting. The event handler authenticates the client by specifying the `userId` and the `role`s the client has in the webhook response, or declines the client with a 401 response. The [Event handler](#event_handler) section describes it in detail.
+
+The below graph describes the workflow:
+
+![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png)
+
+As noted when describing the PubSub WebSocket clients, a client can publish to other clients only when it's *authorized* to. The `role`s of the client determine the *initial* permissions the client has:
+
+| Role | Permission |
+|||
+| Not specified | The client can send events.
+| `webpubsub.joinLeaveGroup` | The client can join/leave any group.
+| `webpubsub.sendToGroup` | The client can publish messages to any group.
+| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`.
+| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`.
+
+The server side can also grant or revoke permissions of the client dynamically through the [server protocol](#connection_manager), as illustrated in a later section.
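+
+As a sketch, the server side can mint such a signed JWT with an initial `userId` and roles using the `@azure/web-pubsub` server SDK; the connection string, hub, user ID, and group name below are assumptions:
+
+```js
+// Sketch: generating a client URL whose JWT carries an initial userId and
+// roles. Connection string, hub, userId, and group name are assumptions.
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+const serviceClient = new WebPubSubServiceClient(
+  process.env.WebPubSubConnectionString, 'hub1');
+
+async function getClientUrl() {
+  const token = await serviceClient.getClientAccessToken({
+    userId: 'user1',
+    roles: ['webpubsub.joinLeaveGroup.Group1', 'webpubsub.sendToGroup.Group1']
+  });
+  return token.url; // a wss://... client URL with the signed JWT attached
+}
+```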
+
+<a name="server_protocol"></a>
+
+## Server Protocol
+
+The server protocol provides the functionality for the server to manage client connections and groups.
+
+In general, the server protocol contains two roles:
+1. [Event handler](#event_handler)
+2. [Connection manager](#connection_manager)
+
+<a name="event_handler"></a>
+
+### Event handler
+The event handler handles incoming client events. Event handlers are registered and configured in the service through the portal or Azure CLI beforehand, so that when a client event is triggered, the service can identify whether the event is expected to be handled. For public preview, the service uses `PUSH` mode to invoke the event handler: the event handler, as the server side, exposes a publicly accessible endpoint for the service to invoke when the event is triggered. It acts as a **webhook**.
+
+The service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
+
+For every event, the service formulates an HTTP POST request to the registered upstream and expects an HTTP response.
+
+The data sent from the service to the server is always in CloudEvents `binary` format.
+
+![Diagram showing the Web PubSub service event push mode.](./media/concept-service-internals/event-push.png)
+
+#### Upstream and Validation
+
+As described above, event handlers must be registered and configured beforehand so that the service can identify whether a client event is expected to be handled. Because the event handler, as the server side, exposes a publicly accessible endpoint that the service invokes when the event is triggered, it acts as a **webhook** **upstream**.
+
+When configuring the webhook endpoint, the URL can use the `{event}` parameter to define a URL template. The service calculates the value of the webhook URL dynamically when the client request comes in. For example, when a request for `/client/hubs/chat` comes in, with a configured event handler URL pattern `http://host.com/api/{event}` for hub `chat`, when the client connects, it will first POST to this URL: `http://host.com/api/connect`. This behavior is useful when a PubSub WebSocket client sends custom events; the event handler helps dispatch different events to different upstreams. Note that the `{event}` parameter is not allowed in the URL domain name.
+
+When setting up the event handler upstream through the Azure portal or CLI, the service follows the [CloudEvents abuse protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) to validate the upstream webhook. The `WebHook-Request-Origin` request header is set to the service domain name `xxx.webpubsub.azure.com`, and the service expects the response to have a `WebHook-Allowed-Origin` header that contains this domain name.
+
+During the validation, the `{event}` parameter is resolved to `validate`. For example, when trying to set the URL to `http://host.com/api/{event}`, the service sends an **OPTIONS** request to `http://host.com/api/validate`, and only when the response is valid can the configuration be set successfully.
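+
+If you implement the upstream yourself, a minimal sketch of answering this validation request might look like the following; the Express app and the `/api/validate` route are assumptions:
+
+```js
+// Sketch: answering the CloudEvents abuse-protection preflight.
+// The Express app and the /api/validate route are assumptions.
+const express = require('express');
+const app = express();
+
+app.options('/api/validate', (req, res) => {
+  const origin = req.get('WebHook-Request-Origin');
+  // Echo the service domain back so the service accepts this upstream.
+  if (origin && origin.endsWith('.webpubsub.azure.com')) {
+    res.set('WebHook-Allowed-Origin', origin);
+    res.sendStatus(200);
+  } else {
+    res.sendStatus(403);
+  }
+});
+```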
+
+For now, we do not support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback).
+
+#### Authentication between service and webhook
+- Anonymous mode
+- Simple auth, where a `code` is provided through the configured webhook URL.
+- AAD auth:
+  - Add a client secret in AAD's [App Registrations] and provide the [client secret] to Azure Web PubSub through the portal/CLI.
+  - Provide the [Identity](/azure/app-service/overview-managed-identity?tabs=dotnet) to Azure Web PubSub through the portal/CLI.
+
+<a name="connection_manager"></a>
+
+### Connection manager
+
+The server is by nature an authorized user. With the help of the *event handler role*, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
+ - Close a client connection
+ - Send messages to a client
+ - Send messages to clients that belong to the same user
+ - Add a client to a group
+ - Add clients authed as the same user to a group
+ - Remove a client from a group
+ - Remove clients authed as the same user from a group
+ - Publish messages to a group
+
+It can also grant or revoke publish/join permissions for a PubSub client:
+ - Grant Join/Publish permissions to some specific group or to all groups
+ - Revoke Join/Publish permissions for some specific group or for all groups
+ - Check if the client has permission to Join/Publish to some specific group or to all groups
+
+For public preview, the service provides REST APIs for the server to do connection management:
+
+![Diagram showing the Web PubSub service connection manager workflow.](./media/concept-service-internals/manager-rest.png)
+
+The detailed REST API protocol is defined [here][rest].
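+
+As a sketch, the same management operations can be invoked through the `@azure/web-pubsub` server SDK instead of raw REST calls; the connection string, hub, and the IDs below are assumptions, and the exact method names should be checked against the SDK version you use:
+
+```js
+// Sketch: a few connection-management calls through the server SDK.
+// Connection string, hub, connectionId, userId, and group are assumptions.
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+const serviceClient = new WebPubSubServiceClient(
+  process.env.WebPubSubConnectionString, 'hub1');
+
+async function manage(connectionId) {
+  await serviceClient.group('Group1').addConnection(connectionId); // add a client to a group
+  await serviceClient.sendToUser('user1', 'hi');                   // message all of a user's clients
+  await serviceClient.grantPermission(connectionId, 'sendToGroup', // grant publish permission
+    { targetName: 'Group1' });
+  await serviceClient.closeConnection(connectionId, { reason: 'bye' }); // close a client connection
+}
+```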
+
+### Summary
+You may have noticed that the *event handler role* handles communication from the service to the server, while the *connection manager role* handles communication from the server to the service. Combining the two roles, the data flow between service and server looks like the following, using the HTTP protocol:
+
+![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png)
+
+[rest]: /rest/api/webpubsub/
+
+## Next steps
+
azure-web-pubsub Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/key-concepts.md
Last updated 08/06/2021
Azure Web PubSub Service helps you build real-time messaging web applications. The clients connect to the service using the [standard WebSocket protocol](https://datatracker.ietf.org/doc/html/rfc6455), and the service exposes [REST APIs](/rest/api/webpubsub) and SDKs for you to manage these clients.
+Here are some important terms used by the service:
+ [!INCLUDE [Terms](includes/terms.md)]
azure-web-pubsub Reference Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/reference-functions-bindings.md
+
+ Title: Reference - Azure Web PubSub trigger and bindings for Azure Functions
+description: The reference describes Azure Web PubSub trigger and bindings for Azure Functions
++++ Last updated : 08/16/2021++
+# Azure Web PubSub trigger and bindings for Azure Functions
+
+This reference explains how to handle Web PubSub events in Azure Functions.
+
+Web PubSub is an Azure-managed service that helps developers easily build web applications with real-time features and a publish-subscribe pattern.
+
+| Action | Type |
+|||
+| Run a function when messages come from the service | [Trigger binding](#trigger-binding) |
+| Return the service endpoint URL and access token | [Input binding](#input-binding) |
+| Send Web PubSub messages | [Output binding](#output-binding) |
+
+[Source code](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/webpubsub/) |
+[Package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSub) |
+[API reference documentation](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Microsoft.Azure.WebJobs.Extensions.WebPubSub/api/Microsoft.Azure.WebJobs.Extensions.WebPubSub.netstandard2.0.cs) |
+[Product documentation](https://aka.ms/awps/doc) |
+[Samples][samples_ref]
+
+## Add to your Functions app
+
+Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries, while the extension bundle is used for all other application types.
+
+| Language | Add by... | Remarks
+|-||-|
+| C# | Installing the [NuGet package], version prerelease | |
+| C# Script, JavaScript, Python, PowerShell | [Explicitly install extensions] | The [Azure Tools extension] is recommended to use with Visual Studio Code. |
+| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+
+Install the client library from [NuGet](https://www.nuget.org/) with specified package and version.
+
+```bash
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.WebPubSub --version 1.0.0-beta.3
+```
+
+[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSub
+[Explicitly install extensions]: /azure/azure-functions/functions-bindings-register#explicitly-install-extensions
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+[Update your extensions]: /azure/azure-functions/functions-bindings-register
+
+## Key concepts
+
+![Diagram showing the workflow of Azure Web PubSub service working with Function Apps.](./media/reference-functions-bindings/functions-workflow.png)
+
+(1)-(2) `WebPubSubConnection` input binding with HttpTrigger to generate the client connection.
+
+(3)-(4) `WebPubSubTrigger` trigger binding or `WebPubSubRequest` input binding with HttpTrigger to handle the service request.
+
+(5)-(6) `WebPubSub` output binding to request that the service do something.
+
+## Trigger binding
+
+Use the function trigger to handle requests from Azure Web PubSub service.
+
+`WebPubSubTrigger` is used when you need to handle requests from the service side. The trigger endpoint pattern is shown below and should be set on the Web PubSub service side (Portal: settings -> event handler -> URL Template). In the endpoint pattern, the query part `code=<API_KEY>` is **REQUIRED** when you're using Azure Function App, for [security](/azure/azure-functions/security-concepts#system-key) reasons. The key can be found in the **Azure portal**. Find your function app resource and navigate to **Functions** -> **App Keys** -> **System Keys** -> **webpubsub_extension** after you deploy the function app to Azure. This key isn't needed when you're working with local functions.
+
+```
+<Function_App_Url>/runtime/webhooks/webpubsub?code=<API_KEY>
+```
+
+### Example
++
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("WebPubSubTrigger")]
+public static void Run(
+ [WebPubSubTrigger("<hub>", "message", EventType.User)]
+ ConnectionContext context,
+ string message,
+ MessageDataType dataType)
+{
+    Console.WriteLine($"Request from: {context.UserId}");
+ Console.WriteLine($"Request message: {message}");
+ Console.WriteLine($"Request message DataType: {dataType}");
+}
+```
+
+The `WebPubSubTrigger` binding also supports return values in some scenarios, for example, the `Connect` and `Message` events, where the server can validate and deny the client request, or send messages to the requesting client directly. The `Connect` event respects `ConnectResponse` and `ErrorResponse`, and the `Message` event respects `MessageResponse` and `ErrorResponse`; returned types that don't match the current scenario are ignored. If `ErrorResponse` is returned, the service drops the client connection.
+
+```cs
+[FunctionName("WebPubSubTriggerReturnValue")]
+public static MessageResponse Run(
+ [WebPubSubTrigger("<hub>", "message", EventType.User)]
+ ConnectionContext context,
+ string message,
+ MessageDataType dataType)
+{
+ return new MessageResponse
+ {
+ Message = BinaryData.FromString("ack"),
+ DataType = MessageDataType.Text
+ };
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+Define trigger binding in `function.json`.
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "type": "webPubSubTrigger",
+ "direction": "in",
+ "name": "message",
+ "hub": "<hub>",
+ "eventName": "message",
+ "eventType": "user"
+ }
+ ]
+}
+```
+
+Define function in `index.js`.
+
+```js
+module.exports = function (context, message) {
+ console.log('Request from: ', context.userId);
+ console.log('Request message: ', message);
+ console.log('Request message dataType: ', context.bindingData.dataType);
+}
+```
+
+The `WebPubSubTrigger` binding also supports return values in some scenarios, for example, the `Connect` and `Message` events, where the server can validate and deny the client request, or send messages to the requesting client directly. In a type-less language like JavaScript, the return value is deserialized based on the object keys. `ErrorResponse` has the highest priority compared to the other objects: if `code` is present in the return value, it's parsed as an `ErrorResponse` and the client connection is dropped.
+
+```js
+module.exports = async function (context) {
+ return {
+ "message": "ack",
+ "dataType" : "text"
+ };
+}
+```
++++
+### Attributes and annotations
+
+In [C# class libraries](/azure/azure-functions/functions-dotnet-class-library), use the `WebPubSubTrigger` attribute.
+
+Here's a `WebPubSubTrigger` attribute in a method signature:
+
+```csharp
+[FunctionName("WebPubSubTrigger")]
+public static void Run([WebPubSubTrigger("<hub>", "<eventName>", <eventType>)]
+ConnectionContext context, ILogger log)
+{
+ ...
+}
+```
+
+For a complete example, see the C# example above.
+
+### Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a |Required - must be set to `webPubSubTrigger`. |
+| **direction** | n/a | Required - must be set to `in`. |
+| **name** | n/a | Required - the variable name used in function code for the parameter that receives the event data. |
+| **hub** | Hub | Required - the value must be set to the name of the Web PubSub hub for the function to be triggered. Setting the value in the attribute takes higher priority; it can also be set in app settings as a global value. |
+| **eventType** | EventType | Required - the value must be set as the event type of messages for the function to be triggered. The value should be either `user` or `system`. |
+| **eventName** | EventName | Required - the value must be set as the event of messages for the function to be triggered. </br> For the `system` event type, the event name should be one of `connect`, `connected`, `disconnect`. </br> For the system-supported subprotocol `json.webpubsub.azure.v1.`, the event name is the user-defined event name. </br> For user-defined subprotocols, the event name is `message`. |
+
+### Usages
+
+In C#, `ConnectionContext` is a type-recognized binding parameter, and the rest of the parameters are bound by parameter name. Check the table below for available parameters and types.
+
+In a type-less language like JavaScript, `name` in `function.json` is used to bind the trigger object according to the mapping table below. The runtime respects the `dataType` setting in `function.json` to convert the message accordingly when `name` is set to `message` as the binding object for trigger input. All the parameters can be read from `context.bindingData.<BindingName>` and are converted to `JObject`.
+
+| Binding Name | Binding Type | Description | Properties |
+|||||
+|connectionContext|`ConnectionContext`|Common request information| EventType, EventName, Hub, ConnectionId, UserId, Headers, Signature |
+|message|`BinaryData`,`string`,`Stream`,`byte[]`| Request message from client | -|
+|dataType|`MessageDataType`| Request message dataType, supports `binary`, `text`, `json` | -|
+|claims|`IDictionary<string, string[]>`|User Claims in `connect` request | -|
+|query|`IDictionary<string, string[]>`|User query in `connect` request | -|
+|subprotocols|`string[]`|Available subprotocols in `connect` request | -|
+|clientCertificates|`ClientCertificate[]`|A list of certificate thumbprints from clients in the `connect` request|-|
+|reason|`string`|Reason in disconnect request|-|
+
+### Return response
+
+`WebPubSubTrigger` respects the customer-returned response for the synchronous `connect` event and the user `message` event. Only a matched response is sent back to the service; otherwise, it's ignored.
+
+| Return Type | Description | Properties |
+||||
+|`ConnectResponse`| Response for `connect` event | Groups, Roles, UserId, Subprotocol |
+|`MessageResponse`| Response for user event | DataType, Message |
+|`ErrorResponse`| Error response for the sync event | Code, ErrorMessage |
+|`ServiceResponse`| Base response type of the supported ones used for uncertain return cases | - |
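+
+For example, in JavaScript a `connect` event function might return an object shaped like `ConnectResponse` to accept the client; the field values here are assumptions, and the key-based deserialization described earlier means an object with a `code` property would instead be parsed as an `ErrorResponse` and drop the connection:
+
+```js
+// Sketch of a connect-event return value shaped like ConnectResponse.
+// The userId/groups values are assumptions.
+module.exports = async function (context) {
+  return {
+    userId: 'user1',
+    groups: ['Group1']
+  };
+};
+```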
+
+## Input binding
+
+Our extension provides two input bindings targeting different needs.
+
+- `WebPubSubConnection`
+
+  To let a client connect to Azure Web PubSub Service, it must know the service endpoint URL and a valid access token. The `WebPubSubConnection` input binding produces the required information, so the client doesn't need to handle token generation itself. Because the token is time-limited and can be used to authenticate a specific user to a connection, don't cache the token or share it between clients. An HTTP trigger working with this input binding can be used for clients to retrieve the connection information.
+
+- `WebPubSubRequest`
+
+  When you use Static Web Apps, `HttpTrigger` is the only supported trigger. For the Web PubSub scenario, the `WebPubSubRequest` input binding helps you deserialize the upstream HTTP request from the service side under the Web PubSub protocols, so you can get results similar to `WebPubSubTrigger` for easy handling in functions. See [examples](#examplewebpubsubrequest) below.
+  When used with `HttpTrigger`, you need to configure the URL exposed by the HttpTrigger as the upstream accordingly.
+
+### Example - `WebPubSubConnection`
+
+The following example shows a C# function that acquires Web PubSub connection information using the input binding and returns it over HTTP.
+
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("WebPubSubConnectionInputBinding")]
+public static WebPubSubConnection Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+ [WebPubSubConnection(Hub = "<hub>", UserId = "{query.userid}")] WebPubSubConnection connection)
+{
+ Console.WriteLine("login");
+ return connection;
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+Define input bindings in `function.json`.
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "userId": "{query.userid}",
+ "hub": "<hub>",
+ "direction": "in"
+ }
+ ]
+}
+```
+
+Define function in `index.js`.
+
+```js
+module.exports = function (context, req, connection) {
+ context.res = { body: connection };
+ context.done();
+};
+```
+++
+### Authenticated tokens
+
+If the function is triggered by an authenticated client, you can add a user ID claim to the generated token. You can easily add authentication to a function app using App Service Authentication.
+
+App Service Authentication sets HTTP headers named `x-ms-client-principal-id` and `x-ms-client-principal-name` that contain the authenticated user's client principal ID and name, respectively.
+
+You can set the UserId property of the binding to the value from either header using a binding expression: `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+
+```cs
+[FunctionName("WebPubSubConnectionInputBinding")]
+public static WebPubSubConnection Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+ [WebPubSubConnection(Hub = "<hub>", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection)
+{
+ Console.WriteLine("login");
+ return connection;
+}
+```
+
+### Example - `WebPubSubRequest`
+
+The following example shows a C# function that acquires Web PubSub request information using the input binding under the `connect` event type and returns it over HTTP.
+
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("WebPubSubRequestInputBinding")]
+public static object Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+ [WebPubSubRequest] WebPubSubRequest wpsReq)
+{
+ if (wpsReq.Request.IsValidationRequest || !wpsReq.Request.Valid)
+ {
+ return wpsReq.Response;
+ }
+ var request = wpsReq.Request as ConnectEventRequest;
+ var response = new ConnectResponse
+ {
+ UserId = wpsReq.ConnectionContext.UserId
+ };
+ return response;
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+Define input bindings in `function.json`.
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "webPubSubRequest",
+ "name": "wpsReq",
+ "direction": "in"
+ }
+ ]
+}
+```
+
+Define function in `index.js`.
+
+```js
+module.exports = async function (context, req, wpsReq) {
+ if (!wpsReq.request.valid || wpsReq.request.isValidationRequest)
+ {
+ console.log(`invalid request: ${wpsReq.response.message}.`);
+ return wpsReq.response;
+ }
+ console.log(`user: ${context.bindings.wpsReq.connectionContext.userId} is connecting.`);
+ return { body: {"userId": context.bindings.wpsReq.connectionContext.userId} };
+};
+```
+++
+### Configuration
+
+#### WebPubSubConnection
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSubConnection` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSubConnection` |
+| **direction** | n/a | Must be set to `in` |
+| **name** | n/a | Variable name used in function code for input connection binding object. |
+| **hub** | Hub | The value must be set to the name of the Web PubSub hub for the function to be triggered. Setting the value in the attribute takes higher priority; it can also be set in app settings as a global value. |
+| **userId** | UserId | Optional - the value of the user identifier claim to be set in the access key token. |
+| **connectionStringSetting** | ConnectionStringSetting | The name of the app setting that contains the Web PubSub Service connection string (defaults to "WebPubSubConnectionString") |
+
+#### WebPubSubRequest
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSubRequest` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSubRequest` |
+| **direction** | n/a | Must be set to `in` |
+| **name** | n/a | Variable name used in function code for input Web PubSub request. |
+
+### Usage
+
+#### WebPubSubConnection
+
+`WebPubSubConnection` provides the following properties.
+
+Binding Name | Binding Type | Description
+||
+BaseUrl | string | Web PubSub client connection URL
+Url | string | Absolute URI of the Web PubSub connection; contains the `AccessToken` generated based on the request
+AccessToken | string | Generated `AccessToken` based on the request UserId and service information
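+
+As a sketch, a browser client might call the HTTP-triggered function shown earlier and use the returned connection to open the WebSocket; the `/api/login` route and the camel-cased `url` property are assumptions about how your function exposes and serializes the payload:
+
+```js
+// Sketch: a browser client consuming the WebPubSubConnection payload.
+// The /api/login route and the `url` property casing are assumptions.
+(async () => {
+  const res = await fetch('/api/login?userid=user1');
+  const connection = await res.json();
+  // connection.url already embeds the generated access token.
+  const ws = new WebSocket(connection.url);
+  ws.onopen = () => console.log('connected');
+})();
+```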
+
+#### WebPubSubRequest
+
+`WebPubSubRequest` provides the following properties.
+
+Binding Name | Binding Type | Description | Properties
+|||
+connectionContext | `ConnectionContext` | Common request information| EventType, EventName, Hub, ConnectionId, UserId, Headers, Signature
+request | `ServiceRequest` | Request from the client; see the table below for details | IsValidationRequest, Valid, Unauthorized, BadRequest, ErrorMessage, Name, etc.
+response | `HttpResponseMessage` | Extension builds response mainly for `AbuseProtection` and errors cases | -
+
+`ServiceRequest` is deserialized to different classes that provide different information about the request scenario. For `ValidationRequest` or `InvalidRequest`, it's suggested to return the system-built response `WebPubSubRequest.Response` directly, or you can log errors as needed. In different scenarios, you can read the request properties as below.
+
+Derived Class | Description | Properties
+--|--|--
+`ValidationRequest` | Used in `AbuseProtection` when `IsValidationRequest` is **true** | -
+`ConnectEventRequest` | Used in the `Connect` event type | Claims, Query, Subprotocols, ClientCertificates
+`ConnectedEventRequest` | Used in the `Connected` event type | -
+`MessageEventRequest` | Used in the user event type | Message, DataType
+`DisconnectedEventRequest` | Used in the `Disconnected` event type | Reason
+`InvalidRequest` | Used when the request is invalid | -
+
+## Output binding
+
+Use the Web PubSub output binding to send one or more messages using Azure Web PubSub Service. You can broadcast a message to:
+
+* All connected clients
+* Connected clients authenticated to a specific user
+* Connected clients joined in a specific group
+
+The output binding also allows you to manage groups and grant or revoke permissions for a specific connection ID within a group.
+
+For information on setup and configuration details, see the overview.
+
+### Example
+
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("WebPubSubOutputBinding")]
+public static async Task RunAsync(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
+ [WebPubSub(Hub = "<hub>")] IAsyncCollector<WebPubSubOperation> operations)
+{
+ await operations.AddAsync(new SendToAll
+ {
+ Message = BinaryData.FromString("Hello Web PubSub"),
+ DataType = MessageDataType.Text
+ });
+}
+```
+
+# [JavaScript](#tab/javascript)
+
+Define bindings in `function.json`.
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "type": "webPubSub",
+ "name": "webPubSubOperation",
+ "hub": "<hub>",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+Define function in `index.js`.
+
+```js
+module.exports = async function (context) {
+ context.bindings.webPubSubOperation = {
+ "operationKind": "sendToAll",
+ "message": "hello",
+ "dataType": "text"
+ };
+ context.done();
+}
+```
+++
+### WebPubSubOperation
+
+`WebPubSubOperation` is the abstract base type of the output binding operations. The derived types represent the operations the server wants the service to invoke. In a type-less language like `javascript`, `OperationKind` is the key parameter used to resolve the type. In a strongly typed language like `csharp`, you can create the target operation type directly, and any assigned `OperationKind` value is ignored.
+
+Derived Class|Properties
+--|--
+`SendToAll`|Message, DataType, Excluded
+`SendToGroup`|Group, Message, DataType, Excluded
+`SendToUser`|UserId, Message, DataType
+`SendToConnection`|ConnectionId, Message, DataType
+`AddUserToGroup`|UserId, Group
+`RemoveUserFromGroup`|UserId, Group
+`RemoveUserFromAllGroups`|UserId
+`AddConnectionToGroup`|ConnectionId, Group
+`RemoveConnectionFromGroup`|ConnectionId, Group
+`CloseClientConnection`|ConnectionId, Reason
+`GrantGroupPermission`|ConnectionId, Group, Permission, TargetName
+`RevokeGroupPermission`|ConnectionId, Group, Permission, TargetName
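+
+For instance, in JavaScript the same output binding shown earlier can carry a group-management operation by setting `operationKind`; the connection ID and group name below are placeholders:
+
+```js
+// Sketch: an AddConnectionToGroup operation through the output binding.
+// The binding name matches the earlier function.json; IDs are placeholders.
+module.exports = async function (context) {
+  context.bindings.webPubSubOperation = {
+    operationKind: 'addConnectionToGroup',
+    connectionId: '<connection_id>',
+    group: 'Group1'
+  };
+};
+```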
+
+### Configuration
+
+#### WebPubSub
+
+The following table explains the binding configuration properties that you set in the function.json file and the `WebPubSub` attribute.
+
+| function.json property | Attribute property | Description |
+||||
+| **type** | n/a | Must be set to `webPubSub` |
+| **direction** | n/a | Must be set to `out` |
+| **name** | n/a | Variable name used in function code for output binding object. |
+| **hub** | Hub | The value must be set to the name of the Web PubSub hub for the function to be triggered. Setting the value in the attribute takes higher priority; it can also be set in app settings as a global value. |
+| **connectionStringSetting** | ConnectionStringSetting | The name of the app setting that contains the Web PubSub Service connection string (defaults to "WebPubSubConnectionString") |
+
+## Troubleshooting
+
+### Setting up console logging
+You can also easily [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
+
+[azure_sub]: https://azure.microsoft.com/free/
+[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/functions
+
+## Next steps
+
backup Backup Center Govern Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-govern-environment.md
Backup center helps you govern your Azure environment to ensure that all your re
## Azure Policies for backup
-To view all the [Azure Policies](../governance/policy/overview.md) that are available for backup, select the **Azure Policies for Backup** menu item. This will display all the built-in and custom [Azure policy definitions for backup](policy-reference.md) that are available for assignment to your subscriptions and resource groups.
+To view all the [Azure Policies](../governance/policy/overview.md) that are available for backup, select the **Azure Policies for Backup** menu item. This will display all the built-in and custom [Azure Policy definitions for backup](policy-reference.md) that are available for assignment to your subscriptions and resource groups.
Selecting any of the definitions allows you to [assign the policy](../governance/policy/tutorials/create-and-manage.md#assign-a-policy) to a scope.
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/guidance-best-practices.md
You can use a single vault or multiple vaults to organize and manage your backup
* If your workloads are spread across subscriptions, then you can create multiple vaults, one or more per subscription. * Backup Center allows you to have a single pane of glass to manage all tasks related to Backup. [Learn more here](). * You can customize your views with workbook templates. Backup Explorer is one such template for Azure VMs. [Learn more here](monitor-azure-backup-with-backup-explorer.md).
- * If you needed consistent policy across vaults, then you can use Azure policy to propagate backup policy across multiple vaults. You can write a custom [Azure Policy definition](../governance/policy/concepts/definition-structure.md) that uses the [ΓÇÿdeployifnotexistsΓÇÖ](../governance/policy/concepts/effects.md#deployifnotexists) effect to propagate a backup policy across multiple vaults. You can also [assign](../governance/policy/assign-policy-portal.md) this Azure Policy definition to a particular scope (subscription or RG), so that it deploys a 'backup policy' resource to all Recovery Services vaults in the scope of the Azure Policy assignment. The settings of the backup policy (such as backup frequency, retention, and so on) should be specified by the user as parameters in the Azure Policy assignment.
+ * If you needed consistent policy across vaults, then you can use Azure Policy to propagate backup policy across multiple vaults. You can write a custom [Azure Policy definition](../governance/policy/concepts/definition-structure.md) that uses the [ΓÇÿdeployifnotexistsΓÇÖ](../governance/policy/concepts/effects.md#deployifnotexists) effect to propagate a backup policy across multiple vaults. You can also [assign](../governance/policy/assign-policy-portal.md) this Azure Policy definition to a particular scope (subscription or RG), so that it deploys a 'backup policy' resource to all Recovery Services vaults in the scope of the Azure Policy assignment. The settings of the backup policy (such as backup frequency, retention, and so on) should be specified by the user as parameters in the Azure Policy assignment.
* As your organizational footprint grows, you might want to move workloads across subscriptions for the following reasons: align by backup policy, consolidate vaults, trade-off on lower redundancy to save on cost (move from GRS to LRS). Azure Backup supports moving a Recovery Services vault across Azure subscriptions, or to another resource group within the same subscription. [Learn more here](backup-azure-move-recovery-services-vault.md).
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 06/09/2021 Last updated : 08/20/2021
To allow compute nodes to communicate securely with other virtual machines, or w
Once you have created your VNet and assigned a subnet to it, you can create a Batch pool with that VNet. Follow these steps to create a pool from the Azure portal:  1. Navigate to your Batch account in the Azure portal. This account must be in the same subscription and region as the resource group containing the VNet you intend to use.
-2. In the **Settings** window on the left, select the **Pools** menu item.
-3. In the **Pools** window, select **Add**.
-4. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
-5. Select the correct **Publisher/Offer/Sku** for your custom image.
-6. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Low priority nodes**, as well as any desired optional settings.
-7. In **Virtual Network**, select the virtual network and subnet you wish to use.
+1. In the **Settings** window on the left, select the **Pools** menu item.
+1. In the **Pools** window, select **Add**.
+1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown.
+1. Select the correct **Publisher/Offer/Sku** for your custom image.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Low priority nodes**, as well as any desired optional settings.
+1. In **Virtual Network**, select the virtual network and subnet you wish to use.
+1. Select **OK** to create your pool.
- ![Add pool with virtual network](./media/batch-virtual-network/add-vnet-pool.png)
+> [!IMPORTANT]
+> If you try to delete a subnet which is being used by a pool, you will get an error message. All pools using a subnet must be deleted before you delete that subnet.
## User-defined routes for forced tunneling
To ensure that the nodes in your pool work in a VNet that has forced tunneling e
When you add a UDR, define the route for each related Batch IP address prefix, and set **Next hop type** to **Internet**.
-![User-defined route](./media/batch-virtual-network/user-defined-route.png)
- > [!WARNING] > Batch service IP addresses can change over time. To prevent outages due to an IP address change, create a process to refresh Batch service IP addresses automatically and keep them up to date in your route table.
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/13/2021 Last updated : 8/20/2021 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## August 2021 Guest OS
+
+> [!NOTE]
+> The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-08 | [5005030] | Latest Cumulative Update (LCU) | 6.34 | Aug 10, 2021 |
+| Rel 21-08 | [5005036] | IE Cumulative Updates | 2.113, 3.100, 4.93 | Aug 10, 2021 |
+| Rel 21-08 | [5004238] | Latest Cumulative Update (LCU) | 5.58 | July 13, 2021 |
+| Rel 21-08 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.113 | Feb 16, 2021 |
+| Rel 21-08 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.113 | Jun 8, 2021 |
+| Rel 21-08 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.93 | Feb 16, 2021 |
+| Rel 21-08 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.93 | Feb 16, 2021 |
+| Rel 21-08 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.100 | Feb 16, 2021 |
+| Rel 21-08 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | 3.100 | Feb 16, 2021 |
+| Rel 21-08 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.34 | Aug 10, 2021 |
+| Rel 21-08 | [5005088] | Monthly Rollup  | 2.113 | Aug 10, 2021 |
+| Rel 21-08 | [5005099] | Monthly Rollup  | 3.100 | Aug 10, 2021 |
+| Rel 21-08 | [5005076] | Monthly Rollup  | 4.93 | Aug 10, 2021 |
+| Rel 21-08 | [5001401] | Servicing Stack update  | 3.100 | Apr 13, 2021 |
+| Rel 21-08 | [5001403] | Servicing Stack update  | 4.93 | Apr 13, 2021 |
+| Rel 21-08 OOB | [4578013] | Standalone Security Update  | 4.93 | Aug 19, 2020 |
+| Rel 21-08 | [5001402] | Servicing Stack update  | 5.58 | Apr 13, 2021 |
+| Rel 21-08 | [5004378] | Servicing Stack update  | 2.113 | July 13, 2021 |
+| Rel 21-08 | [5005112] | Servicing Stack update  | 6.34 | Aug 10, 2021 |
+| Rel 21-08 | [4494175] | Microcode  | 5.58 | Sep 1, 2020 |
+| Rel 21-08 | [4494174] | Microcode  | 6.34 | Sep 1, 2020 |
+
+[5005030]: https://support.microsoft.com/kb/5005030
+[5005036]: https://support.microsoft.com/kb/5005036
+[5004238]: https://support.microsoft.com/kb/5004238
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[5004335]: https://support.microsoft.com/kb/5004335
+[5005088]: https://support.microsoft.com/kb/5005088
+[5005099]: https://support.microsoft.com/kb/5005099
+[5005076]: https://support.microsoft.com/kb/5005076
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
+[4578013]: https://support.microsoft.com/kb/4578013
+[5001402]: https://support.microsoft.com/kb/5001402
+[5004378]: https://support.microsoft.com/kb/5004378
+[5005112]: https://support.microsoft.com/kb/5005112
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+ ## July 2021 Guest OS
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
| `name` | string| Friendly name for this zone.| | `polygon` | list| Each value pair represents the x,y for vertices of polygon. The polygon represents the areas in which people are counted and the distance between people is measured. The float values represent the position of the vertex relative to the top,left corner. To calculate the absolute x, y values, you multiply these values with the frame size. | `threshold` | float| Events are egressed when the person is greater than this number of pixels inside the zone. |
-| `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance** this should be `people_distance`.|
+| `type` | string| For **cognitiveservices.vision.spatialanalysis-persondistance** this should be `persondistance`.|
| `trigger` | string| The type of trigger for sending an event. Supported values are `event` for sending events when the count changes or `interval` for sending events periodically, irrespective of whether the count has changed or not. | `output_frequency` | int | The rate at which events are egressed. When `output_frequency` = X, every X event is egressed, ex. `output_frequency` = 2 means every other event is output. The `output_frequency` is applicable to both `event` and `interval`.| | `minimum_distance_threshold` | float| A distance in feet that will trigger a "TooClose" event when people are less than that distance apart.|
cognitive-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/copy-move-projects.md
You'll get a `200/OK` response with metadata about the exported project and a re
} ```
+> [!TIP]
+> If you get an "Invalid Token" error when you import your project, it could be that the token URL string isn't web encoded. You can encode the token using a [URL Encoder](https://meyerweb.com/eric/tools/dencoder/).
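+
+For example, in JavaScript the token string can be web encoded with the standard `encodeURIComponent` function before it's used; `token` here is a placeholder for the reference token returned by the export call:
+
+```js
+// Web encode the reference token before placing it in a URL (sketch).
+const encodedToken = encodeURIComponent(token);
+```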
+ ## Import the project Call **[ImportProject](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
You'll get a `200/OK` response with metadata about your newly imported project.
## Next steps In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
+* [REST API reference documentation](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/use-persondirectory.md
HttpResponseMessage response;
// Request body var body = new Dictionary<string, object>(); body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
-body.Add("personIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"*"});
byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body)); using (var content = new ByteArrayContent(byteData))
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
For more information on face detection and analysis, see the [Face detection](co
## Identity verification
-Modern enterprises and apps can use the the Face identification and Face verification operations to verify that a user is who they claim to be. Face identification can be thought of as "one-to-many" matching. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building access to a certain group of people or verifying the user of a device.
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
+
+### Identification
+
+Face identification can be thought of as "one-to-many" matching. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building access to a certain group of people or verifying the user of a device.
The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
container-instances Container Instances Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-support-help.md
If you need help with the language and tools used to develop and manage Azure Co
| Ansible | https://github.com/Azure/Ansible/issues | -
-## Submit feature requests on Azure Feedback
-
-<div class='icon is-large'>
- <img alt='UserVoice' src='./media/logos/azure-feedback-logo.png'>
-</div>
-
-To request new features, post them on Azure Feedback. Share your ideas for improving Azure Container Instances.
-
-| Service | Azure Feedback URL |
-|-||
-| Azure Container Instances | https://feedback.azure.com/forums/602224-azure-container-instances
- ## Stay informed of updates and new releases <div class='icon is-large'>
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
Title: Built-in policy definitions for Azure Container Instances description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
Title: Encrypt registry with a customer-managed key description: Learn about encryption-at-rest of your Azure container registry, and how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault Previously updated : 06/25/2021 Last updated : 08/16/2021
This feature is available in the **Premium** container registry service tier. Fo
* [Content trust](container-registry-content-trust.md) is currently not supported in a registry encrypted with a customer-managed key. * In a registry encrypted with a customer-managed key, run logs for [ACR Tasks](container-registry-tasks-overview.md) are currently retained for only 24 hours. If you need to retain logs for a longer period, see guidance to [export and store task run logs](container-registry-tasks-logs.md#alternative-log-storage). -
-> [!IMPORTANT]
-> If you plan to store the registry encryption key in an existing Azure key vault that denies public access and allows only private endpoint or selected virtual networks, extra configuration steps are needed. See [Advanced scenario: Key Vault firewall](#advanced-scenario-key-vault-firewall) in this article.
- ## Automatic or manual update of key versions An important consideration for the security of a registry encrypted with a customer-managed key is how frequently you update (rotate) the encryption key. Your organization might have compliance policies that require regularly updating key [versions](../key-vault/general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) stored in Azure Key Vault when used as customer-managed keys.
For use in later steps, get the resource ID of the key vault:
keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key-vault-name> --query 'id' --output tsv) ```
-### Enable key vault access
+### Enable key vault access by trusted services
+
+If the key vault is protected with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services).
+
+For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-cli).
++
+### Enable key vault access by managed identity
#### Enable key vault access policy
When creating a key vault for a customer-managed key, in the **Basics** tab, ena
:::image type="content" source="media/container-registry-customer-managed-keys/create-key-vault.png" alt-text="Create key vault in the Azure portal":::
-### Enable key vault access
+### Enable key vault access by trusted services
+
+If the key vault is protected with a firewall or virtual network (private endpoint), enable the network setting to allow access by [trusted Azure services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services).
+
+For more information, see [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal).
+
+### Enable key vault access by managed identity
#### Enable key vault access policy
One option is to configure a policy for the key vault so that the identity can a
:::image type="content" source="media/container-registry-customer-managed-keys/add-key-vault-access-policy.png" alt-text="Create key vault access policy":::
-#### Assign RBAC role
+#### Assign RBAC role
Alternatively, assign the Key Vault Crypto Service Encryption User role to the user-assigned managed identity at the key vault scope.
For detailed steps, see [Assign Azure roles using the Azure portal](../role-base
### Create key (optional)
-Optionally create a key in the key vault for use to encrypt the registry. Follow these steps if you want to select a specific key version as a customer-managed key.
+Optionally, create a key in the key vault for encrypting the registry. Follow these steps if you want to select a specific key version as a customer-managed key. You might also need to create a key before creating the registry if key vault access is restricted to a private endpoint or selected networks.
1. Navigate to your key vault. 1. Select **Settings** > **Keys**.
Optionally create a key in the key vault for use to encrypt the registry. Follow
1. In **Identity**, select the managed identity you created. 1. In **Encryption**, choose either of the following: * Select **Select from Key Vault**, and select an existing key vault and key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
- * Select **Enter key URI**, and provide a key identifier directly. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation).
+ * Select **Enter key URI**, and provide the identifier of an existing key. You can provide either a versioned key URI (for a key that must be rotated manually) or a non-versioned key URI (which enables automatic key rotation). See the previous section for steps to create a key.
1. In the **Encryption** tab, select **Review + create**. 1. Select **Create** to deploy the registry instance.
Update the key version in Azure Key Vault, or create a new key, and then update
When rotating a key, typically you specify the same identity used when creating the registry. Optionally, configure a new user-assigned identity for key access, or enable and specify the registry's system-assigned identity. > [!NOTE]
-> Ensure that the required [key vault access](#enable-key-vault-access) is set for the identity you configure for key access.
+> * To enable the registry's system-assigned identity in the portal, select **Settings** > **Identity** and set the system-assigned identity's status to **On**.
+> * Ensure that the required [key vault access](#enable-key-vault-access-by-managed-identity) is set for the identity you configure for key access.
### Update key version
az keyvault delete-policy \
Revoking the key effectively blocks access to all registry data, since the registry can't access the encryption key. If access to the key is enabled or the deleted key is restored, your registry will pick up the key so that you can again access the encrypted registry data.
-## Advanced scenario: Key Vault firewall
-
-> [!IMPORTANT]
-> Currently, during registry deployment, a registry's *user-assigned* identity can only be configured to access an encryption key in a key vault that allows public access, not one configured with a [Key Vault firewall](../key-vault/general/network-security.md).
->
-> To access a key vault protected with a Key Vault firewall, the registry must bypass the firewall using its *system-managed* identity. Currently these settings can only be configured after the registry is deployed.
-
-For this scenario, first create a new user-assigned identity, key vault, and container registry encrypted with a customer-managed key, using the [Azure CLI](#enable-customer-managed-keycli), [portal](#enable-customer-managed-keyportal), or [template](#enable-customer-managed-keytemplate). Detailed steps are in preceding sections in this article.
- > [!NOTE]
- > The new key vault is deployed outside the firewall. It's only used temporarily to store the customer-managed key.
-
-After registry creation, continue with the following steps. Details are in the following sections.
-
-1. Enable the registry's system-assigned identity.
-1. Grant the system-assigned identity permissions to access keys in the key vault that's restricted with the Key Vault firewall.
-1. Ensure that the Key Vault firewall allows bypass by trusted services. Currently, an Azure container registry can only bypass the firewall when using its system-managed identity.
-1. Rotate the customer-managed key by selecting one in the key vault that's restricted with the Key Vault firewall.
-1. When no longer needed, you may delete the key vault that was created outside the firewall.
--
-### Step 1 - Enable registry's system-assigned identity
-
-1. In the portal, navigate to your registry.
-1. Select **Settings** > **Identity**.
-1. Under **System assigned**, set **Status** to **On**. Select **Save**.
-1. Copy the **Object ID** of the identity.
-
-### Step 2 - Grant system-assigned identity access to your key vault
-
-1. In the portal, navigate to your key vault.
-1. Select **Settings** > **Access policies > +Add Access Policy**.
-1. Select **Key permissions**, and select **Get**, **Unwrap Key**, and **Wrap Key**.
-1. Choose **Select principal** and search for the object ID of your system-assigned managed identity, or the name of your registry.
-1. Select **Add**, then select **Save**.
-
-### Step 3 - Enable key vault bypass
-
-To access a key vault configured with a Key Vault firewall, the registry must bypass the firewall. Ensure that the key vault is configured to allow access by any [trusted service](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Container Registry is one of the trusted services.
-
-1. In the portal, navigate to your key vault.
-1. Select **Settings** > **Networking**.
-1. Confirm, update, or add virtual network settings. For detailed steps, see [Configure Azure Key Vault firewalls and virtual networks](../key-vault/general/network-security.md).
-1. In **Allow Microsoft Trusted Services to bypass this firewall**, select **Yes**.
-
-### Step 4 - Rotate the customer-managed key
-
-After completing the preceding steps, rotate to a key that's stored in the key vault behind a firewall.
-
-1. In the portal, navigate to your registry.
-1. Under **Settings**, select **Encryption** > **Change key**.
-1. In **Identity**, select **System Assigned**.
-1. Select **Select from Key Vault**, and select the name of the key vault that's behind a firewall.
-1. Select an existing key, or **Create new**. The key you select is non-versioned and enables automatic key rotation.
-1. Complete the key selection and select **Save**.
- ## Troubleshoot ### Removing managed identity
Then, after changing the key and assigning a different identity, you can remove
If this issue occurs with a system-assigned identity, please [create an Azure support ticket](https://azure.microsoft.com/support/create-ticket/) for assistance to restore the identity.
+### Enabling key vault firewall
+
+If you enable a key vault firewall or virtual network after creating an encrypted registry, you might see HTTP 403 or other errors with image import or automated key rotation. To correct this problem, reconfigure the managed identity and key you used initially for encryption. See steps in [Rotate key](#rotate-key).
+
+If the problem persists, please contact Azure Support.
## Next steps
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
Title: Troubleshoot issues with advanced diagnostics queries for Cassandra API
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Cassandra API
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the Cassandra API.
Last updated 06/12/2021
-# Troubleshoot issues with advanced diagnostics queries for Cassandra API
+# Troubleshoot issues with advanced diagnostics queries for the Cassandra API
[!INCLUDE[appliesto-all-apis-except-table](../includes/appliesto-all-apis-except-table.md)]
> * [Gremlin API](../queries-gremlin.md)
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
-- Top N(10) RU consuming requests/queries in a given time frame
+### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Requests throttled (statusCode = 429) in a given time window
+### Requests throttled (statusCode = 429) in a specific time window
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Queries with large response lengths (payload size of the server response)
+### Queries with large response lengths (payload size of the server response)
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by physical partition (across all replicas in the replica set)
+### RU consumption by physical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by logical partition (across all replicas in the replica set)
+### RU consumption by logical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
## Next steps
-* For more information on how to create diagnostic settings for Cosmos DB see [Creating Diagnostics settings](../cosmosdb-monitor-resource-logs.md) article.
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) article.
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Manage Data Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/manage-data-java.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. > [!NOTE]
-> This is a simple quickstart which uses [version 3](https://github.com/datastax/java-driver/tree/3.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra dependent Java application to Azure Cosmos DB Cassandra API without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/feature/java-driver-3%2F1.0.0), which includes custom retry and load balancing policies, for a better overall experience. This is to handle [rate limiting](/scale-account-throughput.md#handling-rate-limiting-429-errors) and application level failover in Azure Cosmos DB respectively. You can find a comprehensive sample which implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample).
+> This is a simple quickstart that uses [version 3](https://github.com/datastax/java-driver/tree/3.x) of the open-source Apache Cassandra driver for Java. In most cases, you should be able to connect an existing Apache Cassandra-dependent Java application to Azure Cosmos DB Cassandra API without any changes to your existing code. However, we recommend adding our [custom Java extension](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/feature/java-driver-3%2F1.0.0), which includes custom retry and load balancing policies, for a better overall experience. These policies handle [rate limiting](/azure/cosmos-db/cassandra/scale-account-throughput#handling-rate-limiting-429-errors) and application-level failover in Azure Cosmos DB, respectively. You can find a comprehensive sample that implements the extension [here](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample).
## Create a database account
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/troubleshoot-common-issues.md
cluster = Cluster.builder()
If the value for `withLocalDc()` doesn't match the contact point datacenter, you might experience an intermittent error: `com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)`.
-Implement the [CosmosLoadBalancingPolicy](https://github.com/Azure/azure-cosmos-cassandra-extensions/blob/master/package/src/main/java/com/microsoft/azure/cosmos/cassandra/CosmosLoadBalancingPolicy.java). To make it work, you might need to upgrade DataStax by using the following code:
+Implement the [CosmosLoadBalancingPolicy](https://github.com/Azure/azure-cosmos-cassandra-extensions/blob/master/driver-3/src/main/java/com/azure/cosmos/cassandra/CosmosLoadBalancingPolicy.java). To make it work, you might need to upgrade DataStax by using the following code:
```java LoadBalancingPolicy loadBalancingPolicy = new CosmosLoadBalancingPolicy.Builder().withWriteDC("West US").withReadDC("West US").build();
cosmos-db Convert Vcore To Request Unit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/convert-vcore-to-request-unit.md
+
+ Title: 'Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s'
+description: 'Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s'
+++++ Last updated : 08/20/2021+
+# Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s
+
+This article explains how to estimate Azure Cosmos DB request units (RU/s) when you are considering data migration but all you know is the total vCore or vCPU count in your existing database replica set(s). When you migrate one or more replica sets to Azure Cosmos DB, each collection held in those replica sets will be stored as an Azure Cosmos DB collection consisting of a sharded cluster with a 4x replication factor. You can read more about our architecture in this [partitioning and scaling guide](partitioning-overview.md). Request units are how throughput capacity is provisioned on a collection; you can [read the request units guide](request-units.md) and the RU/s [provisioning guide](set-throughput.md) to learn more. When you migrate a collection, Azure Cosmos DB provisions enough shards to serve your provisioned request units and store your data. Therefore, estimating RU/s for collections is an important step in scoping the scale of your planned Azure Cosmos DB data estate prior to migration. Based on our experience with thousands of customers, we have found that this formula helps us arrive at a rough starting-point RU/s estimate from vCores or vCPUs:
+
+`
+Provisioned RU/s = C*T/R
+`
+
+* *T*: Total vCores and/or vCPUs in your existing database **data-bearing** replica set(s).
+* *R*: Replication factor of your existing **data-bearing** replica set(s).
+* *C*: Recommended provisioned RU/s per vCore or vCPU. This value derives from the architecture of Azure Cosmos DB:
+ * *C = 600 RU/s/vCore* for Azure Cosmos DB SQL API
+ * *C = 1000 RU/s/vCore* for Azure Cosmos DB API for MongoDB v4.0
+ * *C* estimates for Cassandra API, Gremlin API, or other APIs are not currently available
+
+Values for *C* are provided above. ***T* must be determined by examining the number of vCores or vCPUs in each data-bearing replica set of your existing database, and summing to get the total**; if you cannot estimate *T* then consider following our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) instead of this guide. *T* should not include *vCores* or *vCPUs* associated with your existing database's routing server or configuration cluster, if it has those components.
+
+For *R*, we recommend plugging in the average replication factor of your database replica sets; if this information is not available then *R=3* is a good rule of thumb.
+
+Azure Cosmos DB interop APIs run on top of the SQL API and implement their own unique architectures; thus Azure Cosmos DB API for MongoDB v4.0 has a different *C*-value than Azure Cosmos DB SQL API.
+
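+As a quick illustration, here's a minimal C# sketch of this arithmetic; the constants are the *C* values listed above:
+
+```csharp
+using System;
+
+// Provisioned RU/s = C * T / R
+static double EstimateRuPerSecond(double vCoresTotal, double replicationFactor, double ruPerVCore)
+    => ruPerVCore * vCoresTotal / replicationFactor;
+
+// Example: 12 total vCores across the data-bearing replicas, replication factor 3.
+Console.WriteLine(EstimateRuPerSecond(12, 3, 600));   // 2400 RU/s for SQL API
+Console.WriteLine(EstimateRuPerSecond(12, 3, 1000));  // 4000 RU/s for API for MongoDB v4.0
+```
+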
+## Worked example: estimate RU/s for single replica set migration
+
+![Migrate a replica set with 3 replicas of a four-core SKU to Azure Cosmos DB](media/tutorial-vcore-pricing/one-replica-set.png)
+
+Consider a single replica set with a replication factor of *R=3* based on a four-core server SKU. Then
+* *T* = 12 vCores
+* *R* = 3
+
+Then the recommended request units for Azure Cosmos DB SQL API are
+
+`
+Provisioned RU/s, SQL API = (600 RU/s/vCore) * (12 vCores) / (3) = 2,400 RU/s
+`
+
+And the recommended request units for Azure Cosmos DB API for MongoDB are
+
+`
+Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (12 vCores) / (3) = 4,000 RU/s
+`
+
+## Worked example: estimate RU/s when migrating a cluster of homogeneous replica sets
+
+![Migrate a homogeneous sharded replica set with 3 shards, each with three replicas of a four-core SKU, to Azure Cosmos DB](media/tutorial-vcore-pricing/homogeneous-sharded-replica-sets.png)
+
+Consider a sharded and replicated cluster comprising three replica sets each with a replication factor three, where each server is a four-core SKU. Then
+* *T* = 36 vCores
+* *R* = 3
+
+Then the recommended request units for Azure Cosmos DB SQL API are
+
+`
+Provisioned RU/s, SQL API = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
+`
+
+And the recommended request units for Azure Cosmos DB API for MongoDB are
+
+`
+Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s
+`
+
+## Worked example: estimate RU/s when migrating a cluster of heterogeneous replica sets
+
+![Migrate a heterogeneous sharded replica set with 3 shards, each with different numbers of replicas of a four-core SKU, to Azure Cosmos DB](media/tutorial-vcore-pricing/heterogeneous-sharded-replica-sets.png)
+
+Consider a sharded and replicated cluster comprising three replica sets, in which each server is based on a four-core SKU. The replica sets are "heterogeneous" in the sense that each has a different replication factor: 3x, 1x, and 5x, respectively. The recommended approach is to use the average replication factor when calculating request units. Then
+* *T* = 36 vCores
+* *Ravg* = (3+1+5)/3 = 3
+
+Then the recommended request units for Azure Cosmos DB SQL API are
+
+`
+Provisioned RU/s, SQL API = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s
+`
+
+And the recommended request units for Azure Cosmos DB API for MongoDB are
+
+`
+Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s
+`
+
+## Tips for getting the most accurate RU/s estimate
+
+*Migrating from a cloud-managed database:* If you currently use a cloud-managed database, these services often appear to be provisioned in units of *vCores* or *vCPUs* (in other words, *T*), but in fact the core count you provision sets the *vCores/replica* or *vCPU/replica* value (*T/R*) for an *R*-node replica set; the true number of cores is *R* times more than what you provisioned explicitly. We recommend determining whether this description applies to your current cloud-managed database, and if so, multiplying the nominal number of provisioned *vCores* or *vCPUs* by *R* to get an accurate estimate of *T*.
+
+*vCores vs vCPUs:* In this article we treat "vCore" and "vCPU" as synonymous, thus *C* has units of *RU/s/vCore* or *RU/s/vCPU*, with no distinction. However, in practice this simplification may not be accurate in some situations. The terms may have different meanings; for example, if your physical CPUs support hyperthreading, it is possible that *1 vCPU = 2 vCores* or something else. In general, the *vCore*/*vCPU* relationship is hardware-dependent, and we recommend investigating what the relationship is on your existing cluster hardware, and whether your cluster compute is provisioned in terms of *vCores* or *vCPUs*. If *vCPU* and *vCore* have differing meanings on your hardware, treat the above estimates of *C* as having units of *RU/s/vCore* and, if necessary, convert *T* from vCPUs to vCores by using the conversion factor appropriate to your hardware.
+
+## Summary
+
+Estimating RU/s from *vCores* or *vCPUs* requires collecting information about total *vCores*/*vCPUs* and replication factor from your existing database replica set(s). Then you can use known relationships between *vCores*/*vCPUs* and throughput to estimate Azure Cosmos DB request units (RU/s). Finding this request unit estimate will be an important step in anticipating the scale of your Azure Cosmos DB data estate after migration.
+
+The table below summarizes the conversion from *vCores* to RU/s for Azure Cosmos DB SQL API and API for MongoDB v4.0:
++
+| vCores | RU/s (SQL API)<br> (rep. factor=3) | RU/s (API for MongoDB v4.0)<br> (rep. factor=3) |
+|-|-|-|
+| 3 | 600 | 1000 |
+| 6 | 1200 | 2000 |
+| 12 | 2400 | 4000 |
+| 24 | 4800 | 8000 |
+| 48 | 9600 | 16000 |
+| 96 | 19200 | 32000 |
+| 192 | 38400 | 64000 |
+| 384 | 76800 | 128000 |
+
+## Next steps
+* [Learn about Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/)
+* [Learn how to plan and manage costs for Azure Cosmos DB](plan-manage-costs.md)
+* [Review options for migrating to Azure Cosmos DB](cosmosdb-migrationchoices.md)
+* [Migrate to Azure Cosmos DB SQL API](import-data.md)
+* [Plan your migration to Azure Cosmos DB API for MongoDB](mongodb/pre-migration-steps.md). This doc includes links to different migration tools that you can use once you are finished planning.
+
cosmos-db Cosmos Db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-queries.md
Title: Troubleshoot issues with advanced diagnostics queries (SQL API)
-description: Learn how to query diagnostics logs to troubleshoot data stored in Azure Cosmos DB - SQL API
+description: Learn how to query diagnostics logs to troubleshoot data stored in Azure Cosmos DB - SQL API.
Last updated 06/12/2021
-# Troubleshoot issues with advanced diagnostics queries for SQL (Core) API
+# Troubleshoot issues with advanced diagnostics queries for the SQL (Core) API
> [!div class="op_single_selector"] > * [SQL (Core) API](cosmos-db-advanced-queries.md)
> * [Gremlin API](queries-gremlin.md) >
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
-- Top N(10) queries ordered by request units consumption in a given time frame
+### Top N(10) queries ordered by Request Unit (RU) consumption in a specific time frame
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
``` -- Requests throttled (statusCode = 429) in a given time window
+### Requests throttled (statusCode = 429) in a specific time window
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
``` -- Queries with the largest response lengths (payload size of the server response)
+### Queries with the largest response lengths (payload size of the server response)
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
``` -- RU Consumption by physical partition (across all replicas in the replica set)
+### RU consumption by physical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
``` -- RU Consumption by logical partition (across all replicas in the replica set)
+### RU consumption by logical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
## Next steps
-* For more information on how to create diagnostic settings for Cosmos DB see [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](cosmosdb-monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmosdb-migrationchoices.md
The following factors determine the choice of the migration tool:
## Azure Cosmos DB SQL API
+* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
+ |Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.|
The following factors determine the choice of the migration tool:
## Azure Cosmos DB Mongo API
+Follow the [pre-migration guide](mongodb/pre-migration-steps.md) to plan your migration.
+* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md).
+
+When you are ready to migrate, you can find detailed guidance on migration tools below:
+* [Offline migration using MongoDB native tools](mongodb/tutorial-mongotools-cosmos-db.md)
+* [Offline migration using Azure Database Migration Service (DMS)](../dms/tutorial-mongodb-cosmos-db.md)
+* [Online migration using Azure Database Migration Service (DMS)](../dms/tutorial-mongodb-cosmos-db-online.md)
+* [Offline/online migration using Azure Databricks and Spark](mongodb/migrate-databricks.md)
+
+Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you have migrated.
+
+A summary of migration pathways from your current solution to Azure Cosmos DB API for MongoDB is provided below:
+ |Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB API for MongoDB |&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets and takes care of replicating live changes. <br/>&bull; Works only with other MongoDB sources.|
The following factors determine the choice of the migration tool:
## Azure Cosmos DB Cassandra API
+If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
+ |Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you can configure your data and backups to remain in a singl
## Residency requirements for data
-In Azure Cosmos DB, you must explicitly configure the cross-region data replication. Learn how to configure geo-replication using [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account), [Azure CLI](scripts/cli/common/regions.md). To meet data residency requirements, you can create an Azure policy that allows certain regions to prevent data replication to unwanted regions.
+In Azure Cosmos DB, you must explicitly configure cross-region data replication. Learn how to configure geo-replication by using the [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account) or the [Azure CLI](scripts/cli/common/regions.md). To meet data residency requirements, you can create an Azure Policy definition that allows only certain regions, to prevent data replication to unwanted regions.
## Residency requirements for backups
cosmos-db Diagnostic Queries Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/diagnostic-queries-gremlin.md
Title: Troubleshoot issues with advanced diagnostics queries for Gremlin API
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Gremlin API
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the Gremlin API.
Last updated 06/12/2021
-# Troubleshoot issues with advanced diagnostics queries for Gremlin API
+# Troubleshoot issues with advanced diagnostics queries for the Gremlin API
[!INCLUDE[appliesto-all-apis-except-table](../includes/appliesto-all-apis-except-table.md)]
> * [Gremlin API](diagnostic-queries-gremlin.md) >
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
-- Top N(10) RU consuming requests/queries in a given time frame
+### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
# [Resource-specific](#tab/resource-specific)
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Requests throttled (statusCode = 429) in a given time window
+### Requests throttled (statusCode = 429) in a specific time window
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Queries with large response lengths (payload size of the server response)
+### Queries with large response lengths (payload size of the server response)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by physical partition (across all replicas in the replica set)
+### RU consumption by physical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by logical partition (across all replicas in the replica set)
+### RU consumption by logical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
## Next steps
-* For more information on how to create diagnostic settings for Cosmos DB see [Creating Diagnostics settings](../cosmosdb-monitor-resource-logs.md) article.
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) article.
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-dotnet-v3.md
Previously updated : 09/23/2020 Last updated : 08/19/2021 # Migrate your application to use the Azure Cosmos DB .NET SDK v3
CosmosClientBuilder cosmosClientBuilder = new CosmosClientBuilder(
CosmosClient client = cosmosClientBuilder.Build(); ```
+### Exceptions
+
+Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. The complete information is serialized when you use `ToString()`:
+
+```csharp
+catch (CosmosException ex)
+{
+ HttpStatusCode statusCode = ex.StatusCode;
+ CosmosDiagnostics diagnostics = ex.Diagnostics;
+ // store diagnostics optionally with diagnostics.ToString();
+ // or log the entire error details with ex.ToString();
+}
+```
+
+### Diagnostics
+
+Where the v2 SDK had Direct-only diagnostics available through the `ResponseDiagnosticsString` property, the v3 SDK uses `Diagnostics`, available in all responses and exceptions. The diagnostics are richer and not restricted to Direct mode: they include not only the time spent in the SDK for the operation, but also the regions the operation contacted:
+
+```csharp
+try
+{
+ ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(
+ partitionKey: new PartitionKey("MyPartitionKey"),
+ id: "MyId");
+
+ TimeSpan elapsedTime = response.Diagnostics.GetElapsedTime();
+ if (elapsedTime > somePreDefinedThreshold)
+ {
+ // log response.Diagnostics.ToString();
+ IReadOnlyList<(string region, Uri uri)> regions = response.Diagnostics.GetContactedRegions();
+ }
+}
+catch (CosmosException cosmosException)
+{
+ string diagnostics = cosmosException.Diagnostics.ToString();
+
+ TimeSpan elapsedTime = cosmosException.Diagnostics.GetElapsedTime();
+
+ IReadOnlyList<(string region, Uri uri)> regions = cosmosException.Diagnostics.GetContactedRegions();
+
+ // log cosmosException.ToString()
+}
+```
+
+### ConnectionPolicy
+
+Some settings in `ConnectionPolicy` have been renamed or replaced:
+
+| .NET v2 SDK | .NET v3 SDK |
+|-|-|
+|`EnableEndpointRediscovery`|`LimitToEndpoint` - The value is now inverted: if `EnableEndpointRediscovery` was set to `true`, set `LimitToEndpoint` to `false`. Before using this setting, you need to understand [how it affects the client](troubleshoot-sdk-availability.md). See the sketch after this table.|
+|`ConnectionProtocol`|Removed. Protocol is tied to the Mode, either it's Gateway (HTTPS) or Direct (TCP).|
+|`MediaRequestTimeout`|Removed. Attachments are no longer supported.|
+
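+For example, here's a minimal sketch of a v3 client that keeps endpoint discovery enabled; the account endpoint and key are placeholders:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+CosmosClient client = new CosmosClient(
+    "https://{account}.documents.azure.com:443/",
+    "{account-key}",
+    new CosmosClientOptions
+    {
+        // Equivalent to v2's EnableEndpointRediscovery = true (the default);
+        // set LimitToEndpoint = true only to disable endpoint discovery.
+        LimitToEndpoint = false
+    });
+```
+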
+### Session token
+
+Where the v2 SDK exposed the session token of a response as `ResourceResponse.SessionToken`, the v3 SDK exposes it in the `Headers.Session` property of any response, because the session token is an HTTP header.
+
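+For example, a sketch of capturing the session token, assuming an existing `container` and item type `MyItem`:
+
+```csharp
+ItemResponse<MyItem> response = await container.ReadItemAsync<MyItem>(
+    partitionKey: new PartitionKey("MyPartitionKey"),
+    id: "MyId");
+
+// The session token is returned as a header on every response.
+string sessionToken = response.Headers.Session;
+```
+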
+### Timestamp
+
+The v2 SDK exposed the timestamp of a document through the `Timestamp` property. Because `Document` is no longer available in the v3 SDK, you can map the `_ts` [system property](account-databases-containers-items.md#properties-of-an-item) to a property in your model.
+
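+For example, a model that surfaces `_ts` (a sketch; the v3 SDK serializes with Newtonsoft.Json by default):
+
+```csharp
+using Newtonsoft.Json;
+
+public class MyItem
+{
+    [JsonProperty("id")]
+    public string Id { get; set; }
+
+    // _ts is the server-side last-modified time, in seconds since the Unix epoch.
+    [JsonProperty("_ts")]
+    public long Timestamp { get; set; }
+}
+```
+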
+### OpenAsync
+
+For use cases where `OpenAsync()` was used to warm up the v2 SDK client, you can use `CreateAndInitializeAsync` to both [create and warm up](https://devblogs.microsoft.com/cosmosdb/improve-net-sdk-initialization/) a v3 SDK client.
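+
+For example, a minimal sketch; the endpoint, key, and database/container names are placeholders:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Cosmos;
+
+// Creates the client and opens connections to the listed containers up front.
+CosmosClient client = await CosmosClient.CreateAndInitializeAsync(
+    "https://{account}.documents.azure.com:443/",
+    "{account-key}",
+    new List<(string database, string container)> { ("MyDatabase", "MyContainer") });
+```
+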
+ ### Using the change feed processor APIs directly from the v3 SDK The v3 SDK has built-in support for the Change Feed Processor APIs, allowing you to use the same SDK for building your application and change feed processor implementation. Previously, you had to use a separate change feed processor library.
cosmos-db Diagnostic Queries Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/diagnostic-queries-mongodb.md
Title: Troubleshoot issues with advanced diagnostics queries for Mongo API
-description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Mongo API
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for the MongoDB API.
-# Troubleshoot issues with advanced diagnostics queries for Mongo API
+# Troubleshoot issues with advanced diagnostics queries for the MongoDB API
[!INCLUDE[appliesto-all-apis-except-table](../includes/appliesto-all-apis-except-table.md)]
> * [Gremlin API](../queries-gremlin.md) >
-In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table and users will need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
-For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+
+- Makes it much easier to work with the data.
+- Provides better discoverability of the schemas.
+- Improves performance across both ingestion latency and query times.
## Common queries
+Common queries are shown in the resource-specific and Azure Diagnostics tables.
-- Top N(10) RU consuming requests/queries in a given time frame
+### Top N(10) Request Unit (RU) consuming requests or queries in a specific time frame
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Requests throttled (statusCode = 429 or 16500) in a given time window
+### Requests throttled (statusCode = 429 or 16500) in a specific time window
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Timed out requests (statusCode = 50) in a given time window
+### Timed-out requests (statusCode = 50) in a specific time window
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- Queries with large response lengths (payload size of the server response)
+### Queries with large response lengths (payload size of the server response)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by physical partition (across all replicas in the replica set)
+### RU consumption by physical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
``` -- RU Consumption by logical partition (across all replicas in the replica set)
+### RU consumption by logical partition (across all replicas in the replica set)
# [Resource-specific](#tab/resource-specific) ```Kusto
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
## Next steps
-* For more information on how to create diagnostic settings for Cosmos DB see [Creating Diagnostics settings](../cosmosdb-monitor-resource-logs.md) article.
-
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) article.
+* For more information on how to create diagnostic settings for Azure Cosmos DB, see [Create diagnostic settings](../cosmosdb-monitor-resource-logs.md).
+* For detailed information about how to create a diagnostic setting by using the Azure portal, the Azure CLI, or PowerShell, see [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/pre-migration-steps.md
Finally, now that you have a view of your existing data estate and a design for
|Online|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources| |Offline|[Azure Database Migration Service](../../dms/tutorial-mongodb-cosmos-db-online.md)|&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets and takes care of replicating live changes <br/>&bull; Works only with other MongoDB sources| |Offline|[Azure Data Factory](../../data-factory/connector-azure-cosmos-db.md)|&bull; Easy to set up and supports multiple sources <br/>&bull; Makes use of the Azure Cosmos DB bulk executor library <br/>&bull; Suitable for large datasets <br/>&bull; Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process<br/>&bull; Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process <br/>&bull; Needs custom code to increase read throughput for certain data sources|
- |Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](https://azure.microsoft.com/resources/videos/using-mongodb-tools-with-azure-cosmos-db/)|&bull; Easy to set up and integration <br/>&bull; Needs custom handling for throttles|
+ |Offline|[Existing Mongo Tools (mongodump, mongorestore, Studio3T)](tutorial-mongotools-cosmos-db.md)|&bull; Easy to set up and integrate <br/>&bull; Needs custom handling for throttles|
+ |Offline/online|[Azure Databricks and Spark](migrate-databricks.md)|&bull; Full control of migration rate and data transformation <br/>&bull; Requires custom coding|
* If your resource can tolerate an offline migration, use the diagram below to choose the appropriate migration tool:
In the pre-migration phase, spend some time to plan what steps you will take tow
* The best guide to post-migration can be found [here](post-migration-optimization.md). ## Next steps
-* [Migrate your MongoDB data to Cosmos DB using the Database Migration Service.](../../dms/tutorial-mongodb-cosmos-db.md)
+
+* Migrate to Azure Cosmos DB API for MongoDB
+ * [Offline migration using MongoDB native tools](tutorial-mongotools-cosmos-db.md)
+ * [Offline migration using Azure Database Migration Service (DMS)](../../dms/tutorial-mongodb-cosmos-db.md)
+ * [Online migration using Azure Database Migration Service (DMS)](../../dms/tutorial-mongodb-cosmos-db-online.md)
+ * [Offline/online migration using Azure Databricks and Spark](migrate-databricks.md)
+* [Post-migration guide](post-migration-optimization.md) - optimization steps after you've migrated to the Azure Cosmos DB API for MongoDB
* [Provision throughput on Azure Cosmos containers and databases](../set-throughput.md) * [Partitioning in Azure Cosmos DB](../partitioning-overview.md) * [Global Distribution in Azure Cosmos DB](../distribute-data-globally.md)
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/cost-mgt-best-practices.md
Tags are an effective way to understand costs that span across multiple teams and
Similarly, you might also have web apps or environments, such as Test or Production, that use resources across multiple subscriptions owned by different teams. To better understand the full cost of the workloads, tag the resources that they use. When tags are applied properly, you can apply them as a filter in cost analysis to better understand trends.
-After you plan for resource tagging, you can configure an Azure policy to enforce tagging on resources. Watch the [How to review tag policies with Azure Cost Management](https://www.youtube.com/watch?v=nHQYcYGKuyw) video to understand the tools available that help you enforce scalable resource tagging. To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
+After you plan for resource tagging, you can configure an Azure Policy definition to enforce tagging on resources. Watch the [How to review tag policies with Azure Cost Management](https://www.youtube.com/watch?v=nHQYcYGKuyw) video to understand the tools available that help you enforce scalable resource tagging. To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
>[!VIDEO https://www.youtube.com/embed/nHQYcYGKuyw]
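As a small sketch of the tagging step itself (names and tag values are placeholders), tags can be applied with the Azure CLI and then used as filters in cost analysis:

```azurecli-interactive
# Tag a VM so its cost rolls up to a team and an environment in cost analysis.
# Note: az resource tag replaces existing tags; use -i/--is-incremental to merge instead.
az resource tag \
  --resource-group <resource-group> \
  --name <vm-name> \
  --resource-type "Microsoft.Compute/virtualMachines" \
  --tags team=commerce environment=test
```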
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/reservation-renew.md
Previously updated : 08/05/2020 Last updated : 08/20/2021
Go to Azure portal > **Reservations**.
## If you don't renew
-Your services continue to run normally. You're charged pay-as-you-go rates for your usage after the reservation expires.
+Your services continue to run normally. You're charged pay-as-you-go rates for your usage after the reservation expires. If the reservation wasn't set for automatic renewal before expiration, you can't renew an expired reservation. To continue to receive savings, you can buy a new reservation.
## Required renewal permissions
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/view-reservations.md
This article explains how reservation permissions work and how users can view and manage Azure reservations in the Azure portal and with Azure PowerShell. + ## Who can manage a reservation by default By default, the following users can view and manage reservations:
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
The following table provides a list of compute environments supported by Data Fa
| | | | [On-demand HDInsight cluster](#azure-hdinsight-on-demand-linked-service) or [your own HDInsight cluster](#azure-hdinsight-linked-service) | [Hive](transform-data-using-hadoop-hive.md), [Pig](transform-data-using-hadoop-pig.md), [Spark](transform-data-using-spark.md), [MapReduce](transform-data-using-hadoop-map-reduce.md), [Hadoop Streaming](transform-data-using-hadoop-streaming.md) | | [Azure Batch](#azure-batch-linked-service) | [Custom](transform-data-using-dotnet-custom-activity.md) |
-| [Azure Machine Learning Studio (classic)](#azure-machine-learning-studio-classic-linked-service) | [Machine Learning Studio (classic) activities: Batch Execution and Update Resource](transform-data-using-machine-learning.md) |
+| [ML Studio (classic)](#ml-studio-classic-linked-service) | [ML Studio (classic) activities: Batch Execution and Update Resource](transform-data-using-machine-learning.md) |
| [Azure Machine Learning](#azure-machine-learning-linked-service) | [Azure Machine Learning Execute Pipeline](transform-data-machine-learning-service.md) | | [Azure Data Lake Analytics](#azure-data-lake-analytics-linked-service) | [Data Lake Analytics U-SQL](transform-data-using-data-lake-analytics.md) | | [Azure SQL](#azure-sql-database-linked-service), [Azure Synapse Analytics](#azure-synapse-analytics-linked-service), [SQL Server](#sql-server-linked-service) | [Stored Procedure](transform-data-using-stored-procedure.md) |
See following articles if you are new to Azure Batch service:
| linkedServiceName | Name of the Azure Storage linked service associated with this Azure Batch linked service. This linked service is used for staging files required to run the activity. | Yes | | connectVia | The Integration Runtime to be used to dispatch the activities to this linked service. You can use Azure Integration Runtime or Self-hosted Integration Runtime. If not specified, it uses the default Azure Integration Runtime. | No |
-## Azure Machine Learning Studio (classic) linked service
-You create an Azure Machine Learning Studio (classic) linked service to register a Machine Learning Studio (classic) batch scoring endpoint to a data factory.
+## ML Studio (classic) linked service
+You create an ML Studio (classic) linked service to register a Machine Learning Studio (classic) batch scoring endpoint to a data factory.
### Example
You create an Azure Machine Learning Studio (classic) linked service to register
| Type | The type property should be set to: **AzureML**. | Yes | | mlEndpoint | The batch scoring URL. | Yes | | apiKey | The published workspace model's API. | Yes |
-| updateResourceEndpoint | The Update Resource URL for an Azure Machine Learning Studio (classic) Web Service endpoint used to update the predictive Web Service with trained model file | No |
+| updateResourceEndpoint | The Update Resource URL for an ML Studio (classic) Web Service endpoint used to update the predictive Web Service with the trained model file | No |
| servicePrincipalId | Specify the application's client ID. | Required if updateResourceEndpoint is specified | | servicePrincipalKey | Specify the application's key. | Required if updateResourceEndpoint is specified | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Required if updateResourceEndpoint is specified |
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
Last updated 08/10/2021
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-[Azure Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Azure Purview. That connection allows you to use Azure Purview for capturing lineage data, as well as to discover and explore Azure Purview assets.
+[Azure Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Azure Purview. That connection allows you to use Azure Purview for capturing lineage data, and to discover and explore Azure Purview assets.
## Connect Data Factory to Azure Purview
To establish the connection, you need to have **Owner** or **Contributor** role
3. Once connected, you can see the name of the Purview account in the tab **Purview account**.
-When connecting data factory to Purview, ADF UI also tries to grant the data factory's managed identity **Purview Data Curator** role on your Purview account. Managed identity is used to authenticate lineage push operations from data factory to Purview. If you have **Owner** or **User Access Administrator** role on the Purview account, this operation will be done automatically. If not, you would see warning like below:
+When connecting a data factory to Purview, the ADF UI also tries to grant the data factory's managed identity the **Purview Data Curator** role on your Purview account. The managed identity is used to authenticate lineage push operations from the data factory to Purview. If you have the **Owner** or **User Access Administrator** role on the Purview account, this operation is done automatically. If you see a warning like the following, the needed role hasn't been granted:
:::image type="content" source="./media/data-factory-purview/register-purview-account-warning.png" alt-text="Screenshot for warning of registering a Purview account.":::
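If the automatic grant fails, the role can be assigned manually. A hedged Azure CLI sketch with placeholder resource IDs (this assumes your account has permission to create role assignments on the Purview account):

```azurecli-interactive
# Look up the data factory's managed identity principal ID
principalId=$(az resource show \
  --ids "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory-name>" \
  --query identity.principalId --output tsv)

# Grant the identity Purview Data Curator on the Purview account
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Purview Data Curator" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>"
```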
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-microsoft-access.md
Previously updated : 03/17/2021 Last updated : 08/20/2021 # Copy data from and to Microsoft Access using Azure Data Factory
To use this Microsoft Access connector, you need to:
- Install the Microsoft Access ODBC driver for the data store on the Integration Runtime machine. >[!NOTE]
->Microsoft Access 2016 version of ODBC driver doesn't work with this connector. Use driver version 2013 or 2010 instead.
+>The Microsoft Access 2016 version of the ODBC driver doesn't work with this connector. Use the Microsoft Access 2013 or 2010 version of the ODBC driver instead.
## Getting started
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
Previously updated : 08/16/2021 Last updated : 08/18/2021
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Failed to connect to Dynamics: %message;`
- or otherwise `Unable to Login to Dynamics CRM, message:ERROR REQUESTING Token FROM THE Authentication context - USER intervention required but not permitted by prompt behavior AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access '00000007-0000-0000-c000-000000000000'` If your use case meets **all** of the following three conditions:
- - You are connecting to Dynamics 365, Common Data Service, or Dynamics CRM.
- - You are using Office365 Authentication.
- - Your tenant and user is configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required (see this [link](/powerapps/developer/data-platform/authenticate-office365-deprecation) to Dataverse doc).
-
- Under these circumstances, the connection used to succeed before 6/8/2021.
- Starting 6/9/2021 connection will start to fail because of the deprecation of regional Discovery Service (see this [link](/power-platform/important-changes-coming#regional-discovery-service-is-deprecated)).
-
- If your tenant and user is configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required, you must use ΓÇÿAzure AD service-principalΓÇÖ to authenticate after 6/8/2021. Refer this [link](./connector-dynamics-crm-office-365.md#prerequisites) for detailed steps.
--
-
- 1. Contact Dynamics support team with the detailed error message for help.
- 1. Use the service principal authentication, and you can refer to this article: [Example: Dynamics online using Azure AD service-principal and certificate authentication](./connector-dynamics-crm-office-365.md#example-dynamics-online-using-azure-ad-service-principal-and-certificate-authentication).
-
-
-
- 1. Make sure you have put the correct service URI in the linked service.
- 1. If you use the Self Hosted IR, make sure that the firewall/proxy does not intercept the requests to the Dynamics server.
-
-
-
- 1. Make sure your username and password are correct if you use the Office 365 authentication.
- 1. Make sure you have input the correct service URI.
- 1. If you use regional CRM URL (URL has a number after 'crm'), make sure you use the correct regional identifier.
- 1. Contact the Dynamics support team for help.
-
+ - **Causes and recommendations**: Different causes can lead to this error. Check the following list for a possible cause analysis and the related recommendation.
-
- 1. Make sure you have input the correct service URI.
- 1. If you use the regional CRM URL (URL has a number after 'crm'), make sure that you use the correct regional identifier.
- 1. Contact the Dynamics support team for help.
-
-
-
-
-
+ | Cause analysis | Recommendation |
+ | :-- | :-- |
+ | You see `ERROR REQUESTING ORGS FROM THE DISCOVERY SERVERFCB 'EnableRegionalDisco' is disabled.` or otherwise `Unable to Login to Dynamics CRM, message:ERROR REQUESTING Token FROM THE Authentication context - USER intervention required but not permitted by prompt behavior AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access '00000007-0000-0000-c000-000000000000'`, and your use case meets **all** of the following three conditions: <br/> 1. You are connecting to Dynamics 365, Common Data Service, or Dynamics CRM.<br/> 2. You are using Office365 Authentication.<br/> 3. Your tenant and user are configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required (see this [link](/powerapps/developer/data-platform/authenticate-office365-deprecation) to the Dataverse doc).<br/> Under these circumstances, the connection used to succeed before 6/8/2021. Starting 6/9/2021, the connection fails because of the deprecation of the regional Discovery Service (see this [link](/power-platform/important-changes-coming#regional-discovery-service-is-deprecated)).| If your tenant and user are configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required, you must use 'Azure AD service-principal' to authenticate after 6/8/2021. Refer to this [link](./connector-dynamics-crm-office-365.md#prerequisites) for detailed steps.|
+ |If you see `Office 365 auth with OAuth failed` in the error message, your server might have some configurations that aren't compatible with OAuth.| 1. Contact the Dynamics support team with the detailed error message for help. <br/> 2. Use service principal authentication; see [Example: Dynamics online using Azure AD service-principal and certificate authentication](./connector-dynamics-crm-office-365.md#example-dynamics-online-using-azure-ad-service-principal-and-certificate-authentication).|
+ |If you see `Unable to retrieve authentication parameters from the serviceUri` in the error message, either you entered the wrong Dynamics service URL or a proxy/firewall intercepted the traffic. |1. Make sure you have put the correct service URI in the linked service.<br/> 2. If you use the self-hosted IR, make sure that the firewall/proxy doesn't intercept the requests to the Dynamics server. |
+ |If you see `An unsecured or incorrectly secured fault was received from the other party` in the error message, an unexpected response was received from the server side. | 1. Make sure your username and password are correct if you use Office 365 authentication. <br/> 2. Make sure you have entered the correct service URI. <br/> 3. If you use a regional CRM URL (the URL has a number after 'crm'), make sure you use the correct regional identifier.<br/> 4. Contact the Dynamics support team for help. |
+ |If you see `No Organizations Found` in the error message, either your organization name is wrong or you used the wrong CRM region identifier in the service URL.|1. Make sure you have entered the correct service URI.<br/>2. If you use a regional CRM URL (the URL has a number after 'crm'), make sure that you use the correct regional identifier. <br/> 3. Contact the Dynamics support team for help. |
+ | If you see `401 Unauthorized` and an AAD-related error message, there's an issue with the service principal. |Follow the guidance in the error message to fix the service principal issue. |
+ |For other errors, the issue is usually on the server side. |Use [XrmToolBox](https://www.xrmtoolbox.com/) to test the connection. If the error persists, contact the Dynamics support team for help. |
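For the service-principal recommendation in the table above, the Azure AD application can be created with the Azure CLI. A minimal sketch (the display name is a placeholder); the returned `appId`, `password`, and `tenant` values map to the linked service's `servicePrincipalId`, `servicePrincipalKey`, and `tenant` properties, and the service principal must still be added as an application user in Dynamics:

```azurecli-interactive
# Create an Azure AD app registration and service principal for Dynamics authentication
az ad sp create-for-rbac --name adf-dynamics-auth --skip-assignment
```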
-
-
### Error code: DynamicsOperationFailed - **Message**: `Dynamics operation failed with error code: %code;, error message: %message;.`
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
For an eight-minute introduction and demonstration of this feature, watch the fo
The return type of the Azure function has to be a valid `JObject`. (Keep in mind that [JArray](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_Linq_JArray.htm) is *not* a `JObject`.) Any return type other than `JObject` fails and raises the user error *Response Content is not a valid JObject*.
-Function Key provides secure access to function name with each one having separate unique keys or master key within a function app. Managed identity provides secure access to the entire function app. User is free to provide no key and/or Managed Identity to access function name. Please refer function documentation for more details about [Function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#configuration)
+A function key provides secure access to an individual function, with each function having its own unique keys or sharing a master key within the function app. A managed identity provides secure access to the entire function app. The user needs to provide a key to access the function. For more information, see the Azure Functions documentation on [function access keys](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#configuration).
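For reference, the host and function keys can be retrieved with the Azure CLI. A hedged sketch with placeholder names (these subcommands assume a recent CLI version):

```azurecli-interactive
# List host-level keys for the function app (the master key grants access to every function)
az functionapp keys list --name <function-app-name> --resource-group <resource-group>

# List the keys for a single function
az functionapp function keys list --name <function-app-name> \
  --resource-group <resource-group> --function-name <function-name>
```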
| **Property** | **Description** | **Required** |
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
You can now move your SQL Server Integration Services (SSIS) projects, packages,
1. Make sure that you have no [resource lock](../azure-resource-manager/management/lock-resources.md) on the resource group/subscription to which the virtual network belongs. If you configure a read-only/delete lock, starting and stopping your Azure-SSIS IR will fail, or it will stop responding.
- 1. Make sure that you don't have an Azure policy that prevents the following resources from being created under the resource group/subscription to which the virtual network belongs:
+ 1. Make sure that you don't have an Azure Policy definition that prevents the following resources from being created under the resource group/subscription to which the virtual network belongs:
- Microsoft.Network/LoadBalancers - Microsoft.Network/NetworkSecurityGroups
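Both conditions can be checked from the command line. A quick sketch (the resource group name is a placeholder):

```azurecli-interactive
# Check for read-only/delete locks that would block starting or stopping the Azure-SSIS IR
az lock list --resource-group <vnet-resource-group> --output table

# Review policy assignments that could deny creating load balancers or NSGs
az policy assignment list --resource-group <vnet-resource-group> --output table
```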
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 08/13/2021 Last updated : 08/20/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
A side pane will open where you confirm that the publish branch and pending chan
> [!IMPORTANT] > The main branch is not representative of what's deployed in the Data Factory service. The main branch *must* be published manually to the Data Factory service. ++ ## Best practices for Git integration ### Permissions
Using Key Vault or MSI authentication also makes continuous integration and depl
### Stale publish branch
-If the publish branch is out of sync with the main branch and contains out-of-date resources despite a recent publish, try following these steps:
-
-1. Remove your current Git repository
-1. Reconfigure Git with the same settings, but make sure **Import existing Data Factory resources to repository** is selected and choose **New branch**
-1. Create a pull request to merge the changes to the collaboration branch
Below are some examples of situations that can cause a stale publish branch:

- A user has multiple branches. In one feature branch, they deleted a linked service that isn't AKV-associated (non-AKV linked services are published immediately, whether or not they are in Git) and never merged the feature branch into the collaboration branch.
- A user modified the data factory by using the SDK or PowerShell.
- A user moved all resources to a new branch and tried to publish for the first time. Linked services should be created manually when importing resources.
- A user uploads a non-AKV linked service or an Integration Runtime JSON manually. They reference that resource from another resource such as a dataset, linked service, or pipeline. A non-AKV linked service created through the UX is published immediately because the credentials need to be encrypted. If you upload a dataset referencing that linked service and try to publish, the UX allows it because it exists in the Git environment. It's rejected at publish time because it doesn't exist in the Data Factory service.
+If the publish branch is out of sync with the main branch and contains out-of-date resources despite a recent publish, you can use either of the following solutions:
+
+#### Option 1: Use **Overwrite live mode** functionality
+
+This option publishes or overwrites the code from your collaboration branch into live mode, treating the code in your repository as the source of truth.
+
+<u>*Code flow:*</u> ***Collaboration branch -> Live mode***
+
+![force publish code from collaboration branch](media/author-visually/force-publish-changes-from-collaboration-branch.png)
+
+#### Option 2: Disconnect and reconnect Git repository
+
+This option imports the code from live mode into the collaboration branch, treating the code in live mode as the source of truth.
+
+<u>*Code flow:*</u> ***Live mode -> Collaboration branch***
+
+1. Remove your current Git repository
+1. Reconfigure Git with the same settings, but make sure **Import existing Data Factory resources to repository** is selected and choose **New branch**
+1. Create a pull request to merge the changes to the collaboration branch
+
+Choose whichever method fits your scenario.
+ ## Switch to a different Git repository To switch to a different Git repository, go to Git configuration page in the management hub under **Source control**. Select **Disconnect**.
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
Title: Create predictive data pipelines using Azure Data Factory
-description: Describes how to create create predictive pipelines using Azure Data Factory and ML Studio (classic)
+description: Describes how to create predictive pipelines using Azure Data Factory and Machine Learning Studio (classic)
Last updated 01/22/2018
-# Create predictive pipelines using ML Studio (classic) and Azure Data Factory
+# Create predictive pipelines using Machine Learning Studio (classic) and Azure Data Factory
> [!div class="op_single_selector" title1="Transformation Activities"] > * [Hive Activity](data-factory-hive-activity.md)
Last updated 01/22/2018
> [!NOTE] > This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [transform data using machine learning in Data Factory](../transform-data-using-machine-learning.md).
-### ML Studio (classic)
+### Machine Learning Studio (classic)
[ML Studio (classic)](https://azure.microsoft.com/documentation/services/machine-learning/) enables you to build, test, and deploy predictive analytics solutions. From a high-level point of view, it is done in three steps: 1. **Create a training experiment**. You do this step by using ML Studio (classic). Studio (classic) is a collaborative visual development environment that you use to train and test a predictive analytics model using training data.
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
databox-online Azure Stack Edge Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-deploy-prep.md
Before you begin, make sure that:
* You have your Microsoft Azure storage account with access credentials.
-* You are not blocked by any Azure policy set up by your system administrator. For more information about policies, see [Quickstart: Create a policy assignment to identify non-compliant resources](../governance/policy/assign-policy-portal.md).
+* You are not blocked by any Azure Policy assignment set up by your system administrator. For more information about Azure Policy, see [Quickstart: Create a policy assignment to identify non-compliant resources](../governance/policy/assign-policy-portal.md).
### For the Azure Stack Edge Pro FPGA device
databox-online Azure Stack Edge Troubleshoot Ordering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-troubleshoot-ordering.md
For more information, see [Register resource providers](azure-stack-edge-manage-
*Resource &lt;resource name&gt; was disallowed by policy. (Code: RequestDisallowedByPolicy). Initiative: Deny generally unwanted Resource Types. Policy: Not allowed resource types.*
-**Suggested solution:** This error occurs due to an existing Azure policy that blocks the resource creation. Azure policies are set by an organization's system administrator to ensure compliance while using or creating Azure resources. If any such policy is blocking Azure Stack Edge resource creation, contact your system administrator to edit your Azure policy.
+**Suggested solution:** This error occurs due to an existing Azure Policy assignment that blocks the resource creation. Azure Policy definitions and assignments are set by an organization's system administrator to ensure compliance while using or creating Azure resources. If any such policy assignment is blocking Azure Stack Edge resource creation, contact your system administrator to edit your Azure Policy definition.
## Next steps
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/policy-reference.md
ms.devlang: na na Previously updated : 08/13/2021 Last updated : 08/20/2021
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Title: Comparing custom images and formulas in DevTest Labs | Microsoft Docs description: Learn about the differences between custom images and formulas as VM bases so you can decide which one best suits your environment.- Previously updated : 06/26/2020+ Last updated : 08/26/2021
-# Comparing custom images and formulas in DevTest Labs
-Both [custom images](devtest-lab-create-template.md) and [formulas](devtest-lab-manage-formulas.md)
-can be used as bases for [created new VMs](devtest-lab-add-vm.md).
-However, the key distinction between custom images and formulas is that
-a custom image is simply an image based on a VHD, while a formula is
-an image based on a VHD *in addition to* preconfigured settings - such as
-VM Size, virtual network, subnet, and artifacts. These preconfigured settings are set up with default values that can be overridden
-at the time of VM creation. This article explains some of the advantages (pros) and
-disadvantages (cons) to using custom images versus using formulas.
+# Compare custom images and formulas in DevTest Labs
+Both [custom images](devtest-lab-create-template.md) and [formulas](devtest-lab-manage-formulas.md) can be used as bases for [creating new VMs](devtest-lab-add-vm.md). The key distinction between custom images and formulas is that a custom image is simply an image based on a VHD, while a formula is
+an image based on a VHD *in addition to* preconfigured settings - such as VM Size, virtual network, subnet, and artifacts. These preconfigured settings are set up with default values that can be overridden at the time of VM creation.
-## Custom image pros and cons
-Custom images provide a static, immutable way to create VMs from a desired environment.
-
-**Pros**
-
-* VM provisioning from a custom image is fast as nothing changes after the VM is spun up from the image. In other words, there are no settings to apply as the custom image is just an image without settings.
-* VMs created from a single custom image are identical.
+In this article, you'll learn the pros and cons of using custom images versus formulas. You can also read [How to create a custom image from a VM](devtest-lab-create-custom-image-from-vm-using-portal.md) and the [FAQ](devtest-lab-faq.yml).
-**Cons**
+## Custom image benefits
+Custom images provide a static, immutable way to create VMs from a desired environment.
-* If you need to update some aspect of the custom image, the image must be recreated.
+|Pros|Cons|
+|-|-|
+|<li>VM provisioning from a custom image is fast as nothing changes after the VM is spun up from the image. In other words, there are no settings to apply as the custom image is just an image without settings. <li>VMs created from a single custom image are identical.|<li>If you need to update some aspect of the custom image, the image must be recreated. |
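As a rough sketch of capturing a custom image from an existing lab VM with the Azure CLI — note that the `az lab` commands are in preview and the parameter names below are assumptions to verify against `az lab custom-image create --help`:

```azurecli-interactive
# Create a custom image from a deprovisioned Linux lab VM (all names are placeholders)
az lab custom-image create \
  --lab-name <lab-name> \
  --resource-group <resource-group> \
  --name my-base-image \
  --source-vm-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DevTestLab/labs/<lab-name>/virtualmachines/<vm-name>" \
  --os-type Linux \
  --os-state NonDeprovisioned
```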
-## Formula pros and cons
+## Formula benefits
+
Formulas provide a dynamic way to create VMs from the desired configuration/settings.
-**Pros**
-
-* Changes in the environment can be captured on the fly via artifacts. For example, if you want a VM installed with the latest bits from your release pipeline or enlist the latest code from your repository, you can simply specify an artifact that deploys the latest bits or enlists the latest code in the formula together with a target base image. Whenever this formula is used to create VMs, the latest bits/code are deployed/enlisted to the VM.
-* Formulas can define default settings that custom images cannot provide - such as VM sizes and virtual network settings.
-* The settings saved in a formula are shown as default values, but can be modified when the VM is created.
-
-**Cons**
-
-* Creating a VM from a formula can take more time than creating a VM from a custom image.
+|Pros|Cons|
+|-|-|
+|<li>Changes in the environment can be captured on the fly via artifacts. For example, if you want a VM installed with the latest bits from your release pipeline or enlist the latest code from your repository, you can simply specify an artifact that deploys the latest bits or enlists the latest code in the formula together with a target base image. Whenever this formula is used to create VMs, the latest bits/code are deployed/enlisted to the VM. <li>Formulas can define default settings that custom images cannot provide - such as VM sizes and virtual network settings. <li>The settings saved in a formula are shown as default values, but can be modified when the VM is created. |<li> Creating a VM from a formula can take more time than creating a VM from a custom image.|
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]-
-## Related blog posts
-* [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
-
-## Next steps
-- [DevTest Labs FAQ](devtest-lab-faq.yml)
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-event-notifications.md
Inside the message, the `data` field contains the data of the affected digital t
For creation events, the `data` payload reflects the state of the twin after the resource is created, so it should include all system generated-elements just like a `GET` call.
-Here is an example of a the data for an [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) device, with components and no top-level properties. Properties that do not make sense for devices (such as reported properties) should be omitted. This is the information that will go in the `data` field of the lifecycle notification message.
+Here is an example of the data for an [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) device, with components and no top-level properties. Properties that do not make sense for devices (such as reported properties) should be omitted. This is the information that will go in the `data` field of the lifecycle notification message.
```json {
digital-twins How To Parse Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-parse-models.md
The capabilities of the parser include:
* Determine whether a model is assignable from another model. > [!NOTE]
-> [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both PnP and Azure Digital Twins syntax.
+> [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) devices use a small syntax variant to describe their functionality. This syntax variant is a semantically compatible subset of the DTDL that is used in Azure Digital Twins. When using the parser library, you do not need to know which syntax variant was used to create the DTDL for your digital twin. The parser will always, by default, return the same model for both IoT Plug and Play and Azure Digital Twins syntax.
### Code with the parser library
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
Models are defined in a JSON-like language called [Digital Twins Definition Lang
* Models define semantic **relationships** between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs. * You can also specialize twins using model inheritance. One model can inherit from another.
-DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). This helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). This helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
### Live execution environment
event-grid Enable Identity Custom Topics Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-custom-topics-domains.md
Title: Enable managed identity on Azure Event Grid custom topics and domains description: This article describes how enable managed service identity for an Azure Event Grid custom topic or domain. Previously updated : 03/25/2021 Last updated : 08/20/2021
-# Assign a system-managed identity to an Event Grid custom topic or domain
-This article shows you how to enable a system-managed identity for an Event Grid custom topic or a domain. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+# Assign a managed identity to an Event Grid custom topic or domain
+This article shows you how to assign a system-assigned or a user-assigned identity to an Event Grid custom topic or a domain. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
-## Enable identity at the time of creation
+> [!IMPORTANT]
+> You can enable either system-assigned identity or user-assigned identity for an Event Grid topic or domain, but not both. You can have at most two user-assigned identities assigned to a topic or domain.
-### Using Azure portal
-You can enable system-assigned identity for a custom topic or a domain while creating it in the Azure portal. The following image shows how to enable a system-managed identity for a custom topic. Basically, you select the option **Enable system assigned identity** on the **Advanced** page of the topic creation wizard. You'll see this option on the **Advanced** page of the domain creation wizard too.
+## Enable identity when creating a topic or domain
-![Enable identity while creating a custom topic](./media/managed-service-identity/create-topic-identity.png)
+# [Azure portal](#tab/portal)
+You can assign a system-assigned identity or a user-assigned identity to a custom topic or domain while creating it in the Azure portal.
-### Using Azure CLI
-You can also use the Azure CLI to create a custom topic or domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
+### Enable system-assigned identity
+On the **Advanced** tab of the topic or domain creation wizard, select **Enable system assigned identity**.
++
+### Enable user-assigned identity
+1. On the **Advanced** page of the topic or domain creation wizard, select **Enable user-assigned identity**, and then select **Add user assigned identity**.
+
+ :::image type="content" source="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png" alt-text="Image showing the Enable user assigned identity option selected.":::
+1. In the **Select user assigned identity** window, select the subscription that has the user-assigned identity, select the **user-assigned identity**, and then click **Select**.
+
+# [Azure CLI](#tab/cli)
+You can also use the Azure CLI to create a custom topic or a domain with a system-assigned identity. Use the `az eventgrid topic create` command with the `--identity` parameter set to `systemassigned`. If you don't specify a value for this parameter, the default value `noidentity` is used.
```azurecli-interactive # create a custom topic with a system-assigned identity az eventgrid topic create -g <RESOURCE GROUP NAME> --name <TOPIC NAME> -l <LOCATION> --identity systemassigned ```
-Similarly, you can use the `az eventgrid domain create` command to create a domain with a system-managed identity.
+Similarly, you can use the `az eventgrid domain create` command to create a domain with a system-assigned identity.
+
+> [!NOTE]
+> Azure CLI doesn't support assigning a user-assigned managed identity to an Event Grid topic or a domain yet.
++ ## Enable identity for an existing custom topic or domain
-In this section, you learn how to enable a system-managed identity for an existing custom topic or domain.
+In this section, you learn how to enable a system-assigned identity or a user-assigned identity for an existing custom topic or domain.
-### Using Azure portal
-The following procedure shows you how to enable system-managed identity for a custom topic. The steps for enabling an identity for a domain are similar.
+# [Azure portal](#tab/portal)
+The following procedure shows you how to enable system-assigned identity for a custom topic. The steps for enabling an identity for a domain are similar.
1. Go to the [Azure portal](https://portal.azure.com). 2. Search for **event grid topics** in the search bar at the top. 3. Select the **custom topic** for which you want to enable the managed identity.
-4. Switch to the **Identity** tab.
-5. Turn **on** the switch to enable the identity.
+4. Select **Identity** on the left menu.
+
+### To assign a system-assigned identity to a topic
+1. In the **System assigned** tab, turn **on** the switch to enable the identity.
1. Select **Save** on the toolbar to save the setting. :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a custom topic":::
+### To assign a user-assigned identity to a topic
+1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article.
+1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar.
+
+ :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Image showing the User Assigned Identity tab":::
+1. In the **Add user managed identity** window, follow these steps:
+ 1. Select the **Azure subscription** that has the user-assigned identity.
+ 1. Select the **user-assigned identity**.
+ 1. Select **Add**.
+1. Refresh the list in the **User assigned** tab to see the added user-assigned identity.
+ You can use similar steps to enable an identity for an event grid domain.
-### Use the Azure CLI
+# [Azure CLI](#tab/cli)
Use the `az eventgrid topic update` command with `--identity` set to `systemassigned` to enable system-assigned identity for an existing custom topic. If you want to disable the identity, specify `noidentity` as the value. ```azurecli-interactive
az eventgrid topic update -g $rg --name $topicname --identity systemassigned --s
The command for updating an existing domain is similar (`az eventgrid domain update`).
+> [!NOTE]
+> Azure CLI doesn't support assigning a user-assigned managed identity to an Event Grid topic or a domain yet.
++ ## Next steps Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Grant managed identity the access to Event Grid destination](add-identity-roles.md).
event-grid Enable Identity System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-identity-system-topics.md
Title: Enable managed identity on Azure Event Grid system topic description: This article describes how enable managed service identity for an Azure Event Grid system topic. Previously updated : 03/25/2021 Last updated : 08/20/2021 # Assign a system-managed identity to an Event Grid system topic
-In this article, you learn how to enable system-managed identity for an existing Event Grid system topic. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+In this article, you learn how to assign a system-assigned or a user-assigned identity to an existing Event Grid system topic. To learn about managed identities, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
> [!IMPORTANT]
-> Currently, you can't enable a system-managed identity when creating a new system topic, that is, when creating an event subscription on an Azure resource that supports system topics.
+> You can enable either system-assigned identity or user-assigned identity for a system topic, but not both. You can have at most two user-assigned identities assigned to a system topic.
-
-## Use Azure portal
-The following procedure shows you how to enable system-managed identity for a system topic.
+## Enable managed identity for an existing system topic
+This section shows you how to enable a managed identity for an existing system topic.
1. Go to the [Azure portal](https://portal.azure.com). 2. Search for **event grid system topics** in the search bar at the top. 3. Select the **system topic** for which you want to enable the managed identity. 4. Select **Identity** on the left menu. You don't see this option for a system topic that's in the global location.
-5. Turn **on** the switch to enable the identity.
+
+### Enable system-assigned identity
+1. Turn **on** the switch to enable the identity.
1. Select **Save** on the toolbar to save the setting.
- :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic.png" alt-text="Identity page for a system topic":::
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic.png" alt-text="Identity page for a system topic.":::
1. Select **Yes** on the confirmation message.
- :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-confirmation.png" alt-text="Assign identity to a system topic - confirmation":::
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-confirmation.png" alt-text="Assign identity to a system topic - confirmation.":::
1. Confirm that you see the object ID of the system-assigned managed identity and see a link to assign roles.
- :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-completed.png" alt-text="Assign identity to a system topic - completed":::
+ :::image type="content" source="./media/managed-service-identity/identity-existing-system-topic-completed.png" alt-text="Assign identity to a system topic - completed.":::
+
+### Enable user-assigned identity
+
+1. First, create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article.
+1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar.
+
+ :::image type="content" source="./media/managed-service-identity/system-topic-user-identity-add-button.png" alt-text="Image showing the Add button selected in the User assigned tab of the Identity page.":::
+1. In the **Add user managed identity** window, follow these steps:
+ 1. Select the **Azure subscription** that has the user-assigned identity.
+ 1. Select the **user-assigned identity**.
+ 1. Select **Add**.
+1. Refresh the list in the **User assigned** tab to see the added user-assigned identity.
+
+## Enable managed identity when creating a system topic
+
+1. In the Azure portal, in the search bar, search for and select **Event Grid System Topics**.
+1. On the **Event Grid System Topics** page, select **Create** on the toolbar.
+1. On the **Basics** page of the creation wizard, follow these steps:
+ 1. For **Topic Types**, select the type of the topic that supports a system topic. In the following example, **Storage Accounts** is selected.
+ 2. For **Subscription**, select the Azure subscription that contains the Azure resource.
+ 1. For **Resource Group**, select the resource group that contains the Azure resource.
+ 1. For **Resource**, select the resource.
+ 1. Specify a **name** for the system topic.
+ 1. Enable managed identity:
+ 1. To enable system-assigned identity, select **Enable system assigned identity**.
+
+ :::image type="content" source="./media/managed-service-identity/system-topic-creation-enable-managed-identity.png" alt-text="Image showing the screenshot of system topic creation wizard with system assigned identity option selected.":::
+ 1. To enable user assigned identity:
+ 1. Select **User assigned identity**, and then select **Add user identity**.
+
+ :::image type="content" source="./media/managed-service-identity/system-topic-creation-enable-user-identity.png" alt-text="Image showing the screenshot of system topic creation wizard with user assigned identity option selected.":::
+ 1. In the **Add user managed identity** window, follow these steps:
+ 1. Select the **Azure subscription** that has the user-assigned identity.
+ 1. Select the **user-assigned identity**.
+ 1. Select **Add**.
+
+> [!NOTE]
+> Currently, you can't enable a managed identity for a new system topic when creating an event subscription on an Azure resource that supports system topics.
+ ## Global Azure sources You can enable system-managed identity only for the regional Azure resources. You can't enable it for system topics associated with global Azure resources such as Azure subscriptions, resource groups, or Azure Maps. The system topics for these global sources are also not associated with a specific region. You don't see the **Identity** page for the system topic whose location is set to **Global**.
event-grid Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/managed-service-identity.md
This article describes how to use a [managed service identity](../active-directo
## Prerequisites
-1. Assign a system-assigned identity to a system topic, a custom topic, or a domain.
+1. Assign a system-assigned identity or a user-assigned identity to a system topic, a custom topic, or a domain.
- For custom topics and domains, see [Enable managed identity for custom topics and domains](enable-identity-custom-topics-domains.md). - For system topics, see [Enable managed identity for system topics](enable-identity-system-topics.md) 1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Add identity to Azure roles on destinations](add-identity-roles.md)
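A sketch of the role-assignment prerequisite with the Azure CLI (IDs are placeholders; the principal ID is shown on the topic's **Identity** page after the identity is enabled):

```azurecli-interactive
# Allow the topic's managed identity to send to the destination Service Bus queue
az role assignment create \
  --assignee-object-id <topic-managed-identity-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Service Bus Data Sender" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<namespace>/queues/<queue>"
```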
This article describes how to use a [managed service identity](../active-directo
After you have an event grid custom topic or system topic or domain with a system-managed identity and have added the identity to the appropriate role on the destination, you're ready to create subscriptions that use the identity. ### Use the Azure portal
-When you create an event subscription, you see an option to enable the use of a system-assigned identity for an endpoint in the **ENDPOINT DETAILS** section.
+When you create an event subscription, you see an option to enable the use of a system-assigned identity or user-assigned identity for an endpoint in the **ENDPOINT DETAILS** section.
+
+Here's an example of enabling system-assigned identity while creating an event subscription with a Service Bus queue as a destination.
![Enable identity while creating an event subscription for a Service Bus queue](./media/managed-service-identity/service-bus-queue-subscription-identity.png)
You can also enable using a system-assigned identity to be used for dead-letteri
![Enable system-assigned identity for dead-lettering](./media/managed-service-identity/enable-deadletter-identity.png)
+You can also enable a managed identity on an event subscription after it's created. On the **Event Subscription** page for the event subscription, switch to the **Additional Features** tab to see the option.
+
+![Enable system-assigned identity on an existing event subscription](./media/managed-service-identity/event-subscription-additional-features.png)
+
+If you enabled user-assigned identities for the topic, you'll see the user-assigned identity option enabled in the drop-down list for **Managed Identity Type**. If you select **User Assigned** for **Managed Identity Type**, you can then select the user-assigned identity that you want to use to deliver events.
+
+![Enable user-assigned identity on an event subscription](./media/managed-service-identity/event-subscription-user-identity.png)
++ ### Use the Azure CLI - Service Bus queue In this section, you learn how to use the Azure CLI to enable the use of a system-assigned identity to deliver events to a Service Bus queue. The identity must be a member of the **Azure Service Bus Data Sender** role. It must also be a member of the **Storage Blob Data Contributor** role on the storage account that's used for dead-lettering.
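Before the detailed steps, here's a hedged end-to-end sketch of the command shape this section builds toward; all resource names are placeholders:

```azurecli
# Sketch (placeholder names): create an event subscription that delivers to a
# Service Bus queue using the topic's system-assigned identity.
topicid=$(az eventgrid topic show --name mytopic --resource-group myrg \
  --query id --output tsv)
queueid=$(az servicebus queue show --name myqueue --namespace-name mynamespace \
  --resource-group myrg --query id --output tsv)

az eventgrid event-subscription create \
  --name mysubscription \
  --source-resource-id "$topicid" \
  --delivery-identity systemassigned \
  --delivery-identity-endpoint-type servicebusqueue \
  --delivery-identity-endpoint "$queueid"
```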
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
If you are remote and don't have fiber connectivity or you want to explore other
| **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix, Megaport | Dallas | | **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR | | **[Cinia](https://www.cinia.fi/palvelutiedotteet)** | Equinix, Megaport | Frankfurt, Hamburg |
-| **[CloudXpress](https://www2.telenet.be/fr/business/produits-services/internet/cloudxpress/)** | Equinix | Amsterdam |
+| **[CloudXpress](https://www2.telenet.be/content/www-telenet-be/fr/business/sme-le/aanbod/internet/cloudxpress)** | Equinix | Amsterdam |
| **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore | | **[CoreAzure](https://www.coreazure.com/)**| Equinix | London | | **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas, Silicon Valley, Washington DC |
Enabling private connectivity to fit your needs can be challenging, based on the
| **[Nelite](https://www.exakis-nelite.com/offres/)** | Europe | | **[New Signature](https://newsignature.com/technologies/express-route/)** | Europe | | **[OneAs1a](https://www.oneas1a.com/connectivity.html)** | Asia |
-| **[Orange Networks](https://orange-networks.com/blog/88-azureexpressroute)** | Europe |
+| **[Orange Networks](https://www.orange-networks.com/blog/88-azureexpressroute)** | Europe |
| **[Perficient](https://www.perficient.com/Partners/Microsoft/Cloud/Azure-ExpressRoute)** | North America | | **[Presidio](https://www.presidio.com/subpage/1107/microsoft-azure)** | North America | | **[sol-tec](https://www.sol-tec.com/what-we-do/)** | Europe |
firewall Protect Windows Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/protect-windows-virtual-desktop.md
You will need to create an Azure Firewall Policy and create Rule Collections for
| | | | | | | Rule Name | IP Address | VNet or Subnet IP Address | 80 | TCP | IP Address | 169.254.169.254, 168.63.129.16 | Rule Name | IP Address | VNet or Subnet IP Address | 443 | TCP | Service Tag | AzureCloud, WindowsVirtualDesktop
-| Rule Name | IP Address | VNet or Subnet IP Address | 52 | TCP, UDP | IP Address | *
+| Rule Name | IP Address | VNet or Subnet IP Address | 53 | TCP, UDP | IP Address | *
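For illustration only, a sketch of adding the DNS rule above with the Azure CLI. This assumes the `azure-firewall` CLI extension and placeholder policy, rule collection group, and subnet values:

```azurecli
# Sketch (placeholder names): allow DNS (TCP/UDP 53) from the host pool subnet.
# Assumes: az extension add --name azure-firewall
az network firewall policy rule-collection-group collection add-filter-collection \
  --resource-group myrg \
  --policy-name mywvdpolicy \
  --rule-collection-group-name myrulecollectiongroup \
  --name Allow-DNS \
  --collection-priority 200 \
  --action Allow \
  --rule-name wvd-dns \
  --rule-type NetworkRule \
  --source-addresses 10.0.0.0/24 \
  --destination-addresses '*' \
  --destination-ports 53 \
  --ip-protocols TCP UDP
```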
### Create application rules
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
Azure Front Door private endpoints get managed by the platform and under the sub
## Next steps
-* To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect to a web app using a Private endpoint](../../private-link/tutorial-private-endpoint-webapp-portal.md).
-* To connect Azure Front Door Premium to your Storage Account via private link service, see [Connect to a storage account using Private endpoint](../../private-link/tutorial-private-endpoint-storage-portal.md).
+* To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect Azure Front Door Premium to a Web App origin with Private Link](../../frontdoor/standard-premium/how-to-enable-private-link-web-app.md).
+* To connect Azure Front Door Premium to your Storage Account via private link service, see [Connect Azure Front Door Premium to a storage account origin with Private Link](../../frontdoor/standard-premium/how-to-enable-private-link-storage-account.md).
+* To connect Azure Front Door Premium to an internal load balancer origin with Private Link service, see [Connect Azure Front Door Premium to an internal load balancer origin with Private Link](../../frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md).
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
1. In PowerShell, run the following command:
- `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"`
+ `New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037"`
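If you work in the Azure CLI rather than PowerShell, an equivalent sketch (using the same application ID shown above) is:

```azurecli
# Register the Azure Front Door service principal in your tenant.
az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037
```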
#### Grant Azure Front Door access to your key vault
Grant Azure Front Door permission to access the certificates in your Azure Key
1. In your key vault account, under SETTINGS, select **Access policies**. Then select **Add new** to create a new policy.
-1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and choose ** Microsoft.AzureFrontDoor-Cdn**. Click **Select**.
+1. In **Select principal**, search for **ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037**, and choose **Microsoft.AzureFrontDoor-Cdn**. Click **Select**.
1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
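As a hedged CLI alternative to these portal steps (the vault name is a placeholder):

```azurecli
# Sketch (placeholder vault name): grant the Front Door service principal
# Get permission on secrets so it can retrieve the certificate.
az keyvault set-policy \
  --name mykeyvault \
  --spn ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 \
  --secret-permissions get
```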
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
In this section, you'll map the Private Link service to a private endpoint creat
1. Then select **Add** and then **Update** to save your configuration.
-## Approve private endpoint connection from the storage account
+## Approve the Azure Front Door Premium private endpoint connection from the Private Link service
1. Go to the Private Link Center and select **Private link services**. Then select your Private link name.
frontdoor Subscription Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/subscription-offers.md
+
+ Title: Azure Front Door Standard/Premium subscription offers and bandwidth throttling
+description: Learn about Azure Front Door Standard/Premium availability for a specific subscription type.
+++++ Last updated : 08/20/2021+++
+# Azure Front Door Standard/Premium subscription offers and bandwidth throttling
+
+Bandwidth throttling is applied to Azure Front Door Standard/Premium profiles, based on your subscription type.
+
+## Free and Trial Subscription
+
+Bandwidth throttling is applied for this type of subscription.
+
+## Pay-as-you-go
+
+Bandwidth is throttled until the subscription is determined to be in good standing and has a sufficient payment history. The subscription status is evaluated, and throttling removed, automatically after the first payment is received.
+
+If you've made a payment and throttling hasn't been removed, you can request removal by [contacting support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+
+## Enterprise agreements
+
+Enterprise Agreement subscriptions don't have any bandwidth restrictions.
+
+## Other offer types
+
+The same bandwidth throttling behavior as Pay-as-you-go applies to these offer types:
+
+* Visual Studio
+* MSDN
+* Students
+* CSP
+
+## Next steps
+
+Learn how to [create an Azure Front Door Standard/Premium profile](create-front-door-portal.md).
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
initiative definition.
|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockAutomountToken.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) | |[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
-|[Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockDefaultNamespace.json) |
+|[Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/BlockDefaultNamespace.json) |
|[Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) | |[Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government) description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in Azure Security Benchmark (Azure Government). For more information about this compliance standard, see
-[Azure Security Benchmark (Azure Government)](/security/benchmark/azure/introduction). To understand
+[Azure Security Benchmark](/security/benchmark/azure/introduction). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **Azure Security Benchmark (Azure Government)** controls. Use the
+The following mappings are to the **Azure Security Benchmark** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
initiative definition.
|[Kubernetes clusters should disable automounting API credentials](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423dd1ba-798e-40e4-9c4d-b6902674b423) |Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockAutomountToken.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerNoPrivilegeEscalation.json) | |[Kubernetes clusters should not grant CAP_SYS_ADMIN security capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd2e7ea85-6b44-4317-a0be-1b951587f626) |To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerDisallowedSysAdminCapability.json) |
-|[Kubernetes clusters should not use the default namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockDefaultNamespace.json) |
+|[Kubernetes clusters should not use the default namespace](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373) |Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[2.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/BlockDefaultNamespace.json) |
|[Remote debugging should be turned off for API Apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e) |Remote debugging requires inbound ports to be opened on API apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_ApiApp_Audit.json) | |[Remote debugging should be turned off for Function Apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |[Remote debugging should be turned off for Web Applications](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on a web application. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021 # Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark 1.1.0.
+definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government).
For more information about this compliance standard, see [CIS Microsoft Azure Foundations Benchmark 1.1.0](https://www.cisecurity.org/benchmark/azure/). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
Then, find and select the **CIS Microsoft Azure Foundations Benchmark v1.1.0** R
initiative definition. This built-in initiative is deployed as part of the
-[CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
+[CIS Microsoft Azure Foundations Benchmark 1.1.0 blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government). For more information about this compliance standard, see
-[CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government)](https://www.cisecurity.org/benchmark/azure/). To understand
+[CIS Microsoft Azure Foundations Benchmark 1.3.0](https://www.cisecurity.org/benchmark/azure/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government)** controls. Use the
+The following mappings are to the **CIS Microsoft Azure Foundations Benchmark 1.3.0** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021 # Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in CMMC Level 3.
+definition maps to **compliance domains** and **controls** in CMMC Level 3 (Azure Government).
For more information about this compliance standard, see [CMMC Level 3](https://www.acq.osd.mil/cmmc/docs/CMMC_Model_Main_20200203.pdf). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
Then, find and select the **CMMC Level 3** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[CMMC Level 3 (Azure Government) blueprint sample](../../blueprints/samples/cmmc-l3.md).
+[CMMC Level 3 blueprint sample](../../blueprints/samples/cmmc-l3.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-dod-impact-level-4.md
Title: Regulatory Compliance details for DoD Impact Level 4 (Azure Government) description: Details of the DoD Impact Level 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in DoD Impact Level 4 (Azure Government). For more information about this compliance standard, see
-[DoD Impact Level 4 (Azure Government)](https://public.cyber.mil/dccs/). To understand
+[DoD Impact Level 4](https://public.cyber.mil/dccs/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **DoD Impact Level 4 (Azure Government)** controls. Use the
+The following mappings are to the **DoD Impact Level 4** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
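The table above gives each definition's allowed effects (audit, deny, disabled) and embeds its definition GUID in the portal URL. As a hedged illustration only, the sketch below assigns the "allowed images" definition from the table (febd0533-8e55-448f-b837-bd0e06f16469) at subscription scope in Azure Government with the audit effect. The subscription ID and regex value are placeholders, and the parameter name `allowedContainerImagesRegex` is my reading of the definition's parameters section; verify it against the current built-in before use.

```python
# Sketch: assign "Kubernetes cluster containers should only use allowed
# images" at subscription scope with the "audit" effect.
# Assumptions: Azure Government endpoints; api-version 2021-06-01;
# placeholder subscription ID and image regex; parameter names should be
# checked against the definition's parameters section.
import requests
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

ARM = "https://management.usgovcloudapi.net"
SUB = "00000000-0000-0000-0000-000000000000"  # placeholder subscription ID
DEFINITION = ("/providers/Microsoft.Authorization/policyDefinitions/"
              "febd0533-8e55-448f-b837-bd0e06f16469")

cred = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
token = cred.get_token(f"{ARM}/.default").token

body = {
    "properties": {
        "displayName": "Allowed container images (audit)",
        "policyDefinitionId": DEFINITION,
        "parameters": {
            "effect": {"value": "audit"},  # audit | deny | disabled, per the table
            "allowedContainerImagesRegex": {"value": r"^mcr\.microsoft\.com/.+$"},
        },
    }
}
resp = requests.put(
    f"{ARM}/subscriptions/{SUB}/providers/Microsoft.Authorization"
    "/policyAssignments/allowed-images-audit",
    params={"api-version": "2021-06-01"},
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["id"])
```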
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-dod-impact-level-5.md
Title: Regulatory Compliance details for DoD Impact Level 5 (Azure Government) description: Details of the DoD Impact Level 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in DoD Impact Level 5 (Azure Government). For more information about this compliance standard, see
-[DoD Impact Level 5 (Azure Government)](https://public.cyber.mil/dccs/). To understand
+[DoD Impact Level 5](https://public.cyber.mil/dccs/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **DoD Impact Level 5 (Azure Government)** controls. Use the
+The following mappings are to the **DoD Impact Level 5** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in FedRAMP High (Azure Government). For more information about this compliance standard, see
-[FedRAMP High (Azure Government)](https://www.fedramp.gov/). To understand
+[FedRAMP High](https://www.fedramp.gov/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **FedRAMP High (Azure Government)** controls. Use the
+The following mappings are to the **FedRAMP High** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in FedRAMP Moderate (Azure Government). For more information about this compliance standard, see
-[FedRAMP Moderate (Azure Government)](https://www.fedramp.gov/). To understand
+[FedRAMP Moderate](https://www.fedramp.gov/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **FedRAMP Moderate (Azure Government)** controls. Use the
+The following mappings are to the **FedRAMP Moderate** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021 # Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in IRS 1075 September 2016.
+definition maps to **compliance domains** and **controls** in IRS 1075 September 2016 (Azure Government).
For more information about this compliance standard, see [IRS 1075 September 2016](https://www.irs.gov/pub/irs-pdf/p1075.pdf). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
Then, find and select the **IRS1075 September 2016** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[IRS 1075 September 2016 (Azure Government) blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
+[IRS 1075 September 2016 blueprint sample](../../blueprints/samples/irs-1075-sept2016.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021 # Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in ISO 27001:2013.
+definition maps to **compliance domains** and **controls** in ISO 27001:2013 (Azure Government).
For more information about this compliance standard, see [ISO 27001:2013](https://www.iso.org/isoiec-27001-information-security.html). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
Then, find and select the **ISO 27001:2013** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[ISO 27001:2013 (Azure Government) blueprint sample](../../blueprints/samples/iso-27001-2013.md).
+[ISO 27001:2013 blueprint sample](../../blueprints/samples/iso-27001-2013.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in NIST SP 800-171 R2 (Azure Government). For more information about this compliance standard, see
-[NIST SP 800-171 R2 (Azure Government)](https://csrc.nist.gov/publications/detail/sp/800-171/rev-2/final). To understand
+[NIST SP 800-171 R2](https://csrc.nist.gov/publications/detail/sp/800-171/rev-2/final). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-171 R2 (Azure Government)** controls. Use the
+The following mappings are to the **NIST SP 800-171 R2** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021 # Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in NIST SP 800-53 Rev. 4.
+definition maps to **compliance domains** and **controls** in NIST SP 800-53 Rev. 4 (Azure Government).
For more information about this compliance standard, see [NIST SP 800-53 Rev. 4](https://nvd.nist.gov/800-53). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
Then, find and select the **NIST SP 800-53 Rev. 4** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[NIST SP 800-53 Rev. 4 (Azure Government) blueprint sample](../../blueprints/samples/nist-sp-800-53-r4.md).
+[NIST SP 800-53 Rev. 4 blueprint sample](../../blueprints/samples/nist-sp-800-53-r4.md).
> [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
This built-in initiative is deployed as part of the
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
The following article details how the Azure Policy Regulatory Compliance built-in initiative definition maps to **compliance domains** and **controls** in NIST SP 800-53 Rev. 5 (Azure Government). For more information about this compliance standard, see
-[NIST SP 800-53 Rev. 5 (Azure Government)](https://nvd.nist.gov/800-53). To understand
+[NIST SP 800-53 Rev. 5](https://nvd.nist.gov/800-53). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NIST SP 800-53 Rev. 5 (Azure Government)** controls. Use the
+The following mappings are to the **NIST SP 800-53 Rev. 5** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AllowedUsersGroups.json) |
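Once an initiative like the ones above is assigned, its per-control compliance results surface through the Microsoft.PolicyInsights resource provider rather than through the definition itself. A rough sketch follows, with the subscription ID as a placeholder and the response handling limited to the fields I believe are stable in the summarize response; treat both as assumptions.

```python
# Sketch: summarize latest compliance state for a subscription via the
# Policy Insights "summarize" action. Assumptions: Azure Government
# endpoints; api-version 2019-10-01; placeholder subscription ID.
import requests
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

ARM = "https://management.usgovcloudapi.net"
SUB = "00000000-0000-0000-0000-000000000000"  # placeholder

cred = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
token = cred.get_token(f"{ARM}/.default").token

resp = requests.post(
    f"{ARM}/subscriptions/{SUB}/providers/Microsoft.PolicyInsights"
    "/policyStates/latest/summarize",
    params={"api-version": "2019-10-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
summary = resp.json()["value"][0]

# Top-level rollup, then a per-assignment breakdown.
print("Non-compliant resources:", summary["results"]["nonCompliantResources"])
for pa in summary.get("policyAssignments", []):
    print(pa["policyAssignmentId"], "->",
          pa["results"]["nonCompliantResources"], "non-compliant")
```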
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
This built-in initiative is deployed as part of the
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
initiative definition.
|[Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) |Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) | |[Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F511f5417-5d12-434d-ab2e-816901e72a5e) |Containers should only use allowed AppArmor profiles in a Kubernetes cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/EnforceAppArmorProfile.json) | |[Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26596ff-4d70-4e6a-9a30-c2506bd2f80c) |Restrict the capabilities to reduce the attack surface of containers in a Kubernetes cluster. This recommendation is part of CIS 5.2.8 and CIS 5.2.9 which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedCapabilities.json) |
-|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[6.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
+|[Kubernetes cluster containers should only use allowed images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffebd0533-8e55-448f-b837-bd0e06f16469) |Use images from trusted registries to reduce the Kubernetes cluster's exposure risk to unknown vulnerabilities, security issues and malicious images. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[7.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedImages.json) |
|[Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf49d893-a74c-421d-bc95-c663042e5b80) |Run containers with a read only root file system to protect from changes at run-time with malicious binaries being added to PATH in a Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ReadOnlyRootFileSystem.json) | |[Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098fc59e-46c7-4d99-9b16-64990e543d75) |Limit pod HostPath volume mounts to the allowed host paths in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedHostPaths.json) | |[Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff06ddb64-5fa3-4b77-b166-acb36f7f6042) |Control the user, primary group, supplemental group and file system group IDs that pods and containers can use to run in a Kubernetes Cluster. This recommendation is part of Pod Security Policies which are intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../concepts/policy-for-kubernetes.md). |audit, deny, disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AllowedUsersGroups.json) |
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/13/2021 Last updated : 08/20/2021
hdinsight Cluster Reboot Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/cluster-reboot-vm.md
You can use the **Try it** feature in the API doc to send requests to HDInsight.
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/listHosts?api-version=2018-06-01-preview ```
-1. Restart hosts. After you get the names of the nodes that you want to reboot, restart the nodes by using the REST API to reboot the nodes. The node name follows the pattern of *NodeType(wn/hn/zk/gw)* + *x* + *first six characters of cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/virtualmachines/restarthosts).
+1. Restart hosts. After you get the names of the nodes that you want to reboot, use the REST API to restart them. The node name follows the pattern *NodeType(wn/hn/zk)* + *x* + *first six characters of the cluster name*. For more information, see [HDInsight restart hosts REST API operation](/rest/api/hdinsight/virtualmachines/restarthosts).
``` POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusters/{clusterName}/restartHosts?api-version=2018-06-01-preview
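A minimal Python sketch that chains the two requests above, assuming you already hold a management-plane bearer token (for example, from `az account get-access-token`); the subscription, resource group, and cluster names are placeholders, and the host-object `name` field follows the REST reference linked above.

```python
# Sketch: list HDInsight hosts, then restart the worker nodes.
# Assumes a valid Azure AD bearer token; all names below are placeholders.
import requests

TOKEN = "<bearer-token>"
SUB, RG, CLUSTER = "<subscriptionId>", "<resourceGroupName>", "<clusterName>"
BASE = (f"https://management.azure.com/subscriptions/{SUB}"
        f"/resourceGroups/{RG}/providers/Microsoft.HDInsight/clusters/{CLUSTER}")
PARAMS = {"api-version": "2018-06-01-preview"}
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. List the hosts and keep the worker nodes (names start with 'wn').
hosts = requests.post(f"{BASE}/listHosts", params=PARAMS, headers=HEADERS).json()
workers = [h["name"] for h in hosts if h["name"].startswith("wn")]

# 2. Restart the selected hosts; the request body is a JSON array of node names.
resp = requests.post(f"{BASE}/restartHosts", params=PARAMS, headers=HEADERS, json=workers)
resp.raise_for_status()
```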
hdinsight Hdinsight Hadoop Development Using Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-development-using-azure-resource-manager.md
The following table lists the ASM cmdlets and their names in Resource Manager mo
| Get-AzureHDInsightJobOutput |[Get-AzHDInsightJobOutput](/powershell/module/az.hdinsight/get-azhdinsightjoboutput) | | Get-AzureHDInsightProperty |[Get-AzHDInsightProperty](/powershell/module/az.hdinsight/get-azhdinsightproperty) | | Grant-AzureHDInsightHttpServicesAccess |[Grant-AzureRmHDInsightHttpServicesAccess](/powershell/module/azurerm.hdinsight/grant-azurermhdinsighthttpservicesaccess) |
-| Grant-AzureHdinsightRdpAccess |[Grant-AzHDInsightRdpServicesAccess](/powershell/module/az.hdinsight/grant-azhdinsightrdpservicesaccess) |
| Invoke-AzureHDInsightHiveJob |[Invoke-AzHDInsightHiveJob](/powershell/module/az.hdinsight/invoke-azhdinsighthivejob) | | New-AzureHDInsightCluster |[New-AzHDInsightCluster](/powershell/module/az.hdinsight/new-azhdinsightcluster) | | New-AzureHDInsightClusterConfig |[New-AzHDInsightClusterConfig](/powershell/module/az.hdinsight/new-azhdinsightclusterconfig) |
The following table lists the ASM cmdlets and their names in Resource Manager mo
| New-AzureHDInsightStreamingMapReduceJobDefinition |[New-AzHDInsightStreamingMapReduceJobDefinition](/powershell/module/az.hdinsight/new-azhdinsightstreamingmapreducejobdefinition) | | Remove-AzureHDInsightCluster |[Remove-AzHDInsightCluster](/powershell/module/az.hdinsight/remove-azhdinsightcluster) | | Revoke-AzureHDInsightHttpServicesAccess |[Revoke-AzHDInsightHttpServicesAccess](/powershell/module/azurerm.hdinsight/revoke-azurermhdinsighthttpservicesaccess) |
-| Revoke-AzureHdinsightRdpAccess |[Revoke-AzHDInsightRdpServicesAccess](/powershell/module/az.hdinsight/revoke-azhdinsightrdpservicesaccess) |
| Set-AzureHDInsightClusterSize |[Set-AzHDInsightClusterSize](/powershell/module/az.hdinsight/set-azhdinsightclustersize) | | Set-AzureHDInsightDefaultStorage |[Set-AzHDInsightDefaultStorage](/powershell/module/az.hdinsight/set-azhdinsightdefaultstorage) | | Start-AzureHDInsightJob |[Start-AzHDInsightJob](/powershell/module/az.hdinsight/start-azhdinsightjob) |
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
hdinsight Apache Spark Python Package Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-python-package-installation.md
HDInsight cluster depends on the built-in Python environment, both Python 2.7 an
:::image type="content" source="./media/apache-spark-python-package-installation/check-python-version-in-jupyter.png" alt-text="Check Python version in Jupyter Notebook" border="true":::
-## Known issue
-
-There's a known bug for Anaconda version `4.7.11`, `4.7.12`, and `4.8.0`. If you see your script actions stops responding at `"Collecting package metadata (repodata.json): ...working..."` and failing with `"Python script has been killed due to timeout after waiting 3600 secs"`. You can download [this script](https://gregorysfixes.blob.core.windows.net/public/fix-conda.sh) and run it as script actions on all nodes to fix the issue.
-
-To check your Anaconda version, you can SSH to the cluster header node and run `/usr/bin/anaconda/bin/conda --v`.
## Next steps
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
+
+ Title: View and enable diagnostic settings in the FHIR service - Azure Healthcare APIs
+description: This article describes how to enable diagnostic settings in the FHIR service and review some sample queries for these logs.
++++ Last updated : 08/17/2021+++
+# View and enable diagnostic settings in the FHIR service
+
+> [!IMPORTANT]
+> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+In this article, you'll learn how to enable diagnostic settings in the FHIR service and review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements, such as Health Insurance Portability and Accountability Act (HIPAA), is a must. To access this feature in the Azure portal, refer to the steps below.
+
+## Enable audit logs
+
+1. Select your FHIR service in the Azure portal.
+
+2. Browse to **Diagnostic settings** under the **Monitoring** menu option.
+
+ [ ![Add Azure FHIR diagnostic settings.](media/diagnostic-logs/fhir-diagnostic-settings-screen.png) ](media/diagnostic-logs/fhir-diagnostic-settings-screen.png#lightbox)
+
+3. Select **+ Add diagnostic settings**.
+
+4. Enter a name for the setting.
+
+5. Select the method you want to use to access your diagnostic logs:
+
+**Archive to a storage account** for auditing or manual inspection.
+The storage account you want to use must already exist.
+
+**Stream to an event hub** for ingestion by a third-party service or custom analytics solution.
+You'll need to create an event hub namespace and event hub policy before you can configure this step.
+
+**Stream to Log Analytics** workspace in Azure Monitor.
+You'll need to create your Log Analytics workspace before you can select this option.
+
+6. Select **AuditLogs**.
+
+ [ ![Azure FHIR diagnostic settings audit logs.](media/diagnostic-logs/fhir-diagnostic-settings-add.png) ](media/diagnostic-logs/fhir-diagnostic-settings-add.png#lightbox)
+
+7. Select **Save**.
+
+> [!NOTE]
+> It might take up to 15 minutes for the first logs to display in the Log Analytics Workspace. Also, if the FHIR service is moved from one resource group or subscription to another, update the settings after the move is complete.
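If you'd rather script this setting than click through the portal, the Azure Monitor management SDK can create it too. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-monitor` packages; the resource IDs and the setting name are placeholders:

```python
# Sketch: enable the AuditLogs diagnostic setting on the FHIR service,
# streaming to a Log Analytics workspace. All IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscriptionId>"
FHIR_RESOURCE_ID = "<full resource ID of the FHIR service>"
WORKSPACE_ID = "<full resource ID of the Log Analytics workspace>"

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.diagnostic_settings.create_or_update(
    resource_uri=FHIR_RESOURCE_ID,
    name="fhir-audit-logs",  # hypothetical setting name
    parameters={
        "workspace_id": WORKSPACE_ID,
        "logs": [{"category": "AuditLogs", "enabled": True}],
    },
)
```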
++
+## Audit log details
+
+At this time, the FHIR service returns the following fields in the audit log:
+
+|Field Name|Type|Notes|
+|-|-|--|
+|CallerIdentity |Dynamic|A generic property bag containing identity information.|
+|CallerIdentityIssuer | String| Issuer|
+|CallerIdentityObjectId | String| Object ID|
+|CallerIPAddress | String| The caller's IP address.|
+|CorrelationId | String| Correlation ID|
+|FhirResourceType | String| The resource type for which the operation was executed.|
+|LogCategory | String| The log category (we currently return 'AuditLogs').|
+|Location | String| The location of the server that processed the request. For example, South Central US.|
+|OperationDuration | Int| The time it took to complete this request in seconds.|
+|OperationName | String| Describes the type of operation. For example, update and search-type.|
+|RequestUri | String| The request URI.|
+|ResultType | String| The available values currently are Started, Succeeded, or Failed.|
+|StatusCode | Int| The HTTP status code. For example, 200.|
+|TimeGenerated | DateTime| Date and time of the event.|
+|Properties | String| Describes the properties of the fhirResourceType.|
+|SourceSystem | String| The source system, which is always Azure in this case.|
+|TenantId | String | Tenant ID|
+|Type | String| The log type, which is always MicrosoftHealthcareApisAuditLog in this case.|
+|_ResourceId | String| Details about the resource.|
+
+## Sample queries
+
+Listed below are a few basic Log Analytics queries you can use to explore your log data.
+
+Run this query to see the **100 most recent** logs:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| limit 100
+```
+
+Run this query to group operations by **FHIR Resource Type**:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| summarize count() by FhirResourceType
+```
+
+Run this query to get all the **failed results**:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| where ResultType == "Failed"
+```
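To run these queries from code rather than the portal, the `azure-monitor-query` package offers a thin client. A minimal sketch, assuming that package is installed and using a placeholder workspace ID:

```python
# Sketch: run the "failed results" query against the Log Analytics workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    WORKSPACE_ID,
    'MicrosoftHealthcareApisAuditLogs | where ResultType == "Failed"',
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```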
+
+## Conclusion
+
+Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. The FHIR service in Azure Healthcare APIs lets you do both through diagnostic logs.
+
+FHIR is the registered trademark of [HL7](https://www.hl7.org/fhir/index.html) and is used with the permission of HL7.
+
+## Next steps
+
+In this article, you learned how to enable audit logs for the FHIR service.
+
+> [!NOTE]
+> Metrics will be added when Azure Healthcare APIs is generally available.
++
+For an overview of the FHIR service, see
+
+>[!div class="nextstepaction"]
+>[FHIR service overview](overview.md)
++
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Healthcare APIs FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
iot-central Howto Connect Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-powerbi.md
- Title: Visualize your Azure IoT Central data in a Power BI dashboard | Microsoft Docs
-description: Use the Power BI Solution for Azure IoT Central to visualize and analyze your IoT Central data.
---- Previously updated : 10/4/2019--
-# This topic applies to administrators and solution developers.
--
-# Visualize and analyze your Azure IoT Central data in a Power BI dashboard
-
-> [!Important]
-> This solution uses [legacy data export features](./howto-export-data-legacy.md). Stay tuned for updated guidance on how to connect to Power BI using the latest data export.
--
-Use the Power BI Solution for Azure IoT Central V3 to create a powerful Power BI dashboard to monitor the performance of your IoT devices. In your Power BI dashboard, you can:
--- Track how much data your devices are sending over time-- Compare data volumes between different telemetry streams-- Filter down to data sent by specific devices-- View the most recent telemetry data in a table-
-This solution sets up a pipeline that reads data from your [legacy data export](./howto-export-data-legacy.md) Azure Blob storage account. The pipeline uses Azure Functions, Azure Data Factory, and Azure SQL Database to process and transform the data. you can visualize and analyze the data in a Power BI report that you download as a PBIX file. All of the resources are created in your Azure subscription, so you can customize each component to suit your needs.
-
-## Prerequisites
-
-To complete the steps in this how-to guide, you need:
---- Legacy continuous data export which is configured to export telemetry, devices, and device templates to Azure Blob storage. To learn more, see [legacy data export documentation](howto-export-data-legacy.md).
- - Make sure that only your IoT Central application is exporting data to the blob container.
- - Your [devices must send JSON encoded messages](../../iot-hub/iot-hub-devguide-messages-d2c.md). Devices must specify `contentType:application/JSON` and `contentEncoding:utf-8` or `contentEncoding:utf-16` or `contentEncoding:utf-32` in the message system properties.
-- Power BI Desktop (latest version). See [Power BI downloads](https://powerbi.microsoft.com/downloads/).-- Power BI Pro (if you want to share the dashboard with others).-
-> [!NOTE]
-> If you're using a version 2 IoT Central application, see [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](/previous-versions/azure/iot-central/core/howto-connect-powerbi) on the previous versions documentation site.
-
-## Install
-
-To set up the pipeline, navigate to the [Power BI Solution for Azure IoT Central V3](https://appsource.microsoft.com/product/web-apps/iot-central.power-bi-solution-iot-central) page on the **Microsoft AppSource** site. Select **Get it now**, and follow the instructions.
-
-When you open the PBIX file, be sure the read and follow the instructions on the cover page. These instructions describe how to connect your report to your SQL database.
-
-## Report
-
-The PBIX file contains the **Devices and Telemetry** report shows a historical view of the telemetry that has been sent by devices. It provides a breakdown of the different types of telemetry, and also shows the most recent telemetry sent by devices.
--
-## Pipeline resources
-
-You can access all the Azure resources that make up the pipeline in the Azure portal. All the resources are in the resource group you created when you set up the pipeline.
--
-The following list describes the role of each resource in the pipeline:
-
-### Azure Functions
-
-The Azure Function app triggers each time IoT Central writes a new file to Blob storage. The functions extract data from the telemetry, devices, and device templates blobs to populate the intermediate SQL tables that Azure Data Factory uses.
-
-### Azure Data Factory
-
-Azure Data Factory connects to SQL Database as a linked service. It runs stored procedures to process the data and store it in the analysis tables.
-
-Azure Data Factory runs every 15 minutes to transform the latest batch of data to load into the SQL tables (which is the current minimal number for the **Tumbling Window Trigger**).
-
-### Azure SQL Database
-
-Azure Data Factory generates a set of analysis tables for Power BI. You can explore these schemas in Power BI and use them to build your own visualizations.
-
-## Estimated costs
-
-The [Power BI Solution for Azure IoT Central V3](https://appsource.microsoft.com/product/web-apps/iot-central.power-bi-solution-iot-central) page on the Microsoft AppSource site includes a link to a cost estimator for the resources you deploy.
-
-## Next steps
-
-Now that you've learned how to visualize your data in Power BI, the suggested next step is to learn [How to manage devices](howto-manage-devices-individually.md).
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-analytics.md
In this how-to guide, you learned how to:
* Stream telemetry from an IoT Central application using *continuous data export*. * Create an Azure Databricks environment to analyze and plot telemetry data.
-Now that you know how to create custom analytics, the suggested next step is to learn how to [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](howto-connect-powerbi.md).
+Now that you know how to create custom analytics, the suggested next step is to learn how to [Use the IoT Central device bridge to connect other IoT clouds to IoT Central](howto-build-iotc-device-bridge.md).
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
To learn more about best practices you edit a model, see [Edit an existing devic
Each model has a unique _device twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then associate the correct device template with the device.
-[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of conventions that a device should follow when it implements a DTDL model.
+[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
The [Azure IoT device SDKs](#languages-and-sdks) include support for the IoT Plug and Play conventions.
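As an illustration, here is a minimal Python sketch of announcing a model ID at connection time with the `azure-iot-device` SDK; the connection string is a placeholder and the DTMI is the example above:

```python
# Sketch: announce the device's DTMI when connecting, so the service can
# associate the correct device template/model. Connection string is a placeholder.
from azure.iot.device import IoTHubDeviceClient

MODEL_ID = "dtmi:com:example:Thermostat;1"
CONN_STR = "<device-connection-string>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR, model_id=MODEL_ID)
client.connect()
```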
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-solution-builder.md
As a solution builder, you can use the data export and rules capabilities in IoT
- [Use workflows to integrate your Azure IoT Central application with other cloud services](howto-configure-rules-advanced.md) - [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md)-- [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](howto-connect-powerbi.md) ## APIs
iot-central Tutorial Health Data Triage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-health-data-triage.md
# Tutorial: Build a Power BI provider dashboard
-When building your continuous patient monitoring solution, you can also create a dashboard for a hospital care team to visualize patient data. In this tutorial, you will learn how to create a Power BI real-time streaming dashboard from your IoT Central continuous patient monitoring application template. If your use case does not require access to real-time data, you can use the [IoT Central Power BI dashboard](../core/howto-connect-powerbi.md), which has a simplified deployment process.
+When building your continuous patient monitoring solution, you can also create a dashboard for a hospital care team to visualize patient data. In this tutorial, you will learn how to create a Power BI real-time streaming dashboard from your IoT Central continuous patient monitoring application template.
:::image type="content" source="media/dashboard-gif-3.gif" alt-text="Dashboard GIF":::
iot-develop Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-architecture.md
# IoT Plug and Play architecture
-IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define:
+IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device _model_ that describes a device's capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of interfaces that define:
- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property. - _Telemetry_ that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message.
The model repository has built-in role-based access controls that let you limit
## Devices
-A device builder implements the code to run on an IoT smart device using one of the [Azure IoT device SDKs](./libraries-sdks.md). The device SDKs help the device builder to:
+A device builder implements the code to run on an IoT device using one of the [Azure IoT device SDKs](./libraries-sdks.md). The device SDKs help the device builder to:
- Connect securely to an IoT hub. - Register the device with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the device implements.
iot-develop Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-developer-guide-device.md
zone_pivot_groups: programming-languages-set-twenty-six
# IoT Plug and Play device developer guide
-IoT Plug and Play lets you build smart devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications.
+IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications.
-A smart device might be implemented directly, use [modules](../iot-hub/iot-hub-devguide-module-twins.md), or use [IoT Edge modules](../iot-edge/about-iot-edge.md).
+An IoT device might be implemented directly, use [modules](../iot-hub/iot-hub-devguide-module-twins.md), or use [IoT Edge modules](../iot-edge/about-iot-edge.md).
This guide describes the basic steps required to create a device, module, or IoT Edge module that follows the [IoT Plug and Play conventions](../iot-develop/concepts-convention.md).
iot-develop Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-developer-guide-service.md
zone_pivot_groups: programming-languages-set-ten
# IoT Plug and Play service developer guide
-IoT Plug and Play lets you build smart devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications.
+IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications.
IoT Plug and Play lets you use devices that have announced their model ID with your IoT hub. For example, you can access the properties and commands of a device directly.
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/overview-iot-plug-and-play.md
Title: Introduction to IoT Plug and Play | Microsoft Docs
description: Learn about IoT Plug and Play. IoT Plug and Play is based on an open modeling language that enables smart IoT devices to declare their capabilities. IoT devices present that declaration, called a device model, when they connect to cloud solutions. The cloud solution can then automatically understand the device and start interacting with it, all without writing any code. Previously updated : 03/21/2021 Last updated : 08/20/2021
# What is IoT Plug and Play?
-IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device _model_ that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of elements that define:
+IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device _model_ that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application. This model is structured as a set of elements that define:
- _Properties_ that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property. - _Telemetry_ that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message.
This article outlines:
IoT Plug and Play is useful for two types of developers: -- A _solution builder_ is responsible for developing an IoT solution using Azure IoT Hub and other Azure resources, and for identifying IoT devices to integrate.-- A _device builder_ creates the code that runs on a device connected to your solution.
+- A _solution builder_ is responsible for developing an IoT solution using Azure IoT Hub and other Azure resources, and for identifying IoT devices to integrate. To learn more, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
+- A _device builder_ creates the code that runs on a device connected to your solution. To learn more, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md).
## Use IoT Plug and Play devices
IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directi
If you have existing sensors attached to a Windows or Linux gateway, you can use [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md), to connect these sensors and create IoT Plug and Play devices without the need to write device software/firmware (for [supported protocols](./concepts-iot-pnp-bridge.md#supported-protocols-and-sensors)).
+To learn more, see [IoT Plug and Play architecture](concepts-architecture.md).
+ ## Develop an IoT device application As a device builder, you can develop an IoT hardware product that supports IoT Plug and Play. The process includes three key steps: 1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. Interfaces can be reused across different models.
-1. Author device software or firmware in a way that their telemetry, properties, and commands follow the IoT Plug and Play conventions. If you are connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step.
+1. Author device software or firmware in a way that their telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). If you are connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step.
1. The device announces the model ID as part of the MQTT connection. The Azure IoT SDK includes new constructs to provide the model ID at connection time.
iot-fundamentals Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-services-and-technologies.md
To build an IoT solution from scratch, or extend a solution created using IoT Ce
Develop your IoT devices using one of the [Azure IoT Starter Kits](https://devicecatalog.azure.com/kits) or choose a device to use from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
-You can further simplify how you create the embedded code for your devices by using the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) service. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application.
+You can further simplify how you create the embedded code for your devices by following the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. IoT Plug and Play enables solution developers to integrate devices with their solutions without writing any embedded code. At the core of IoT Plug and Play, is a _device capability model_ schema that describes device capabilities. Use the device capability model to generate your embedded device code and configure a cloud-based solution such as an IoT Central application.
[Azure IoT Edge](../iot-edge/about-iot-edge.md) lets you offload parts of your IoT workload from your Azure cloud services to your devices. IoT Edge can reduce latency in your solution, reduce the amount of data your devices exchange with the cloud, and enable off-line scenarios. You can manage IoT Edge devices from IoT Central and some solution accelerators.
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-overview.md
The Device Update Agent consists of two conceptual layers:
-* The Interface Layer builds on top of [Azure IoT Plug and Play
-(PnP)](../iot-develop/overview-iot-plug-and-play.md)
+* The Interface Layer builds on top of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md)
allowing for messaging to flow between the Device Update Agent and Device Update Services. * The Platform Layer is responsible for the high-level update actions of Download, Install, and Apply that may be platform, or device specific.
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-plug-and-play.md
# Device Update for IoT Hub and IoT Plug and Play
-Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service will send and receive properties and messages to and from devices using PnP interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model-id as described below.
+Device Update for IoT Hub uses [IoT Plug and Play](../iot-develop/index.yml) to discover and manage devices that are over-the-air update capable. The Device Update service will send and receive properties and messages to and from devices using IoT Plug and Play interfaces. Device Update for IoT Hub requires IoT devices to implement the following interfaces and model-id as described below.
Concepts: * Understand the [IoT Plug and Play device client](../iot-develop/concepts-developer-guide-device.md?pivots=programming-language-csharp).
Concepts:
The 'ADUCoreInterface' interface is used to send update actions and metadata to devices and receive update status from devices. The 'ADU Core' interface is split into two Object properties.
-The expected component name in your model is **"azureDeviceUpdateAgent"** when implementing this interface. [Learn more about Azure IoT PnP Components](../iot-develop/concepts-modeling-guide.md)
+The expected component name in your model is **"azureDeviceUpdateAgent"** when implementing this interface. [Learn more about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
### Agent Metadata
Service Metadata contains fields that the Device Update services uses to communi
The Device Information Interface is a concept used within [IoT Plug and Play architecture](../iot-develop/overview-iot-plug-and-play.md). It contains device to cloud properties that provide information about the hardware and operating system of the device. Device Update for IoT Hub uses the DeviceInformation.manufacturer and DeviceInformation.model properties for telemetry and diagnostics. To learn more about Device Information interface, see this [example](https://devicemodels.azure.com/dtmi/azure/devicemanagement/deviceinformation-1.json).
-The expected component name in your model is **deviceInformation** when implementing this interface. [Learn about Azure IoT PnP Components](../iot-develop/concepts-modeling-guide.md)
+The expected component name in your model is **deviceInformation** when implementing this interface. [Learn about Azure IoT Plug and Play Components](../iot-develop/concepts-modeling-guide.md)
|Name|Type|Schema|Direction|Description|Example| |-|-|||--|--|
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/understand-device-update.md
platform. Device Update for IoT Hub also provides open-source code if you are no
running one of the above platforms. You can port the agent to the distribution you are running.
-Device Update works with IoT Plug and Play (PnP) and can manage any device that supports
-the required PnP interfaces. For more information, see [Device Update for IoT Hub and
+Device Update works with IoT Plug and Play and can manage any device that supports
+the required IoT Plug and Play interfaces. For more information, see [Device Update for IoT Hub and
IoT Plug and Play](device-update-plug-and-play.md). ## Support for a wide range of update artifacts
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
Title: Weather forecast using Azure Machine Learning Studio (classic) with IoT Hub data
-description: Use Azure Machine Learning Studio (classic) to predict the chance of rain based on the temperature and humidity data your IoT hub collects from a sensor.
+ Title: Weather forecast using Machine Learning Studio (classic) with IoT Hub data
+description: Use ML Studio (classic) to predict the chance of rain based on the temperature and humidity data your IoT hub collects from a sensor.
keywords: weather forecast machine learning
Last updated 09/16/2020
-# Weather forecast using the sensor data from your IoT hub in Azure Machine Learning Studio (classic)
+# Weather forecast using the sensor data from your IoT hub in Machine Learning Studio (classic)
![End-to-end diagram](media/iot-hub-get-started-e2e-diagram/6.png) [!INCLUDE [iot-hub-get-started-note](../../includes/iot-hub-get-started-note.md)]
-Machine learning is a technique of data science that helps computers learn from existing data to forecast future behaviors, outcomes, and trends. Azure Machine Learning Studio (classic) is a cloud predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions. In this article, you learn how to use Azure Machine Learning Studio (classic) to do weather forecasting (chance of rain) using the temperature and humidity data from your Azure IoT hub. The chance of rain is the output of a prepared weather prediction model. The model is built upon historic data to forecast chance of rain based on temperature and humidity.
+Machine learning is a technique of data science that helps computers learn from existing data to forecast future behaviors, outcomes, and trends. ML Studio (classic) is a cloud predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions. In this article, you learn how to use ML Studio (classic) to do weather forecasting (chance of rain) using the temperature and humidity data from your Azure IoT hub. The chance of rain is the output of a prepared weather prediction model. The model is built upon historic data to forecast chance of rain based on temperature and humidity.
## Prerequisites
Machine learning is a technique of data science that helps computers learn from
- An active Azure subscription. - An Azure IoT hub under your subscription. - A client application that sends messages to your Azure IoT hub.-- An [Azure Machine Learning Studio (classic)](https://studio.azureml.net/) account.
+- An [ML Studio (classic)](https://studio.azureml.net/) account.
- An [Azure Storage account](../storage/common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#types-of-storage-accounts), A **General-purpose v2** account is preferred, but any Azure Storage account that supports Azure Blob storage will also work. > [!Note]
-> This article uses Azure Stream Analytics and several other paid services. Extra charges are incurred in Azure Stream Analytics when data must be transferred across Azure regions. For this reason, it would be good to ensure that your Resource Group, IoT Hub, and Azure Storage account -- as well as the Machine Learning Studio (classic) workspace and Azure Stream Analytics Job added later in this tutorial -- are all located in the same Azure region. You can check regional support for Azure Machine Learning Studio (classic) and other Azure services on the [Azure product availability by region page](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-studio&regions=all).
+> This article uses Azure Stream Analytics and several other paid services. Extra charges are incurred in Azure Stream Analytics when data must be transferred across Azure regions. For this reason, it would be good to ensure that your Resource Group, IoT Hub, and Azure Storage account -- as well as the Machine Learning Studio (classic) workspace and Azure Stream Analytics Job added later in this tutorial -- are all located in the same Azure region. You can check regional support for ML Studio (classic) and other Azure services on the [Azure product availability by region page](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-studio&regions=all).
## Deploy the weather prediction model as a web service
In this section you get the weather prediction model from the Azure AI Library.
### Get the weather prediction model
-In this section you get the weather prediction model from the Azure AI Gallery and open it in Azure Machine Learning Studio (classic).
+In this section you get the weather prediction model from the Azure AI Gallery and open it in ML Studio (classic).
1. Go to the [weather prediction model page](https://gallery.cortanaintelligence.com/Experiment/Weather-prediction-model-1). ![Open the weather prediction model page in Azure AI Gallery](media/iot-hub-weather-forecast-machine-learning/weather-prediction-model-in-azure-ai-gallery.png)
-1. Select **Open in Studio (classic)** to open the model in Microsoft Azure Machine Learning Studio (classic). Select a region near your IoT hub and the correct workspace in the **Copy experiment from Gallery** pop-up.
+1. Select **Open in Studio (classic)** to open the model in Microsoft ML Studio (classic). Select a region near your IoT hub and the correct workspace in the **Copy experiment from Gallery** pop-up.
- ![Open the weather prediction model in Azure Machine Learning Studio (classic)](media/iot-hub-weather-forecast-machine-learning/open-ml-studio.png)
+ ![Open the weather prediction model in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/open-ml-studio.png)
### Add an R-script module to clean temperature and humidity data For the model to behave correctly, the temperature and humidity data must be convertible to numeric data. In this section, you add an R-script module to the weather prediction model that removes any rows that have data values for temperature or humidity that cannot be converted to numeric values.
-1. On the left-side of the Azure Machine Learning Studio (classic) window, select the arrow to expand the tools panel. Enter "Execute" into the search box. Select the **Execute R Script** module.
+1. On the left-side of the ML Studio (classic) window, select the arrow to expand the tools panel. Enter "Execute" into the search box. Select the **Execute R Script** module.
![Select Execute R Script module](media/iot-hub-weather-forecast-machine-learning/select-r-script-module.png)
In this section, you validate the model, set up a predictive web service based o
1. Select **SET UP WEB SERVICE** > **Predictive Web Service**. The predictive experiment diagram opens.
- ![Deploy the weather prediction model in Azure Machine Learning Studio (classic)](media/iot-hub-weather-forecast-machine-learning/predictive-experiment.png)
+ ![Deploy the weather prediction model in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/predictive-experiment.png)
1. In the predictive experiment diagram, delete the connection between the **Web service input** module and the **Select Columns in Dataset** at the top. Then drag the **Web service input** module somewhere near the **Score Model** module and connect it as shown:
- ![Connect two modules in Azure Machine Learning Studio (classic)](media/iot-hub-weather-forecast-machine-learning/connect-modules-azure-machine-learning-studio.png)
+ ![Connect two modules in ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/connect-modules-azure-machine-learning-studio.png)
1. Select **RUN** to validate the steps in the model.
Run the client application to start collecting and sending temperature and humid
1. Select your subscription > **Storage Accounts** > your storage account > **Blob Containers** > your container.
1. Download a .csv file to see the result. The last column records the chance of rain.
- ![Get weather forecast result with Azure Machine Learning Studio (classic)](media/iot-hub-weather-forecast-machine-learning/weather-forecast-result.png)
+ ![Get weather forecast result with ML Studio (classic)](media/iot-hub-weather-forecast-machine-learning/weather-forecast-result.png)
## Summary
-You've successfully used Azure Machine Learning Studio (classic) to produce the chance of rain based on the temperature and humidity data that your IoT hub receives.
+You've successfully used ML Studio (classic) to produce the chance of rain based on the temperature and humidity data that your IoT hub receives.
[!INCLUDE [iot-hub-get-started-next-steps](../../includes/iot-hub-get-started-next-steps.md)]
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
key-vault Move Resourcegroup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/move-resourcegroup.md
Your organization may have implemented Azure Policy with enforcement or exclusio
### Example
-You have an application connected to key vault that creates certificates that are valid for two years. The resource group where you are attempting to move your key vault has a policy assignment that blocks the creation of certificates that are valid for longer than one year. After moving your key vault to the new resource group the operation to create a certificate that is valid for two years will be blocked by an Azure policy assignment.
+You have an application connected to key vault that creates certificates that are valid for two years. The resource group where you are attempting to move your key vault has a policy assignment that blocks the creation of certificates that are valid for longer than one year. After moving your key vault to the new resource group, the operation to create a certificate that is valid for two years will be blocked by an Azure Policy assignment.
### Solution
key-vault Soft Delete Change https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-change.md
If your application assumes that soft-delete isn't enabled and expects that dele
Security principals that need access to permanently delete secrets must be granted more access policy permissions to purge these secrets and the key vault.
-Disable any Azure policy on your key vaults that mandates that soft-delete is turned off. You might need to escalate this issue to an administrator who controls Azure policies applied to your environment. If this policy isn't disabled, you might lose the ability to create new key vaults in the scope of the applied policy.
+Disable any Azure Policy assignments on your key vaults that mandate that soft-delete is turned off. You might need to escalate this issue to an administrator who controls Azure Policy assignments applied to your environment. If such an assignment isn't disabled, you might lose the ability to create new key vaults in the scope of the applied policy assignment.
If your organization is subject to legal compliance requirements and can't allow deleted key vaults and secrets to remain in a recoverable state for an extended period of time, you'll have to adjust the retention period of soft-delete to meet your organization's standards. You can configure the retention period to last from 7 to 90 days.
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
lab-services Add Lab Creator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/add-lab-creator.md
Title: Add a user as a lab creator in Azure Lab Services description: This article shows how to add a user to the Lab Creator role for a lab account in Azure Lab Services. The lab creators can create labs within this lab account. Previously updated : 06/26/2020 Last updated : 07/26/2021+ # Add lab creators to a lab account in Azure Lab Services
This article shows you how to add users as lab creators to a lab account in Azur
## Add Microsoft user account to Lab Creator role To set up a classroom lab in a lab account, the user must be a member of the **Lab Creator** role in the lab account. The account you used to create the lab account is automatically added to this role. If you are planning to use the same user account to create a classroom lab, you can skip this step. To use another user account to create a classroom lab, do the following steps:
-To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role:
+To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
++
+1. On the **Lab Account** page, select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment (Preview)**.
+
+ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+
+1. On the **Role** tab, select the **Lab Creator** role.
+
+ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+
+1. On the **Members** tab, select the user you want to add to the Lab Creators role.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-1. On the **Lab Account** page, select **Access control (IAM)**, and click **+ Add role assignment** on the toolbar.
- ![Access Control -> Add Role Assignment button](./media/tutorial-setup-lab-account/add-role-assignment-button.png)
-1. On the **Add role assignment** page, select **Lab Creator** for **Role**, select the user you want to add to the Lab Creators role, and select **Save**.
- ![Add lab creator](./media/tutorial-setup-lab-account/add-lab-creator.png)
> [!NOTE] > If you are adding a non-Microsoft account user as a lab creator, see the [Add a non-Microsoft account user as a lab creator](#add-a-non-microsoft-account-user-as-a-lab-creator) section.
lab-services How To Add User Lab Owner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-add-user-lab-owner.md
Title: How to add additional owners to a lab in Azure Lab Services description: This article shows you how an administrator can add a user as an owner to a lab in Azure Lab Services. Previously updated : 09/04/2020 Last updated : 08/03/2021+ # How to add additional owners to an existing lab in Azure Lab Services This article shows you how you, as an administrator, can add additional owners to an existing lab. ## Add user to the reader role for the lab account
-To add an user as an additional owner to an existing lab, you must first give the user **read** permissions on the lab account.
+1. Back on the **Lab Account** page, select **All labs** on the left menu.
+2. Select the **lab** to which you want to add a user as an owner.
+
+ ![Select the lab ](./media/how-to-add-user-lab-owner/select-lab.png)
+1. In the navigation menu, select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment (Preview)**.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All Services** on the left menu. Search for **Lab Services**, and then select it.
-3. Select your **lab account** from the list.
-2. On the **Lab Account page**, select **Access Control (IAM)** on the left menu.
-2. On the **Access control (IAM)** page, select **Add** on the toolbar, and the select **Add role assignment**.
+ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
- ![Role assignment for the lab account ](./media/how-to-add-user-lab-owner/lab-account-access-control-page.png)
-3. On the **Add a role assignment** page, do the following steps:
- 1. Select **Reader** for the **role**.
- 2. Select the user.
- 3. Select **Save**.
+1. On the **Role** tab, select the **Reader** role.
- ![Add user to the reader role for the lab account ](./media/how-to-add-user-lab-owner/reader-lab-account.png)
+ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+1. On the **Members** tab, select the user you want to add to the Reader role.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
## Add user to the owner role for the lab > [!NOTE]
-> If the user has only Reader access on the a lab, the lab isn't shown in labs.azure.com.
+> If the user has only Reader access on a lab, the lab isn't shown in labs.azure.com. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
++
+1. On the **Lab Account** page, select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment (Preview)**.
+
+ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+
+1. On the **Role** tab, select the **Owner** role.
+
+ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+
+1. On the **Members** tab, select the user you want to add to the Owner role.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-1. Back on the **Lab Account** page, select **All labs** on the left menu.
-2. Select the **lab** to which you want to add user as an owner.
-
- ![Select the lab ](./media/how-to-add-user-lab-owner/select-lab.png)
-3. On the **Lab** page, select **Access control (IAM)** on the left menu.
-4. On the **Access control (IAM)** page, select **Add** on the toolbar, and the select **Add role assignment**.
-5. On the **Add a role assignment** page, do the following steps:
- 1. Select **Owner** for the **role**.
- 2. Select the user.
- 3. Select **Save**.
## Next steps Confirm that the user sees the lab upon logging into the [Lab Services portal](https://labs.azure.com).
lab-services Tutorial Setup Lab Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/tutorial-setup-lab-account.md
Title: Set up a lab account with Azure Lab Services | Microsoft Docs description: Learn how to set up a lab account with Azure Lab Services, add a lab creator, and specify Marketplace images to be used by labs in the lab account. Previously updated : 06/26/2020 Last updated : 07/26/2021+ # Tutorial: Set up a lab account with Azure Lab Services
The following steps illustrate how to use the Azure portal to create a lab accou
![Lab account page](./media/tutorial-setup-lab-account/lab-account-page.png) ## Add a user to the Lab Creator role
-To set up a classroom lab in a lab account, the user must be a member of the **Lab Creator** role in the lab account. To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role:
+To set up a classroom lab in a lab account, the user must be a member of the **Lab Creator** role in the lab account. To provide educators the permission to create labs for their classes, add them to the **Lab Creator** role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
> [!NOTE] > The account you used to create the lab account is automatically added to this role. If you are planning to use the same user account to create a classroom lab in this tutorial, skip this step.
-1. On the **Lab Account** page, select **Access control (IAM)**, select **+ Add** on the toolbar, and then select **+ Add role assignment** on the toolbar.
- ![Access Control -> Add Role Assignment button](./media/tutorial-setup-lab-account/add-role-assignment-button.png)
-1. On the **Add role assignment** page, select **Lab Creator** for **Role**, select the user you want to add to the Lab Creators role, and select **Save**.
+1. On the **Lab Account** page, select **Access control (IAM)**.
- ![Add lab creator](./media/tutorial-setup-lab-account/add-lab-creator.png)
+1. Select **Add** > **Add role assignment (Preview)**.
+
+ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+
+1. On the **Role** tab, select the **Lab Creator** role.
+
+ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+
+1. On the **Members** tab, select the user you want to add to the Lab Creators role.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
## Next steps
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-tcp-reset.md
TCP keep-alive works for scenarios where battery life isn't a constraint. It isn
- TCP reset only sent during TCP connection in ESTABLISHED state. - TCP idle timeout does not affect load balancing rules on UDP protocol.
+- TCP reset is not supported for ILB HA ports when a network virtual appliance is in the path. A workaround could be to use an outbound rule with TCP reset from the NVA.
## Next steps
load-balancer Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/python-samples.md
+
+ Title: Python Samples
+
+description: With these samples, load balance traffic to multiple websites. Deploy load balancers in an HA configuration.
+
+documentationcenter: load-balancer
++++ Last updated : 08/20/2021+++
+# Python Samples for Azure Load Balancer
+
+The following table includes links to code samples built using Python.
+
+| Script | Description |
+|-|-|
+| [Getting Started with Azure Resource Manager for public and internal load balancers in Python](/samples/azure-samples/azure-samples-python-management/network-python-manage-loadbalancer) | Creates virtual machines in a load-balanced configuration. Sample includes internal and public load balancers. |
++
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/block-connections-connectors.md
To block creating a connection altogether in a logic app, follow these steps:
1. Next, to assign the policy definition where you want enforce the policy, [create a policy assignment](#create-policy-assignment).
-For more information about Azure policy definitions, see these topics:
+For more information about Azure Policy definitions, see these topics:
-* [Policy structure definition](../governance/policy/concepts/definition-structure.md)
+* [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md)
* [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md) * [Azure Policy built-in policy definitions for Azure Logic Apps](./policy-reference.md)
When you create a connection inside a logic app, that connection exists as separ
1. Next, to assign the policy definition where you want enforce the policy, [create a policy assignment](#create-policy-assignment).
-For more information about Azure policy definitions, see these topics:
+For more information about Azure Policy definitions, see these topics:
-* [Policy structure definition](../governance/policy/concepts/definition-structure.md)
+* [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md)
* [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md) * [Azure Policy built-in policy definitions for Azure Logic Apps](./policy-reference.md)
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021 ms.suite: integration
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
* The compute instance is also a secure training compute target similar to compute clusters, but it is single node. * You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#on-behalf)**. * You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance as per your needs.
-* To save on costs, **[create a schedule (preview)](how-to-create-manage-compute-instance.md#schedule)** to automatically start and stop the compute instance (preview).
+* To save on costs, **[create a schedule (preview)](how-to-create-manage-compute-instance.md#schedule)** to automatically start and stop the compute instance.
## <a name="contents"></a>Tools and environments
A compute instance:
* Has a job queue. * Runs jobs securely in a virtual network environment, without requiring enterprises to open up SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container. * Can run multiple small jobs in parallel (preview). Two jobs per core can run in parallel while the rest of the jobs are queued.
-* Supports single-node multi-GPU distributed training jobs
+* Supports single-node multi-GPU [distributed training](how-to-train-distributed-gpu.md) jobs
You can use compute instance as a local inferencing deployment target for test/debug scenarios.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
A *compute target* is a designated compute resource or environment where you run
In a typical model development lifecycle, you might: 1. Start by developing and experimenting on a small amount of data. At this stage, use your local environment, such as a local computer or cloud-based virtual machine (VM), as your compute target.
-1. Scale up to larger data, or do distributed training by using one of these [training compute targets](#train).
+1. Scale up to larger data, or do [distributed training](how-to-train-distributed-gpu.md) by using one of these [training compute targets](#train).
1. After your model is ready, deploy it to a web hosting environment with one of these [deployment compute targets](#deploy). The compute resources you use for your compute targets are attached to a [workspace](concept-workspace.md). Compute resources other than the local machine are shared by users of the workspace. ## <a name="train"></a> Training compute targets
-Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform distributed training, use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a run. You can also attach your own compute resource, although support for different scenarios might vary.
+Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a run. You can also attach your own compute resource, although support for different scenarios might vary.
[!INCLUDE [aml-compute-target-train](../../includes/aml-compute-target-train.md)]
machine-learning Concept Distributed Training https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-distributed-training.md
In distributed training the workload to train a model is split up and shared amo
## Deep learning and distributed training
-There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training on deep learning models, the [Azure Machine Learning SDK in Python](/python/api/overview/azure/ml/intro) supports integrations with popular frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and can leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
+There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training on deep learning models, the [Azure Machine Learning SDK in Python](/python/api/overview/azure/ml/intro) supports integrations with popular frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and can leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
-* [Distributed training with PyTorch](how-to-train-pytorch.md#distributed-training)
-* [Distributed training with TensorFlow](how-to-train-tensorflow.md#distributed-training)
+* [Distributed training with PyTorch](how-to-train-distributed-gpu.md#pytorch)
+
+* [Distributed training with TensorFlow](how-to-train-distributed-gpu.md#tensorflow)
For ML models that don't require distributed training, see [train models with Azure Machine Learning](concept-train-machine-learning-model.md#python-sdk) for the different ways to train models using the Python SDK.
In model parallelism, worker nodes only need to synchronize the shared parameter
* Learn how to [use compute targets for model training](how-to-set-up-training-targets.md) with the Python SDK. * For a technical example, see the [reference architecture scenario](/azure/architecture/reference-architectures/ai/training-deep-learning).
-* [Train ML models with TensorFlow](how-to-train-tensorflow.md).
-* [Train ML models with PyTorch](how-to-train-pytorch.md).
+* Find tips for MPI, TensorFlow, and PyTorch in the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-ml-pipelines.md
In short, all of the complex tasks of the machine learning lifecycle can be help
Many programming ecosystems have tools that orchestrate resource, library, or compilation dependencies. Generally, these tools use file timestamps to calculate dependencies. When a file is changed, only it and its dependents are updated (downloaded, recompiled, or packaged). Azure Machine Learning pipelines extend this concept. Like traditional build tools, pipelines calculate dependencies between steps and only perform the necessary recalculations.
-The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps though. Every step may run in a different hardware and software environment. Data preparation might be a time-consuming process but not need to run on hardware with powerful GPUs, certain steps might require OS-specific software, you might want to use distributed training, and so forth.
+The dependency analysis in Azure Machine Learning pipelines is more sophisticated than simple timestamps though. Every step may run in a different hardware and software environment. Data preparation might be a time-consuming process but not need to run on hardware with powerful GPUs, certain steps might require OS-specific software, you might want to use [distributed training](how-to-train-distributed-gpu.md), and so forth.
Azure Machine Learning automatically orchestrates all of the dependencies between pipeline steps. This orchestration might include spinning up and down Docker images, attaching and detaching compute resources, and moving data between the steps in a consistent and automatic manner.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
When you create a new workspace, it automatically creates several Azure resource
+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that you use during training and when you deploy a model. To minimize costs, ACR is **lazy-loaded** until deployment images are created.
+ > [!NOTE]
+ > If your subscription setting requires adding tags to resources under it, creation of the Azure Container Registry (ACR) by Azure Machine Learning will fail, because Azure Machine Learning cannot set tags on the ACR.
+ + [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring information about your models. + [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-terminal.md
Learn more about [cloning Git repositories into your workspace file system](conc
## Install packages
- Install packages from a terminal window. Install Python packages into the **Python 3.6 - AzureML** environment. Install R packages into the **R** environment.
+ Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
Or you can install packages directly in Jupyter Notebook or RStudio:
Or you can install packages directly in Jupyter Notebook or RStudio:
## Add new kernels > [!WARNING]
-> While customizing the compute instance, make sure you do not delete the **azureml_py36** conda environment or **Python 3.6 - AzureML** kernel. This is needed for Jupyter/JupyterLab functionality
+> While customizing the compute instance, make sure you do not delete the **azureml_py36** or **azureml_py38** conda environments. Also do not delete the **Python 3.6 - AzureML** or **Python 3.8 - AzureML** kernels. These are needed for Jupyter/JupyterLab functionality.
To add a new Jupyter kernel to the compute instance:
Any of the [available Jupyter Kernels](https://github.com/jupyter/jupyter/wiki/J
Select **View active sessions** in the terminal toolbar to see a list of all active terminal sessions. When there are no active sessions, this tab will be disabled.
-Close any unused sessions to preserve your compute instance's resources.
+Close any unused sessions to preserve your compute instance's resources.
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-arc-kubernetes.md
Azure Arc enabled machine learning supports the following training scenarios:
## Deploy Azure Machine Learning extension
-Azure Arc enabled Kubernetes has a cluster extension functionality that enables you to install various agents including Azure policy, monitoring, machine learning, and many others. Azure Machine Learning requires the use of the *Microsoft.AzureML.Kubernetes* cluster extension to deploy the Azure Machine Learning agent on the Kubernetes cluster. Once the Azure Machine Learning extension is installed, you can attach the cluster to an Azure Machine Learning workspace and use it for training.
+Azure Arc enabled Kubernetes has cluster extension functionality that enables you to install various agents including Azure Policy definitions, monitoring, machine learning, and many others. Azure Machine Learning requires the use of the *Microsoft.AzureML.Kubernetes* cluster extension to deploy the Azure Machine Learning agent on the Kubernetes cluster. Once the Azure Machine Learning extension is installed, you can attach the cluster to an Azure Machine Learning workspace and use it for training.
Use the `k8s-extension` Azure CLI extension to deploy the Azure Machine Learning extension to your Azure Arc-enabled Kubernetes cluster.
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
These methods of creating an AKS cluster use the __default__ version of the clus
When **attaching** an existing AKS cluster, we support all currently supported AKS versions.
+> [!IMPORTANT]
+> Azure Kubernetes Service uses [Blobfuse FlexVolume driver](https://github.com/Azure/kubernetes-volume-drivers/blob/master/flexvolume/blobfuse/README.md) for versions <=1.16 and [Blob CSI driver](https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/README.md) for versions >=1.17.
+> Therefore, it is important to re-deploy or [update the web service](how-to-deploy-update-web-service.md) after a cluster upgrade so that the web service uses the correct driver for the cluster version.
+ > [!NOTE] > There may be edge cases where you have an older cluster that is no longer supported. In this case, the attach operation will return an error and list the currently supported versions. >
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Schedules can also be defined for [create on behalf of](#on-behalf) compute inst
1. Select **Add schedule** again if you want to create another schedule. Once the compute instance is created, you can view, edit, or add new schedules from the compute instance details section.
+Note that time zone labels don't account for daylight saving time. For instance, (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna is actually UTC+02:00 during daylight saving time.
### Create a schedule with a Resource Manager template
You can schedule the automatic start and stop of a compute instance by using a R
// hyphen (meaning an inclusive range). ```
-Use Azure policy to enforce a shutdown schedule exists for every compute instance in a subscription or default to a schedule if nothing exists.
+Use Azure Policy to enforce that a shutdown schedule exists for every compute instance in a subscription, or to default to a schedule if none exists.
## <a name="setup-script"></a> Customize the compute instance with a script (preview)
You can also use the following environment variables in your script:
3. CI_NAME 4. CI_LOCAL_UBUNTU_USER. This points to azureuser
-You can use setup script in conjunction with Azure policy to either enforce or default a setup script for every compute instance creation.
+You can use a setup script in conjunction with Azure Policy to either enforce a setup script for every compute instance creation or default to one if none is specified.
### Use the script in the studio
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
An inference configuration describes the Docker container and files to use when
The inference configuration below specifies that the machine learning deployment will use the file `echo_score.py` in the `./source_dir` directory to process incoming requests and that it will use the Docker image with the Python packages specified in the `project_environment` environment.
-You can use any [Azure Machine Learning curated environment](./resource-curated-environments.md) as the base Docker image when creating your project environment. We will install the required dependencies on top and store the resulting Docker image into the repository that is associated with your workspace.
+You can use any [Azure Machine Learning inference curated environments](concept-prebuilt-docker-images-inference.md#list-of-prebuilt-docker-images-for-inference) as the base Docker image when creating your project environment. We will install the required dependencies on top and store the resulting Docker image into the repository that is associated with your workspace.
+
+> [!NOTE]
+> Azure Machine Learning [inference source directory](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py#constructor&preserve-view=true) uploads do not respect **.gitignore** or **.amlignore** files.
# [Azure CLI](#tab/azcli)
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-update-web-service.md
See [ACI Service Update Method.](/python/api/azureml-core/azureml.core.webservic
> > You can not use the SDK to update a web service published from the Azure Machine Learning designer.
+> [!IMPORTANT]
+> Azure Kubernetes Service uses [Blobfuse FlexVolume driver](https://github.com/Azure/kubernetes-volume-drivers/blob/master/flexvolume/blobfuse/README.md) for versions <=1.16 and [Blob CSI driver](https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/README.md) for versions >=1.17.
+>
+> Therefore, it is important to re-deploy or update the web service after a cluster upgrade so that the web service uses the correct driver for the cluster version.
+
+> [!NOTE]
+> When an operation is already in progress, any new operation on that same web service will respond with a 409 Conflict error. For example, if a create or update web service operation is in progress and you trigger a new delete operation, it will throw an error.
+ **Using the SDK** The following code shows how to use the SDK to update the model, environment, and entry script for a web service:
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-inference-server-http.md
The following steps explain how the Azure Machine Learning inference HTTP server
:::image type="content" source="./media/how-to-inference-server-http/inference-server-architecture.png" alt-text="Diagram of the HTTP server process":::
+## How to integrate with Visual Studio Code
+
+There are two ways to use Visual Studio Code (VSCode) and the [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with the [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package.
+
+1. Start the AzureML Inference Server from a command line and use VSCode with the Python Extension to attach to the process.
+1. Set up `launch.json` in VSCode and start the AzureML Inference Server from within VSCode.
+
+In both cases, you can set breakpoints and debug step by step.
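+
+As an illustration of the first approach, the scoring script itself could start a [debugpy](https://pypi.org/project/debugpy/) listener so that VSCode can attach to the server process. This is a minimal sketch under that assumption; using debugpy this way is an editorial illustration, not a documented part of the azureml-inference-server-http workflow:
+
+```python
+# score.py -- minimal sketch; starting debugpy here is an assumption
+# for illustration, not a documented feature of the inference server.
+import debugpy
+
+# Listen for a VSCode "attach" debug session on port 5678.
+debugpy.listen(("0.0.0.0", 5678))
+print("Waiting for VSCode debugger to attach...")
+debugpy.wait_for_client()  # Block until the debugger attaches.
+
+def init():
+    # Model loading would happen here.
+    pass
+
+def run(raw_data):
+    # Set a breakpoint here to inspect incoming requests.
+    return {"echo": raw_data}
+```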
+ ## Frequently asked questions ### Do I need to reload the server when changing the score script?
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
dataprep_step = PythonScriptStep(
``` > [!NOTE]
-> Concurrent writes to a `OutputFileDatasetConfig` will fail. Do not attempt to use a single `OutputFileDatasetConfig` concurrently. Do not share a single `OutputFileDatasetConfig` in a multiprocessing situation, such as when using distributed training.
+> Concurrent writes to a `OutputFileDatasetConfig` will fail. Do not attempt to use a single `OutputFileDatasetConfig` concurrently. Do not share a single `OutputFileDatasetConfig` in a multiprocessing situation, such as when using [distributed training](how-to-train-distributed-gpu.md).
### Use `OutputFileDatasetConfig` as outputs of a training step
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
:::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
- * One public IP address. If you have Azure policy prohibiting Public IP creation then deployment of cluster/instances will fail
+ * One public IP address. If you have Azure Policy assignments prohibiting public IP creation, then deployment of clusters/instances will fail.
* One load balancer For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
In this article you learn how to secure the following training compute resources
For a compute instance, these resources are kept until the instance is deleted. Stopping the instance does not remove the resources. > [!IMPORTANT]
- > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure policy which prohibits creation of network security groups.
+ > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups.
* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
If you have command-line arguments you want to pass to your training script, you
If you want to override the default maximum time allowed for the run, you can do so via the **`max_run_duration_seconds`** parameter. The system will attempt to automatically cancel the run if it takes longer than this value. ### Specify a distributed job configuration
-If you want to run a distributed training job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
+If you want to run a [distributed training](how-to-train-distributed-gpu.md) job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
For more information and examples on running distributed Horovod, TensorFlow and PyTorch jobs, see:
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-distributed-gpu.md
+
+ Title: Distributed GPU training guide
+
+description: Distributed training with MPI, Horovod, DeepSpeed, PyTorch, PyTorch Lightning, Hugging Face Transformers, TensorFlow, and InfiniBand.
++++++ Last updated : 08/12/2021++
+# Distributed GPU training guide
+
+Learn more about how to use distributed GPU training code in Azure Machine Learning (ML). This article will not teach you about distributed training. It will help you run your existing distributed training code on Azure Machine Learning. It offers tips and examples for you to follow for each framework:
+
+* Message Passing Interface (MPI)
+ * Horovod
+ * DeepSpeed
+ * Environment variables from Open MPI
+* PyTorch
+ * Process group initialization
+ * Launch options
+ * DistributedDataParallel (per-process-launch)
+ * Using `torch.distributed.launch` (per-node-launch)
+ * PyTorch Lightning
+ * Hugging Face Transformers
+* TensorFlow
+ * Environment variables for TensorFlow (TF_CONFIG)
+* Accelerate GPU training with InfiniBand
+
+## Prerequisites
+
+Review these [basic concepts of distributed GPU training](concept-distributed-training.md) such as _data parallelism_, _distributed data parallelism_, and _model parallelism_.
+
+> [!TIP]
+> If you don't know which type of parallelism to use, more than 90% of the time you should use __Distributed Data Parallelism__.
+
+## MPI
+
+Azure ML offers an [MPI job](https://www.mcs.anl.gov/research/projects/mpi/) to launch a given number of processes in each node. You can adopt this approach to run distributed training using either per-process-launcher or per-node-launcher, depending on whether `process_count_per_node` is set to 1 (the default) for per-node-launcher, or equal to the number of devices/GPUs for per-process-launcher. Azure ML constructs the full MPI launch command (`mpirun`) behind the scenes. You can't provide your own full head-node-launcher commands like `mpirun` or `DeepSpeed launcher`.
+
+> [!TIP]
+> The base Docker image used by an Azure Machine Learning MPI job needs to have an MPI library installed. [Open MPI](https://www.open-mpi.org/) is included in all the [AzureML GPU base images](https://github.com/Azure/AzureML-Containers). When you use a custom Docker image, you are responsible for making sure the image includes an MPI library. Open MPI is recommended, but you can also use a different MPI implementation such as Intel MPI. Azure ML also provides [curated environments](resource-curated-environments.md) for popular frameworks.
+
+To run distributed training using MPI, follow these steps:
+
+1. Use an Azure ML environment with the preferred deep learning framework and MPI. AzureML provides [curated environments](resource-curated-environments.md) for popular frameworks.
+1. Define `MpiConfiguration` with `process_count_per_node` and `node_count`. `process_count_per_node` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script will be responsible for launching the processes per node.
+1. Pass the `MpiConfiguration` object to the `distributed_job_config` parameter of `ScriptRunConfig`.
+
+```python
+from azureml.core import Workspace, ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import MpiConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = MpiConfiguration(process_count_per_node=4, node_count=2)
+
+run_config = ScriptRunConfig(
+ source_directory= './src',
+ script='train.py',
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+# submit the run configuration to start the job
+run = Experiment(ws, "experiment_name").submit(run_config)
+```
+
+### Horovod
+
+Use the MPI job configuration when you use [Horovod](https://horovod.readthedocs.io/en/stable/index.html) for distributed training with a deep learning framework.
+
+Make sure your code follows these tips:
+
+* The training code is instrumented correctly with Horovod before adding the Azure ML parts
+* Your Azure ML environment contains Horovod and MPI. The PyTorch and TensorFlow curated GPU environments come pre-configured with Horovod and its dependencies.
+* Create an `MpiConfiguration` with your desired distribution.
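+
+For illustration, a minimal Horovod-instrumented PyTorch training script might look like the following sketch (the model, optimizer, and learning-rate scaling are placeholder assumptions, not prescribed values):
+
+```python
+# train.py -- minimal Horovod instrumentation sketch for PyTorch.
+import torch
+import horovod.torch as hvd
+
+hvd.init()  # Azure ML's MPI job launches the processes; Horovod joins them.
+torch.cuda.set_device(hvd.local_rank())  # Pin each process to one GPU.
+
+model = torch.nn.Linear(10, 2).cuda()  # Placeholder model.
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
+
+# Wrap the optimizer so gradients are averaged across all workers.
+optimizer = hvd.DistributedOptimizer(
+    optimizer, named_parameters=model.named_parameters())
+
+# Start every worker from the same initial weights.
+hvd.broadcast_parameters(model.state_dict(), root_rank=0)
+```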
+
+### Horovod example
+
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+
+### DeepSpeed
+
+Don't use DeepSpeed's custom launcher to run distributed training with the [DeepSpeed](https://www.deepspeed.ai/) library on Azure ML. Instead, configure an MPI job to launch the training job [with MPI](https://www.deepspeed.ai/getting-started/#mpi-and-azureml-compatibility).
+
+Make sure your code follows these tips:
+
+* Your Azure ML environment contains DeepSpeed and its dependencies, Open MPI, and mpi4py.
+* Create an `MpiConfiguration` with your distribution.
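+
+As a rough sketch, the training script would then initialize DeepSpeed itself. This assumes a DeepSpeed version whose `deepspeed.initialize` accepts a `config` dictionary; the model and settings below are placeholders:
+
+```python
+# train.py -- minimal DeepSpeed initialization sketch. Azure ML's MPI job
+# launches the processes, so DeepSpeed's own launcher is not used.
+import torch
+import deepspeed
+
+ds_config = {
+    "train_batch_size": 16,        # Placeholder settings.
+    "fp16": {"enabled": True},
+}
+
+model = torch.nn.Linear(10, 2)     # Placeholder model.
+
+# Wraps the model and returns a distributed-aware engine and optimizer.
+# Passing a config dict here is an assumption about the DeepSpeed version.
+model_engine, optimizer, _, _ = deepspeed.initialize(
+    model=model,
+    model_parameters=model.parameters(),
+    config=ds_config,
+)
+```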
+
+### DeepSpeed example
+
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/deepspeed/cifar)
+
+### Environment variables from Open MPI
+
+When running MPI jobs with Open MPI images, the following environment variables are set for each launched process:
+
+1. `OMPI_COMM_WORLD_RANK` - the rank of the process
+2. `OMPI_COMM_WORLD_SIZE` - the world size
+3. `AZ_BATCH_MASTER_NODE` - primary address with port, `MASTER_ADDR:MASTER_PORT`
+4. `OMPI_COMM_WORLD_LOCAL_RANK` - the local rank of the process on the node
+5. `OMPI_COMM_WORLD_LOCAL_SIZE` - number of processes on the node
+
+> [!TIP]
+> Despite the name, the environment variable `OMPI_COMM_WORLD_NODE_RANK` does not correspond to `NODE_RANK`. To use per-node-launcher, set `process_count_per_node=1` and use `OMPI_COMM_WORLD_RANK` as the `NODE_RANK`.
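+
+For illustration, a per-process-launched training script could map these variables onto the ones `torch.distributed` expects. The parsing below is an editorial sketch, not something Azure ML performs for you in an MPI job:
+
+```python
+# Minimal sketch: derive torch.distributed settings from Open MPI variables.
+import os
+import torch.distributed as dist
+
+rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
+world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"])
+
+# AZ_BATCH_MASTER_NODE has the form "MASTER_ADDR:MASTER_PORT".
+master_addr, master_port = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
+os.environ["MASTER_ADDR"] = master_addr
+os.environ["MASTER_PORT"] = master_port
+
+dist.init_process_group(backend="nccl", init_method="env://",
+                        rank=rank, world_size=world_size)
+```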
+
+## PyTorch
+
+Azure ML supports running distributed jobs using PyTorch's native distributed training capabilities (`torch.distributed`).
+
+> [!TIP]
+> For data parallelism, the [official PyTorch guidance](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel) is to use DistributedDataParallel (DDP) over DataParallel for both single-node and multi-node distributed training. PyTorch also [recommends using DistributedDataParallel over the multiprocessing package](https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel). Azure Machine Learning documentation and examples will therefore focus on DistributedDataParallel training.
+
+### Process group initialization
+
+The backbone of any distributed training is based on a group of processes that know each other and can communicate with each other using a backend. For PyTorch, the process group is created by calling [torch.distributed.init_process_group](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) in __all distributed processes__ to collectively form a process group.
+
+```python
+torch.distributed.init_process_group(backend='nccl', init_method='env://', ...)
+```
+
+The most common communication backends used are `mpi`, `nccl`, and `gloo`. For GPU-based training, `nccl` is recommended for best performance and should be used whenever possible.
+
+`init_method` tells each process how to discover the others, and how they initialize and verify the process group using the communication backend. By default, if `init_method` is not specified, PyTorch uses the environment variable initialization method (`env://`). This is the recommended initialization method for running distributed PyTorch on Azure ML. PyTorch looks for the following environment variables for initialization:
+
+- **`MASTER_ADDR`** - IP address of the machine that will host the process with rank 0.
+- **`MASTER_PORT`** - A free port on the machine that will host the process with rank 0.
+- **`WORLD_SIZE`** - The total number of processes. Should be equal to the total number of devices (GPU) used for distributed training.
+- **`RANK`** - The (global) rank of the current process. The possible values are 0 to (world size - 1).
+
+For more information on process group initialization, see the [PyTorch documentation](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group).
+
+Beyond these, many applications will also need the following environment variables:
+- **`LOCAL_RANK`** - The local (relative) rank of the process within the node. The possible values are 0 to (# of processes on the node - 1). This information is useful because many operations, such as data preparation, should be performed only once per node, usually on local_rank = 0.
+- **`NODE_RANK`** - The rank of the node for multi-node training. The possible values are 0 to (total # of nodes - 1).
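+
+Putting these variables together, a training script's setup often looks like the following minimal sketch (the once-per-node work is a placeholder assumption):
+
+```python
+# Minimal sketch: process group setup from the environment variables above.
+import os
+import torch
+import torch.distributed as dist
+
+# env:// reads MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK.
+dist.init_process_group(backend="nccl", init_method="env://")
+
+local_rank = int(os.environ["LOCAL_RANK"])
+torch.cuda.set_device(local_rank)  # One GPU per process.
+
+if local_rank == 0:
+    # Once-per-node work, such as data preparation, goes here.
+    pass
+```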
+
+### PyTorch launch options
+
+The Azure ML PyTorch job supports two types of options for launching distributed training:
+
+- __Per-process-launcher__: The system will launch all distributed processes for you, with all the relevant information (such as environment variables) to set up the process group.
+- __Per-node-launcher__: You provide Azure ML with the utility launcher that will get run on each node. The utility launcher will handle launching each of the processes on a given node. Locally within each node, `RANK` and `LOCAL_RANK` are set up by the launcher. The **torch.distributed.launch** utility and PyTorch Lightning both belong in this category.
+
+There are no fundamental differences between these launch options. The choice is largely up to your preference or the conventions of the frameworks/libraries built on top of vanilla PyTorch (such as Lightning or Hugging Face).
+
+The following sections go into more detail on how to configure Azure ML PyTorch jobs for each of the launch options.
+
+### <a name="per-process-launch"></a> DistributedDataParallel (per-process-launch)
+
+You don't need to use a launcher utility like `torch.distributed.launch`. To run a distributed PyTorch job:
+
+1. Specify the training script and arguments
+1. Create a `PyTorchConfiguration` and specify the `process_count` and `node_count`. The `process_count` corresponds to the total number of processes you want to run for your job. `process_count` should typically equal `# GPUs per node x # nodes`. If `process_count` isn't specified, Azure ML will by default launch one process per node.
+
+Azure ML will set the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, and `NODE_RANK` environment variables on each node, and set the process-level `RANK` and `LOCAL_RANK` environment variables.
+
+To use this option for multi-process-per-node training, use Azure ML Python SDK `>= 1.22.0`; the `process_count` parameter was introduced in 1.22.0.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = PyTorchConfiguration(process_count=8, node_count=2)
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ arguments=['--epochs', 50],
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
+
+> [!TIP]
+> If your training script passes information like local rank or rank as script arguments, you can reference the environment variable(s) in the arguments:
+>
+> ```python
+> arguments=['--epochs', 50, '--local_rank', $LOCAL_RANK]
+> ```
+
+### Pytorch per-process-launch example
+
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+
+### <a name="per-node-launch"></a> Using `torch.distributed.launch` (per-node-launch)
+
+PyTorch provides a launch utility in [torch.distributed.launch](https://pytorch.org/docs/stable/distributed.html#launch-utility) that you can use to launch multiple processes per node. The `torch.distributed.launch` module spawns multiple training processes on each of the nodes.
+
+The following steps demonstrate how to configure a PyTorch job with a per-node-launcher on Azure ML. The job achieves the equivalent of running the following command:
+
+```shell
+python -m torch.distributed.launch --nproc_per_node <num processes per node> \
+ --nnodes <num nodes> --node_rank $NODE_RANK --master_addr $MASTER_ADDR \
+ --master_port $MASTER_PORT --use_env \
+ <your training script> <your script arguments>
+```
+
+1. Provide the `torch.distributed.launch` command to the `command` parameter of the `ScriptRunConfig` constructor. Azure ML runs this command on each node of your training cluster. `--nproc_per_node` should be less than or equal to the number of GPUs available on each node. `MASTER_ADDR`, `MASTER_PORT`, and `NODE_RANK` are all set by Azure ML, so you can just reference the environment variables in the command. Azure ML sets `MASTER_PORT` to `6105`, but you can pass a different value to the `--master_port` argument of the `torch.distributed.launch` command if you wish. (The launch utility will reset the environment variables.)
+2. Create a `PyTorchConfiguration` and specify the `node_count`.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+curated_env_name = 'AzureML-PyTorch-1.6-GPU'
+pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = PyTorchConfiguration(node_count=2)
+launch_cmd = "python -m torch.distributed.launch --nproc_per_node 4 --nnodes 2 --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT --use_env train.py --epochs 50".split()
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ command=launch_cmd,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
+
+> [!TIP]
+> **Single-node multi-GPU training:**
+> If you are using the launch utility to run single-node multi-GPU PyTorch training, you do not need to specify the `distributed_job_config` parameter of ScriptRunConfig.
+>
+>```python
+> launch_cmd = "python -m torch.distributed.launch --nproc_per_node 4 --use_env train.py --epochs 50".split()
+>
+> run_config = ScriptRunConfig(
+> source_directory='./src',
+> command=launch_cmd,
+> compute_target=compute_target,
+> environment=pytorch_env,
+> )
+> ```
+
+### PyTorch per-node-launch example
+
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/pytorch/cifar-distributed)
+
+### PyTorch Lightning
+
+[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/) is a lightweight open-source library that provides a high-level interface for PyTorch. Lightning abstracts away many of the lower-level distributed training configurations required for vanilla PyTorch. Lightning allows you to run your training scripts in single-GPU, single-node multi-GPU, and multi-node multi-GPU settings. Behind the scenes, it launches multiple processes for you, similar to `torch.distributed.launch`.
+
+For single-node training (including single-node multi-GPU), you can run your code on Azure ML without needing to specify a `distributed_job_config`. For multi-node training, Lightning requires the following environment variables to be set on each node of your training cluster:
+
+- MASTER_ADDR
+- MASTER_PORT
+- NODE_RANK
+
+To run multi-node Lightning training on Azure ML, you can largely follow the [per-node-launch guide](#per-node-launch):
+
+- Define the `PyTorchConfiguration` and specify the `node_count`. Don't specify `process_count`, as Lightning internally handles launching the worker processes for each node.
+- For PyTorch jobs, Azure ML handles setting the MASTER_ADDR, MASTER_PORT, and NODE_RANK environment variables required by Lightning.
+- Lightning will handle computing the world size from the Trainer flags `--gpus` and `--num_nodes` and manage rank and local rank internally.
+
+```python
+from azureml.core import ScriptRunConfig, Experiment
+from azureml.core.runconfig import PyTorchConfiguration
+
+nnodes = 2
+args = ['--max_epochs', 50, '--gpus', 2, '--accelerator', 'ddp', '--num_nodes', nnodes]
+distr_config = PyTorchConfiguration(node_count=nnodes)
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ arguments=args,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+
+run = Experiment(ws, 'experiment_name').submit(run_config)
+```
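+
+For reference, here's a minimal sketch (not from the original article) of the `train.py` the job above might run; `LitModel` and the random data are placeholders, and the Trainer flags are parsed with Lightning's built-in argparse support:
+
+```python
+# train.py - minimal sketch; LitModel and the random dataset are placeholders
+from argparse import ArgumentParser
+
+import pytorch_lightning as pl
+import torch
+import torch.nn.functional as F
+from torch.utils.data import DataLoader, TensorDataset
+
+class LitModel(pl.LightningModule):
+    def __init__(self):
+        super().__init__()
+        self.layer = torch.nn.Linear(32, 2)
+
+    def training_step(self, batch, batch_idx):
+        x, y = batch
+        return F.cross_entropy(self.layer(x), y)
+
+    def configure_optimizers(self):
+        return torch.optim.SGD(self.parameters(), lr=0.1)
+
+if __name__ == '__main__':
+    # adds --gpus, --num_nodes, --accelerator, --max_epochs, and other Trainer flags
+    parser = pl.Trainer.add_argparse_args(ArgumentParser())
+    args = parser.parse_args()
+
+    data = DataLoader(TensorDataset(torch.randn(640, 32), torch.randint(0, 2, (640,))), batch_size=32)
+    trainer = pl.Trainer.from_argparse_args(args)
+    trainer.fit(LitModel(), data)
+```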
+
+### PyTorch Lightning example
+
+* [azureml-examples: Multi-node training with PyTorch Lightning](https://github.com/Azure/azureml-examples/blob/main/python-sdk/experimental/using-pytorch-lightning/4.train-multi-node-ddp.ipynb)
+
+### Hugging Face Transformers
+
+Hugging Face provides many [examples](https://github.com/huggingface/transformers/tree/master/examples) for using its Transformers library with `torch.distributed.launch` to run distributed training. To run these examples and your own custom training scripts using the Transformers Trainer API, follow the [Using `torch.distributed.launch`](#per-node-launch) section.
+
+Sample job configuration code to fine-tune the BERT large model on the text classification MNLI task using the `run_glue.py` script on one node with 8 GPUs:
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import PyTorchConfiguration
+
+distr_config = PyTorchConfiguration() # node_count defaults to 1
+launch_cmd = "python -m torch.distributed.launch --nproc_per_node 8 text-classification/run_glue.py --model_name_or_path bert-large-uncased-whole-word-masking --task_name mnli --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mnli_output".split()
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ command=launch_cmd,
+ compute_target=compute_target,
+ environment=pytorch_env,
+ distributed_job_config=distr_config,
+)
+```
+
+You can also use the [per-process-launch](#per-process-launch) option to run distributed training without using `torch.distributed.launch`. Keep in mind that the Transformers [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html?highlight=launch#trainingarguments) expect the local rank to be passed in as a script argument (`--local_rank`). `torch.distributed.launch` takes care of this when `--use_env=False`, but with per-process launch you'll need to pass the local rank explicitly as an argument to the training script (`--local_rank=$LOCAL_RANK`), because Azure ML only sets the `LOCAL_RANK` environment variable.
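+
+As an illustrative sketch (not from the original article), the same fine-tuning job could be configured with per-process launch roughly as follows, passing the local rank through as a string argument as described above:
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import PyTorchConfiguration
+
+# one node with 8 GPUs -> 8 total worker processes
+distr_config = PyTorchConfiguration(process_count=8, node_count=1)
+
+run_config = ScriptRunConfig(
+    source_directory='./src',
+    script='text-classification/run_glue.py',
+    arguments=[
+        '--model_name_or_path', 'bert-large-uncased-whole-word-masking',
+        '--task_name', 'mnli', '--do_train', '--do_eval',
+        '--max_seq_length', 128, '--per_device_train_batch_size', 8,
+        '--learning_rate', '2e-5', '--num_train_epochs', 3.0,
+        '--output_dir', '/tmp/mnli_output',
+        '--local_rank', '$LOCAL_RANK',  # expanded per process at run time
+    ],
+    compute_target=compute_target,
+    environment=pytorch_env,
+    distributed_job_config=distr_config,
+)
+```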
+
+## TensorFlow
+
+If you're using [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, such as TensorFlow 2.x's `tf.distribute.Strategy` API, you can launch the distributed job via Azure ML using the `TensorflowConfiguration`.
+
+To do so, specify a `TensorflowConfiguration` object to the `distributed_job_config` parameter of the `ScriptRunConfig` constructor. If you're using `tf.distribute.experimental.MultiWorkerMirroredStrategy`, specify the `worker_count` in the `TensorflowConfiguration` corresponding to the number of nodes for your training job.
+
+```python
+from azureml.core import ScriptRunConfig, Environment, Experiment
+from azureml.core.runconfig import TensorflowConfiguration
+
+curated_env_name = 'AzureML-TensorFlow-2.3-GPU'
+tf_env = Environment.get(workspace=ws, name=curated_env_name)
+distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=0)
+
+run_config = ScriptRunConfig(
+ source_directory='./src',
+ script='train.py',
+ compute_target=compute_target,
+ environment=tf_env,
+ distributed_job_config=distr_config,
+)
+
+# submit the run configuration to start the job
+run = Experiment(ws, "experiment_name").submit(run_config)
+```
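+
+On the training-script side, here's a minimal sketch (not from the linked example) of a `train.py` using `MultiWorkerMirroredStrategy`, with a placeholder model and random data:
+
+```python
+# train.py - minimal sketch with a placeholder model and random data
+import numpy as np
+import tensorflow as tf
+
+# create the strategy early, before other TensorFlow operations
+strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
+
+with strategy.scope():
+    # variables created in this scope are mirrored across all workers
+    model = tf.keras.Sequential([
+        tf.keras.layers.Dense(16, activation='relu', input_shape=(32,)),
+        tf.keras.layers.Dense(1),
+    ])
+    model.compile(optimizer='adam', loss='mse')
+
+x = np.random.rand(640, 32).astype('float32')
+y = np.random.rand(640, 1).astype('float32')
+model.fit(x, y, epochs=3, batch_size=64)
+```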
+
+If your training script uses the parameter server strategy for distributed training, such as for legacy TensorFlow 1.x, you'll also need to specify the number of parameter servers to use in the job, for example, `distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=1)`.
+
+### TF_CONFIG
+
+In TensorFlow, the **TF_CONFIG** environment variable is required for training on multiple machines. For TensorFlow jobs, Azure ML will configure and set the TF_CONFIG variable appropriately for each worker before executing your training script.
+
+You can access TF_CONFIG from your training script if you need to: `os.environ['TF_CONFIG']`.
+
+Example TF_CONFIG set on a chief worker node:
+```json
+TF_CONFIG='{
+ "cluster": {
+ "worker": ["host0:2222", "host1:2222"]
+ },
+ "task": {"type": "worker", "index": 0},
+ "environment": "cloud"
+}'
+```
+
+### TensorFlow example
+
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/python-sdk/workflows/train/tensorflow/mnist-distributed)
+
+## <a name="infiniband"></a> Accelerating GPU training with InfiniBand
+
+Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and InfiniBand support. These VMs communicate over the low-latency, high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tools, including NVIDIA's NCCL software). These SKUs are intended to meet the needs of computationally intensive, GPU-accelerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
+
+If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, such as `Standard_ND40rs_v2`, the OS image will come with the required Mellanox OFED driver preinstalled and preconfigured to enable InfiniBand.
+
+## Next steps
+
+* [Deploy machine learning models to Azure](how-to-deploy-and-where.md)
+* [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
+* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-pytorch.md
Azure Machine Learning also supports multi-node distributed PyTorch jobs so that
Azure ML supports running distributed PyTorch jobs with both Horovod and PyTorch's built-in DistributedDataParallel module.
-### Horovod
-[Horovod](https://github.com/uber/horovod) is an open-source, all-reduce framework for distributed training developed by Uber. It offers an easy path to writing distributed PyTorch code for training.
-
-Your training code will have to be instrumented with Horovod for distributed training. For more information on using Horovod with PyTorch, see the [Horovod documentation](https://horovod.readthedocs.io/en/stable/pytorch.html).
-
-Additionally, make sure your training environment includes the **horovod** package. If you are using a PyTorch curated environment, horovod is already included as one of the dependencies. If you are using your own environment, make sure the horovod dependency is included, for example:
-
-```yaml
-channels:
-- conda-forge
-dependencies:
-- python=3.6.2
-- pip:
- - azureml-defaults
- - torch==1.6.0
- - torchvision==0.7.0
- - horovod==0.19.5
-```
-
-In order to execute a distributed job using MPI/Horovod on Azure ML, you must specify an [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration) to the `distributed_job_config` parameter of the ScriptRunConfig constructor. The below code will configure a 2-node distributed job running one process per node. If you would also like to run multiple processes per node (i.e. if your cluster SKU has multiple GPUs), additionally specify the `process_count_per_node` parameter in MpiConfiguration (the default is `1`).
-
-```python
-from azureml.core import ScriptRunConfig
-from azureml.core.runconfig import MpiConfiguration
-
-src = ScriptRunConfig(source_directory=project_folder,
- script='pytorch_horovod_mnist.py',
- compute_target=compute_target,
- environment=pytorch_env,
- distributed_job_config=MpiConfiguration(node_count=2))
-```
-
-For a full tutorial on running distributed PyTorch with Horovod on Azure ML, see [Distributed PyTorch with Horovod](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod).
-
-### DistributedDataParallel
-If you are using PyTorch's built-in [DistributedDataParallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) module that is built using the **torch.distributed** package in your training code, you can also launch the distributed job via Azure ML.
-
-To launch a distributed PyTorch job on Azure ML, you have two options:
-1. Per-process launch: specify the total number of worker processes you want to run, and Azure ML will handle launching each process.
-2. Per-node launch with `torch.distributed.launch`: provide the `torch.distributed.launch` command you want to run on each node. The torch launch utility will handle launching the worker processes on each node.
-
-There are no fundamental differences between these launch options; it is largely up to the user's preference or the conventions of the frameworks/libraries built on top of vanilla PyTorch (such as Lightning or Hugging Face).
-
-#### Per-process launch
-To use this option to run a distributed PyTorch job, do the following:
-1. Specify the training script and arguments
-2. Create a [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) and specify the `process_count` as well as `node_count`. The `process_count` corresponds to the total number of processes you want to run for your job. This should typically equal the number of GPUs per node multiplied by the number of nodes. If `process_count` is not specified, Azure ML will by default launch one process per node.
-
-Azure ML will set the following environment variables:
-* `MASTER_ADDR` - IP address of the machine that will host the process with rank 0.
-* `MASTER_PORT` - A free port on the machine that will host the process with rank 0.
-* `NODE_RANK` - The rank of the node for multi-node training. The possible values are 0 to (total # of nodes - 1).
-* `WORLD_SIZE` - The total number of processes. This should be equal to the total number of devices (GPU) used for distributed training.
-* `RANK` - The (global) rank of the current process. The possible values are 0 to (world size - 1).
-* `LOCAL_RANK` - The local (relative) rank of the process within the node. The possible values are 0 to (# of processes on the node - 1).
-
-Since the required environment variables will be set for you by Azure ML, you can use [the default environment variable initialization method](https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization) to initialize the process group in your training code.
-
-The following code snippet configures a 2-node, 2-process-per-node PyTorch job:
-```python
-from azureml.core import ScriptRunConfig
-from azureml.core.runconfig import PyTorchConfiguration
-
-curated_env_name = 'AzureML-PyTorch-1.6-GPU'
-pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
-distr_config = PyTorchConfiguration(process_count=4, node_count=2)
-
-src = ScriptRunConfig(
- source_directory='./src',
- script='train.py',
- arguments=['--epochs', 25],
- compute_target=compute_target,
- environment=pytorch_env,
- distributed_job_config=distr_config,
-)
-
-run = Experiment(ws, 'experiment_name').submit(src)
-```
-
-> [!WARNING]
-> In order to use this option for multi-process-per-node training, you will need to use Azure ML Python SDK >= 1.22.0, as `process_count` was introduced in 1.22.0.
-
-> [!TIP]
-> If your training script passes information like local rank or rank as script arguments, you can reference the environment variable(s) in the arguments: `arguments=['--epochs', 50, '--local_rank', $LOCAL_RANK]`.
-
-#### Per-node launch with `torch.distributed.launch`
-PyTorch provides a launch utility in [torch.distributed.launch](https://pytorch.org/docs/stable/distributed.html#launch-utility) that users can use to launch multiple processes per node. The `torch.distributed.launch` module will spawn multiple training processes on each of the nodes.
-
-The following steps will demonstrate how to configure a PyTorch job with a per-node-launcher on Azure ML that will achieve the equivalent of running the following command:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node <num processes per node> \
- --nnodes <num nodes> --node_rank $NODE_RANK --master_addr $MASTER_ADDR \
- --master_port $MASTER_PORT --use_env \
- <your training script> <your script arguments>
-```
-
-1. Provide the `torch.distributed.launch` command to the `command` parameter of the `ScriptRunConfig` constructor. Azure ML will run this command on each node of your training cluster. `--nproc_per_node` should be less than or equal to the number of GPUs available on each node. `MASTER_ADDR`, `MASTER_PORT`, and `NODE_RANK` are all set by Azure ML, so you can just reference the environment variables in the command. Azure ML sets `MASTER_PORT` to 6105, but you can pass a different value to the `--master_port` argument of `torch.distributed.launch` command if you wish. (The launch utility will reset the environment variables.)
-2. Create a `PyTorchConfiguration` and specify the `node_count`. You do not need to set `process_count` as Azure ML will default to launching one process per node, which will run the launch command you specified.
-
-```python
-from azureml.core import ScriptRunConfig
-from azureml.core.runconfig import PyTorchConfiguration
-
-curated_env_name = 'AzureML-PyTorch-1.6-GPU'
-pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
-distr_config = PyTorchConfiguration(node_count=2)
-launch_cmd = "python -m torch.distributed.launch --nproc_per_node 2 --nnodes 2 --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT --use_env train.py --epochs 50".split()
-
-src = ScriptRunConfig(
- source_directory='./src',
- command=launch_cmd,
- compute_target=compute_target,
- environment=pytorch_env,
- distributed_job_config=distr_config,
-)
-
-run = Experiment(ws, 'experiment_name').submit(src)
-```
-
-For a full tutorial on running distributed PyTorch on Azure ML, see [Distributed PyTorch with DistributedDataParallel](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-distributeddataparallel).
-
-### Troubleshooting
-
-* **Horovod has been shut down**: In most cases, if you encounter "AbortedError: Horovod has been shut down", there was an underlying exception in one of the processes that caused Horovod to shut down. Each rank in the MPI job gets its own dedicated log file in Azure ML. These logs are named `70_driver_logs`. In case of distributed training, the log names are suffixed with `_rank` to make it easier to differentiate the logs. To find the exact error that caused Horovod to shut down, go through all the log files and look for `Traceback` at the end of the driver_log files. One of these files will give you the actual underlying exception.
+For more information about distributed training, see the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
## Export to ONNX
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-tensorflow.md
Azure Machine Learning also supports multi-node distributed TensorFlow jobs so t
Azure ML supports running distributed TensorFlow jobs with both Horovod and TensorFlow's built-in distributed training API.
-### Horovod
-[Horovod](https://github.com/uber/horovod) is an open-source, all-reduce framework for distributed training developed by Uber. It offers an easy path to writing distributed TensorFlow code for training.
-
-Your training code will have to be instrumented with Horovod for distributed training. For more information on using Horovod with TensorFlow, refer to the Horovod documentation:
-
-* [Horovod with TensorFlow](https://github.com/horovod/horovod/blob/master/docs/tensorflow.rst)
-* [Horovod with TensorFlow's Keras API](https://github.com/horovod/horovod/blob/master/docs/keras.rst)
-
-Additionally, make sure your training environment includes the **horovod** package. If you are using a TensorFlow curated environment, horovod is already included as one of the dependencies. If you are using your own environment, make sure the horovod dependency is included, for example:
-
-```yaml
-channels:
-- conda-forge
-dependencies:
-- python=3.6.2
-- pip:
- - azureml-defaults
- - tensorflow-gpu==2.2.0
- - horovod==0.19.5
-```
-
-In order to execute a distributed job using MPI/Horovod on Azure ML, you must specify an [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration) to the `distributed_job_config` parameter of the ScriptRunConfig constructor. The below code will configure a 2-node distributed job running one process per node. If you would also like to run multiple processes per node (i.e. if your cluster SKU has multiple GPUs), additionally specify the `process_count_per_node` parameter in MpiConfiguration (the default is `1`).
-
-```python
-from azureml.core import ScriptRunConfig
-from azureml.core.runconfig import MpiConfiguration
-
-src = ScriptRunConfig(source_directory=project_folder,
- script='tf_horovod_word2vec.py',
- arguments=['--input_data', dataset.as_mount()],
- compute_target=compute_target,
- environment=tf_env,
- distributed_job_config=MpiConfiguration(node_count=2))
-```
-
-For a full tutorial on running distributed TensorFlow with Horovod on Azure ML, see [Distributed TensorFlow with Horovod](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-horovod).
-
-### tf.distribute
-
-If you are using [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, e.g. TensorFlow 2.x's `tf.distribute.Strategy` API, you can also launch the distributed job via Azure ML.
-
-To do so, specify a [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration) to the `distributed_job_config` parameter of the ScriptRunConfig constructor. If you are using `tf.distribute.experimental.MultiWorkerMirroredStrategy`, specify the `worker_count` in the TensorflowConfiguration corresponding to the number of nodes for your training job.
-
-```python
-import os
-from azureml.core import ScriptRunConfig
-from azureml.core.runconfig import TensorflowConfiguration
-
-distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=0)
-
-model_path = os.path.join("./outputs", "keras-model")
-
-src = ScriptRunConfig(source_directory=source_dir,
- script='train.py',
- arguments=["--epochs", 30, "--model-dir", model_path],
- compute_target=compute_target,
- environment=tf_env,
- distributed_job_config=distr_config)
-```
-
-In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines. Azure ML will configure and set the `TF_CONFIG` variable appropriately for each worker before executing your training script. You can access `TF_CONFIG` from your training script if you need to via `os.environ['TF_CONFIG']`.
-
-Example structure of `TF_CONFIG` set on a chief worker node:
-```JSON
-TF_CONFIG='{
- "cluster": {
- "worker": ["host0:2222", "host1:2222"]
- },
- "task": {"type": "worker", "index": 0},
- "environment": "cloud"
-}'
-```
-
-If your training script uses the parameter server strategy for distributed training, i.e. for legacy TensorFlow 1.x, you will also need to specify the number of parameter servers to use in the job, e.g. `distr_config = TensorflowConfiguration(worker_count=2, parameter_server_count=1)`.
+For more information about distributed training, see the [Distributed GPU training guide](how-to-train-distributed-gpu.md).
## Deploy a TensorFlow model
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment.md
Take these actions for the following errors:
|Error | Resolution | |||
+| 409 conflict error| When an operation is already in progress, any new operation on the same web service responds with a 409 conflict error. For example, if a create or update operation is in progress and you trigger a new delete operation, the delete operation fails with this error. |
|Image building failure when deploying web service | Add "pynacl==1.2.1" as a pip dependency to Conda file for image configuration | |`['DaskOnBatch:context_managers.DaskOnBatch', 'setup.py']' died with <Signals.SIGKILL: 9>` | Change the SKU for VMs used in your deployment to one that has more memory. | |FPGA failure | You will not be able to deploy models on FPGAs until you have requested and been approved for FPGA quota. To request access, fill out the quota request form: https://aka.ms/aml-real-time-ai | + ## Advanced debugging You may need to interactively debug the Python code contained in your model deployment, for example, if the entry script is failing and the reason can't be determined by additional logging. By using Visual Studio Code and debugpy, you can attach to the code running inside the Docker container.
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
Select **Save and run script in terminal** to run the *run-pytorch-data.py* scr
This code will print a URL to the experiment in the Azure Machine Learning studio. If you go to that link, you'll be able to see your code running. + ### <a name="inspect-log"></a> Inspect the log file
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-hello-world.md
Here's a description of how the control script works:
## <a name="submit"></a> Submit and run your code in the cloud
-Select **Save and run script in terminal** to run your control script, which in turn runs `hello.py` on the compute cluster that you created in the [setup tutorial](quickstart-create-resources.md).
+1. Select **Save and run script in terminal** to run your control script, which in turn runs `hello.py` on the compute cluster that you created in the [setup tutorial](quickstart-create-resources.md).
-In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
+1. In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
-> [!TIP]
-> If you just finished creating the compute cluster, you may see the error "UserError: Required Docker image not found..." Wait about 5 minutes or so, and try again. The compute cluster may need more time before it is ready to spin up nodes.
+1. Once you're authenticated, you'll see a link in the terminal. Select the link to view the run.
+
+ [!INCLUDE [amlinclude-info](../../includes/machine-learning-py38-ignore.md)]
+
+## View the output
+1. In the page that opens, you'll see the run status.
+1. When the status of the run is **Completed**, select **Outputs + logs** at the top of the page.
+1. Select **70_driver_log.txt** to view the output of your run.
## <a name="monitor"></a>Monitor your code in the cloud in the studio
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
if __name__ == "__main__":
## <a name="submit"></a> Submit the run to Azure Machine Learning
-Select **Save and run script in terminal** to run the *run-pytorch.py* script.
+1. Select **Save and run script in terminal** to run the *run-pytorch.py* script.
->[!NOTE]
-> The first time you run this script, Azure Machine Learning will build a new Docker image from your PyTorch environment. The whole run might take 3 to 4 minutes to complete.
->
-> You can see the Docker build logs in the Azure Machine Learning studio. Follow the link to the studio, select the **Outputs + logs** tab, and then select `20_image_build_log.txt`.
->
-> This image will be reused in future runs to make them run much quicker.
+1. You'll see a link in the terminal window that opens. Select the link to view the run.
+
+ [!INCLUDE [amlinclude-info](../../includes/machine-learning-py38-ignore.md)]
+
+### View the output
-After your image is built, select `70_driver_log.txt` to see the output of your training script.
+1. In the page that opens, you'll see the run status. The first time you run this script, Azure Machine Learning will build a new Docker image from your PyTorch environment. The whole run might take 3 to 4 minutes to complete. This image will be reused in future runs to make them run much quicker.
+1. You can view the Docker build logs in the Azure Machine Learning studio. Select the **Outputs + logs** tab, and then select **20_image_build_log.txt**.
+1. When the status of the run is **Completed**, select **Outputs + logs**.
+1. Select **70_driver_log.txt** to view the output of your run.
```txt Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
epoch=2, batch=12000: loss 1.27
Finished Training ```
-> [!WARNING]
-> If you see an error `Your total snapshot size exceeds the limit`, the **data** folder is located in the `source_directory` value used in `ScriptRunConfig`.
->
-> Select the **...** at the end of the folder, then select **Move** to move **data** to the **get-started** folder.
+If you see the error `Your total snapshot size exceeds the limit`, it's because the **data** folder is located in the `source_directory` value used in `ScriptRunConfig`.
+
+Select the **...** at the end of the folder, then select **Move** to move **data** to the **get-started** folder.
++ ## <a name="log"></a> Log training metrics
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/policy-reference.md
Title: Built-in policy definitions for Azure Database for MariaDB description: Lists Azure Policy built-in policy definitions for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-app-managed.md
You can configure a maximum of five policies, and only one instance of each Poli
1. Under **Policy settings**, select the **+ Add policy (max 5)** link. 1. In the **Name** box, enter the policy assignment name (limited to 50 characters).
-1. From the **Policies** list box, select the Azure policy that will be applied to resources created by the managed application in the customer subscription.
+1. From the **Policies** list box, select the Azure Policy definition that will be applied to resources created by the managed application in the customer subscription.
1. In the **Policy parameters** box, provide the parameter on which the auditing and diagnostic settings policies should be applied. 1. From the **Policy SKU** list box, select the policy SKU type.
marketplace Gtm Offer Listing Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/gtm-offer-listing-best-practices.md
Previously updated : 06/03/2021 Last updated : 08/20/2021 # Offer listing best practices
-This article gives suggestions for creating and engaging Microsoft commercial marketplace offers. The following tables outline best practices for completing offer information in Partner Center. For an analysis of how your offers are performing, go to the [Marketplace Insights dashboard](https://go.microsoft.com/fwlink/?linkid=2165936) in Partner Center.
+This article provides suggestions for creating engaging Microsoft commercial marketplace offers. The following tables outline best practices for completing offer information in Partner Center.
+
+For a complete list of marketing best practices, including ways to drive traffic to your listing and improve customer engagement with it, see the commercial marketplace [Marketing Best Practices Guide](https://aka.ms/marketplacebestpracticesguide).
+
+For an analysis of how your offers are performing, go to the [Marketplace Insights dashboard](https://go.microsoft.com/fwlink/?linkid=2165936) in Partner Center.
## Online store offer details | Setting | Best practice | |: |: |
-| Offer name | For apps, provide a clear title that includes search keywords to help customers discover your offer. <br> <br> For Consulting Services, follow this format: [Offer Name: [Duration] [Offer Type] (for example, Contoso: 2-Week Implementation) |
-| Offer description | Provide a clear description that describes your offer's value proposition in the first few sentences. Keep in mind that these sentences may be used in search engine results. Core components of your value proposition should include: <ul> <li>Description of the product or solution. </li> <li> User persona that benefits from the product or solution. </li> <li> Customer need or pain the product or solution addresses. </li> </ul> <br> Use industry standard vocabulary or benefit-based wording when possible. Do not rely on features and functionality to sell your product. Instead, focus on the value you deliver. <br> <br> For Consulting Service listings, clearly state the professional service you provide. |
+| Offer name | For apps, provide a clear title that includes search keywords to help customers discover your offer.<br><br>For Consulting Services, follow this format: [Offer Name]: [Duration] [Offer Type] (for example, Contoso: 2-Week Implementation) |
+| Offer description | Provide a clear description that describes your offer's value proposition in the first few sentences. These sentences may be used in search engine results. Core components of your value proposition should include:<ul><li>Description of the product or solution.</li><li>User persona that benefits from the product or solution.</li><li>Customer need or pain the product or solution addresses.</li></ul><br>Use industry standard vocabulary or benefit-based wording when possible. Do not rely on features and functionality to sell your product. Instead, focus on the value your offer delivers.<br><br>For Consulting Service listings, clearly state the professional service you provide. |
+| Offer logo (PNG format, from 216×216 to 350x350 px): app details page | Design and optimize your logo for a digital medium:<br><br>Upload the logo in PNG format to the app details listing page of your offer. Partner Center will resize it to the required logo sizes. |
+| Offer logo (PNG format, 48×48 px): search page | Partner Center will generate this logo from the Large logo you uploaded. You can optionally replace this with a different image later. |
+| **Learn more** documents | Include supporting sales and marketing assets under **Learn more**; examples include:<ul><li>white papers</li><li>brochures</li><li>checklists</li><li>PowerPoint presentations</li></ul><br>Save all files in PDF format. Your goal here should be to educate customers, not sell to them.<br><br>Add a link to your app landing page to all your documents and add URL parameters to help you track visitors and trials. |
+| Videos (AppSource, consulting services, and SaaS offers only) | The strongest videos communicate the value of your offer in narrative form:<ul><li>Make your customer, not your company, the hero of the story.</li><li>Your video should address the principal challenges and goals of your target customer.</li><li>Recommended length: 60-90 seconds.</li><li>Incorporate key search words that use the name of the videos.</li></ul><br>Consider adding additional videos, such as a how-to, getting started, or customer testimonials. |
+| Screenshots (1280×720 px) | Add up to five screenshots. Incorporate key search words in the file names. |
+|
> [!IMPORTANT] > Make sure your offer name and offer description adhere to **[Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general.aspx)** and other relevant, product-specific guidelines when referring to Microsoft trademarks and the names of Microsoft software, products, and services.
-## Online store listing details
-
-This table shows which offer types have categories and industries applicable to the different online stores: Azure Marketplace and Microsoft AppSource.
-
-| Offer type | Categories for Azure Marketplace | Categories for AppSource | Industries for AppSource |
-| :- |:-:|::|:-:|
-| Azure Application | X | | |
-| Azure Container | X | | |
-| Azure Virtual Machine | X | | |
-| Consulting Service | X<sup>*</sup> | | X<sup>*</sup> |
-| Dynamics 365 Customer Engagement & Power Apps | | X | X |
-| Dynamics 365 for Operations | | X | X |
-| Dynamics 365 business central | | X | X |
-| IoT Edge Module | X | | |
-| Managed service | X | | |
-| Power BI app | | X | X |
-| SaaS | X | X | X |
-
-* The offer is published to the relevant online store based on the primary product. If the primary product is Azure, it goes to Azure Marketplace. Otherwise, it's published to AppSource.
-
-### Categories
-
-Microsoft AppSource and Azure Marketplace are online stores that offer different solution types. Azure Marketplace offers IT solutions built on or for Azure. Microsoft AppSource offers business solutions, such as industry SaaS applications, Dynamics 365 add-ins, Microsoft 365 add-ins, and Power Platform apps.
-
-Categories and subcategories are mapped to each online store based on the solution type. Your offer will be published to Microsoft AppSource or Azure Marketplace depending on the offer type, transaction capabilities of the offer and category/subcategory selection.
-
-Select categories and subcategories that best align with your solution type. You can select:
-
-* Up to two categories, including a primary and a secondary category (optional).
-* Up to two subcategories for each primary and/or secondary category. If no subcategory is selected, your offer will still be discoverable on the selected category only.
--
-#### IMPORTANT: SaaS Offers and Microsoft 365 Add-ins
-
-For specific details on how transact capabilities may affect how your offer can be viewed and purchased by marketplace customers, see [Transacting in the commercial marketplace](marketplace-commercial-transaction-capabilities-and-considerations.md). For SaaS offers, the offer's transaction capability as well as the category selection will determine the online store where your offer will be published.
-
-This table shows the combinations of options that are applicable to the different online stores: Azure Marketplace and Microsoft AppSource.
-
-| Metered billing | Microsoft 365 add-ins | Private-only plan | Public-only plan | Public & private plans | Applicable online store |
-|:-:|::|:--:|::|::|:-:|
-| | X | | | | AppSource |
-| X | | X | | | Azure Marketplace |
-| X | | | X | | Azure Marketplace |
-| X | | | | X | Azure Marketplace<sup>2</sup> |
-| | | X | | | Azure Marketplace |
-| | | | X | | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> |
-| | | | | X | AppSource<sup>1</sup><br>Azure Marketplace<sup>1,2</sup> |
-| | | | | X | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> |
-
-1. Depending on category/subcategory and industry selection
-2. Offers with private plans will be published to the Azure portal
-
-> [!NOTE]
-> You cannot have both listing plans and transactable plans in the same offer.
-
-### Industries
-
-Industry selection applies only for offers published to AppSource and Consulting Services published in Azure Marketplace. Select industries and/or verticals if your offer addresses industry-specific needs, calling out industry-specific capabilities in your offer description. You can select up to two (2) industries and two (2) verticals per industry selected.
-
->[!Note]
->For consulting service offers in Azure Marketplace, there are no industry verticals.
-
-| **Industries** | **Verticals** |
-| :- | :-|
-| **Agriculture** | |
-| **Architecture & Construction** | |
-| **Automotive** | |
-| **Distribution** | Wholesale <br> Parcel & Package Shipping |
-| **Education** | Higher Education <br> Primary & Secondary Edu / K-12 <br> Libraries & Museums |
-| **Financial Services** | Banking & Capital Markets <br> Insurance |
-| **Government** | Defense & Intelligence <br> Civilian Government <br> Public Safety & Justice |
-| **Healthcare** | Health Payor <br> Health Provider <br> Pharmaceuticals |
-| **Hospitality & Travel** | Travel & Transportation <br> Hotels & Leisure <br> Restaurants & Food Services |
-| **Manufacturing & Resources** | Chemical & Agrochemical <br> Discrete Manufacturing <br> Energy |
-| **Media & Communications** | Media & Entertainment <br> Telecommunications |
-| **Other Public Sector Industries** | Forestry & Fishing <br> Nonprofit |
-| **Professional Services** | Partner Professional Services <br> Legal |
-| **Real Estate** | |
-
-Industry for Microsoft AppSource only:
-
-| **Industry** | **Verticals** |
-| :- | :-|
-| **Retail & Consumer Goods** | Retailers <br> Consumer Goods |
-
-### Applicable products
-
-Select the applicable products your app works with for the offer to show up under selected products in AppSource.
-
-### Search keywords
-
-Keywords can help customers find your offer when they search. Identify the top search keywords for your offer, incorporate them in your offer summary and description as well as in the keyword section of the offer listing details section.
-
-## Online store marketing details
-| Setting | Best practice |
-|: |: |
-| Offer logo (PNG format, from 216 × 216 to 350 x 350 px): app details page | Design and optimize your logo for a digital medium:<br>Upload the logo in PNG format to the app details listing page of your offer. Partner Center will resize it to the required logo sizes. |
-| Offer logo (PNG format, 48 × 48 pixels): search page | Partner Center will generate this logo from the Large logo you uploaded. You can optionally replace this with a different image later. |
-| "Learn more" documents | Include supporting sales and marketing assets under "Learn more," some examples are:<ul><li>white papers</li><li> brochures</li><li>checklists, or</li><li> PowerPoint presentations</li></ul><br>Save all files in PDF format. Your goal here should be to educate customers, not sell to them.<br><br>Add a link to your app landing page to all your documents and add URL parameters to help you track visitors and trials. |
-| Videos: AppSource, consulting services, and SaaS offers only | The strongest videos communicate the value of your offer in narrative form:<ul> <li> Make your customer, not your company, the hero of the story. </li> <li> Your video should address the principal challenges and goals of your target customer. </li> <li> Recommended length: 60-90 seconds.</li> <li> Incorporate key search words that use the name of the videos. </li> <li> Consider adding additional videos, such as a how-to, getting started, or customer testimonials. </li> </ul> |
-| Screenshots (1280&nbsp;&times;&nbsp;720) | Add up to five screenshots:<br>Incorporate key search words in the file names. |
- ## Link to your offer page from your website
-When you link from the AppSource or Azure Marketplace badge on your site to your listing in the commercial marketplace, you can support strong analytics and reporting by including the following query parameters at the end of the URL:
+To easily direct users to your offer in the commercial marketplace, use our **Get It Now** badges on your website or in your digital marketing collateral. Find these badges in our [Marketplace Marketing Toolkit](/asset/collection/azure-marketplace-and-appsource-publisher-toolkit#/).
+
+When you link from the AppSource or Azure Marketplace badge on your site to your listing in the commercial marketplace, support strong analytics and reporting by including the following query parameters at the end of the URL:
* **src**: Include the source from which the traffic is routed to AppSource (for example, website, LinkedIn, or Facebook). * **mktcmpid**: Your marketing campaign ID, which can contain up to 16 characters in any combination of letters, numbers, underscores, and hyphens (for example, *blogpost_12*). The following example URL contains both of the preceding query parameters: `https://appsource.microsoft.com/product/dynamics-365/mscrm.04931187-431c-415d-8777-f7f482ba8095?src=website&mktcmpid=blogpost_12`
-By adding the parameters to your AppSource URL, you can review the effectiveness of your campaign in the [analytics dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) in Partner Center.
+After adding these parameters to your AppSource URL, review the effectiveness of your campaign in the [analytics dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) in Partner Center.
-## Next steps
+## Listing creation technical best practices
-Learn more about your [commercial marketplace benefits](./gtm-your-marketplace-benefits.md).
+Navigating Markdown can be tricky. To help, we've compiled some best practices for revising and reviewing offer listings for the commercial marketplace in Partner Center. The [commercial marketplace listing technical best practices guide](/collection/azure-marketplace-and-appsource-publisher-toolkit#/) shows how to edit your listing and preview your Markdown code.
+
+## Next steps
-Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer. If you haven't yet enrolled in Partner Center, [create an account](create-account.md).
+- Learn more about your [commercial marketplace benefits](./gtm-your-marketplace-benefits.md).
+- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer. If you haven't yet enrolled in Partner Center, [create an account](create-account.md).
marketplace Marketplace Categories Industries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-categories-industries.md
+
+ Title: Marketplace categories and industries - Microsoft commercial marketplace
+description: Discusses and lists the categories and industries available to choose from when creating an offer in Microsoft AppSource or Azure Marketplace.
+++++ Last updated : 08/20/2021++
+# Marketplace categories and industries
+
+This article discusses and lists the categories and industries available to choose from when creating an offer in Partner Center for our two online stores: Microsoft AppSource and Azure Marketplace.
+
+## Category and industry listings by offer type
+
+Microsoft AppSource and Azure Marketplace are two different storefronts that serve different customer personas. Your offer will be published to Microsoft AppSource or Azure Marketplace depending on the category/subcategory selection, the offer type, and transaction capabilities.
+
+Following are the categories and industries applicable to each online store, by offer type:
+
+| Offer type | Categories for Azure Marketplace | Categories for AppSource | Industries for AppSource |
+| :- |:-:|::|:-:|
+| Azure Application | &#x2714; | | |
+| Azure Container | &#x2714; | | |
+| Azure Virtual Machine | &#x2714; | | |
+| IoT Edge Module | &#x2714; | | |
+| Managed service | &#x2714; | | |
+| SaaS | &#x2714; | &#x2714; | &#x2714; |
+| Consulting Service | &#x2714; | | &#x2714; |
+| Dynamics 365 Customer Engagement & Power Apps | | &#x2714; | &#x2714; |
+| Dynamics 365 for Operations | | &#x2714; | &#x2714; |
+| Dynamics 365 Business Central | | &#x2714; | &#x2714; |
+| Power BI app | | &#x2714; | &#x2714; |
+|
+
+## Applicable store by offer type
+
+Following are the combinations of options applicable to each online store:
+
+| Metered billing | Microsoft 365 add-ins | Private-only plan | Public-only plan | Public & private plans | Applicable online store |
+|:-:|::|:--:|::|::|:-:|
+| | &#x2714; | | | | AppSource |
+| &#x2714; | | &#x2714; | | | Azure Marketplace |
+| &#x2714; | | | &#x2714; | | Azure Marketplace |
+| &#x2714; | | | | &#x2714; | Azure Marketplace<sup>2</sup> |
+| | | &#x2714; | | | Azure Marketplace |
+| | | | &#x2714; | | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> |
+| | | | | &#x2714; | AppSource<sup>1</sup><br>Azure Marketplace<sup>1,2</sup> |
+| | | | | &#x2714; | AppSource<sup>1</sup><br>Azure Marketplace<sup>1</sup> |
+|
+
+<sup>1</sup> Depending on category/subcategory and industry selection.<br>
+<sup>2</sup> Offers with private plans will be published to the Azure portal.<br>
+
+> [!NOTE]
+> A listing plan and transactable plan cannot exist in the same offer.
+
+## Categories
+
+Categories in Azure Marketplace target IT professionals and developers, while categories in Microsoft AppSource target business users looking for business and/or industry SaaS applications, Dynamics 365 add-ins, Microsoft 365 add-ins, and Power Platform apps.
+
+Select categories and subcategories that best align with the value proposition of your listing. You can select:
+
+- A maximum of two categories, including a primary and a secondary (optional) category.
+- A maximum of two subcategories for each primary and/or secondary category. If no subcategory is selected, your offer will be discoverable in the selected category only. Select a subcategory to make your offer discoverable within a smaller subset.
++
+## Industries
+
+Industry selection applies only for offers published to AppSource and Consulting Services published in Azure Marketplace. Select industries and/or verticals if your offer addresses industry-specific needs, calling out industry-specific capabilities in your offer description. You can select up to two industries and two verticals per industry.
+
+>[!Note]
+>For consulting service offers in Azure Marketplace, there are no industry verticals.
+
+| Industries | Verticals |
+| :- | :-|
+| Automotive | n/a |
+| Financial Services | Banking<br>Insurance<br>Capital Markets |
+| Government | Civilian Government<br>Public Safety and Justice |
+| Defense and Intelligence | n/a |
+| Healthcare | Health Payor<br>Health Provider<br>Life Sciences |
+| Education | Higher Education<br>Primary and Secondary Edu/K-12<br>Libraries and Museums |
+| Nonprofit and IGO | n/a |
+| Manufacturing | Process Manufacturing<br>Discrete Manufacturing<br>Agriculture |
+| Energy | n/a |
+| Retail | Retail<br>Consumer Goods |
+| Media and Communications | Media and Entertainment<br>Telecommunications |
+| Professional Services | Partner Professional Services<br>Legal<br>Architecture and Construction<br>Real Estate |
+| Distribution | Wholesale<br>Parcel and Package Shipping |
+| Hospitality and Travel | Travel & Transportation<br>Hotels and Leisure<br>Restaurants and Food Services |
+|
+
+## Applicable products
+
+Select the applicable products your app works with for the offer to show up under selected products in Microsoft AppSource.
+
+## Next steps
+
+- To create an offer, sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer. If you haven't yet enrolled in Partner Center, [create an account](/azure/marketplace/create-account).
+- For step-by-step instructions on publishing an offer, see the commercial marketplace [publishing guide by offer type](/azure/marketplace/publisher-guide-by-offer-type).
+
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-application-offer.md
You can add or modify a CRM connection at any time during or after offer creatio
## Categories and subcategories
-You can choose at least one and up to two categories for grouping your offer into the appropriate commercial marketplace search areas. You can choose up to two subcategories for each primary and secondary category. For a full list of categories and subcategories, see [Offer Listing Best Practices](gtm-offer-listing-best-practices.md#categories).
+You can choose at least one and up to two categories for grouping your offer into the appropriate commercial marketplace search areas. You can choose up to two subcategories for each primary and secondary category. For a full list of categories and subcategories, see [Marketplace categories and industries](marketplace-categories-industries.md#categories).
## Legal contracts
marketplace Publisher Guide By Offer Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/publisher-guide-by-offer-type.md
Title: Publishing guide by offer type - Microsoft commercial marketplace
-description: This article describes the offer types that are available in the Microsoft commercial marketplace.
+description: This article describes the offer types that are available in the Microsoft commercial marketplace (Azure Marketplace).
Previously updated : 04/06/2021 Last updated : 08/20/2021 # Publishing guide by offer type
The following table shows the commercial marketplace offer types in Partner Cent
| [**Power BI app**<br/>**Microsoft 365**](marketplace-dynamics-365.md) | Publish AppSource offers that build on or extend Power BI and Microsoft 365.| | [**Software as a Service**](plan-saas-offer.md) | Use the software as a service (SaaS) offer type to enable your customer to buy your SaaS-based, technical solution as a subscription. For information on single sign-on requirements for SaaS offers, see [Azure AD and transactable SaaS offers in the commercial marketplace](azure-ad-saas.md). |
+> [!IMPORTANT]
+> **SaaS Offers and Microsoft 365 Add-ins**: For specific details on how transact capabilities may affect how your offer can be viewed and purchased by marketplace customers, see [Transacting in the commercial marketplace](marketplace-commercial-transaction-capabilities-and-considerations.md). For SaaS offers, the offer's transaction capability as well as the category selection will determine the online store where your offer will be published.
+ ## Next steps - Review the eligibility requirements in the corresponding article for your offer type to finalize the selection and configuration of your offer.-- Review the publishing patterns for each online store for examples on how your solution maps to an offer type and configuration.
+- Review the publishing patterns for each online store for examples on how your solution maps to an offer type and configuration.
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | - | - |
+| Offers | The [Commercial marketplace transact capabilities](/azure/marketplace/marketplace-commercial-transaction-capabilities-and-considerations) topic now includes a flowchart to help you determine the appropriate transactable offer type and pricing plan to sell your software in the commercial marketplace. | 2021-08-18 |
| Policy | Updated [certification](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context) policy; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-08-06 | | Co-sell | Information added for the MACC program including, requirements, how often we update MACC status, and definitions for Enrolled, and not Enrolled. To learn more, see [Azure Consumption Commitment enrollment](./azure-consumption-commitment-enrollment.md), or [Co-sell with Microsoft sales teams and partners overview](./co-sell-overview.md). | 2021-06-03 | | Offers | Additional information regarding VM pricing options and descriptions. To learn more see [How to plan a SaaS offer for the commercial marketplace](./plan-saas-offer.md). | 2021-05-25|
media-services Drm Content Key Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/drm-content-key-policy-concept.md
To get to the key, use `GetPolicyPropertiesWithSecretsAsync`, as shown in the [G
## Filtering, ordering, paging
-See [Filtering, ordering, paging of Media Services entities](filter-order-page-entitites-how-to.md).
+See [Filtering, ordering, paging of Media Services entities](filter-order-page-entities-how-to.md).
## Additional notes
media-services Drm Offline Fairplay For Ios Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/drm-offline-fairplay-for-ios-concept.md
With either the version 3 or version 4 sample of the FPS Server SDK, if a master
## Offline FairPlay questions
-See [offline fairplay questions](questions-collection.md#why-does-only-audio-play-but-not-video-during-offline-mode).
+See [offline FairPlay questions in the FAQ](frequently-asked-questions.yml).
media-services Drm Offline Widevine For Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/drm-offline-widevine-for-android.md
The above open-source PWA app is authored in Node.js. If you want to host your o
## More information
-For more information, see [Widevine in the Questions Collection](questions-collection.md#widevine-streaming-for-android).
+For more information, see [Content Protection in the FAQ](frequently-asked-questions.yml).
Widevine is a service provided by Google Inc. and subject to the terms of service and Privacy Policy of Google, Inc.
media-services Filter Order Page Entities How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/filter-order-page-entities-how-to.md
+
+ Title: Filtering, ordering, and paging of entities
+description: Learn about filtering, ordering, and paging of Azure Media Services v3 entities.
+
+documentationcenter: ''
++
+editor: ''
+++ Last updated : 08/31/2020++++
+# Filtering, ordering, and paging entities
++
+This topic discusses the OData query options and pagination support available when you're listing Azure Media Services v3 entities.
+
+## Considerations
+
+* Properties of entities that are of the `Datetime` type are always in UTC format.
+* White space in the query string should be URL-encoded before you send a request (see the sketch after this list).
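+
+For example, a filter expression that contains spaces can be percent-encoded before it's appended to the request URL. This is a minimal sketch that simply encodes spaces as `%20`, matching the REST examples later in this article:
+
+```csharp
+// Percent-encode the spaces in the filter before placing it in the query string.
+var filter = "properties/created lt 2018-05-11T17:39:08.387Z";
+var encoded = filter.Replace(" ", "%20");
+// encoded: "properties/created%20lt%202018-05-11T17:39:08.387Z"
+```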
+
+## Comparison operators
+
+You can use the following operators to compare a field to a constant value (a short sketch follows the list):
+
+Equality operators:
+
+- `eq`: Test whether a field is *equal to* a constant value.
+- `ne`: Test whether a field is *not equal to* a constant value.
+
+Range operators:
+
+- `gt`: Test whether a field is *greater than* a constant value.
+- `lt`: Test whether a field is *less than* a constant value.
+- `ge`: Test whether a field is *greater than or equal to* a constant value.
+- `le`: Test whether a field is *less than or equal to* a constant value.
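+
+As a hedged illustration of the range operators, the following sketch lists assets created within a date window by combining `gt` and `lt`. The `ODataQuery` client objects are the same placeholders used in the other C# examples in this article:
+
+```csharp
+// List assets created after May 1 but before June 1, 2018.
+var odataQuery = new ODataQuery<Asset>(
+    "properties/created gt 2018-05-01T00:00:00Z and properties/created lt 2018-06-01T00:00:00Z");
+var assetsInWindow = await MediaServicesArmClient.Assets.ListAsync(
+    CustomerResourceGroup, CustomerAccountName, odataQuery);
+```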
+
+## Filter
+
+Use `$filter` to supply an OData filter parameter to find only the objects you're interested in.
+
+The following REST example filters on the `alternateId` value of an asset:
+
+```
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaServices/amstestaccount/assets?api-version=2018-07-01&$filter=properties/alternateId%20eq%20'unique identifier'
+```
+
+The following C# example filters on the asset's created date:
+
+```csharp
+var odataQuery = new ODataQuery<Asset>("properties/created lt 2018-05-11T17:39:08.387Z");
+var firstPage = await MediaServicesArmClient.Assets.ListAsync(CustomerResourceGroup, CustomerAccountName, odataQuery);
+```
+
+## Order by
+
+Use `$orderby` to sort the returned objects by the specified parameter. For example:
+
+```
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaServices/amstestaccount/assets?api-version=2018-07-01&$orderby=properties/created%20desc
+```
+
+To sort the results in ascending or descending order, append either `asc` or `desc` to the field name, separated by a space. For example: `$orderby=properties/created desc`.
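+
+In the .NET client, the same ordering can be expressed through the `OrderBy` property of `ODataQuery<T>` (a minimal sketch reusing the placeholder client objects from the other examples in this article):
+
+```csharp
+// Return assets sorted newest-first.
+var odataQuery = new ODataQuery<Asset> { OrderBy = "properties/created desc" };
+var newestFirst = await MediaServicesArmClient.Assets.ListAsync(
+    CustomerResourceGroup, CustomerAccountName, odataQuery);
+```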
+
+## Skip token
+
+If a query response contains many items, the service returns a `$skiptoken` (`@odata.nextLink`) value that you use to get the next page of results. Use it to page through the entire result set.
+
+In Media Services v3, you can't configure the page size. The page size varies by the type of entity. Read the individual sections that follow for details.
+
+If entities are created or deleted while you're paging through the collection, the changes are reflected in the returned results (if those changes are in the part of the collection that hasn't been downloaded).
+
+> [!TIP]
+> Always use `nextLink` to enumerate the collection and don't depend on a particular page size.
+>
+> The `nextLink` value will be present only if there's more than one page of entities.
+
+Consider the following example of where `$skiptoken` is used. Make sure you replace *amstestaccount* with your account name and set the *api-version* value to the latest version.
+
+If you request a list of assets like this:
+
+```
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaServices/amstestaccount/assets?api-version=2018-07-01 HTTP/1.1
+x-ms-client-request-id: dd57fe5d-f3be-4724-8553-4ceb1dbe5aab
+Content-Type: application/json; charset=utf-8
+```
+
+You'll get back a response similar to this one:
+
+```
+HTTP/1.1 200 OK
+
+{
+"value":[
+{
+"name":"Asset 0","id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaservices/amstestaccount/assets/Asset 0","type":"Microsoft.Media/mediaservices/assets","properties":{
+"assetId":"00000000-0000-0000-0000-000000000000","created":"2018-12-11T22:12:44.98Z","lastModified":"2018-12-11T22:15:48.003Z","container":"asset-00000000-0000-0000-0000-0000000000000","storageAccountName":"amsacctname","storageEncryptionFormat":"None"
+}
+},
+// lots more assets
+{
+"name":"Asset 517","id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaservices/amstestaccount/assets/Asset 517","type":"Microsoft.Media/mediaservices/assets","properties":{
+"assetId":"00000000-0000-0000-0000-000000000000","created":"2018-12-11T22:14:08.473Z","lastModified":"2018-12-11T22:19:29.657Z","container":"asset-00000000-0000-0000-0000-000000000000","storageAccountName":"amsacctname","storageEncryptionFormat":"None"
+}
+}
+],"@odata.nextLink":"https:// management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaServices/amstestaccount/assets?api-version=2018-07-01&$skiptoken=Asset+517"
+}
+```
+
+You would then request the next page by sending a get request for:
+
+```
+https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaServices/amstestaccount/assets?api-version=2018-07-01&$skiptoken=Asset+517
+```
+
+The following C# example shows how to enumerate through all streaming locators in the account.
+
+```csharp
+// Request the first page of streaming locators.
+var firstPage = await MediaServicesArmClient.StreamingLocators.ListAsync(CustomerResourceGroup, CustomerAccountName);
+
+// Follow NextPageLink (the $skiptoken) until the service reports no further pages.
+var currentPage = firstPage;
+while (currentPage.NextPageLink != null)
+{
+    currentPage = await MediaServicesArmClient.StreamingLocators.ListNextAsync(currentPage.NextPageLink);
+}
+```
+
+## Using logical operators to combine query options
+
+Media Services v3 supports **OR** and **AND** logical operators.
+
+The following REST example checks the job's state:
+
+```
+https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/qbtest/providers/Microsoft.Media/mediaServices/qbtest/transforms/VideoAnalyzerTransform/jobs?$filter=properties/state%20eq%20Microsoft.Media.JobState'Scheduled'%20or%20properties/state%20eq%20Microsoft.Media.JobState'Processing'&api-version=2018-07-01
+```
+
+You construct the same query in C# like this:
+
+```csharp
+var odataQuery = new ODataQuery<Job>("properties/state eq Microsoft.Media.JobState'Scheduled' or properties/state eq Microsoft.Media.JobState'Processing'");
+client.Jobs.List(config.ResourceGroup, config.AccountName, VideoAnalyzerTransformName, odataQuery);
+```
+
+## Filtering and ordering options of entities
+
+The following table shows how you can apply the filtering and ordering options to different entities (a combined example follows the table):
+
+|Entity name|Property name|Filter|Order|
+|||||
+|[Assets](/rest/api/media/assets/)|name|`eq`, `gt`, `lt`, `ge`, `le`|`asc` and `desc`|
+||properties.alternateId |`eq`||
+||properties.assetId |`eq`||
+||properties.created| `eq`, `gt`, `lt`| `asc` and `desc`|
+|[Content key policies](/rest/api/media/contentkeypolicies)|name|`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.created |`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.description |`eq`, `ne`, `ge`, `le`, `gt`, `lt`||
+||properties.lastModified|`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.policyId|`eq`, `ne`||
+|[Jobs](/rest/api/media/jobs)| name | `eq` | `asc` and `desc`|
+||properties.state | `eq`, `ne` | |
+||properties.created | `gt`, `ge`, `lt`, `le`| `asc` and `desc`|
+||properties.lastModified | `gt`, `ge`, `lt`, `le` | `asc` and `desc`|
+|[Streaming locators](/rest/api/media/streaminglocators)|name|`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.created |`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.endTime |`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+|[Streaming policies](/rest/api/media/streamingpolicies)|name|`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+||properties.created |`eq`, `ne`, `ge`, `le`, `gt`, `lt`|`asc` and `desc`|
+|[Transforms](/rest/api/media/transforms)| name | `eq` | `asc` and `desc`|
+|| properties.created | `gt`, `ge`, `lt`, `le`| `asc` and `desc`|
+|| properties.lastModified | `gt`, `ge`, `lt`, `le`| `asc` and `desc`|
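+
+As a quick check against the table: jobs allow `gt` and `lt` filters on `properties/created` and ordering on the same property, so the two options can be combined. This hedged sketch reuses the `client` and transform-name placeholders from the earlier job-listing example:
+
+```csharp
+// Jobs created in 2021 or later, newest first.
+var odataQuery = new ODataQuery<Job>("properties/created gt 2021-01-01T00:00:00Z")
+{
+    OrderBy = "properties/created desc"
+};
+client.Jobs.List(config.ResourceGroup, config.AccountName, VideoAnalyzerTransformName, odataQuery);
+```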
+
+## Next steps
+
+* [List Assets](/rest/api/media/assets/list)
+* [List Content Key Policies](/rest/api/media/contentkeypolicies/list)
+* [List Jobs](/rest/api/media/jobs/list)
+* [List Streaming Policies](/rest/api/media/streamingpolicies/list)
+* [List Streaming Locators](/rest/api/media/streaminglocators/list)
+* [Stream a file](stream-files-dotnet-quickstart.md)
+* [Quotas and limits](limits-quotas-constraints-reference.md)
media-services Live Event Outputs Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-outputs-concept.md
For details, see [long-running operations](media-services-apis-overview.md#long-
Once you have the stream flowing into the live event, you can begin the streaming event by creating an [Asset](/rest/api/media/assets), [live output](/rest/api/media/liveoutputs), and [Streaming Locator](/rest/api/media/streaminglocators). live output will archive the stream and make it available to viewers through the [Streaming Endpoint](/rest/api/media/streamingendpoints). For detailed information about live outputs, see [Using a cloud DVR](live-event-cloud-dvr-time-how-to.md).- ## Live event output questions
-See the [live event output questions](questions-collection.md#live-streaming) article.
+See the [live event questions in the FAQ](frequently-asked-questions.yml).
media-services Media Services Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-services-apis-overview.md
AMSE is an Open Source project, support is provided by the community (issues can
## Filtering, ordering, paging of Media Services entities
-See [Filtering, ordering, paging of Azure Media Services entities](filter-order-page-entitites-how-to.md).
+See [Filtering, ordering, paging of Azure Media Services entities](filter-order-page-entities-how-to.md).
## Ask questions, give feedback, get updates
media-services Questions Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/questions-collection.md
-
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Azure Media Services v3 question collection
-description: This article gives answers to a collection of questions about Azure Media Services v3.
------- Previously updated : 05/25/2021--
-<!-- NOTE this file is temporary and a placeholder until the FAQ file update is completed. -->
-
-# Media Services v3 questions collection
--
-This article gives answers to frequently asked questions about Azure Media Services v3.
-
-## General
-
-### Does Media Services store any customer data outside of the service region?
-- Customers attach their own storage accounts to their Azure Media Services account. All asset data is stored in these associated storage accounts and the customer controls the location and replication type of this storage.
-- Additional data associated with the Media Services account (including Content Encryption Keys, token verification keys, JobInputHttp urls, and other entity metadata) is stored in Microsoft owned storage within the region selected for the Media Services account.
- - Due to [data residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/#more-information) in Brazil South and Southeast Asia, the additional account data is stored in a zone-redundant fashion and is contained in a single region. For Southeast Asia, all the additional account data is stored in Singapore and for Brazil South, the data is stored in Brazil.
- - In regions other than Brazil South and Southeast Asia, the additional account data may also be stored in Microsoft owned storage in the [paired region](../../best-practices-availability-paired-regions.md).
-- Azure Media Services is a regional service and does not provide [high availability](architecture-high-availability-encoding-concept.md) or data replication. Customers needing these features are highly encouraged to build a solution using Media Services accounts in multiple regions. A sample showing how to build a solution for High Availability with Media Services Video on Demand is available as a guide.
-
-### What are the Azure portal limitations for Media Services v3?
-
-You can use the [Azure portal](https://portal.azure.com/) to manage v3 live events, view v3 assets and jobs, get info about accessing APIs, encrypt content. <br/>For all other management tasks (for example, managing transforms and jobs or analyzing v3 content), use the [CLI](/cli/azure/ams), or one of the supported client [SDKs](media-services-apis-overview.md#sdks).
-
-If your video was previously uploaded into the Media Services account using Media Services v3 API or the content was generated based on a live output, you will not see the **Encode**, **Analyze**, or **Encrypt** buttons in the Azure portal. Use the Media Services v3 APIs to perform these tasks.
-
-### What Azure roles can perform actions on Azure Media Services resources?
-
-See [Azure role-based access control (Azure RBAC) for Media Services accounts](security-rbac-concept.md).
-
-### How do I stream to Apple iOS devices?
-
-Make sure you have **(format=m3u8-aapl)** at the end of your path (after the **/manifest** portion of the URL) to tell the streaming origin server to return HTTP Live Streaming (HLS) content for consumption on Apple iOS native devices. For details, see [Delivering content](encode-dynamic-packaging-concept.md).
-
-### What is the recommended method to process videos?
-
-Use [Transforms](/rest/api/medi).
-
-### I uploaded, encoded, and published a video. Why won't the video play when I try to stream it?
-
-One of the most common reasons is that you don't have the streaming endpoint from which you're trying to play back in the Running state.
-
-### How does pagination work?
-
-When you're using pagination, you should always use the next link to enumerate the collection and not depend on a particular page size. For details and examples, see [Filtering, ordering, paging](filter-order-page-entitites-how-to.md).
-
-### What features are not yet available in Azure Media Services v3?
-
-For details, see [the Migration Guide](migrate-v-2-v-3-migration-introduction.md).
-
-### What is the process of moving a Media Services account between subscriptions?
-
-For details, see [Moving a Media Services account between subscriptions](account-move-account-how-to.md).
-
-## Live streaming
-
-### How do I stop the live stream after the broadcast is done?
-
-You can approach it from the client side or the server side.
-
-#### Client side
-
-Your web application should prompt the user if they want to end the broadcast as they're closing the browser. This is a browser event that your web application can handle.
-
-#### Server side
-
-You can monitor live events by subscribing to Azure Event Grid events. For more information, see the [EventGrid event schema](monitoring/media-services-event-schemas.md#live-event-types).
-
-You can either:
-
-* [Subscribe](monitoring/reacting-to-media-services-events.md) to the stream-level [Microsoft.Media.LiveEventEncoderDisconnected](monitoring/media-services-event-schemas.md#liveeventencoderdisconnected) events and monitor that no reconnections come in for a while to stop and delete your live event.
-* [Subscribe](monitoring/reacting-to-media-services-events.md) to the track-level [heartbeat](monitoring/media-services-event-schemas.md#liveeventingestheartbeat) events. If all tracks have an incoming bitrate dropping to 0 or the last time stamp is no longer increasing, you can safely shut down the live event. The heartbeat events come in at every 20 seconds for every track, so it might be a bit verbose.
-
-### How do I insert breaks/videos and image slates during a live stream?
-
-Media Services v3 live encoding does not yet support inserting video or image slates during live stream.
-
-You can use a [live on-premises encoder](encode-recommended-on-premises-live-encoders.md) to switch the source video. Many apps provide the ability to switch sources, including Telestream Wirecast, Switcher Studio (on iOS), and OBS Studio (free app).
-
-## Content protection
-
-### Should I use AES-128 clear key encryption or a DRM system?
-
-Customers often wonder whether they should use AES encryption or a DRM system. The main difference between the two systems is that with AES encryption, the content key is transmitted to the client over TLS so that the key is encrypted in transit but without any additional encryption ("in the clear"). As a result, the key that's used to decrypt the content is accessible to the client player and can be viewed in a network trace on the client in plain text. AES-128 clear key encryption is suitable for use cases where the viewer is a trusted party (for example, encrypting corporate videos distributed within a company to be viewed by employees).
-
-DRM systems like PlayReady, Widevine, and FairPlay all provide an additional level of encryption on the key that's used to decrypt the content, compared to an AES-128 clear key. The content key is encrypted to a key protected by the DRM runtime in addition to any transport-level encryption provided by TLS. Additionally, decryption is handled in a secure environment at the operating system level, where it's more difficult for a malicious user to attack. We recommend DRM for use cases where the viewer might not be a trusted party and you need the highest level of security.
-
-### How do I show a video to only users who have a specific permission, without using Azure AD?
-
-You don't have to use any specific token provider such as Azure Active Directory (Azure AD). You can create your own [JWT](https://jwt.io/) provider (so-called Secure Token Service, or STS) by using asymmetric key encryption. In your custom STS, you can add claims based on your business logic.
-
-Make sure that the issuer, audience, and claims all match up exactly between what's in JWT and the `ContentKeyPolicyRestriction` value used in `ContentKeyPolicy`.
-
-For more information, see [Protect your content by using Media Services dynamic encryption](drm-content-protection-concept.md).
-
-### How and where did I get a JWT token before using it to request a license or key?
-
-For production, you need to have Secure Token Service (that is, a web service), which issues a JWT token upon an HTTPS request. For test, you can use the code shown in the `GetTokenAsync` method defined in [Program.cs](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithDRM/Program.cs).
-
-The player makes a request, after a user is authenticated, to STS for such a token and assigns it as the value of the token. You can use the [Azure Media Player API](https://amp.azure.net/libs/amp/latest/docs/).
-
-For an example of running STS with either a symmetric key or an asymmetric key, see the [JWT tool](https://aka.ms/jwt). For an example of a player based on Azure Media Player using such a JWT token, see the [Azure media test tool](https://aka.ms/amtest). (Expand the **player_settings** link to see the token input.)
-
-### How do I authorize requests to stream videos with AES encryption?
-
-The correct approach is to use Secure Token Service. In STS, depending on the user profile, add different claims (such as "Premium User," "Basic User," "Free Trial User"). With different claims in a JWT, the user can see different contents. For different contents or assets, `ContentKeyPolicyRestriction` will have the corresponding `RequiredClaims` value.
-
-Use Azure Media Services APIs for configuring license/key delivery and encrypting your assets (as shown in [this sample](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithAES/Program.cs)).
-
-For more information, see:
-- [Content protection overview](drm-content-protection-concept.md)
-- [Design of a multi-DRM content protection system with access control](architecture-design-multi-drm-system.md)
-
-### Should I use HTTP or HTTPS?
-The ASP.NET MVC player application must support the following:
-
-* User authentication through Azure AD, which is under HTTPS.
-* JWT exchange between the client and Azure AD, which is under HTTPS.
-* DRM license acquisition by the client, which must be under HTTPS if license delivery is provided by Media Services. The PlayReady product suite doesn't mandate HTTPS for license delivery. If your PlayReady license server is outside Media Services, you can use either HTTP or HTTPS.
-
-The ASP.NET player application uses HTTPS as a best practice, so Media Player is on a page under HTTPS. However, HTTP is preferred for streaming, so you need to consider these issues with mixed content:
-
-* The browser doesn't allow mixed content. But plug-ins like Silverlight and the OSMF plug-in for Smooth and DASH do allow it. Mixed content is a security concern because of the threat of the ability to inject malicious JavaScript, which can put customer data at risk. Browsers block this capability by default. The only way to work around it is on the server (origin) side by allowing all domains (regardless of HTTPS or HTTP). This is probably not a good idea either.
-* Avoid mixed content. Both the player application and Media Player should use HTTP or HTTPS. When you're playing mixed content, the SilverlightSS tech requires clearing a mixed-content warning. The FlashSS tech handles mixed content without a mixed-content warning.
-* If your streaming endpoint was created before August 2014, it won't support HTTPS. In this case, create and use a new streaming endpoint for HTTPS.
-
-### What about live streaming?
-
-You can use exactly the same design and implementation to help protect live streaming in Media Services by treating the asset associated with a program as a VOD asset. To provide a multi-DRM protection of the live content, apply the same setup/processing to the asset as if it were a VOD asset before you associate the asset with the live output.
-
-### What about license servers outside Media Services?
-
-Often, customers have invested in a license server farm either in their own datacenter or in one hosted by DRM service providers. With Media Services content protection, you can operate in hybrid mode. Content can be hosted and dynamically protected in Media Services, while DRM licenses are delivered by servers outside Media Services. In this case, consider the following changes:
-
-* STS needs to issue tokens that are acceptable and can be verified by the license server farm. For example, the Widevine license servers provided by Axinom require a specific JWT that contains an entitlement message. You need to have an STS to issue such a JWT.
-* You no longer need to configure license delivery service in Media Services. You need to provide the license acquisition URLs (for PlayReady, Widevine, and FairPlay) when you configure `ContentKeyPolicy`.
-
-> [!NOTE]
-> Widevine is a service provided by Google and subject to the terms of service and privacy policy of Google.
-
-## Media Services v2 vs. v3
-
-### Can I use the Azure portal to manage v3 resources?
-
-Currently, you can use the [Azure portal](https://portal.azure.com/) to:
-
-* Manage [Live Events](live-event-outputs-concept.md) in Media Services v3.
-* View (not manage) v3 [assets](assets-concept.md).
-* [Get info about accessing APIs](./access-api-howto.md).
-
-For all other management tasks (for example, [Transforms and Jobs](transform-jobs-concept.md) and [content protection](drm-content-protection-concept.md)), use the [REST API](/rest/api/medi#sdks).
-
-### Is there an AssetFile concept in v3?
-
-The `AssetFile` concept was removed from the Media Services API to separate Media Services from Storage SDK dependency. Now Azure Storage, not Media Services, keeps the information that belongs in the Storage SDK.
-
-For more information, see [Migrate to Media Services v3](migrate-v-2-v-3-migration-introduction.md).
-
-### Where did client-side storage encryption go?
-
-We now recommend that you use server-side storage encryption (which is on by default). For more information, see [Azure Storage Service Encryption for data at rest](../../storage/common/storage-service-encryption.md).
-
-## Offline streaming
-
-### FairPlay Streaming for iOS
-
-The following frequently asked questions provide assistance with troubleshooting offline FairPlay streaming for iOS.
-
-#### Why does only audio play but not video during offline mode?
-
-This behavior seems to be by design of the sample app. When an alternate audio track is present (which is the case for HLS) during offline mode, both iOS 10 and iOS 11 default to the alternate audio track. To compensate this behavior for FPS offline mode, remove the alternate audio track from the stream. To do this on Media Services, add the dynamic manifest filter **audio-only=false**. In other words, an HLS URL ends with **.ism/manifest(format=m3u8-aapl,audio-only=false)**.
-
-#### Why does it still play audio only without video during offline mode after I add audio-only=false?
-
-Depending on the cache key design for the content delivery network, the content might be cached. Purge the cache.
-
-#### Is FPS offline mode supported on iOS 11 in addition to iOS 10?
-
-Yes. FPS offline mode is supported for iOS 10 and iOS 11.
-
-#### Why can't I find the document "Offline Playback with FairPlay Streaming and HTTP Live Streaming" in the FPS Server SDK?
-
-Since FPS Server SDK version 4, this document was merged into the "FairPlay Streaming Programming Guide."
-
-#### What is the downloaded/offline file structure on iOS devices?
-
-The downloaded file structure on an iOS device looks like the following screenshot. The `_keys` folder stores downloaded FPS licenses, with one store file for each license service host. The `.movpkg` folder stores audio and video content.
-
-The first folder with a name that ends with a dash followed by a number contains video content. The numeric value is the peak bandwidth of the video renditions. The second folder with a name that ends with a dash followed by 0 contains audio content. The third folder named `Data` contains the master playlist of the FPS content. Finally, boot.xml provides a complete description of the `.movpkg` folder content.
-
-![Offline file structure for the FairPlay iOS sample app](media/drm-offline-fairplay-for-ios-concept/offline-fairplay-file-structure.png)
-
-Here's a sample boot.xml file:
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<HLSMoviePackage xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns="http://apple.com/IMG/Schemas/HLSMoviePackage" xsi:schemaLocation="http://apple.com/IMG/Schemas/HLSMoviePackage /System/Library/Schemas/HLSMoviePackage.xsd">
- <Version>1.0</Version>
- <HLSMoviePackageType>PersistedStore</HLSMoviePackageType>
- <Streams>
- <Stream ID="1-4DTFY3A3VDRCNZ53YZ3RJ2NPG2AJHNBD-0" Path="1-4DTFY3A3VDRCNZ53YZ3RJ2NPG2AJHNBD-0" NetworkURL="https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/QualityLevels(127000)/Manifest(aac_eng_2_127,format=m3u8-aapl)">
- <Complete>YES</Complete>
- </Stream>
- <Stream ID="0-HC6H5GWC5IU62P4VHE7NWNGO2SZGPKUJ-310656" Path="0-HC6H5GWC5IU62P4VHE7NWNGO2SZGPKUJ-310656" NetworkURL="https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/QualityLevels(161000)/Manifest(video,format=m3u8-aapl)">
- <Complete>YES</Complete>
- </Stream>
- </Streams>
- <MasterPlaylist>
- <NetworkURL>https://willzhanmswest.streaming.mediaservices.windows.net/e7c76dbb-8e38-44b3-be8c-5c78890c4bb4/MicrosoftElite01.ism/manifest(format=m3u8-aapl,audio-only=false)</NetworkURL>
- </MasterPlaylist>
- <DataItems Directory="Data">
- <DataItem>
- <ID>CB50F631-8227-477A-BCEC-365BBF12BCC0</ID>
- <Category>Playlist</Category>
- <Name>master.m3u8</Name>
- <DataPath>Playlist-master.m3u8-CB50F631-8227-477A-BCEC-365BBF12BCC0.data</DataPath>
- <Role>Master</Role>
- </DataItem>
- </DataItems>
-</HLSMoviePackage>
-```
-
-### Widevine streaming for Android
-
-#### How can I deliver persistent licenses (offline enabled) for some clients/users and non-persistent licenses (offline disabled) for others? Do I have to duplicate the content and use separate content keys?
-
-Because Media Services v3 allows an asset to have multiple `StreamingLocator` instances, you can have:
-
-* One `ContentKeyPolicy` instance with `license_type = "persistent"`, `ContentKeyPolicyRestriction` with claim on `"persistent"`, and its `StreamingLocator`.
-* Another `ContentKeyPolicy` instance with `license_type="nonpersistent"`, `ContentKeyPolicyRestriction` with claim on `"nonpersistent`", and its `StreamingLocator`.
-* Two `StreamingLocator` instances that have different `ContentKey` values.
-
-Depending on business logic of custom STS, different claims are issued in the JWT token. With the token, only the corresponding license can be obtained and only the corresponding URL can be played.
-
-#### What is the mapping between the Widevine and Media Services DRM security levels?
-
-Google's "Widevine DRM Architecture Overview" defines three security levels. However, the [Azure Media Services documentation on the Widevine license template](drm-widevine-license-template-concept.md) outlines
-five security levels (client robustness requirements for playback). This section explains how the security levels map.
-
-Both sets of security levels are defined by Google Widevine. The difference is in usage level: architecture or API. The five security levels are used in the Widevine API. The `content_key_specs` object, which
-contains `security_level`, is deserialized and passed to the Widevine global delivery service by the Azure Media Services Widevine license service. The following table shows the mapping between the two sets of security levels.
-
-| **Security levels defined in Widevine architecture** |**Security levels used in Widevine API**|
-|||
-| **Security Level 1**: All content processing, cryptography, and control are performed within the Trusted Execution Environment (TEE). In some implementation models, security processing might be performed in different chips.|**security_level=5**: The crypto, decoding, and all handling of the media (compressed and uncompressed) must be handled within a hardware-backed TEE.<br/><br/>**security_level=4**: The crypto and decoding of content must be performed within a hardware-backed TEE.|
-**Security Level 2**: Cryptography (but not video processing) is performed within the TEE. Decrypted buffers are returned to the application domain and processed through separate video hardware or software. At Level 2, however, cryptographic information is still processed only within the TEE.| **security_level=3**: The key material and crypto operations must be performed within a hardware-backed TEE. |
-| **Security Level 3**: There's no TEE on the device. Appropriate measures can be taken to protect the cryptographic information and decrypted content on host operating system. A Level 3 implementation might also include a hardware cryptographic engine, but that enhances only performance, not security. | **security_level=2**: Software crypto and an obfuscated decoder are required.<br/><br/>**security_level=1**: Software-based white-box crypto is required.|
-
-#### Why does content download take so long?
-
-There are two ways to improve download speed:
-
-* Enable a content delivery network so that users are more likely to hit that instead of the origin/streaming endpoint for content download. If a user hits a streaming endpoint, each HLS segment or DASH fragment is dynamically packaged and encrypted. Even though this latency is in millisecond scale for each segment or fragment, when you have an hour-long video, the accumulated latency can be large and cause a longer download.
-* Give users the option to selectively download video quality layers and audio tracks instead of all contents. For offline mode, there's no point in downloading all of the quality layers. There are two ways to achieve this:
-
- * Client controlled: The player app automatically selects, or the user selects, the video quality layer and the audio tracks to download.
- * Service controlled: You can use the Dynamic Manifest feature in Azure Media Services to create a (global) filter, which limits HLS playlist or DASH MPD to a single video quality layer and selected audio tracks. Then the download URL presented to users will include this filter.
-
-## Next steps
-
-[Media Services v3 overview](media-services-overview.md)
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
To stay up-to-date with the most recent developments, this article provides you
* Bug fixes * Deprecated functionality
+## July 2021
+
+### .NET SDK (Microsoft.Azure.Management.Media) 5.0.0 release available in NuGet (Coming soon - early September 2021!)
+
+The [Microsoft.Azure.Management.Media](https://www.nuget.org/packages/Microsoft.Azure.Management.Media/5.0.0) .NET SDK version 5.0.0 will be released on NuGet in early September 2021. This version is generated to work with the [2021-06-01 stable](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2021-06-01) version of the Open API (Swagger) ARM Rest API.
+
+For details on changes from the 4.0.0 release, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/mediaservices/Microsoft.Azure.Management.Medi).
+
+#### Changes in the 5.0.0 .NET SDK release (Coming soon - early September 2021!)
+
+* The Media Services account now supports system and user assigned managed identities.
+* Added the **PublicNetworkAccess** option to Media Services accounts. This option can be used with the Private Link feature to allow access only from private networks, blocking all public network access.
+* Basic pass-through - A new live event type is added. "Basic pass-through" live events have capabilities similar to standard pass-through live events, with some input and output restrictions, and are offered at a reduced price.
+* **PresetConfigurations** - allows you to customize the output settings and the minimum and maximum bitrates used for the [Content Aware Encoding presets](./encode-content-aware-concept.md). Constraining the output track counts and resolutions helps you to better estimate and plan for more accurate billing when using Content Aware Encoding. See the sketch after this list.
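+
+The following is a hedged sketch of what a constrained Content Aware Encoding preset might look like with the 5.0.0 models. The property names on **PresetConfigurations** are an assumption based on the 2021-06-01 API version and may differ in the final release:
+
+```csharp
+using Microsoft.Azure.Management.Media.Models;
+
+// Assumed 2021-06-01 model shapes: a built-in Content Aware Encoding preset
+// constrained to a bitrate range, a resolution range, and a layer count.
+var preset = new BuiltInStandardEncoderPreset
+{
+    PresetName = EncoderNamedPreset.ContentAwareEncoding,
+    Configurations = new PresetConfigurations
+    {
+        MinBitrateBps = 600000,    // don't generate layers below ~0.6 Mbps
+        MaxBitrateBps = 3000000,   // cap the top layer at ~3 Mbps
+        MinHeight = 360,
+        MaxHeight = 720,
+        MaxLayers = 3              // limit the number of output video tracks
+    }
+};
+```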
+
+#### Breaking changes in the 5.0.0 .NET SDK release
+
+* **ApiErrorException** has been replaced with **ErrorResponseException** to be consistent with all other Azure SDKs. The exception body has not changed. See the sketch after this list.
+* The Media service constructor has a new optional PublicNetworkAccess parameter after the KeyDelivery parameter.
+* The Type property in MediaServiceIdentity has been changed from the ManagedIdentityType enum to a string, to accommodate multiple comma-separated types. Valid values are `SystemAssigned`, `UserAssigned`, or `SystemAssigned,UserAssigned`.
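+
+For example, code that caught the old exception type would now look something like the following hedged sketch (the `client` object and the generated exception members are assumptions based on standard Azure management SDK shapes):
+
+```csharp
+using Microsoft.Azure.Management.Media.Models;
+
+try
+{
+    // Any management-plane call, for example fetching an asset.
+    var asset = await client.Assets.GetAsync(resourceGroup, accountName, "myAsset");
+}
+catch (ErrorResponseException ex) // was ApiErrorException before 5.0.0
+{
+    // The exception body is unchanged, so existing error handling carries over.
+    Console.WriteLine(ex.Response?.StatusCode);
+    Console.WriteLine(ex.Body?.Error?.Message);
+}
+```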
## June 2021
media-services Stream Live Streaming Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-streaming-concept.md
The asset that the live output is archiving to, automatically becomes an on-dema
- [States and billing](live-event-states-billing-concept.md) - [Latency](live-event-latency-reference.md)
-## Live streaming questions
+## Live streaming FAQ
-See the [live streaming questions](questions-collection.md#live-streaming) article.
+See the [live streaming questions in the FAQ](frequently-asked-questions.yml).
media-services Stream Streaming Locators Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-streaming-locators-concept.md
See [Filters: associate with Streaming Locators](filters-concept.md#associating-
## Filter, order, page Streaming Locator entities
-See [Filtering, ordering, paging of Media Services entities](filter-order-page-entitites-how-to.md).
+See [Filtering, ordering, paging of Media Services entities](filter-order-page-entities-how-to.md).
## List Streaming Locators by Asset name
media-services Stream Streaming Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-streaming-policy-concept.md
Response:
## Filtering, ordering, paging
-See [Filtering, ordering, paging of Media Services entities](filter-order-page-entitites-how-to.md).
+See [Filtering, ordering, paging of Media Services entities](filter-order-page-entities-how-to.md).
## Next steps
media-services Transform Jobs Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/transform-jobs-concept.md
Check out the [Azure Media Services community](media-services-community.md) arti
## See also * [Error codes](/rest/api/media/jobs/get#joberrorcode)
-* [Filtering, ordering, paging of Media Services entities](filter-order-page-entitites-how-to.md)
+* [Filtering, ordering, paging of Media Services entities](filter-order-page-entities-how-to.md)
## Next steps
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-network-connectivity.md
ms. Previously updated : 06/15/2021 Last updated : 08/19/2021
The private endpoint details and private link resource FQDNs' information is ava
An illustrative example for DNS resolution of the storage account private link FQDN. -- Enter _nslookup<storage-account-name>_.blob.core.windows.net. Replace <storage-account-name> with the name of the storage account used for Azure Migrate.
+- Enter `nslookup <storage-account-name>.blob.core.windows.net`. Replace `<storage-account-name>` with the name of the storage account used for Azure Migrate.
You'll receive a message like this:
In addition to the URLs above, the appliance needs access to the following URLs
|*.portal.azure.com | Navigate to the Azure portal |*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> | Used for access control and identity management by Azure Active Directory |management.azure.com | For triggering Azure Resource Manager deployments
-|*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring
+|*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
|aka.ms/* (optional) | Allow access to aka links; used to download and install the latest updates for appliance services |download.microsoft.com/download | Allow downloads from Microsoft download center
If the DNS resolution is incorrect, follow these steps:
2. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration). 3. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
-After you've verified the connectivity, retry the discovery process.
+After you've verified the connectivity, retry the discovery process.
+
+### Import/export request fails with the error "403: This request is not authorized to perform this operation"
+
+The export/import/download report request fails with the error *"403: This request is not authorized to perform this operation"* for projects with private endpoint connectivity.
+
+#### Possible causes:
+This error may occur if the export/import/download request was initiated from an unauthorized network, that is, from a client that isn't connected to the Azure Migrate service (the Azure virtual network) over a private network.
+
+#### Remediation
+**Option 1** *(recommended)*:
+
+To resolve this error, retry the import/export/download operation from a client residing in a virtual network that is connected to Azure over a private link. You can open the Azure portal in your on-premises network or your appliance VM and retry the operation.
+
+**Option 2**:
+
+The import/export/download request makes a connection to a storage account for uploading/downloading reports. You can also change the networking settings of the storage account used for the import/export/download operation and allow access to the storage account via other networks (public networks).
+
+To set up the storage account for public endpoint connectivity:
+
+1. **Locate the storage account**: The storage account name is available on the Azure Migrate: Discovery and Assessment properties page. The storage account name will have the suffix *usa*.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png" alt-text="Snapshot of the Azure Migrate: Discovery and Assessment properties page.":::
+
+2. Navigate to the storage account and edit the storage account networking properties to allow access from all/other networks.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/networking-firewall-virtual-networks.png" alt-text="Snapshot of storage account networking properties.":::
+
+3. Alternatively, you can limit the access to selected networks and add the public IP address of the client from where you're trying to access the Azure portal.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/networking-firewall.png" alt-text="Snapshot of adding the public IP address of the client.":::
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-azure-ad-authentication.md
When using Azure AD authentication, there are two Administrator accounts for the
## Permissions
-To create new users that can authenticate with Azure AD, you must be the designed Azure AD administrator. This user is assigned by configuring the Azure AD Administrator account for a specific Azure Database for MySQL server.
+To create new users that can authenticate with Azure AD, you must be the designated Azure AD administrator. This user is assigned by configuring the Azure AD Administrator account for a specific Azure Database for MySQL server.
To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in [Configure and Login with Azure AD for Azure Database for MySQL](howto-configure-sign-in-azure-ad-authentication.md).
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/whats-new.md
This release of Azure Database for MySQL - Flexible Server includes the followin
The Point-In-Time Restore experience for the service now enables customers to configure the availability zone. Co-locating the database servers and standby applications in the same zone reduces latencies and allows customers to better prepare for disaster recovery situations and "zone down" scenarios. [Learn more](https://aka.ms/standby-selection).
+- **validate_password and caching_sha2_password plugin available in private preview**
+
+ Flexible Server now supports enabling the validate_password and caching_sha2_password plugins in private preview. To get access, please email us at AskAzureDBforMySQL@service.microsoft.com.
+ - **Availability in four additional Azure regions** The public preview of Azure Database for MySQL - Flexible Server is now available in the following Azure regions [Learn more](overview.md#azure-regions):
If you have questions about or suggestions for working with Azure Database for M
- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). - To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.-- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597982-azure-database-for-mysql).
+- To provide feedback or to request new features, please email us at AskAzureDBforMySQL@service.microsoft.com.
## Next steps
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-grafana.md
You use Logstash to flatten the JSON formatted flow logs to a flow tuple level.
"systemId" => "%{[records][systemId]}" "category" => "%{[records][category]}" "resourceId" => "%{[records][resourceId]}"
- "operationName" => "%{[records][operationName}}"
- "Version" => "%{[records][properties][Version}}"
+ "operationName" => "%{[records][operationName]}"
+ "Version" => "%{[records][properties][Version]}"
"rule" => "%{[records][properties][flows][rule]}" "mac" => "%{[records][properties][flows][flows][mac]}" }
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/nsg-flow-logs-policy-portal.md
## Overview Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, we will use two built-in policies available for NSG Flow Logs to manage your flow logs setup. The first policy flags any NSGs without flow logs enabled. The second policy automatically deploys Flow logs for NSGs without Flow logs enabled.
-If you are creating an Azure policy for the first time, you can read through:
+If you are creating an Azure Policy definition for the first time, you can read through:
- [Azure Policy overview](../governance/policy/overview.md) -- [Tutorial for creating policy](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
+- [Tutorial for creating an Azure Policy assignment](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
## Locate the policies
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics-policy-portal.md
Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, we will cover three built-in policies available for [Traffic Analytics](./traffic-analytics.md) to manage your setup.
-If you are creating an Azure policy for the first time, you can read through:
+If you are creating an Azure Policy definition for the first time, you can read through:
- [Azure Policy overview](../governance/policy/overview.md) -- [Tutorial for creating policy](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
+- [Tutorial for creating an Azure Policy assignment](../governance/policy/assign-policy-portal.md#create-a-policy-assignment).
## Locate the policies
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
postgresql Howto Hyperscale Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-logging.md
- Previously updated : 7/13/2020+ Last updated : 8/20/2021 # Logs in Azure Database for PostgreSQL - Hyperscale (Citus)
-PostgreSQL logs are available on every node of a Hyperscale (Citus) server
-group. You can ship logs to a storage server, or to an analytics service. The
-logs can be used to identify, troubleshoot, and repair configuration errors and
-suboptimal performance.
+PostgreSQL database server logs are available for every node of a Hyperscale
+(Citus) server group. You can ship logs to a storage server, or to an analytics
+service. The logs can be used to identify, troubleshoot, and repair
+configuration errors and suboptimal performance.
-## Accessing logs
+## Capturing logs
To access PostgreSQL logs for a Hyperscale (Citus) coordinator or worker node,
-open the node in the Azure portal:
--
-For the selected node, open **Diagnostic settings**, and click **+ Add
-diagnostic setting**.
+you have to enable the PostgreSQLLogs diagnostic setting. In the Azure
+portal, open **Diagnostic settings**, and select **+ Add diagnostic setting**.
:::image type="content" source="media/howto-hyperscale-logging/diagnostic-settings.png" alt-text="Add diagnostic settings button":::
-Pick a name for the new diagnostics settings, and check the **PostgreSQLLogs**
-box. Choose which destination(s) should receive the logs.
+Pick a name for the new diagnostic setting, check the **PostgreSQLLogs** box,
+and check the **Send to Log Analytics workspace** box. Then select **Save**.
:::image type="content" source="media/howto-hyperscale-logging/diagnostic-create-setting.png" alt-text="Choose PostgreSQL logs":::
+## Viewing logs
+
+To view and filter the logs, we'll use Kusto queries. Open **Logs** in the
+Azure portal for your Hyperscale (Citus) server group. If a query selection
+dialog appears, close it:
++
+You'll then see an input box to enter queries.
++
+Enter the following query and select the **Run** button.
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+```
+
+The above query lists log messages from all nodes, along with their severity
+and timestamp. You can add `where` clauses to filter the results. For instance,
+to see errors from the coordinator node only, filter the error level and server
+name like this:
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+| where LogicalServerName_s == 'example-server-group-c'
+| where errorLevel_s == 'ERROR'
+```
+
+Replace the server name in the above example with the name of your server. The
+coordinator node name has the suffix `-c` and worker nodes are named
+with a suffix of `-w0`, `-w1`, and so on.
+ ## Next steps - [Get started with log analytics queries](../azure-monitor/logs/log-analytics-tutorial.md)-- Learn about [Azure event hubs](../event-hubs/event-hubs-about.md)
+- Learn about [Azure event hubs](../event-hubs/event-hubs-about.md)
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/disable-private-endpoint-network-policy.md
$net =@{
} $vnet = Get-AzVirtualNetwork @net
-($vnet | Select -ExpandProperty subnets).PrivateEndpointNetworkPolicies = "Disabled"
+($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'default'}).PrivateEndpointNetworkPolicies = "Disabled"
$vnet | Set-AzVirtualNetwork ```
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
The private link resource owner can do the following actions over a private endp
> Only a private endpoint in an approved state can send traffic to a given private link resource. ### Connecting using Alias
-Alias is a unique moniker that is generated when the service owner creates the private link service behind a standard load balancer. Service owner can share this Alias with their consumers offline. Consumers can request a connection to private link service using either the resource URI or the Alias. If you want to connect using. Alias, you must create private endpoint using manual connection approval method. For using manual connection approval method, set manual request parameter to true during private endpoint create flow. Look at [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) for details.
+Alias is a unique moniker that is generated when the service owner creates the private link service behind a standard load balancer. The service owner can share this Alias with their consumers offline. Consumers can request a connection to the private link service using either the resource URI or the Alias. If you want to connect using the Alias, you must create the private endpoint using the manual connection approval method. To use the manual connection approval method, set the manual request parameter to true during the private endpoint create flow. See [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) for details.
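As an illustrative sketch of the manual approval flow, the following PowerShell requests a connection by Alias; all names, the region, and the Alias value are placeholders, and the cmdlet references linked above are authoritative for the exact parameters.

```powershell
# Sketch: request a private endpoint connection by Alias with manual approval.
# All names and the Alias value below are placeholders.
$subnet = (Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myRG').Subnets |
    Where-Object { $_.Name -eq 'default' }

$connection = New-AzPrivateLinkServiceConnection -Name 'myConnection' `
    -PrivateLinkServiceId 'myPls.00000000-0000-0000-0000-000000000000.westus2.azure.privatelinkservice' `
    -RequestMessage 'Please approve my connection'

New-AzPrivateEndpoint -Name 'myPrivateEndpoint' -ResourceGroupName 'myRG' `
    -Location 'westus2' -Subnet $subnet `
    -PrivateLinkServiceConnection $connection -ByManualRequest
```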
## DNS configuration When connecting to a private link resource using a fully qualified domain name (FQDN) as part of the connection string, it's important to correctly configure your DNS settings to resolve to the given private IP address. Existing Azure services might already have a DNS configuration to use when connecting over a public endpoint. This configuration must be overwritten to connect using your private endpoint.
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-name-resolution.md
Use any of the following options to set up internal name resolution when using
To enable internal name resolution, you can deploy the required Azure DNS Zones inside the Azure subscription where your Azure Purview account is deployed.
-When you create portal and account private endpoints, the DNS CNAME resource records for Azure Purview is automatically updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md) that corresponds to the `privatelink` subdomain for Azure Purview as privatelink.purview.azure.com including DNS A resource records for the private endpoints. If you enable ingestion private endpoints, additional DNS zones are required for managed resources.
+When you create portal and account private endpoints, the DNS CNAME resource records for Azure Purview are automatically updated to an alias in a subdomain with the prefix `privatelink`.
+By default, we also create a [private DNS zone](../dns/private-dns-overview.md) that corresponds to the `privatelink` subdomain for Azure Purview, `privatelink.purview.azure.com`, including DNS A resource records for the private endpoints.
+If you enable ingestion private endpoints, additional DNS zones are required for the managed resources.
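To see this CNAME redirection for yourself, you can resolve the account endpoint from a host inside the virtual network. A minimal check, using the `PurviewA` example account name from this article:

```powershell
# Sketch: inspect the CNAME chain for the Purview account endpoint.
# 'PurviewA' is the example account name used throughout this article.
Resolve-DnsName -Name 'PurviewA.purview.azure.com'
# From inside the virtual network, this should resolve through
# PurviewA.privatelink.purview.azure.com to the private endpoint IP address.
```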
-The following table shows an example of Azure Private DNS zones A Records that can be deployed as part of configuration of private endpoint for an Azure Purview account:
+The following table shows an example of Azure Private DNS zones and DNS A Records that are deployed as part of configuration of private endpoint for an Azure Purview account if you enable _Private DNS integration_ during the deployment:
-Private endpoint |Private endpoint associated to |DNS Zone |A Record (example) |
+Private endpoint |Private endpoint associated to |DNS Zone (new) |A Record (example) |
|||||
|Account |Azure Purview |`privatelink.purview.azure.com` |PurviewA |
|Portal |Azure Purview account |`privatelink.purview.azure.com` |Web |
The DNS resource records for PurviewA, when resolved in the virtual network host
| `PurviewA.privatelink.purview.azure.com` | A | \<private endpoint IP address\> |
| `Web.purview.azure.com` | CNAME | \<private endpoint IP address\> |
-
## Option 2 - Use existing Azure Private DNS Zones
-During the deployment of Azure purview account private endpoints, you can choose an existing Azure Private DNS Zones. This is common case for organizations where private endpoint is used for other services in Azure. Your organization may also have a central or hub subscription for all Azure Private DNS Zones. In this case, during the deployment of private endpoints, make sure you select the existing DNS zones instead of creating new ones.
+During the deployment of Azure Purview private endpoints, you can choose _Private DNS integration_ using existing Azure Private DNS Zones. This is a common case for organizations where private endpoints are used for other services in Azure. In this case, during the deployment of private endpoints, make sure you select the existing DNS zones instead of creating new ones.
+
+This also applies if your organization uses a central or hub subscription for all Azure Private DNS Zones.
+
+The following list shows the required Azure DNS zones and A records for Purview private endpoints:
+
+> [!NOTE]
+> Replace `PurviewA`, `scaneastusabcd1234`, and `atlas-12345678-1234-1234-abcd-123456789abc` with the corresponding Azure resource names in your environment. For example, instead of `scaneastusabcd1234`, use the name of your Azure Purview managed storage account.
+
+Private endpoint |Private endpoint associated to |DNS Zone (existing) |A Record (example) |
+|||||
+|Account |Azure Purview |`privatelink.purview.azure.com` |PurviewA |
+|Portal |Azure Purview account |`privatelink.purview.azure.com` |Web |
+|Ingestion |Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
For more information, see [Virtual network workloads without custom DNS server](../private-link/private-endpoint-dns.md#virtual-network-workloads-without-custom-dns-server) and [On-premises workloads using a DNS forwarder](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) scenarios in [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). :::image type="content" source="media/catalog-private-link/purview-name-resolution-diagram.png" alt-text="Diagram that shows Azure Purview name resolution"lightbox="media/catalog-private-link/purview-name-resolution-diagram.png":::
+If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Azure Purview endpoint to the private endpoint IP address. Configure your DNS server to delegate your Private Link subdomain to the private DNS zone for the virtual network. Or, configure the A records for `PurviewA.privatelink.purview.azure.com` with the private endpoint IP address.
Once the private endpoint deployment is completed, make sure there is a [Virtual network link](../dns/private-dns-virtual-network-links.md) for name resolution between the corresponding Azure Private DNS zone and the Azure virtual network where the private endpoint was deployed.
+For more information, see [Azure private endpoint DNS configuration](../private-link/private-endpoint-dns.md).
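If you need to create such a virtual network link yourself, a sketch with Azure PowerShell follows; the zone, virtual network, and link names are placeholder values.

```powershell
# Sketch: link an existing private DNS zone to the virtual network that hosts
# the private endpoint. All names below are placeholder values.
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myRG'

New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'myRG' `
    -ZoneName 'privatelink.purview.azure.com' `
    -Name 'purview-vnet-link' `
    -VirtualNetworkId $vnet.Id
```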
+ ## Option 3 - Use your own DNS Servers
-If you do not use DNS forwarders and instead you manage A records directly in your on-premises DNS servers to resolve the endpoints through their private IP addresses, you might need to create additional A records in your DNS servers.
+If you do not use DNS forwarders and instead manage A records directly in your on-premises DNS servers to resolve the endpoints through their private IP addresses, you might need to create the following A records in your DNS servers.
+
+> [!NOTE]
+> Replace `PurviewA`, `scaneastusabcd1234`, and `atlas-12345678-1234-1234-abcd-123456789abc` with the corresponding Azure resource names in your environment. For example, instead of `scaneastusabcd1234`, use the name of your Azure Purview managed storage account.
| Name | Type | Value |
| - | -- | - |
+| `web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview> |
+| `scaneastusabcd1234.blob.core.windows.net` | A | \<blob-ingestion private endpoint IP address of Azure Purview> |
+| `scaneastusabcd1234.queue.core.windows.net` | A | \<queue-ingestion private endpoint IP address of Azure Purview> |
+| `atlas-12345678-1234-1234-abcd-123456789abc.servicebus.windows.net`| A | \<namespace-ingestion private endpoint IP address of Azure Purview> |
| `PurviewA.Purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> |
| `PurviewA.scan.Purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> |
| `PurviewA.catalog.Purview.azure.com` | A | \<account private endpoint IP address of Azure Purview\> |
If you do not use DNS forwarders and instead you manage A records directly in yo
| `PurviewA.policy.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> |
| `PurviewA.sensitivity.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Azure Purview\> |
-If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the Azure Purview endpoint to the private endpoint IP address. Configure your DNS server to delegate your Private Link subdomain to the private DNS zone for the virtual network. Or, configure the A records for `PurviewA.privatelink.purview.azure.com` with the private endpoint IP address.
-
-For more information, see [Azure private endpoint DNS configuration](../private-link/private-endpoint-dns.md).
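If your own DNS servers run Windows Server DNS, the A records above can be created with the DnsServer module. A sketch under that assumption, with placeholder zone names and IP addresses:

```powershell
# Sketch: create A records for the Purview endpoints on a Windows Server DNS
# server. Zone names, record names, and IP addresses are placeholder values,
# and assume you host zones such as 'purview.azure.com' internally.
Add-DnsServerResourceRecordA -ZoneName 'purview.azure.com' `
    -Name 'PurviewA' -IPv4Address '10.1.0.5'
Add-DnsServerResourceRecordA -ZoneName 'blob.core.windows.net' `
    -Name 'scaneastusabcd1234' -IPv4Address '10.1.0.6'

# Verify resolution from a client that uses this DNS server:
Resolve-DnsName -Name 'PurviewA.purview.azure.com'
```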
-
## Verify and test DNS name resolution and connectivity

1. If you are using Azure Private DNS Zones, make sure the following DNS Zones and the corresponding A records are created in your Azure Subscription:
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-and-manage-collections.md
> [!NOTE] > At this time, this guide only applies to Purview instances created **on or after August 18, 2021**. Instances created before August 18 are able to create collections, but do not manage permissions through those collections. For information on creating a collection for a Purview instances created before August 18, see our [**legacy collection guide**](#legacy-collection-guide) at the bottom of the page.
+>
+> All legacy accounts will be upgraded automatically in the coming weeks. You will receive an email notification when your Purview account is upgraded. When the account is upgraded, all assigned permissions will be automatically redeployed to the root collection.
Collections in Purview can be used to organize assets and sources by your business's flow, but they are also the tool used to manage access across Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
purview How To Lineage Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-azure-synapse-analytics.md
Currently, Azure Purview captures runtime lineage from the following Azure Synap
### Step 1: Connect Azure Synapse workspace to your Purview account
-You can connect an Azure Sysnpase workspace to Purview, and the connection enables Azure Synapse to push lineage information to Purview. Follow the steps in [Connect an Azure Purview Account into Synapse](../synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md). Multiple Azure Synapse workspaces can connect to a single Azure Purview account for holistic lineage tracking.
+You can connect an Azure Synapse workspace to Purview, and the connection enables Azure Synapse to push lineage information to Purview. Follow the steps in [Connect an Azure Purview Account into Synapse](../synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md). Multiple Azure Synapse workspaces can connect to a single Azure Purview account for holistic lineage tracking.
### Step 2: Run pipeline in Azure Synapse workspace
Select the Synapse account -> pipeline -> activity, you can view the lineage inf
[Catalog lineage user guide](catalog-lineage-user-guide.md)
-[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
+[Link to Azure Data Share for lineage](how-to-link-azure-data-share.md)
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/quickstart-create-collection.md
> [!NOTE] > At this time, this quickstart only applies for Purview instances created on or after August 18, 2021. Instances created before August 18 are able to create collections, but do not manage permissions through those collections. For information on creating a collection for a Purview instance created before August 18, see our [**legacy collection guide**](#legacy-collection-guide) at the bottom of the page.
+>
+> All legacy accounts will be upgraded automatically in the coming weeks. You will receive an email notification when your Purview account is upgraded. When the account is upgraded, all assigned permissions will be automatically redeployed to the root collection.
Collections are Purview's tool to manage ownership and access control across assets, sources, and information. They also organize your sources and assets into categories that are customized to match your management experience with your data. This guide will take you through setting up your first collection and collection admin to prepare your Purview environment for your organization.
purview Reference Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-purview-glossary.md
A regular expression included in a classification rule that represents the colum
## Contact An individual who is associated with an entity in the data catalog ## Control plane operation
-Operations that manage resources in your subscription, such as role-based access control and Azure policy, that are sent to the Azure Resource Manager end point.
+Operations that manage resources in your subscription, such as role-based access control and Azure Policy, and that are sent to the Azure Resource Manager endpoint.
## Credential A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group for the purpose of granting access to a data asset.  ## Data catalog
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
Previously updated : 08/04/2021 Last updated : 08/13/2021
The following table provides a brief description of each built-in role. Click th
> | [Data Purger](#data-purger) | Delete private data from a Log Analytics workspace. | 150f5e0c-0603-4f03-8c7f-cf70034c4e90 | > | [HDInsight Cluster Operator](#hdinsight-cluster-operator) | Lets you read and modify HDInsight cluster configurations. | 61ed4efc-fab3-44fd-b111-e24485cc132a | > | [HDInsight Domain Services Contributor](#hdinsight-domain-services-contributor) | Can Read, Create, Modify and Delete Domain Services related operations needed for HDInsight Enterprise Security Package | 8d8d5a11-05d3-4bda-a417-a08778121c7c |
-> | [Log Analytics Contributor](#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; creating and configuring Automation accounts; adding solutions; and configuring Azure diagnostics on all Azure resources. | 92aaf0da-9dab-42b6-94a3-d43ce8d16293 |
+> | [Log Analytics Contributor](#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; adding solutions; and configuring Azure diagnostics on all Azure resources. | 92aaf0da-9dab-42b6-94a3-d43ce8d16293 |
> | [Log Analytics Reader](#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. | 73c42c96-874c-492b-b04d-ab87d138a893 |
> | [Purview Data Curator](#purview-data-curator) | The Microsoft.Purview data curator can create, read, modify and delete catalog data objects and establish relationships between objects. This role is in preview and subject to change. | 8a3c2885-9b38-4fd2-9d99-91af537c1347 |
> | [Purview Data Reader](#purview-data-reader) | The Microsoft.Purview data reader can read catalog data objects. This role is in preview and subject to change. | ff100721-1b9d-43d8-af52-42b69c1272db |
The following table provides a brief description of each built-in role. Click th
> | [Cost Management Reader](#cost-management-reader) | Can view cost data and configuration (e.g. budgets, exports) | 72fafb9e-0641-4937-9268-a91bfd8191a3 | > | [Hierarchy Settings Administrator](#hierarchy-settings-administrator) | Allows users to edit and delete Hierarchy Settings | 350f8d15-c687-4448-8ae1-157740a3936d | > | [Kubernetes Cluster - Azure Arc Onboarding](#kubernetes-clusterazure-arc-onboarding) | Role definition to authorize any user/service to create connectedClusters resource | 34e09817-6cbe-4d01-b1a2-e0eac5743d41 |
+> | [Kubernetes Extension Contributor](#kubernetes-extension-contributor) | Can create, update, get, list and delete Kubernetes Extensions, and get extension async operations | 85cb6faf-e071-4c9b-8136-154b5a04f717 |
> | [Managed Application Contributor Role](#managed-application-contributor-role) | Allows for creating managed application resources. | 641177b8-a67a-45b9-a033-47bc880bb21e | > | [Managed Application Operator Role](#managed-application-operator-role) | Lets you read and perform actions on Managed Application resources | c7393b34-138c-406f-901b-d8cf2b17e6ae | > | [Managed Applications Reader](#managed-applications-reader) | Lets you read resources in a managed app and request JIT access. | b9331d33-8a36-4f8c-b097-4f54124fdb44 |
Can Read, Create, Modify and Delete Domain Services related operations needed fo
### Log Analytics Contributor
-Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; creating and configuring Automation accounts; adding solutions; and configuring Azure diagnostics on all Azure resources. [Learn more](../azure-monitor/logs/manage-access.md)
+Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; adding solutions; and configuring Azure diagnostics on all Azure resources. [Learn more](../azure-monitor/logs/manage-access.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | */read | Read resources of all types, except secrets. |
-> | [Microsoft.Automation](resource-provider-operations.md#microsoftautomation)/automationAccounts/* | |
> | [Microsoft.ClassicCompute](resource-provider-operations.md#microsoftclassiccompute)/virtualMachines/extensions/* | | > | [Microsoft.ClassicStorage](resource-provider-operations.md#microsoftclassicstorage)/storageAccounts/listKeys/action | Lists the access keys for the storage accounts. | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/extensions/* | |
Log Analytics Contributor can read all monitoring data and edit monitoring setti
"assignableScopes": [ "/" ],
- "description": "Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; creating and configuring Automation accounts; adding solutions; and configuring Azure diagnostics on all Azure resources.",
+ "description": "Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; adding solutions; and configuring Azure diagnostics on all Azure resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/92aaf0da-9dab-42b6-94a3-d43ce8d16293", "name": "92aaf0da-9dab-42b6-94a3-d43ce8d16293", "permissions": [ { "actions": [ "*/read",
- "Microsoft.Automation/automationAccounts/*",
"Microsoft.ClassicCompute/virtualMachines/extensions/*", "Microsoft.ClassicStorage/storageAccounts/listKeys/action", "Microsoft.Compute/virtualMachines/extensions/*",
Azure Sentinel Contributor [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/* | | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
Azure Sentinel Contributor [Learn more](../sentinel/roles.md)
"Microsoft.OperationalInsights/workspaces/query/read", "Microsoft.OperationalInsights/workspaces/query/*/read", "Microsoft.OperationalInsights/workspaces/dataSources/read",
+ "Microsoft.OperationalInsights/querypacks/*/read",
"Microsoft.Insights/workbooks/*", "Microsoft.Insights/myworkbooks/read", "Microsoft.Authorization/*/read",
Azure Sentinel Reader [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get exiting OMS solution | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/read | Read a workbook | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook |
Azure Sentinel Reader [Learn more](../sentinel/roles.md)
"Microsoft.OperationsManagement/solutions/read", "Microsoft.OperationalInsights/workspaces/query/read", "Microsoft.OperationalInsights/workspaces/query/*/read",
+ "Microsoft.OperationalInsights/querypacks/*/read",
"Microsoft.OperationalInsights/workspaces/dataSources/read", "Microsoft.Insights/workbooks/read", "Microsoft.Insights/myworkbooks/read",
Azure Sentinel Responder [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/read | Read a workbook | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
Azure Sentinel Responder [Learn more](../sentinel/roles.md)
"Microsoft.OperationalInsights/workspaces/query/read", "Microsoft.OperationalInsights/workspaces/query/*/read", "Microsoft.OperationalInsights/workspaces/dataSources/read",
+ "Microsoft.OperationalInsights/querypacks/*/read",
"Microsoft.Insights/workbooks/read", "Microsoft.Insights/myworkbooks/read", "Microsoft.Authorization/*/read",
View and update permissions for Security Center. Same permissions as the Securit
> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/* | | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | **NotActions** | |
-> | *none* | |
+> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/defenderSettings/write | Creates or updates IoT Defender Settings |
> | **DataActions** | | > | *none* | | > | **NotDataActions** | |
View and update permissions for Security Center. Same permissions as the Securit
"Microsoft.IoTSecurity/*", "Microsoft.Support/*" ],
- "notActions": [],
+ "notActions": [
+ "Microsoft.IoTSecurity/defenderSettings/write"
+ ],
"dataActions": [], "notDataActions": [] }
View permissions for Security Center. Can view recommendations, alerts, a securi
> | [Microsoft.Security](resource-provider-operations.md#microsoftsecurity)/iotSensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors | > | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/defenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information | > | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/defenderSettings/downloadManagerActivation/action | Download manager activation file |
-> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/sensors/* | |
-> | [Microsoft.IoTSecurity](resource-provider-operations.md#microsoftiotsecurity)/onPremiseSensors/* | |
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | **NotActions** | | > | *none* | |
View permissions for Security Center. Can view recommendations, alerts, a securi
"Microsoft.Security/iotSensors/downloadResetPassword/action", "Microsoft.IoTSecurity/defenderSettings/packageDownloads/action", "Microsoft.IoTSecurity/defenderSettings/downloadManagerActivation/action",
- "Microsoft.IoTSecurity/sensors/*",
- "Microsoft.IoTSecurity/onPremiseSensors/*",
"Microsoft.Management/managementGroups/read" ], "notActions": [],
Role definition to authorize any user/service to create connectedClusters resour
} ```
+### Kubernetes Extension Contributor
+
+Can create, update, get, list and delete Kubernetes Extensions, and get extension async operations
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/write | Creates or updates extension resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/delete | Deletes extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/operations/read | Gets Async Operation status. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can create, update, get, list and delete Kubernetes Extensions, and get extension async operations",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/85cb6faf-e071-4c9b-8136-154b5a04f717",
+ "name": "85cb6faf-e071-4c9b-8136-154b5a04f717",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.KubernetesConfiguration/extensions/write",
+ "Microsoft.KubernetesConfiguration/extensions/read",
+ "Microsoft.KubernetesConfiguration/extensions/delete",
+ "Microsoft.KubernetesConfiguration/extensions/operations/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Kubernetes Extension Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
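To grant this role, you can assign it by name or by its definition ID. A sketch, assuming a placeholder principal object ID and resource group scope:

```powershell
# Sketch: assign the Kubernetes Extension Contributor role at resource group
# scope. The object ID and scope below are placeholder values.
New-AzRoleAssignment `
    -ObjectId '00000000-0000-0000-0000-000000000000' `
    -RoleDefinitionName 'Kubernetes Extension Contributor' `
    -Scope '/subscriptions/{subscriptionId}/resourceGroups/myRG'
```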
+ ### Managed Application Contributor Role Allows for creating managed application resources.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/13/2021 Last updated : 08/20/2021
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 08/04/2021 Last updated : 08/13/2021
Click the resource provider name in the following table to see the list of opera
| [Microsoft.GuestConfiguration](#microsoftguestconfiguration) | | [Microsoft.HybridCompute](#microsofthybridcompute) | | [Microsoft.Kubernetes](#microsoftkubernetes) |
+| [Microsoft.KubernetesConfiguration](#microsoftkubernetesconfiguration) |
| [Microsoft.ManagedServices](#microsoftmanagedservices) | | [Microsoft.Management](#microsoftmanagement) | | [Microsoft.PolicyInsights](#microsoftpolicyinsights) |
Azure service: [Virtual Machines](../virtual-machines/index.yml), [Virtual Machi
> | Microsoft.Compute/locations/capsOperations/read | Gets the status of an asynchronous Caps operation | > | Microsoft.Compute/locations/cloudServiceOsFamilies/read | Read any guest OS Family that can be specified in the XML service configuration (.cscfg) for a Cloud Service. | > | Microsoft.Compute/locations/cloudServiceOsVersions/read | Read any guest OS Version that can be specified in the XML service configuration (.cscfg) for a Cloud Service. |
+> | Microsoft.Compute/locations/diagnosticOperations/read | Gets status of a Compute Diagnostic operation |
+> | Microsoft.Compute/locations/diagnostics/diskInspection/action | Create a request for executing DiskInspection Diagnostic |
+> | Microsoft.Compute/locations/diagnostics/read | Gets the properties of all available Compute Diagnostics |
+> | Microsoft.Compute/locations/diagnostics/diskInspection/read | Gets the properties of DiskInspection Diagnostic |
> | Microsoft.Compute/locations/diskOperations/read | Gets the status of an asynchronous Disk operation | > | Microsoft.Compute/locations/logAnalytics/getRequestRateByInterval/action | Create logs to show total requests by time interval to aid throttling diagnostics. | > | Microsoft.Compute/locations/logAnalytics/getThrottledRequests/action | Create logs to show aggregates of throttled requests grouped by ResourceName, OperationName, or the applied Throttle Policy. |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/azurefirewalls/providers/Microsoft.Insights/logDefinitions/read | Gets the events for Azure Firewall | > | Microsoft.Network/azurefirewalls/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for Azure Firewall | > | Microsoft.Network/azureWebCategories/read | Gets Azure WebCategories |
-> | Microsoft.Network/azureWebCategories/getwebcategory/action | Looks up WebCategory |
-> | Microsoft.Network/azureWebCategories/classifyUnknown/action | Classifies Unknown WebCategory |
-> | Microsoft.Network/azureWebCategories/reclassify/action | Reclassifies WebCategory |
-> | Microsoft.Network/azureWebCategories/getMiscategorizationStatus/action | Gets Miscategorization Status |
> | Microsoft.Network/bastionHosts/read | Gets a Bastion Host | > | Microsoft.Network/bastionHosts/write | Create or Update a Bastion Host | > | Microsoft.Network/bastionHosts/delete | Deletes a Bastion Host |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/ddosProtectionPlans/ddosProtectionPlanProxies/read | Gets a DDoS Protection Plan Proxy definition | > | Microsoft.Network/ddosProtectionPlans/ddosProtectionPlanProxies/write | Creates a DDoS Protection Plan Proxy or updates and existing DDoS Protection Plan Proxy | > | Microsoft.Network/ddosProtectionPlans/ddosProtectionPlanProxies/delete | Deletes a DDoS Protection Plan Proxy |
-> | Microsoft.Network/dnsForwardingRulesets/read | Gets a DNS Forwarding Ruleset, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/write | Creates Or Updates a DNS Forwarding Ruleset |
-> | Microsoft.Network/dnsForwardingRulesets/delete | Deletes a DNS Forwarding Ruleset, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/forwardingRules/read | Gets a DNS Forwarding Rule, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/forwardingRules/write | Creates Or Updates a DNS Forwarding Rule, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/forwardingRules/delete | Deletes a DNS Forwarding Rule, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks/read | Gets the DNS Forwarding Ruleset Link to virtual network properties, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks/write | Creates Or Updates DNS Forwarding Ruleset Link to virtual network properties, in JSON format |
-> | Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks/delete | Deletes DNS Forwarding Ruleset Link to Virtual Network |
> | Microsoft.Network/dnsoperationresults/read | Gets results of a DNS operation | > | Microsoft.Network/dnsoperationstatuses/read | Gets status of a DNS operation | > | Microsoft.Network/dnsResolvers/read | Gets the DNS Resolver Properties, in JSON format |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/dnsResolvers/inboundEndpoints/read | Gets the DNS Resolver Inbound Endpoint, in JSON format | > | Microsoft.Network/dnsResolvers/inboundEndpoints/write | Creates Or Updates a DNS Resolver Inbound Endpoint, in JSON format | > | Microsoft.Network/dnsResolvers/inboundEndpoints/delete | Deletes a DNS Resolver Inbound Endpoint, in JSON format |
-> | Microsoft.Network/dnsResolvers/outboundEndpoints/read | Gets the DNS Resolver Outbound Endpoint Properties, in JSON format |
-> | Microsoft.Network/dnsResolvers/outboundEndpoints/write | Creates Or Updates a DNS Resolver Outbound Endpoint, in JSON format |
-> | Microsoft.Network/dnsResolvers/outboundEndpoints/delete | Deletes a DNS Resolver Outbound Endpoint description. |
> | Microsoft.Network/dnszones/read | Get the DNS zone, in JSON format. The zone properties include tags, etag, numberOfRecordSets, and maxNumberOfRecordSets. Note that this command does not retrieve the record sets contained within the zone. | > | Microsoft.Network/dnszones/write | Create or update a DNS zone within a resource group. Used to update the tags on a DNS zone resource. Note that this command can not be used to create or update record sets within the zone. | > | Microsoft.Network/dnszones/delete | Delete the DNS zone, in JSON format. The zone properties include tags, etag, numberOfRecordSets, and maxNumberOfRecordSets. |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/expressRouteCircuits/write | Creates or updates an existing ExpressRouteCircuit | > | Microsoft.Network/expressRouteCircuits/join/action | Joins an Express Route Circuit. Not alertable. | > | Microsoft.Network/expressRouteCircuits/delete | Deletes an ExpressRouteCircuit |
-> | Microsoft.Network/expressRouteCircuits/nrpinternalupdate/action | Create or Update ExpressRouteCircuit |
> | Microsoft.Network/expressRouteCircuits/authorizations/read | Gets an ExpressRouteCircuit Authorization | > | Microsoft.Network/expressRouteCircuits/authorizations/write | Creates or updates an existing ExpressRouteCircuit Authorization | > | Microsoft.Network/expressRouteCircuits/authorizations/delete | Deletes an ExpressRouteCircuit Authorization |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/expressRouteCrossConnections/read | Get Express Route Cross Connection | > | Microsoft.Network/expressRouteCrossConnections/write | Create or Update Express Route Cross Connection | > | Microsoft.Network/expressRouteCrossConnections/delete | Delete Express Route Cross Connection |
-> | Microsoft.Network/expressRouteCrossConnections/serviceProviders/action | Backfill Express Route Cross Connection |
> | Microsoft.Network/expressRouteCrossConnections/join/action | Joins an Express Route Cross Connection. Not alertable. | > | Microsoft.Network/expressRouteCrossConnections/peerings/read | Gets an Express Route Cross Connection Peering | > | Microsoft.Network/expressRouteCrossConnections/peerings/write | Creates an Express Route Cross Connection Peering or Updates an existing Express Route Cross Connection Peering |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/loadBalancers/write | Creates a load balancer or updates an existing load balancer | > | Microsoft.Network/loadBalancers/delete | Deletes a load balancer | > | Microsoft.Network/loadBalancers/backendAddressPools/queryInboundNatRulePortMapping/action | Query inbound Nat rule port mapping. |
-> | Microsoft.Network/loadBalancers/backendAddressPools/updateAdminState/action | Update AdminStates of backend addresses of a pool |
> | Microsoft.Network/loadBalancers/backendAddressPools/read | Gets a load balancer backend address pool definition | > | Microsoft.Network/loadBalancers/backendAddressPools/write | Creates a load balancer backend address pool or updates an existing load balancer backend address pool | > | Microsoft.Network/loadBalancers/backendAddressPools/delete | Deletes a load balancer backend address pool |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/networkIntentPolicies/read | Gets an Network Intent Policy Description | > | Microsoft.Network/networkIntentPolicies/write | Creates an Network Intent Policy or updates an existing Network Intent Policy | > | Microsoft.Network/networkIntentPolicies/delete | Deletes an Network Intent Policy |
-> | Microsoft.Network/networkIntentPolicies/join/action | Joins a Network Intent Policy. Not alertable. |
> | Microsoft.Network/networkInterfaces/read | Gets a network interface definition. | > | Microsoft.Network/networkInterfaces/write | Creates a network interface or updates an existing network interface. | > | Microsoft.Network/networkInterfaces/join/action | Joins a Virtual Machine to a network interface. Not Alertable. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/read | Reads a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/write | Writes a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/delete | Deletes a snapshot resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/RestoreFiles/action | Restores files from a snapshot resource |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/read | |
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/write | Write a subvolume resource. |
+> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/subvolumes/delete | |
> | Microsoft.NetApp/netAppAccounts/ipsecPolicies/read | Reads a IPSec policy resource. | > | Microsoft.NetApp/netAppAccounts/ipsecPolicies/write | Writes an IPSec policy resource. | > | Microsoft.NetApp/netAppAccounts/ipsecPolicies/delete | Deletes a IPSec policy resource. |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/managedClusters/pods/read | Reads pods | > | Microsoft.ContainerService/managedClusters/pods/write | Writes pods | > | Microsoft.ContainerService/managedClusters/pods/delete | Deletes pods |
+> | Microsoft.ContainerService/managedClusters/pods/exec/action | Exec into pods resource |
> | Microsoft.ContainerService/managedClusters/podtemplates/read | Reads podtemplates | > | Microsoft.ContainerService/managedClusters/podtemplates/write | Writes podtemplates | > | Microsoft.ContainerService/managedClusters/podtemplates/delete | Deletes podtemplates |
Azure service: [Azure Database Migration Service](../dms/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.DataMigration/register/action | Registers the subscription with the Azure Database Migration Service provider |
-> | Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response |
-> | Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response |
-> | Microsoft.DataMigration/services/read | Read information about resources |
-> | Microsoft.DataMigration/services/write | Create or update resources and their properties |
-> | Microsoft.DataMigration/services/delete | Deletes a resource and all of its children |
-> | Microsoft.DataMigration/services/stop/action | Stop the DMS service to minimize its cost |
-> | Microsoft.DataMigration/services/start/action | Start the DMS service to allow it to process migrations again |
-> | Microsoft.DataMigration/services/checkStatus/action | Check whether the service is deployed and running |
-> | Microsoft.DataMigration/services/configureWorker/action | Configures a DMS worker to the Service's availiable workers |
-> | Microsoft.DataMigration/services/addWorker/action | Adds a DMS worker to the Service's availiable workers |
-> | Microsoft.DataMigration/services/removeWorker/action | Removes a DMS worker to the Service's availiable workers |
-> | Microsoft.DataMigration/services/updateAgentConfig/action | Updates DMS agent configuration with provided values. |
-> | Microsoft.DataMigration/services/getHybridDownloadLink/action | Gets a DMS worker package download link from RP Blob Storage. |
-> | Microsoft.DataMigration/services/projects/read | Read information about resources |
-> | Microsoft.DataMigration/services/projects/write | Run tasks Azure Database Migration Service tasks |
-> | Microsoft.DataMigration/services/projects/delete | Deletes a resource and all of its children |
-> | Microsoft.DataMigration/services/projects/accessArtifacts/action | Generate a URL that can be used to GET or PUT project artifacts |
-> | Microsoft.DataMigration/services/projects/tasks/read | Read information about resources |
-> | Microsoft.DataMigration/services/projects/tasks/write | Run tasks Azure Database Migration Service tasks |
-> | Microsoft.DataMigration/services/projects/tasks/delete | Deletes a resource and all of its children |
-> | Microsoft.DataMigration/services/projects/tasks/cancel/action | Cancel the task if it's currently running |
-> | Microsoft.DataMigration/services/serviceTasks/read | Read information about resources |
-> | Microsoft.DataMigration/services/serviceTasks/write | Run tasks Azure Database Migration Service tasks |
-> | Microsoft.DataMigration/services/serviceTasks/delete | Deletes a resource and all of its children |
-> | Microsoft.DataMigration/services/serviceTasks/cancel/action | Cancel the task if it's currently running |
-> | Microsoft.DataMigration/services/slots/read | Read information about resources |
-> | Microsoft.DataMigration/services/slots/write | Create or update resources and their properties |
-> | Microsoft.DataMigration/services/slots/delete | Deletes a resource and all of its children |
-> | Microsoft.DataMigration/skus/read | Get a list of SKUs supported by DMS resources. |
+> | Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results |
+> | Microsoft.DataMigration/operations/read | Get all REST Operations |
+> | Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service |
+> | Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service |
+> | Microsoft.DataMigration/sqlMigrationServices/write | Update tag of the service |
+> | Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service |
+> | Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Services in a Resource Group |
+> | Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys |
+> | Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys |
+> | Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | |
+> | Microsoft.DataMigration/sqlMigrationServices/read | Retrieve all services in the Subscription |
+> | Microsoft.DataMigration/sqlMigrationServices/getMonitoringData/read | Retrieve the Monitoring Data |
+> | Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | |
+> | Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data |
### Microsoft.DBforMariaDB
Azure service: [Azure Database for PostgreSQL](../postgresql/index.yml)
> | Microsoft.DBforPostgreSQL/locations/serverKeyOperationResults/read | Gets in-progress operations on data encryption server keys | > | Microsoft.DBforPostgreSQL/operations/read | Return the list of PostgreSQL Operations. | > | Microsoft.DBforPostgreSQL/performanceTiers/read | Returns the list of Performance Tiers available. |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection for PostgreSQL SGv2 |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnectionProxies/read | Returns the list of private endpoint connections or gets the properties for the specified private endpoint connection via proxy |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnectionProxies/write | Creates a private endpoint connection with the specified parameters or updates the properties or tags for the specified private endpoint connection via proxy |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnectionProxies/delete | Deletes an existing private endpoint connection via proxy |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnectionProxies/validate/action | Validates a private endpoint connection creation by NRP |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnections/read | Returns the list of private endpoint connections or gets the properties for the specified private endpoint connection |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnections/write | Approves or rejects an existing private endpoint connection |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateEndpointConnections/delete | Deletes an existing private endpoint connection |
+> | Microsoft.DBforPostgreSQL/serverGroupsv2/privateLinkResources/read | Get the private link resources for the corresponding PostgreSQL SGv2 |
> | Microsoft.DBforPostgreSQL/servers/queryTexts/action | Return the text of a query | > | Microsoft.DBforPostgreSQL/servers/resetQueryPerformanceInsightData/action | Reset Query Performance Insight data | > | Microsoft.DBforPostgreSQL/servers/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection |
Azure service: [Azure Cosmos DB](../cosmos-db/index.yml)
> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. | > | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/write | Create or update a Cassandra view. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/read | Read a Cassandra table or list all the Cassandra views. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/delete | Delete a Cassandra view. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/write | Update a Cassandra view throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/read | Read a Cassandra view throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/migrateToAutoscale/action | Migrate Cassandra view offer to autoscale. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/migrateToManualThroughput/action | Migrate Cassandra view offer to manual throughput. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/migrateToAutoscale/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/migrateToManualThroughput/operationResults/read | Read status of the asynchronous operation. |
+> | Microsoft.DocumentDB/databaseAccounts/cassandraKeyspaces/views/throughputSettings/operationResults/read | Read status of the asynchronous operation. |
> | Microsoft.DocumentDB/databaseAccounts/databases/collections/metricDefinitions/read | Reads the collection metric definitions. | > | Microsoft.DocumentDB/databaseAccounts/databases/collections/metrics/read | Reads the collection metrics. | > | Microsoft.DocumentDB/databaseAccounts/databases/collections/partitionKeyRangeId/metrics/read | Read database account partition key level metrics |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/locations/managedShortTermRetentionPolicyOperationResults/read | Gets the status of a short term retention policy operation | > | Microsoft.Sql/locations/managedTransparentDataEncryptionAzureAsyncOperation/read | Gets in-progress operations on managed database transparent data encryption | > | Microsoft.Sql/locations/managedTransparentDataEncryptionOperationResults/read | Gets in-progress operations on managed database transparent data encryption |
+> | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyAzureAsyncOperation/read | Get network security perimeter proxy azure async operation |
+> | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyOperationResults/read | Get network security perimeter operation result |
> | Microsoft.Sql/locations/operationsHealth/read | Gets health status of the service operation in a location | > | Microsoft.Sql/locations/privateEndpointConnectionAzureAsyncOperation/read | Gets the result for a private endpoint connection operation | > | Microsoft.Sql/locations/privateEndpointConnectionOperationResults/read | Gets the result for a private endpoint connection operation |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/servers/keys/read | Return the list of server keys or gets the properties for the specified server key. | > | Microsoft.Sql/servers/keys/write | Creates a key with the specified parameters or update the properties or tags for the specified server key. | > | Microsoft.Sql/servers/keys/delete | Deletes an existing server key. |
+> | Microsoft.Sql/servers/networkSecurityPerimeterAssociationProxies/read | Get network security perimeter association |
+> | Microsoft.Sql/servers/networkSecurityPerimeterAssociationProxies/write | Create network security perimeter association |
+> | Microsoft.Sql/servers/networkSecurityPerimeterAssociationProxies/delete | Drop network security perimeter association |
> | Microsoft.Sql/servers/operationResults/read | Gets in-progress server operations | > | Microsoft.Sql/servers/operations/read | Return the list of operations performed on the server | > | Microsoft.Sql/servers/outboundFirewallRules/read | Read outbound firewall rule |
Azure service: [HDInsight](../hdinsight/index.yml)
> | Microsoft.HDInsight/clusters/write | Create or Update HDInsight Cluster | > | Microsoft.HDInsight/clusters/read | Get details about HDInsight Cluster | > | Microsoft.HDInsight/clusters/delete | Delete a HDInsight Cluster |
-> | Microsoft.HDInsight/clusters/changerdpsetting/action | Change RDP setting for HDInsight Cluster |
> | Microsoft.HDInsight/clusters/getGatewaySettings/action | Get gateway settings for HDInsight Cluster | > | Microsoft.HDInsight/clusters/updateGatewaySettings/action | Update gateway settings for HDInsight Cluster | > | Microsoft.HDInsight/clusters/configurations/action | Get HDInsight Cluster Configurations |
Azure service: [HDInsight](../hdinsight/index.yml)
> | Microsoft.HDInsight/clusters/extensions/write | Create Cluster Extension for HDInsight Cluster | > | Microsoft.HDInsight/clusters/extensions/read | Get Cluster Extension for HDInsight Cluster | > | Microsoft.HDInsight/clusters/extensions/delete | Delete Cluster Extension for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/privateEndpointConnections/read | Get Private Endpoint Connections for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/privateEndpointConnections/write | Update Private Endpoint Connections for HDInsight Cluster |
+> | Microsoft.HDInsight/clusters/privateEndpointConnections/delete | Delete Private Endpoint Connections for HDInsight Cluster |
> | Microsoft.HDInsight/clusters/privateLinkResources/read | Get Private Link Resources for HDInsight Cluster | > | Microsoft.HDInsight/clusters/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource HDInsight Cluster | > | Microsoft.HDInsight/clusters/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource HDInsight Cluster |
Azure service: [Azure Data Explorer](/azure/data-explorer/)
> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/delete | Deletes a private endpoint connection proxy | > | Microsoft.Kusto/Clusters/PrivateEndpointConnections/read | Reads a private endpoint connection | > | Microsoft.Kusto/Clusters/PrivateEndpointConnections/write | Writes a private endpoint connection |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnections/delete | Deletes a private endpoint connection |
> | Microsoft.Kusto/Clusters/PrivateLinkResources/read | Reads private link resources | > | Microsoft.Kusto/Clusters/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic settings for the resource | > | Microsoft.Kusto/Clusters/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource |
Azure service: [Azure Synapse Analytics](../synapse-analytics/index.yml)
> | Microsoft.Synapse/checkNameAvailability/action | Checks Workspace name availability. | > | Microsoft.Synapse/register/action | Registers the Azure Synapse Analytics (workspaces) Resource Provider and enables the creation of Workspaces. | > | Microsoft.Synapse/unregister/action | Unregisters the Azure Synapse Analytics (workspaces) Resource Provider and disables the creation of Workspaces. |
+> | Microsoft.Synapse/Deployments/Preflight/action | Run a Preflight operation |
> | Microsoft.Synapse/Locations/KustoPoolCheckNameAvailability/action | Checks resource name availability. | > | Microsoft.Synapse/locations/kustoPoolOperationResults/read | Reads operations resources | > | Microsoft.Synapse/locations/operatio