Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Application Provisioning Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md | Provisioning integrates with Azure Monitor logs and Log Analytics. With Azure mo ## Enabling provisioning logs -You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md). +You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them, and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md). Once you've configured Azure monitoring, you can enable logs for application provisioning. The option is located on the **Diagnostics settings** page. The underlying data stream that Provisioning sends log viewers is almost identic Azure Monitor workbooks provide a flexible canvas for data analysis. They also provide for the creation of rich visual reports within the Azure portal. To learn more, see [Azure Monitor Workbooks overview](../../azure-monitor/visualize/workbooks-overview.md). -Application provisioning comes with a set of pre-built workbooks. You can find them on the Workbooks page. To view the data, you'll need to ensure that all the filters (timeRange, jobID, appName) are populated. You'll also need to make sure you've provisioned an app, otherwise there won't be any data in the logs. +Application provisioning comes with a set of prebuilt workbooks. You can find them on the Workbooks page. To view the data, ensure that all the filters (timeRange, jobID, appName) are populated. Also confirm the app was provisioned, otherwise there isn't any data in the logs. :::image type="content" source="media/application-provisioning-log-analytics/workbooks.png" alt-text="Application provisioning workbooks" lightbox="media/application-provisioning-log-analytics/workbooks.png"::: Alert when there's a spike in disables or deletes. ## Community contributions -We're taking an open source and community-based approach to application provisioning queries and dashboards. If you've built a query, alert, or workbook that you think others would find useful, be sure to publish it to the [AzureMonitorCommunity GitHub repo](https://github.com/microsoft/AzureMonitorCommunity). Then shoot us an email with a link. We'll review and publish it to the service so others can benefit too. You can contact us at provisioningfeedback@microsoft.com. +We're taking an open source and community-based approach to application provisioning queries and dashboards. Build a query, alert, or workbook that you think is useful to others, then publish it to the [AzureMonitorCommunity GitHub repo](https://github.com/microsoft/AzureMonitorCommunity). Shoot us an email with a link. We review and publish queries and dashboards to the service so others benefit too. Contact us at provisioningfeedback@microsoft.com. ## Next steps |
active-directory | Concept Fido2 Hardware Vendor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md | The following table lists partners who are Microsoft-compatible FIDO2 security k | Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band | | Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ | | OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |-| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ | +| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-key/ | | Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices | | Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 | | Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key | |
active-directory | Report View System Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md | This article describes how to generate and view a system report in Permissions M ## Generate a system report -1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab. +1. From the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab. The **Systems Reports** subtab displays the following options in the **Reports** table: - **Report Name**: The name of the report. - **Category**: The type of report: **Permission**.- - **Authorization System**: The authorization system activity in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). + - **Authorization System**: The cloud provider included in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). - **Format**: The format in which the report is available: comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format. -1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report. -- Or, from the ellipses **(...)** menu, select **Download**. -- The following message displays: **Successfully Started To Generate On Demand Report.** -+1. In the **Report Name** column, find the report you want to generate. +1. From the ellipses **(...)** menu for that report, select **Generate & Download**. A new window appears where you provide more information for the report you want to generate. +1. For each **Authorization System**, select the **Authorization System Name** you want to include in the report by selecting the box next to the name. +1. If you want to combine all Authorization Systems into one report, check the box for **Collate**. +1. For **Report Format**, check the boxes for a **Detailed** or **Summary** version of the report in CSV format. You can select both. > [!NOTE] > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.-+1. For **Schedule**, select the frequency for how often you want to receive the report(s). You can select **None** if you don't want to generate reports on a scheduled basis. +1. Click **Save**. Upon clicking **Save**, you receive the message **Report has been created**. The report appears on the **Custom Reports** tab. 1. To refresh the list of reports, select **Reload**.+1. On the **Custom Reports** tab, hover your mouse over the report, and click the down arrow to **Download** the report. A message appears: **Successfully Started to Generate On Demand Report**. The report is sent to your email. ## Search for a system report 1. On the **Systems Reports** subtab, select **Search**.-1. In the **Search** box, enter the name of the report you want to locate. The **Systems Reports** subtab displays a list of reports that match your search criteria. 1. Select a report from the **Report Name** column.-1. 
To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**. +1. To generate a report, click on the ellipses **(...)** menu for that report, then select **Generate & Download**. +1. For each **Authorization System**, select the **Authorization System Name** you want to include in the report by selecting the box next to the name. +1. If you want to combine all Authorization Systems into one report, check the box for **Collate**. +1. For **Report Format**, check the boxes for a **Detailed** or **Summary** version of the report in CSV format. You can select both. ++ > [!NOTE] + > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary. +1. For **Schedule**, select the frequency for how often you want to receive the report(s). You can select **None** if you don't want to generate reports on a scheduled basis. +1. Click **Save**. Upon clicking **Save**, you receive the message **Report has been created**. The report appears on the **Custom Reports** tab. 1. To refresh the list of reports, select **Reload**.+1. On the **Custom Reports** tab, hover your mouse over the report, and click the down arrow to **Download** the report. A message appears: **Successfully Started to Generate On Demand Report**. The report is sent to your email. + ## Next steps |
active-directory | Groups Bulk Import Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md | The rows in a downloaded CSV template are as follows: - The first two rows of the upload template must not be removed or modified, or the upload can't be processed. - The required columns are listed first.-- We don't recommend adding new columns to the template. Any additional columns you add are ignored and not processed.+- We don't recommend adding new columns to the template. Any other columns you add are ignored and not processed. - We recommend that you download the latest version of the CSV template as often as possible. - Add at least two users' UPNs or object IDs to successfully upload the file. The rows in a downloaded CSV template are as follows: 1. Sign in to [the Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk import members of groups they own. 1. In Azure AD, select **Groups** > **All groups**. 1. Open the group to which you're adding members and then select **Members**.-1. On the **Members** page, select **Import members**. +1. On the **Members** page, select **bulk operations** and then choose **Import members**. 1. On the **Bulk import group members** page, select **Download** to get the CSV file template with required group member properties.  |
active-directory | Authentication Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md | When an Azure AD organization shares resources with external users with an ident The following diagram illustrates the authentication flow when an external user signs in with an account from a non-Azure AD identity provider, such as Google, Facebook, or a federated SAML/WS-Fed identity provider. -[  ](media/authentication-conditional-access/authentication-flow-b2b-guests.png#lightbox)) +[  ](media/authentication-conditional-access/authentication-flow-b2b-guests.png#lightbox) | Step | Description | |--|--| |
active-directory | Secure External Access Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md | Both methods have drawbacks. For more information, see the following table. | Area of concern | Local credentials | Federation | |-||| | Security | - Access continues after external user terminates<br> - UserType is Member by default, which grants too much default access | - No user-level visibility <br> - Unknown partner security posture|-| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might user consumer email| +| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might use consumer email| | Complexity | Partner users manage more credentials | Complexity grows with each new partner, and increases for partners| Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security. |
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | For more information, see [Automate user provisioning to SaaS applications with In June 2021, we have added following 42 new applications in our App gallery with Federation support -[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics Connector](https://www.mx3diagnostics.com/), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://onboard.syncloud.io/), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), [Plandisc](https://create.plandisc.com/account/logon), [foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/) +[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics 
Connector](https://www.mx3diagnostics.com/), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://www.syncloud.org/apps.html), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), [Plandisc](https://create.plandisc.com/account/logon), [foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/) You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial You can add free text notes to Enterprise applications. You can add any relevant In September 2020 we have added following 34 new applications in our App gallery with Federation support: -[VMware Horizon - Unified Access Gateway](), [Pulse Secure PCS](../saas-apps/vmware-horizon-unified-access-gateway-tutorial.md), [Inventory360](../saas-apps/pulse-secure-pcs-tutorial.md), [Frontitude](https://services.enteksystems.de/sso/microsoft/signup), [BookWidgets](https://www.bookwidgets.com/sso/office365), [ZVD_Server](https://zaas.zenmutech.com/user/signin), [HashData for Business](https://hashdata.app/login.xhtml), [SecureLogin](https://securelogin.securelogin.nu/sso/azure/login), [CyberSolutions MAILBASEΣ/CMSS](../saas-apps/cybersolutions-mailbase-tutorial.md), [CyberSolutions CYBERMAILΣ](../saas-apps/cybersolutions-cybermail-tutorial.md), [LimbleCMMS](https://auth.limblecmms.com/), [Glint Inc](../saas-apps/glint-inc-tutorial.md), [zeroheight](../saas-apps/zeroheight-tutorial.md), [Gender Fitness](https://app.genderfitness.com/), [Coeo Portal](https://my.coeo.com/), [Grammarly](../saas-apps/grammarly-tutorial.md), [Fivetran](../saas-apps/fivetran-tutorial.md), [Kumolus](../saas-apps/kumolus-tutorial.md), [RSA Archer Suite](../saas-apps/rsa-archer-suite-tutorial.md), [TeamzSkill](../saas-apps/teamzskill-tutorial.md), [raumfürraum](../saas-apps/raumfurraum-tutorial.md), [Saviynt](../saas-apps/saviynt-tutorial.md), [BizMerlinHR](https://marketplace.bizmerlin.net/bmone/signup), [Mobile Locker](../saas-apps/mobile-locker-tutorial.md), [Zengine](../saas-apps/zengine-tutorial.md), [CloudCADI](https://app.cloudcadi.com/login), [Simfoni Analytics](https://simfonianalytics.com/accounts/microsoft/login/), [Priva Identity & Access Management](https://my.priva.com/), [Nitro 
Pro](https://www.gonitro.com/nps/product-details/downloads), [Eventfinity](../saas-apps/eventfinity-tutorial.md), [Fexa](../saas-apps/fexa-tutorial.md), [Secured Signing Enterprise Portal](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Secured Signing Enterprise Portal AAD Setup](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Wistec Online](https://wisteconline.com/auth/oidc), [Oracle PeopleSoft - Protected by F5 BIG-IP APM](../saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md) +[VMware Horizon - Unified Access Gateway](), [Pulse Secure PCS](../saas-apps/vmware-horizon-unified-access-gateway-tutorial.md), [Inventory360](../saas-apps/pulse-secure-pcs-tutorial.md), [Frontitude](https://services.enteksystems.de/sso/microsoft/signup), [BookWidgets](https://www.bookwidgets.com/sso/office365), [ZVD_Server](https://zaas.zenmutech.com/user/signin), [HashData for Business](https://hashdata.app/login.xhtml), [SecureLogin](https://securelogin.securelogin.nu/sso/azure/login), [CyberSolutions MAILBASEΣ/CMSS](../saas-apps/cybersolutions-mailbase-tutorial.md), [CyberSolutions CYBERMAILΣ](../saas-apps/cybersolutions-cybermail-tutorial.md), [LimbleCMMS](https://auth.limblecmms.com/), [Glint Inc](../saas-apps/glint-inc-tutorial.md), [zeroheight](../saas-apps/zeroheight-tutorial.md), [Gender Fitness](https://app.genderfitness.com/), [Coeo Portal](https://my.coeo.com/), [Grammarly](../saas-apps/grammarly-tutorial.md), [Fivetran](../saas-apps/fivetran-tutorial.md), [Kumolus](../saas-apps/kumolus-tutorial.md), [RSA Archer Suite](../saas-apps/rsa-archer-suite-tutorial.md), [TeamzSkill](../saas-apps/teamzskill-tutorial.md), [raumfürraum](../saas-apps/raumfurraum-tutorial.md), [Saviynt](../saas-apps/saviynt-tutorial.md), [BizMerlinHR](https://marketplace.bizmerlin.net/bmone/signup), [Mobile Locker](../saas-apps/mobile-locker-tutorial.md), [Zengine](../saas-apps/zengine-tutorial.md), [CloudCADI](https://cloudcadi.com/), [Simfoni Analytics](https://simfonianalytics.com/accounts/microsoft/login/), [Priva Identity & Access Management](https://my.priva.com/), [Nitro Pro](https://www.gonitro.com/nps/product-details/downloads), [Eventfinity](../saas-apps/eventfinity-tutorial.md), [Fexa](../saas-apps/fexa-tutorial.md), [Secured Signing Enterprise Portal](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Secured Signing Enterprise Portal AAD Setup](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Wistec Online](https://wisteconline.com/auth/oidc), [Oracle PeopleSoft - Protected by F5 BIG-IP APM](../saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md) You can also find the documentation of all the applications from here: https://aka.ms/AppsTutorial. |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | For more information, see: **Service category:** Group Management **Product capability:** End User Experiences -A new and improved My Groups experience is now available at https://www.myaccount.microsoft.com/groups. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups supports sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests. -This experience replaces the existing My Groups experience at https://www.mygroups.microsoft.com in May. +A new and improved My Groups experience is now available at `https://www.myaccount.microsoft.com/groups`. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups supports sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests. +This experience replaces the existing My Groups experience at `https://www.mygroups.microsoft.com` in May. For more information, see: [Update your Groups info in the My Apps portal](https://support.microsoft.com/account-billing/update-your-groups-info-in-the-my-apps-portal-bc0ca998-6d3a-42ac-acb8-e900fb1174a4). |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md | Modern authentication clients (Office 2016 and Office 2013, iOS, and Android app To plan for rollback, use the [documented current federation settings](#document-current-federation-settings) and check the [federation design and deployment documentation](/windows-server/identity/ad-fs/deployment/windows-server-2012-r2-ad-fs-deployment-guide). -The rollback process should include converting managed domains to federated domains by using the [Convert-MSOLDomainToFederated](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-1.0&preserve-view=true) cmdlet. If necessary, configure extra claims rules. +The rollback process should include converting managed domains to federated domains by using the [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-1.0&preserve-view=true) cmdlet. If necessary, configure extra claims rules. (A hedged PowerShell sketch of this conversion appears after this table.) ## Migration considerations |
active-directory | Configure Password Single Sign On Non Gallery Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md | The configuration page for password-based SSO is simple. It includes only the UR ## Prerequisites To configure password-based SSO in your Azure AD tenant, you need:-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.+- An Azure account with an active subscription. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) +- Global Administrator, or owner of the service principal. - An application that supports password-based SSO. ## Configure password-based single sign-on |
active-directory | Groups Assign Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md | $roleAssignment = New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId # [Azure AD PowerShell](#tab/aad-powershell) + ### Create a role-assignable group Use the [New-AzureADMSGroup](/powershell/module/azuread/new-azureadmsgroup?branch=main) command to create a role-assignable group. |
active-directory | Azure Ad Pci Dss Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/azure-ad-pci-dss-guidance.md | + + Title: Azure Active Directory PCI-DSS guidance +description: Guidance on meeting payment card industry (PCI) compliance with Azure AD +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory PCI-DSS guidance ++The Payment Card Industry Security Standards Council (PCI SSC) is responsible for developing and promoting data security standards and resources, including the Payment Card Industry Data Security Standard (PCI-DSS), to ensure the security of payment transactions. To achieve PCI compliance, organizations using Azure Active Directory (Azure AD) can refer to guidance in this document. However, it is the responsibility of the organizations to ensure their PCI compliance. Their IT teams, SecOps teams, and Solutions Architects are responsible for creating and maintaining secure systems, products, and networks that handle, process, and store payment card information. ++While Azure AD helps meet some PCI-DSS control requirements, and provides modern identity and access protocols for cardholder data environment (CDE) resources, it should not be the sole mechanism for protecting cardholder data. Therefore, review this document set and all PCI-DSS requirements to establish a comprehensive security program that preserves customer trust. For a complete list of requirements, please visit the official PCI Security Standards Council website at pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf) ++## PCI requirements for controls ++The global PCI-DSS v4.0 establishes a baseline of technical and operational standards for protecting account data. It "was developed to encourage and enhance payment card account data security and facilitate the broad adoption of consistent data security measures, globally. It provides a baseline of technical and operational requirements designed to protect account data. While specifically designed to focus on environments with payment card account data, PCI-DSS can also be used to protect against threats and secure other elements in the payment ecosystem." ++## Azure AD configuration and PCI-DSS ++This document serves as a comprehensive guide for technical and business leaders who are responsible for managing identity and access management (IAM) with Azure Active Directory (Azure AD) in compliance with the Payment Card Industry Data Security Standard (PCI DSS). By following the key requirements, best practices, and approaches outlined in this document, organizations can reduce the scope, complexity, and risk of PCI noncompliance, while promoting security best practices and standards compliance. The guidance provided in this document aims to help organizations configure Azure AD in a way that meets the necessary PCI DSS requirements and promotes effective IAM practices. ++Technical and business leaders can use the following guidance to fulfill responsibilities for identity and access management (IAM) with Azure AD. For more information on PCI-DSS in other Microsoft workloads, see [Overview of the Microsoft cloud security benchmark (v1)](/security/benchmark/azure/overview). ++PCI-DSS requirements and testing procedures consist of 12 principal requirements that ensure the secure handling of payment card information. 
Together, these requirements are a comprehensive framework that helps organizations secure payment card transactions and protect sensitive cardholder data. ++Azure AD is an enterprise identity service that secures applications, systems, and resources to support PCI-DSS compliance. The following table has the PCI principal requirements and links to Azure AD recommended controls for PCI-DSS compliance. ++## Principal PCI-DSS requirements ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't addressed or met by Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++|PCI Data Security Standard - High Level Overview|Azure AD recommended PCI-DSS controls| +|-|-| +|Build and Maintain Secure Network and Systems|[1. Install and Maintain Network Security Controls](pci-requirement-1.md) </br> [2. Apply Secure Configurations to All System Components](pci-requirement-2.md)| +|Protect Account Data|3. Protect Stored Account Data </br> 4. Protect Cardholder Data with Strong Cryptography During Transmission Over Public Networks| +|Maintain a Vulnerability Management Program|[5. Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) </br> [6. Develop and Maintain Secure Systems and Software](pci-requirement-6.md)| +|Implement Strong Access Control Measures|[7. Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) </br> [8. Identify and Authenticate Access to System Components](pci-requirement-8.md) </br> 9. Restrict Physical Access to System Components and Cardholder Data| +|Regularly Monitor and Test Networks|[10. Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) </br> [11. Test Security of Systems and Networks Regularly](pci-requirement-11.md)| +|Maintain an Information Security Policy|12. Support Information Security with Organizational Policies and Programs| ++## PCI-DSS applicability ++PCI-DSS applies to organizations that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD). These data elements, considered together, are known as account data. PCI-DSS provides security guidelines and requirements for organizations that affect the cardholder data environment (CDE). Entities safeguarding the CDE ensure the confidentiality and security of customer payment information. ++CHD consists of: ++* **Primary account number (PAN)** - a unique payment card number (credit, debit, or prepaid cards, etc.) that identifies the issuer and the cardholder account +* **Cardholder name** - the card owner +* **Card expiration date** - the day and month the card expires +* **Service code** - a three- or four-digit value in the magnetic stripe that follows the expiration date of the payment card on the track data. It defines service attributes, differentiating between international and national interchange, or identifying usage restrictions. ++SAD consists of security-related information used to authenticate cardholders and/or authorize payment card transactions. SAD includes, but isn't limited to: ++* **Full track data** - magnetic stripe or chip equivalent +* **Card verification codes/values** - also referred to as the card validation code (CVC), or value (CVV). It's the three- or four-digit value on the front or back of the payment card. It's also referred to as CAV2, CVC2, CVN2, CVV2 or CID, determined by the participating payment brands (PPB). 
+* **PIN** - personal identification number + * **PIN blocks** - an encrypted representation of the PIN used in a debit or credit card transaction. It ensures the secure transmission of sensitive information during a transaction ++Protecting the CDE is essential to the security and confidentiality of customer payment information and helps: ++* **Preserve customer trust** - customers expect their payment information to be handled securely and kept confidential. If a company experiences a data breach that results in the theft of customer payment data, it can degrade customer trust in the company and cause reputational damage. +* **Comply with regulations** - companies processing credit card transactions are required to comply with the PCI-DSS. Failure to comply results in fines, legal liabilities, and resultant reputational damage. +* **Financial risk mitigation** - data breaches have significant financial effects, including costs for forensic investigations, legal fees, and compensation for affected customers. +* **Business continuity** - data breaches disrupt business operations and might affect credit card transaction processes. This scenario might lead to lost revenue, operational disruptions, and reputational damage. ++## PCI audit scope ++PCI audit scope relates to the systems, networks, and processes in the storage, processing, or transmission of CHD and/or SAD. If Account Data is stored, processed, or transmitted in a cloud environment, PCI-DSS applies to that environment and compliance typically involves validation of the cloud environment and the usage of it. There are five fundamental elements in scope for a PCI audit: ++* **Cardholder data environment (CDE)** - the area where CHD, and/or SAD, is stored, processed, or transmitted. It includes an organization's components that touch CHD, such as networks, and network components, databases, servers, applications, and payment terminals. +* **People** - with access to the CDE, such as employees, contractors, and third-party service providers, are in the scope of a PCI audit. +* **Processes** - that involve CHD, such as authorization, authentication, encryption and storage of account data in any format, are within the scope of a PCI audit. +* **Technology** - that processes, stores, or transmits CHD, including hardware such as printers, and multi-function devices that scan, print and fax, end-user devices such as computers, laptops, workstations, administrative workstations, tablets and mobile devices, software, and other IT systems, are in the scope of a PCI audit. +* **System components** - that might not store, process, or transmit CHD/SAD but have unrestricted connectivity to system components that store, process, or transmit CHD/SAD, or that could affect the security of the CDE. ++If PCI scope is minimized, organizations can effectively reduce the effects of security incidents and lower the risk of data breaches. Segmentation can be a valuable strategy for reducing the size of the PCI CDE, resulting in reduced compliance costs and overall benefits for the organization including but not limited to: ++* **Cost savings** - by limiting audit scope, organizations reduce time, resources, and expenses to undergo an audit, which leads to cost savings. +* **Reduced risk exposure** - a smaller PCI audit scope reduces potential risks associated with processing, storing, and transmitting cardholder data. 
If the number of systems, networks, and applications subject to an audit is limited, organizations focus on securing their critical assets and reducing their risk exposure. +* **Streamlined compliance** - narrowing audit scope makes PCI-DSS compliance more manageable and streamlined. Results are more efficient audits, fewer compliance issues, and a reduced risk of incurring noncompliance penalties. +* **Improved security posture** - with a smaller subset of systems and processes, organizations allocate security resources and efforts efficiently. Outcomes are a stronger security posture, as security teams concentrate on securing critical assets and identifying vulnerabilities in a targeted and effective manner. ++## Strategies to reduce PCI audit scope ++An organization's definition of its CDE determines PCI audit scope. Organizations document and communicate this definition to the PCI-DSS Qualified Security Assessor (QSA) performing the audit. The QSA assesses controls for the CDE to determine compliance. +Adherence to PCI standards and use of effective risk mitigation helps businesses protect customer personal and financial data, which maintains trust in their operations. The following section outlines strategies to reduce risk in PCI audit scope. ++### Tokenization ++Tokenization is a data security technique. Use tokenization to replace sensitive information, such as credit card numbers, with a unique token stored and used for transactions, without exposing sensitive data. Tokens reduce the scope of a PCI audit for the following requirements: ++* **Requirement 3** - Protect Stored Account Data +* **Requirement 4** - Protect Cardholder Data with strong Cryptography During Transmission Over Open Public Networks +* **Requirement 9** - Restrict Physical Access to Cardholder Data +* **Requirement 10** - Log and Monitor All Access to Systems Components and Cardholder Data. ++When using cloud-based processing methodologies, consider the relevant risks to sensitive data and transactions. To mitigate these risks, it's recommended you implement relevant security measures and contingency plans to protect data and prevent transaction interruptions. As a best practice, use payment tokenization as a methodology to declassify data, and potentially reduce the footprint of the CDE. With payment tokenization, sensitive data is replaced with a unique identifier that reduces the risk of data theft and limits the exposure of sensitive information in the CDE. ++### Secure CDE ++PCI-DSS requires organizations to maintain a secure CDE. With an effectively configured CDE, businesses can mitigate their risk exposure and reduce the associated costs for both on-premises and cloud environments. This approach helps minimize the scope of a PCI audit, making it easier and more cost-effective to demonstrate compliance with the standard. ++To configure Azure AD to secure the CDE: ++* Use passwordless credentials for users: Windows Hello for Business, FIDO2 security keys, and Microsoft Authenticator app +* Use strong credentials for workload identities: certificates and managed identities for Azure resources. 
+ * Integrate access technologies such as VPN, remote desktop, and network access points with Azure AD for authentication, if applicable +* Enable privileged identity management and access reviews for Azure AD roles, privileged access groups and Azure resources +* Use Conditional Access policies to enforce PCI-requirement controls: credential strength, device state, and enforce them based on location, group membership, applications, and risk (a hedged named-location sketch appears after this table) +* Use modern authentication for CDE workloads +* Archive Azure AD logs in security information and event management (SIEM) systems ++Where applications and resources use Azure AD for identity and access management (IAM), the Azure AD tenant(s) are in scope of PCI audit, and the guidance herein is applicable. Organizations must evaluate identity and resource isolation requirements, between non-PCI and PCI workloads, to determine their best architecture. ++Learn more ++* [Introduction to delegated administration and isolated environments](../fundamentals/secure-with-azure-ad-introduction.md) +* [How to use the Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) +* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) +* [What are access reviews?](../governance/access-reviews-overview.md) +* [What is Conditional Access?](../conditional-access/overview.md) +* [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md) ++### Establish a responsibility matrix ++PCI compliance is the responsibility of entities that process payment card transactions including but not limited to: ++* Merchants +* Card service providers +* Merchant service providers +* Acquiring banks +* Payment processors +* Payment card issuers +* Hardware vendors ++These entities ensure payment card transactions are processed securely and are PCI-DSS compliant. All entities involved in payment card transactions have a role to help ensure PCI compliance. ++Azure PCI DSS compliance status doesn't automatically translate to PCI-DSS validation for the services you build or host on Azure. You ensure that you achieve compliance with PCI-DSS requirements. ++### Establish continuous processes to maintain compliance ++Continuous processes entail ongoing monitoring and improvement of compliance posture. Benefits of continuous processes to maintain PCI compliance: ++* Reduced risk of security incidents and noncompliance +* Improved data security +* Better alignment with regulatory requirements +* Increased customer and stakeholder confidence ++With ongoing processes, organizations respond effectively to changes in the regulatory environment and ever-evolving security threats. ++* **Risk assessment** - conduct this process to identify credit-card data vulnerabilities and security risks. Identify potential threats, assess the likelihood of threats occurring, and evaluate the potential effects on the business. +* **Security awareness training** - employees who handle credit card data receive regular security awareness training to clarify the importance of protecting cardholder data and the measures to do so. +* **Vulnerability management** - conduct regular vulnerability scans and penetration testing to identify network or system weaknesses exploitable by attackers. +* **Monitor and maintain access control policies** - access to credit card data is restricted to authorized individuals. Monitor access logs to identify unauthorized access attempts. 
+* **Incident response** - an incident response plan helps security teams take action during security incidents involving credit card data. Identify incident cause, contain the damage, and restore normal operations in a timely manner. +* **Compliance monitoring** - monitoring and auditing are conducted to ensure ongoing compliance with PCI-DSS requirements. Review security logs, conduct regular policy reviews, and ensure system components are accurately configured and maintained. ++### Implement strong security for shared infrastructure ++Typically, web services such as Azure have a shared infrastructure wherein customer data might be stored on the same physical server or data storage device. This scenario creates the risk of unauthorized customers accessing data they don't own, and the risk of malicious actors targeting the shared infrastructure. Azure AD security features help mitigate risks associated with shared infrastructure: ++* User authentication to network access technologies that support modern authentication protocols: virtual private network (VPN), remote desktop, and network access points. +* Access control policies that enforce strong authentication methods and device compliance based on signals such as user context, device, location, and risk. +* Conditional Access provides an identity-driven control plane and brings signals together, to make decisions, and enforce organizational policies. +* Privileged role governance - access reviews, just-in-time (JIT) activation, etc. ++Learn more: [What is Conditional Access?](../conditional-access/overview.md) ++### Data residency ++PCI-DSS cites no specific geographic location for credit card data storage. However, it requires that cardholder data is stored securely, which might include geographic restrictions, depending on the organization's security and regulatory requirements. Different countries and regions have data protection and privacy laws. Consult with a legal or compliance advisor to determine applicable data residency requirements. ++Learn more: [Azure AD and data residency](../fundamentals/azure-ad-data-residency.md) ++### Third-party security risks ++A non-PCI compliant third-party provider poses a risk to PCI compliance. Regularly assess and monitor third-party vendors and service providers to ensure they maintain required controls to protect cardholder data. ++Azure AD features and functions in **Data residency** help mitigate risks associated with third-party security. ++### Logging and monitoring ++Implement accurate logging and monitoring to detect, and respond to, security incidents in a timely manner. Azure AD helps manage PCI compliance with audit and activity logs, and reports that can be integrated with a SIEM system. Azure AD has role-based access control (RBAC) and MFA to secure access to sensitive resources, encryption, and threat protection features to protect organizations from unauthorized access and data theft. ++Learn more: ++* [What are Azure AD reports?](../reports-monitoring/overview-reports.md) +* [Azure AD built-in roles](../roles/permissions-reference.md) ++### Multi-application environments: host outside the CDE ++PCI-DSS ensures that companies that accept, process, store, or transmit credit card information maintain a secure environment. 
Hosting outside the CDE introduces risks such as: ++* Poor access control and identity management might result in unauthorized access to sensitive data and systems +* Insufficient logging and monitoring of security events impedes detection and response to security incidents +* Insufficient encryption and threat protection increases the risk of data theft and unauthorized access +* Poor, or no security awareness and training for users might result in avoidable social engineering attacks, such as phishing ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) (You're here) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) + |
active-directory | Azure Ad Pci Dss Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/azure-ad-pci-dss-mfa.md | + + Title: Azure Active Directory PCI-DSS Multi-Factor Authentication guidance +description: Learn the authentication methods supported by Azure AD to meet PCI MFA requirements +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory PCI-DSS Multi-Factor Authentication guidance +**Information Supplement: Multi-Factor Authentication v 1.0** ++Use the following table of authentication methods supported by Azure Active Directory (Azure AD) to meet requirements in the PCI Security Standards Council [Information Supplement, Multi-Factor Authentication v 1.0](https://listings.pcisecuritystandards.org/pdfs/Multi-Factor-Authentication-Guidance-v1.pdf). ++|Method|To meet requirements|Protection|MFA element| +|-|-|-|-| +|[Passwordless phone sign in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)|Something you have (device with a key), something you know or are (PIN or biometric) </br> In iOS, Authenticator Secure Element (SE) stores the key in Keychain. [Apple Platform Security, Keychain data protection](https://support.apple.com/guide/security/keychain-data-protection-secb0694df1a/web) </br> In Android, Authenticator uses Trusted Execution Engine (TEE) by storing the key in Keystore. [Developers, Android Keystore system](https://developer.android.com/training/articles/keystore) </br> When users authenticate using Microsoft Authenticator, Azure AD generates a random number the user enters in the app. This action fulfills the out-of-band authentication requirement. |Customers configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture, then Azure AD validates the authentication method. | +|[Windows Hello for Business Deployment Prerequisite Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification) |Something you have (Windows device with a key), and something you know or are (PIN or biometric). </br> Keys are stored with device Trusted Platform Module (TPM). Customers use devices with hardware TPM 2.0 or later to meet the authentication method independence and out-of-band requirements. </br> [Certified Authenticator Levels](https://fidoalliance.org/certification/authenticator-certification-levels/)|Configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture for Windows device sign in.| +|[Enable passwordless security key sign-in, Enable FIDO2 security key method](../authentication/howto-authentication-passwordless-security-key.md)|Something that you have (FIDO2 security key) and something you know or are (PIN or biometric). </br> Keys are stored with hardware cryptographic features. Customers use FIDO2 keys, at least Authentication Certification Level 2 (L2) to meet the authentication method independence and out-of-band requirement.|Procure hardware with protection against tampering and compromise.|Users unlock the key with the gesture, then Azure AD validates the credential. | +|[Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)|Something you have (smart card) and something you know (PIN). 
</br> Physical smart cards, or virtual smart cards stored in TPM 2.0 or later, are a Secure Element (SE). This action meets the authentication method independence and out-of-band requirement.|Procure smart cards with protection against tampering and compromise.|Users unlock the certificate private key with the gesture, or PIN, then Azure AD validates the credential. | ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) (You're here) |
active-directory | Pci Requirement 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-1.md | + + Title: Azure Active Directory and PCI-DSS Requirement 1 +description: Learn PCI-DSS defined approach requirements for installing and maintaining network security controls +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 1 ++**Requirement 1: Install and Maintain Network Security Controls** +</br> **Defined approach requirements** ++## 1.1 Processes and mechanisms for installing and maintaining network security controls are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**1.1.1** All security policies and operational procedures that are identified in Requirement 1 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**1.1.2** Roles and responsibilities for performing activities in Requirement 1 are documented, assigned, and understood|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 1.2 Network security controls (NSCs) are configured and maintained. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**1.2.1** Configuration standards for NSC rulesets are: </br> Defined </br> Implemented </br> Maintained|Integrate access technologies such as VPN, remote desktop, and network access points with Azure AD for authentication and authorization, if the access technologies support modern authentication. Ensure NSC standards, which pertain to identity-related controls, include definition of Conditional Access policies, application assignment, access reviews, group management, credential policies, etc. [Azure AD operations reference guide](../fundamentals/active-directory-ops-guide-intro.md)| +|**1.2.2** All changes to network connections and to configurations of NSCs are approved and managed in accordance with the change control process defined at Requirement 6.5.1|Not applicable to Azure AD.| +|**1.2.3** An accurate network diagram(s) is maintained that shows all connections between the cardholder data environment (CDE) and other networks, including any wireless networks.|Not applicable to Azure AD.| +|**1.2.4** An accurate data-flow diagram(s) is maintained that meets the following: </br> Shows all account data flows across systems and networks. </br> Updated as needed upon changes to the environment.|Not applicable to Azure AD.| +|**1.2.5** All services, protocols, and ports allowed are identified, approved, and have a defined business need|Not applicable to Azure AD.| +|**1.2.6** Security features are defined and implemented for all services, protocols, and ports in use and considered insecure, such that risk is mitigated.|Not applicable to Azure AD.| +|**1.2.7** Configurations of NSCs are reviewed at least once every six months to confirm they're relevant and effective.|Use Azure AD access reviews to automate reviews of group memberships and applications, such as VPN appliances, that align to network security controls in your CDE. 
[What are access reviews?](../governance/access-reviews-overview.md)| +|**1.2.8** Configuration files for NSCs are: </br> Secured from unauthorized access </br> Kept consistent with active network configurations|Not applicable to Azure AD.| ++## 1.3 Network access to and from the cardholder data environment is restricted. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**1.3.1** Inbound traffic to the CDE is restricted as follows: </br> To only traffic that is necessary. </br> All other traffic is specifically denied|Use Azure AD to configure named locations to create Conditional Access policies. Calculate user and sign-in risk. Microsoft recommends customers populate and maintain the CDE IP addresses using network locations. Use them to define Conditional Access policy requirements. See the named-location example after this article. [Using the location condition in a CA policy](../conditional-access/location-condition.md)| +|**1.3.2** Outbound traffic from the CDE is restricted as follows: </br> To only traffic that is necessary. </br> All other traffic is specifically denied|For NSC design, include Conditional Access policies for applications to allow access to CDE IP addresses. </br> Emergency access or remote access that establishes connectivity to the CDE, such as virtual private network (VPN) appliances and captive portals, might need policies to prevent unintended lockout. [Using the location condition in a CA policy](../conditional-access/location-condition.md) </br> [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)| +|**1.3.3** NSCs are installed between all wireless networks and the CDE, regardless of whether the wireless network is a CDE, such that: </br> All wireless traffic from wireless networks into the CDE is denied by default. </br> Only wireless traffic with an authorized business purpose is allowed into the CDE.|For NSC design, include Conditional Access policies for applications to allow access to CDE IP addresses. </br> Emergency access or remote access that establishes connectivity to the CDE, such as virtual private network (VPN) appliances and captive portals, might need policies to prevent unintended lockout. [Using the location condition in a CA policy](../conditional-access/location-condition.md) </br> [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)| ++## 1.4 Network connections between trusted and untrusted networks are controlled. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**1.4.1** NSCs are implemented between trusted and untrusted networks.|Not applicable to Azure AD.| +|**1.4.2** Inbound traffic from untrusted networks to trusted networks is restricted to: </br> Communications with system components that are authorized to provide publicly accessible services, protocols, and ports. </br> Stateful responses to communications initiated by system components in a trusted network. </br> All other traffic is denied.|Not applicable to Azure AD.| +|**1.4.3** Anti-spoofing measures are implemented to detect and block forged source IP addresses from entering the trusted network.|Not applicable to Azure AD.| +|**1.4.4** System components that store cardholder data are not directly accessible from untrusted networks.|In addition to controls in the networking layer, applications in the CDE using Azure AD can use Conditional Access policies. Restrict access to applications based on location. 
[Using the location condition in a CA policy](../conditional-access/location-condition.md)| +|**1.4.5** The disclosure of internal IP addresses and routing information is limited to only authorized parties.|Not applicable to Azure AD.| ++## 1.5 Risks to the CDE from computing devices that are able to connect to both untrusted networks and the CDE are mitigated. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**1.5.1** Security controls are implemented on any computing devices, including company- and employee-owned devices, that connect to both untrusted networks (including the Internet) and the CDE as follows: </br> Specific configuration settings are defined to prevent threats being introduced into the entity's network. </br> Security controls are actively running. </br> Security controls are not alterable by users of the computing devices unless specifically documented and authorized by management on a case-by-case basis for a limited period.| Deploy Conditional Access policies that require device compliance. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> Integrate device compliance state with anti-malware solutions. [Enforce compliance for Microsoft Defender for Endpoint with Conditional Access in Intune](/mem/intune/protect/advanced-threat-protection) </br> [Mobile Threat Defense integration with Intune](/mem/intune/protect/mobile-threat-defense)| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) (You're here) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) + |
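The Requirement 1 guidance repeatedly recommends registering CDE IP ranges as Azure AD named locations so Conditional Access policies can reference them (1.3.1 and 1.3.2). A minimal Python sketch of that step against the Microsoft Graph named locations API follows; `TOKEN`, the display name, and the CIDR range are placeholders, and the call assumes an identity with the `Policy.ReadWrite.ConditionalAccess` permission.

```python
# Sketch: register CDE egress IP ranges as an Azure AD named location so that
# Conditional Access policies can scope access to them (PCI-DSS 1.3.x guidance).
import requests

TOKEN = "<graph-access-token>"  # placeholder: acquire via MSAL or azure-identity
GRAPH = "https://graph.microsoft.com/v1.0"

named_location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "CDE egress ranges",  # hypothetical name
    "isTrusted": True,
    "ipRanges": [
        # 203.0.113.0/24 is a documentation range; substitute your CDE CIDRs.
        {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}
    ],
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=named_location,
    timeout=30,
)
resp.raise_for_status()
print("Created named location:", resp.json()["id"])
```

Once the named location exists, Conditional Access policies can include or exclude it in their location condition; keep emergency access accounts excluded so a misconfigured range can't lock out administrators.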
active-directory | Pci Requirement 10 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-10.md | + + Title: Azure Active Directory and PCI-DSS Requirement 10 +description: Learn PCI-DSS defined approach requirements about logging and monitoring all access to system components and CHD +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 10 ++**Requirement 10: Log and Monitor All Access to System Components and Cardholder Data** +</br>**Defined approach requirements** ++## 10.1 Processes and mechanisms for logging and monitoring all access to system components and cardholder data are defined and documented. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.1.1** All security policies and operational procedures that are identified in Requirement 10 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**10.1.2** Roles and responsibilities for performing activities in Requirement 10 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 10.2 Audit logs are implemented to support the detection of anomalies and suspicious activity, and the forensic analysis of events. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.2.1** Audit logs are enabled and active for all system components and cardholder data.|Archive Azure AD audit logs to obtain changes to security policies and Azure AD tenant configuration. </br> Archive Azure AD activity logs in a security information and event management (SIEM) system to learn about usage. [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md)| +|**10.2.1.1** Audit logs capture all individual user access to cardholder data.|Not applicable to Azure AD.| +|**10.2.1.2** Audit logs capture all actions taken by any individual with administrative access, including any interactive use of application or system accounts.|Not applicable to Azure AD.| +|**10.2.1.3** Audit logs capture all access to audit logs.|In Azure AD, you can't wipe or modify logs. Privileged users can query logs from Azure AD. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md) </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.| +|**10.2.1.4** Audit logs capture all invalid logical access attempts.|Azure AD generates activity logs when a user attempts to sign in with invalid credentials. It generates activity logs when access is denied due to Conditional Access policies. See the query example after this article. | +|**10.2.1.5** Audit logs capture all changes to identification and authentication credentials including, but not limited to: </br> Creation of new accounts </br> Elevation of privileges </br> All changes, additions, or deletions to accounts with administrative access|Azure AD generates audit logs for the events in this requirement. 
| +|**10.2.1.6** Audit logs capture the following: </br> All initialization of new audit logs, and </br> All starting, stopping, or pausing of the existing audit logs.|Not applicable to Azure AD.| +|**10.2.1.7** Audit logs capture all creation and deletion of system-level objects.|Azure AD generates audit logs for events in this requirement.| +|**10.2.2** Audit logs record the following details for each auditable event: </br> User identification. </br> Type of event. </br> Date and time. </br> Success and failure indication. </br> Origination of event. </br> Identity or name of affected data, system component, resource, or service (for example, name and protocol).|See [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md)| ++## 10.3 Audit logs are protected from destruction and unauthorized modifications. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.3.1** Read access to audit log files is limited to those with a job-related need.|Privileged users can query logs from Azure AD. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md)| +|**10.3.2** Audit log files are protected to prevent modifications by individuals.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.| +|**10.3.3** Audit log files, including those for external-facing technologies, are promptly backed up to a secure, central, internal log server(s) or other media that is difficult to modify.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.| +|**10.3.4** File integrity monitoring or change-detection mechanisms are used on audit logs to ensure that existing log data can't be changed without generating alerts.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.| ++## 10.4 Audit logs are reviewed to identify anomalies or suspicious activity. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.4.1** The following audit logs are reviewed at least once daily: </br> All security events. </br> Logs of all system components that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD). Logs of all critical system components. </br> Logs of all servers and system components that perform security functions (for example, network security controls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), authentication servers).|Include Azure AD logs in this process.| +|**10.4.1.1** Automated mechanisms are used to perform audit log reviews.|Include Azure AD logs in this process. Configure automated actions and alerting when Azure AD logs are integrated with Azure Monitor. 
[Deploy Azure Monitor: Alerts and automated actions](/azure/azure-monitor/best-practices-alerts)| +|**10.4.2** Logs of all other system components (those not specified in Requirement 10.4.1) are reviewed periodically.|Not applicable to Azure AD.| +|**10.4.2.1** The frequency of periodic log reviews for all other system components (not defined in Requirement 10.4.1) is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1|Not applicable to Azure AD.| +|**10.4.3** Exceptions and anomalies identified during the review process are addressed.|Not applicable to Azure AD.| ++## 10.5 Audit log history is retained and available for analysis. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.5.1** Retain audit log history for at least 12 months, with at least the most recent three months immediately available for analysis.|Integrate with Azure Monitor and export the logs for long-term archival. [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) </br> Learn about the Azure AD log data retention policy. [Azure AD data retention](../reports-monitoring/reference-reports-data-retention.md)| ++## 10.6 Time-synchronization mechanisms support consistent time settings across all systems. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.6.1** System clocks and time are synchronized using time-synchronization technology.|Learn about the time synchronization mechanism in Azure services. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/)| +|**10.6.2** Systems are configured to the correct and consistent time as follows: </br> One or more designated time servers are in use. </br> Only the designated central time server(s) receives time from external sources. </br> Time received from external sources is based on International Atomic Time or Coordinated Universal Time (UTC). </br> The designated time server(s) accept time updates only from specific industry-accepted external sources. </br> Where there's more than one designated time server, the time servers peer with one another to keep accurate time. </br> Internal systems receive time information only from designated central time server(s).|Learn about the time synchronization mechanism in Azure services. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/)| +|**10.6.3** Time synchronization settings and data are protected as follows: </br> Access to time data is restricted to only personnel with a business need. </br> Any changes to time settings on critical systems are logged, monitored, and reviewed.|Azure AD relies on time synchronization mechanisms in Azure. </br> Azure procedures synchronize servers and network devices with NTP Stratum 1 time servers synchronized to global positioning system (GPS) satellites. Synchronization occurs every five minutes. Azure ensures service hosts sync time. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/) </br> Hybrid components in Azure AD, such as Azure AD Connect servers, interact with on-premises infrastructure. The customer owns time synchronization of on-premises servers. 
| ++## 10.7 Failures of critical security control systems are detected, reported, and responded to promptly. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**10.7.1** *Additional requirement for service providers only*: Failures of critical security control systems are detected, alerted, and addressed promptly, including but not limited to failure of the following critical security control systems: </br> Network security controls </br> IDS/IPS </br> File integrity monitoring (FIM) </br> Anti-malware solutions </br> Physical access controls </br> Logical access controls </br> Audit logging mechanism </br> Segmentation controls (if used)|Azure supports real-time event analysis in its operational environment. Internal Azure infrastructure systems generate near real-time event alerts about potential compromise.| +|**10.7.2** Failures of critical security control systems are detected, alerted, and addressed promptly, including but not limited to failure of the following critical security control systems: </br> Network security controls </br> IDS/IPS </br> Change-detection mechanisms </br> Anti-malware solutions </br> Physical access controls </br> Logical access controls </br> Audit logging mechanisms </br> Segmentation controls (if used) </br> Audit log review mechanisms </br> Automated security testing tools (if used)|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md) | +|**10.7.3** Failures of any critical security control systems are responded to promptly, including but not limited to: </br> Restoring security functions. </br> Identifying and documenting the duration (date and time from start to end) of the security failure. </br> Identifying and documenting the cause(s) of failure and documenting required remediation. </br> Identifying and addressing any security issues that arose during the failure. </br> Determining whether further actions are required as a result of the security failure. </br> Implementing controls to prevent the cause of failure from reoccurring. </br> Resuming monitoring of security controls.|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. 
++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) (You're here) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
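Requirements 10.2.1.4 and 10.4.1.1 above call for capturing invalid access attempts and reviewing them with automated mechanisms. A minimal sketch of a daily automated review follows, assuming Azure AD sign-in logs are already exported to a Log Analytics workspace via diagnostic settings and that the `azure-monitor-query` and `azure-identity` packages are installed; `WORKSPACE_ID` is a placeholder, and the exact result-shape details can vary by package version.

```python
# Sketch: daily review of failed Azure AD sign-ins from Log Analytics
# (PCI-DSS 10.2.1.4 / 10.4.1.1). In the SigninLogs table, a ResultType of "0"
# indicates success, so anything else is a failed attempt.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

query = """
SigninLogs
| where ResultType != "0"
| summarize Failures = count() by UserPrincipalName, ResultType
| top 20 by Failures desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

In practice you'd wire the same KQL into an Azure Monitor log alert rule rather than a script, so exceptions surface as alerts instead of console output.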
active-directory | Pci Requirement 11 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-11.md | + + Title: Azure Active Directory and PCI-DSS Requirement 11 +description: Learn PCI-DSS defined approach requirements for regularly testing security of systems and networks +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 11 ++**Requirement 11: Test Security of Systems and Networks Regularly** +</br>**Defined approach requirements** ++## 11.1 Processes and mechanisms for regularly testing security of systems and networks are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.1.1** All security policies and operational procedures that are identified in Requirement 11 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**11.1.2** Roles and responsibilities for performing activities in Requirement 11 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 11.2 Wireless access points are identified and monitored, and unauthorized wireless access points are addressed. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.2.1** Authorized and unauthorized wireless access points are managed as follows: </br> The presence of wireless (Wi-Fi) access points is tested for. </br> All authorized and unauthorized wireless access points are detected and identified. </br> Testing, detection, and identification occurs at least once every three months. </br> If automated monitoring is used, personnel are notified via generated alerts.|If your organization integrates network access points with Azure AD for authentication, see [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md).| +|**11.2.2** An inventory of authorized wireless access points is maintained, including a documented business justification.|Not applicable to Azure AD.| ++## 11.3 External and internal vulnerabilities are regularly identified, prioritized, and addressed. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.3.1** Internal vulnerability scans are performed as follows: </br> At least once every three months. </br> High-risk and critical vulnerabilities (per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are resolved. </br> Rescans are performed that confirm all high-risk and critical vulnerabilities (as noted) have been resolved. </br> Scan tool is kept up to date with latest vulnerability information. </br> Scans are performed by qualified personnel and organizational independence of the tester exists.|Include servers that support Azure AD hybrid capabilities, such as Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. 
[Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) </br> See the risk-detection example after this article.| +|**11.3.1.1** All other applicable vulnerabilities (those not ranked as high-risk or critical per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are managed as follows: </br> Addressed based on the risk defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1. </br> Rescans are conducted as needed.|Include servers that support Azure AD hybrid capabilities, such as Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)| +|**11.3.1.2** Internal vulnerability scans are performed via authenticated scanning as follows: </br> Systems that are unable to accept credentials for authenticated scanning are documented. </br> Sufficient privileges are used for those systems that accept credentials for scanning. </br> If accounts used for authenticated scanning can be used for interactive login, they're managed in accordance with Requirement 8.2.2.|Include servers that support Azure AD hybrid capabilities, such as Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)| +|**11.3.1.3** Internal vulnerability scans are performed after any significant change as follows: </br> High-risk and critical vulnerabilities (per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are resolved. 
</br> Rescans are conducted as needed. </br> Scans are performed by qualified personnel and organizational independence of the tester exists (not required to be a Qualified Security Assessor (QSA) or Approved Scanning Vendor (ASV)).|Include servers that support Azure AD hybrid capabilities, such as Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)| +|**11.3.2** External vulnerability scans are performed as follows: </br> At least once every three months. </br> By a PCI SSC ASV. </br> Vulnerabilities are resolved and ASV Program Guide requirements for a passing scan are met. </br> Rescans are performed as needed to confirm that vulnerabilities are resolved per the ASV Program Guide requirements for a passing scan.|Not applicable to Azure AD.| +|**11.3.2.1** External vulnerability scans are performed after any significant change as follows: </br> Vulnerabilities that are scored 4.0 or higher by the CVSS are resolved. </br> Rescans are conducted as needed. </br> Scans are performed by qualified personnel and organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.| ++## 11.4 External and internal penetration testing is regularly performed, and exploitable vulnerabilities and security weaknesses are corrected. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.4.1** A penetration testing methodology is defined, documented, and implemented by the entity, and includes: </br> Industry-accepted penetration testing approaches. </br> Coverage for the entire cardholder data environment (CDE) perimeter and critical systems. </br> Testing from both inside and outside the network. </br> Testing to validate any segmentation and scope-reduction controls. </br> Application-layer penetration testing to identify, at a minimum, the vulnerabilities listed in Requirement 6.2.4. </br> Network-layer penetration tests that encompass all components that support network functions and operating systems. </br> Review and consideration of threats and vulnerabilities experienced in the last 12 months. </br> Documented approach to assessing and addressing the risk posed by exploitable vulnerabilities and security weaknesses found during penetration testing. </br> Retention of penetration testing results and remediation activities results for at least 12 months.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)| +|**11.4.2** Internal penetration testing is performed: </br> Per the entity's defined methodology. </br> At least once every 12 months. 
</br> After any significant infrastructure or application upgrade or change. </br> By a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)| +|**11.4.3** External penetration testing is performed: </br> Per the entity's defined methodology. </br> At least once every 12 months. </br> After any significant infrastructure or application upgrade or change. </br> By a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)| +|**11.4.4** Exploitable vulnerabilities and security weaknesses found during penetration testing are corrected as follows: </br> In accordance with the entity's assessment of the risk posed by the security issue as defined in Requirement 6.3.1. </br> Penetration testing is repeated to verify the corrections.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)| +|**11.4.5** If segmentation is used to isolate the CDE from other networks, penetration tests are performed on segmentation controls as follows: </br> At least once every 12 months and after any changes to segmentation controls/methods. </br> Covering all segmentation controls/methods in use. </br> According to the entity's defined penetration testing methodology. </br> Confirming that the segmentation controls/methods are operational and effective, and isolate the CDE from all out-of-scope systems. </br> Confirming effectiveness of any use of isolation to separate systems with differing security levels (see Requirement 2.2.3). </br> Performed by a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.| +|**11.4.6** *Additional requirement for service providers only*: If segmentation is used to isolate the CDE from other networks, penetration tests are performed on segmentation controls as follows: </br> At least once every six months and after any changes to segmentation controls/methods. </br> Covering all segmentation controls/methods in use. </br> According to the entity's defined penetration testing methodology. </br> Confirming that the segmentation controls/methods are operational and effective, and isolate the CDE from all out-of-scope systems. </br> Confirming effectiveness of any use of isolation to separate systems with differing security levels (see Requirement 2.2.3). </br> Performed by a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.| +|**11.4.7** *Additional requirement for multi-tenant service providers only*: Multi-tenant service providers support their customers for external penetration testing per Requirement 11.4.3 and 11.4.4.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)| ++## 11.5 Network intrusions and unexpected file changes are detected and responded to. 
++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.5.1** Intrusion-detection and/or intrusion-prevention techniques are used to detect and/or prevent intrusions into the network as follows: </br> All traffic is monitored at the perimeter of the CDE. </br> All traffic is monitored at critical points in the CDE. </br> Personnel are alerted to suspected compromises. </br> All intrusion-detection and prevention engines, baselines, and signatures are kept up to date.|Not applicable to Azure AD.| +|**11.5.1.1** *Additional requirement for service providers only*: Intrusion-detection and/or intrusion-prevention techniques detect, alert on/prevent, and address covert malware communication channels.|Not applicable to Azure AD.| +|**11.5.2** A change-detection mechanism (for example, file integrity monitoring tools) is deployed as follows: </br> To alert personnel to unauthorized modification (including changes, additions, and deletions) of critical files. </br> To perform critical file comparisons at least once weekly.|Not applicable to Azure AD.| ++## 11.6 Unauthorized changes on payment pages are detected and responded to. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**11.6.1** A change- and tamper-detection mechanism is deployed as follows: </br> To alert personnel to unauthorized modification (including indicators of compromise, changes, additions, and deletions) to the HTTP headers and the contents of payment pages as received by the consumer browser. </br> The mechanism is configured to evaluate the received HTTP header and payment page. </br> The mechanism functions are performed as follows: At least once every seven days </br> OR </br> Periodically at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) (You're here) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
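The 11.3.1 guidance above recommends reviewing and mitigating Identity Protection risk detections as part of the vulnerability management loop. A minimal sketch of pulling recent detections from Microsoft Graph follows; `TOKEN` is a placeholder, and the call assumes an identity granted the `IdentityRiskEvent.Read.All` permission.

```python
# Sketch: enumerate recent Azure AD Identity Protection risk detections so they
# can be triaged alongside vulnerability scan findings (11.3.1 guidance).
import requests

TOKEN = "<graph-access-token>"  # placeholder: acquire via MSAL or azure-identity
url = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?$top=50"

while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    for detection in payload.get("value", []):
        print(detection["detectedDateTime"], detection["riskEventType"],
              detection["riskLevel"], detection.get("userPrincipalName"))
    url = payload.get("@odata.nextLink")  # follow server-side paging, if present
```

Feeding the same data into a SIEM, rather than printing it, is what connects detections to the remediation workflows the guidance describes.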
active-directory | Pci Requirement 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-2.md | + + Title: Azure Active Directory and PCI-DSS Requirement 2 +description: Learn PCI-DSS defined approach requirements for applying secure configurations to all system components +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 2 ++**Requirement 2: Apply Secure Configurations to All System Components** +</br> **Defined approach requirements** ++## 2.1 Processes and mechanisms for applying secure configurations to all system components are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**2.1.1** All security policies and operational procedures that are identified in Requirement 2 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**2.1.2** Roles and responsibilities for performing activities in Requirement 2 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 2.2 System components are configured and managed securely. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**2.2.1** Configuration standards are developed, implemented, and maintained to: </br> Cover all system components. </br> Address all known security vulnerabilities. </br> Be consistent with industry-accepted system hardening standards or vendor hardening recommendations. </br> Be updated as new vulnerability issues are identified, as defined in Requirement 6.3.1. </br> Be applied when new systems are configured and verified as in place before or immediately after a system component is connected to a production environment.|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)| +|**2.2.2** Vendor default accounts are managed as follows: </br> If the vendor default account(s) will be used, the default password is changed per Requirement 8.3.6. </br> If the vendor default account(s) will not be used, the account is removed or disabled.|Not applicable to Azure AD.| +|**2.2.3** Primary functions requiring different security levels are managed as follows: </br> Only one primary function exists on a system component, </br> OR </br> Primary functions with differing security levels that exist on the same system component are isolated from each other, </br> OR </br> Primary functions with differing security levels on the same system component are all secured to the level required by the function with the highest security need.|Learn about determining least-privileged roles. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md)| +|**2.2.4** Only necessary services, protocols, daemons, and functions are enabled, and all unnecessary functionality is removed or disabled.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)| +|**2.2.5** If any insecure services, protocols, or daemons are present: </br> Business justification is documented. 
</br> Additional security features are documented and implemented that reduce the risk of using insecure services, protocols, or daemons.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)| +|**2.2.6** System security parameters are configured to prevent misuse.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)| +|**2.2.7** All nonconsole administrative access is encrypted using strong cryptography.|Azure AD interfaces, such as the management portal, Microsoft Graph, and PowerShell, are encrypted in transit using TLS. See the TLS verification sketch after this article. [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment?tabs=azure-monitor)| ++## 2.3 Wireless environments are configured and managed securely. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**2.3.1** For wireless environments connected to the CDE or transmitting account data, all wireless vendor defaults are changed at installation or are confirmed to be secure, including but not limited to: </br> Default wireless encryption keys </br> Passwords on wireless access points </br> SNMP defaults </br> Any other security-related wireless vendor defaults|If your organization integrates network access points with Azure AD for authentication, see [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md).| +|**2.3.2** For wireless environments connected to the CDE or transmitting account data, wireless encryption keys are changed as follows: </br> Whenever personnel with knowledge of the key leave the company or the role for which the knowledge was necessary. </br> Whenever a key is suspected of or known to be compromised.|Not applicable to Azure AD.| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) (You're here) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
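Requirement 2.2.7 above concerns strong cryptography for nonconsole administrative access, and the guidance points to TLS 1.2 support. As a minimal, standard-library-only sketch, the following verifies that an administrative host negotiates TLS 1.2 or later to Azure AD endpoints; the endpoint list is illustrative.

```python
# Sketch: confirm this host negotiates TLS 1.2+ to Azure AD endpoints
# (supports the TLS guidance in 2.2.7). Pure Python standard library.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

for host in ("login.microsoftonline.com", "graph.microsoft.com"):
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}: negotiated {tls.version()}")
```

If the handshake fails with `minimum_version` set, the client stack on that host still depends on TLS 1.1 or 1.0 and needs the remediation described in the linked deprecation article.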
active-directory | Pci Requirement 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-5.md | + + Title: Azure Active Directory and PCI-DSS Requirement 5 +description: Learn PCI-DSS defined approach requirements for protecting all systems and networks from malicious software +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 5 ++**Requirement 5: Protect All Systems and Networks from Malicious Software** +</br>**Defined approach requirements** ++## 5.1 Processes and mechanisms for protecting all systems and networks from malicious software are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**5.1.1** All security policies and operational procedures that are identified in Requirement 5 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**5.1.2** Roles and responsibilities for performing activities in Requirement 5 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 5.2 Malicious software (malware) is prevented, or detected and addressed. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**5.2.1** An anti-malware solution(s) is deployed on all system components, except for those system components identified in periodic evaluations per Requirement 5.2.3 that conclude the system components aren't at risk from malware.|Deploy Conditional Access policies that require device compliance. See the policy example after this article. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> Integrate device compliance state with anti-malware solutions. [Enforce compliance for Microsoft Defender for Endpoint with Conditional Access in Intune](/mem/intune/protect/advanced-threat-protection) </br> [Mobile Threat Defense integration with Intune](/mem/intune/protect/mobile-threat-defense)| +|**5.2.2** The deployed anti-malware solution(s): </br> Detects all known types of malware. Removes, blocks, or contains all known types of malware.|Not applicable to Azure AD.| +|**5.2.3** Any system components that aren't at risk for malware are evaluated periodically to include the following: </br> A documented list of all system components not at risk for malware. </br> Identification and evaluation of evolving malware threats for those system components. </br> Confirmation whether such system components continue to not require anti-malware protection.|Not applicable to Azure AD.| +|**5.2.3.1** The frequency of periodic evaluations of system components identified as not at risk for malware is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.| ++## 5.3 Anti-malware mechanisms and processes are active, maintained, and monitored. 
++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**5.3.1** The anti-malware solution(s) is kept current via automatic updates.|Not applicable to Azure AD.| +|**5.3.2** The anti-malware solution(s): </br> Performs periodic scans and active or real-time scans.</br> OR </br> Performs continuous behavioral analysis of systems or processes.|Not applicable to Azure AD.| +|**5.3.2.1** If periodic malware scans are performed to meet Requirement 5.3.2, the frequency of scans is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.| +|**5.3.3** For removable electronic media, the anti-malware solution(s): </br> Performs automatic scans when the media is inserted, connected, or logically mounted, </br> OR </br> Performs continuous behavioral analysis of systems or processes when the media is inserted, connected, or logically mounted.|Not applicable to Azure AD.| +|**5.3.4** Audit logs for the anti-malware solution(s) are enabled and retained in accordance with Requirement 10.5.1.|Not applicable to Azure AD.| +|**5.3.5** Anti-malware mechanisms can't be disabled or altered by users, unless specifically documented, and authorized by management on a case-by-case basis for a limited time period.|Not applicable to Azure AD.| ++## 5.4 Anti-phishing mechanisms protect users against phishing attacks. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**5.4.1** Processes and automated mechanisms are in place to detect and protect personnel against phishing attacks.|Configure Azure AD to use phishing-resistant credentials. [Implementation considerations for phishing-resistant MFA](memo-22-09-multi-factor-authentication.md) </br> Use controls in Conditional Access to require authentication with phishing-resistant credentials. [Conditional Access authentication strength](../authentication/concept-authentication-strengths.md) </br> Guidance herein relates to identity and access management configuration. To mitigate phishing attacks, deploy workload capabilities, such as in Microsoft 365. [Anti-phishing protection in Microsoft 365](/microsoft-365/security/office-365-security/anti-phishing-protection-about?view=o365-worldwide&preserve-view=true)| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. 
++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) (You're here) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
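The 5.2.1 guidance above recommends Conditional Access policies that require device compliance. A minimal sketch of creating such a policy through Microsoft Graph follows, started in report-only mode so its impact can be evaluated before enforcement. `TOKEN`, the display name, and the all-users/all-apps scoping are placeholders; a production policy should target CDE applications and exclude emergency access accounts, and the call assumes the `Policy.ReadWrite.ConditionalAccess` permission.

```python
# Sketch: report-only Conditional Access policy requiring a compliant device
# (PCI-DSS 5.2.1 guidance). Scope and names are illustrative only.
import requests

TOKEN = "<graph-access-token>"  # placeholder: acquire via MSAL or azure-identity

policy = {
    "displayName": "Require compliant device for CDE apps (report-only)",
    "state": "enabledForReportingButNotEnforced",  # evaluate before enforcing
    "conditions": {
        "users": {"includeUsers": ["All"]},               # narrow in production
        "applications": {"includeApplications": ["All"]},  # target CDE apps instead
    },
    "grantControls": {"operator": "OR", "builtInControls": ["compliantDevice"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Report-only mode surfaces what the policy would have done in the sign-in logs, which pairs naturally with the Requirement 10 monitoring guidance before you switch `state` to `enabled`.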
active-directory | Pci Requirement 6 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-6.md | + + Title: Azure Active Directory and PCI-DSS Requirement 6 +description: Learn PCI-DSS defined approach requirements about developing and maintaining secure systems and software +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 6 ++**Requirement 6: Develop and Maintain Secure Systems and Software** +</br>**Defined approach requirements** ++## 6.1 Processes and mechanisms for developing and maintaining secure systems and software are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**6.1.1** All security policies and operational procedures that are identified in Requirement 6 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**6.1.2** Roles and responsibilities for performing activities in Requirement 6 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 6.2 Bespoke and custom software are developed securely. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**6.2.1** Bespoke and custom software are developed securely, as follows: </br> Based on industry standards and/or best practices for secure development. </br> In accordance with PCI-DSS (for example, secure authentication and logging). </br> Incorporating consideration of information security issues during each stage of the software development lifecycle.|Procure and develop applications that use modern authentication protocols, such as OAuth2 and OpenID Connect (OIDC), which integrate with Azure Active Directory (Azure AD). </br> Build software using the Microsoft identity platform. [Microsoft identity platform best practices and recommendations](../develop/identity-platform-integration-checklist.md)| +|**6.2.2** Software development personnel working on bespoke and custom software are trained at least once every 12 months as follows: </br> On software security relevant to their job function and development languages. </br> Including secure software design and secure coding techniques. </br> Including, if security testing tools are used, how to use the tools for detecting vulnerabilities in software.|Use the following exam to provide proof of proficiency on the Microsoft identity platform: [Exam MS-600: Building Applications and Solutions with Microsoft 365 Core Services](/certifications/exams/ms-600). Use the following training to prepare for the exam: [MS-600: Implement Microsoft identity](/training/paths/m365-identity-associate/)| +|**6.2.3** Bespoke and custom software is reviewed prior to being released into production or to customers, to identify and correct potential coding vulnerabilities, as follows: </br> Code reviews ensure code is developed according to secure coding guidelines. </br> Code reviews look for both existing and emerging software vulnerabilities. 
</br> Appropriate corrections are implemented prior to release.|Not applicable to Azure AD.| +|**6.2.3.1** If manual code reviews are performed for bespoke and custom software prior to release to production, code changes are: </br> Reviewed by individuals other than the originating code author, and who are knowledgeable about code-review techniques and secure coding practices. </br> Reviewed and approved by management prior to release.|Not applicable to Azure AD.| +|**6.2.4** Software engineering techniques or other methods are defined and in use by software development personnel to prevent or mitigate common software attacks and related vulnerabilities in bespoke and custom software, including but not limited to the following: </br> Injection attacks, including SQL, LDAP, XPath, or other command, parameter, object, fault, or injection-type flaws. </br> Attacks on data and data structures, including attempts to manipulate buffers, pointers, input data, or shared data. </br> Attacks on cryptography usage, including attempts to exploit weak, insecure, or inappropriate cryptographic implementations, algorithms, cipher suites, or modes of operation. </br> Attacks on business logic, including attempts to abuse or bypass application features and functionalities through the manipulation of APIs, communication protocols and channels, client-side functionality, or other system/application functions and resources. This includes cross-site scripting (XSS) and cross-site request forgery (CSRF). </br> Attacks on access control mechanisms, including attempts to bypass or abuse identification, authentication, or authorization mechanisms, or attempts to exploit weaknesses in the implementation of such mechanisms. </br> Attacks via any "high-risk" vulnerabilities identified in the vulnerability identification process, as defined in Requirement 6.3.1.|Not applicable to Azure AD.| ++## 6.3 Security vulnerabilities are identified and addressed. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**6.3.1** Security vulnerabilities are identified and managed as follows: </br> New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national computer emergency response teams (CERTs). </br> Vulnerabilities are assigned a risk ranking based on industry best practices and consideration of potential impact. </br> Risk rankings identify, at a minimum, all vulnerabilities considered to be a high-risk or critical to the environment. </br> Vulnerabilities for bespoke and custom, and third-party software (for example operating systems and databases) are covered.|Learn about vulnerabilities. [MSRC | Security Updates, Security Update Guide](https://msrc.microsoft.com/update-guide)| +|**6.3.2** An inventory of bespoke and custom software, and third-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management.|Generate inventory reports for applications that use Azure AD for authentication. See the inventory example after this article. 
[applicationSignInDetailedSummary resource type](/graph/api/resources/applicationsignindetailedsummary?view=graph-rest-beta&viewFallbackFrom=graph-rest-1.0&preserve-view=true) </br> [Applications listed in Enterprise applications](../manage-apps/application-list.md)| +|**6.3.3** All system components are protected from known vulnerabilities by installing applicable security patches/updates as follows: </br> Critical or high-security patches/updates (identified according to the risk ranking process at Requirement 6.3.1) are installed within one month of release. </br> All other applicable security patches/updates are installed within an appropriate time frame as determined by the entity (for example, within three months of release).|Not applicable to Azure AD.| ++## 6.4 Public-facing web applications are protected against attacks. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**6.4.1** For public-facing web applications, new threats and vulnerabilities are addressed on an ongoing basis and these applications are protected against known attacks as follows: Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods as follows: </br> – At least once every 12 months and after significant changes. </br> – By an entity that specializes in application security. </br> – Including, at a minimum, all common software attacks in Requirement 6.2.4. </br> – All vulnerabilities are ranked in accordance with requirement 6.3.1. </br> – All vulnerabilities are corrected. </br> – The application is reevaluated after the corrections </br> OR </br> Installing an automated technical solution(s) that continually detect and prevent web-based attacks as follows: </br> – Installed in front of public-facing web applications to detect and prevent web-based attacks. </br> – Actively running and up to date as applicable. </br> – Generating audit logs. </br> – Configured to either block web-based attacks or generate an alert that is immediately investigated.|Not applicable to Azure AD.| +|**6.4.2** For public-facing web applications, an automated technical solution is deployed that continually detects and prevents web-based attacks, with at least the following: </br> Is installed in front of public-facing web applications and is configured to detect and prevent web-based attacks. </br> Actively running and up to date as applicable. </br> Generating audit logs. </br> Configured to either block web-based attacks or generate an alert that is immediately investigated.|Not applicable to Azure AD.| +|**6.4.3** All payment page scripts that are loaded and executed in the consumer's browser are managed as follows: </br> A method is implemented to confirm that each script is authorized. </br> A method is implemented to assure the integrity of each script. </br> An inventory of all scripts is maintained with written justification as to why each is necessary.|Not applicable to Azure AD.| ++## 6.5 Changes to all system components are managed securely. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**6.5.1** Changes to all system components in the production environment are made according to established procedures that include: </br> Reason for, and description of, the change. </br> Documentation of security impact. </br> Documented change approval by authorized parties. </br> Testing to verify that the change doesn't adversely impact system security. 
</br> For bespoke and custom software changes, all updates are tested for compliance with Requirement 6.2.4 before being deployed into production. </br> Procedures to address failures and return to a secure state.|Include changes to Azure AD configuration in the change control process. | +|**6.5.2** Upon completion of a significant change, all applicable PCI-DSS requirements are confirmed to be in place on all new or changed systems and networks, and documentation is updated as applicable.|Not applicable to Azure AD.| +|**6.5.3** Preproduction environments are separated from production environments and the separation is enforced with access controls.|Learn about approaches to separate preproduction and production environments, based on organizational requirements. [Resource isolation in a single tenant](../fundamentals/secure-with-azure-ad-single-tenant.md) </br> [Resource isolation with multiple tenants](../fundamentals/secure-with-azure-ad-multiple-tenants.md)| +|**6.5.4** Roles and functions are separated between production and preproduction environments to provide accountability such that only reviewed and approved changes are deployed.|Learn about privileged roles and dedicated preproduction tenants. [Best practices for Azure AD roles](../roles/best-practices.md)| +|**6.5.5** Live PANs aren't used in preproduction environments, except where those environments are included in the CDE and protected in accordance with all applicable PCI-DSS requirements.|Not applicable to Azure AD.| +|**6.5.6** Test data and test accounts are removed from system components before the system goes into production.|Not applicable to Azure AD.| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) (You're here) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
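For requirement **6.3.2** in the article above, a minimal inventory sketch is shown below. It assumes an authenticated Azure CLI session with permission to read directory objects; the JMESPath query and table output are illustrative choices, not the only way to produce the report.

```azurecli
# A minimal sketch: list the enterprise applications (service principals) in the tenant
# as a starting point for a requirement 6.3.2 software inventory.
az ad sp list --all \
    --query "[].{DisplayName:displayName, AppId:appId, Type:servicePrincipalType}" \
    --output table
```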
active-directory | Pci Requirement 7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-7.md | + + Title: Azure Active Directory and PCI-DSS Requirement 7 +description: Learn PCI-DSS defined approach requirements for restricting access to system components and CHD by business need-to-know +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 7 ++**Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know** +</br>**Defined approach requirements** ++## 7.1 Processes and mechanisms for restricting access to system components and cardholder data by business need to know are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**7.1.1** All security policies and operational procedures that are identified in Requirement 7 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Integrate access to cardholder data environment (CDE) applications with Azure Active Directory (Azure AD) for authentication and authorization. </br> Document Conditional Access policies for remote access technologies. Automate with Microsoft Graph API and PowerShell. [Conditional Access: Programmatic access](../conditional-access/howto-conditional-access-apis.md) </br> Archive the Azure AD audit logs to record security policy changes and Azure AD tenant configuration. To record usage, archive Azure AD sign-in logs in a security information and event management (SIEM) system. [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md)| +|**7.1.2** Roles and responsibilities for performing activities in Requirement 7 are documented, assigned, and understood.|Integrate access to CDE applications with Azure AD for authentication and authorization. </br> - Assign user roles to applications directly or through group membership </br> - Use Microsoft Graph to list application assignments (a CLI sketch follows this article's Next steps) </br> - Use Azure AD audit logs to track assignment changes. </br> [List appRoleAssignments granted to a user](/graph/api/user-list-approleassignments?view=graph-rest-1.0&tabs=http&preserve-view=true) </br> [Get-MgServicePrincipalAppRoleAssignedTo](/powershell/module/microsoft.graph.applications/get-mgserviceprincipalapproleassignedto?view=graph-powershell-1.0&preserve-view=true) </br></br> **Privileged access** </br> Use Azure AD audit logs to track directory role assignments. Administrator roles relevant to this PCI requirement: </br> - Global </br> - Application </br> - Authentication </br> - Authentication Policy </br> - Hybrid Identity </br> To implement least privilege access, use Azure AD to create custom directory roles. </br> If you build portions of the CDE in Azure, document privileged role assignments such as Owner, Contributor, User Access Administrator, etc., and subscription custom roles where CDE resources are deployed. </br> Microsoft recommends you enable Just-In-Time (JIT) access to roles using Privileged Identity Management (PIM). PIM enables JIT access to Azure AD security groups for scenarios when group membership represents privileged access to CDE applications or resources. 
[Azure AD built-in roles](../roles/permissions-reference.md) </br> [Azure AD Identity and access management operations reference guide](../fundamentals/active-directory-ops-guide-iam.md) </br> [Create and assign a custom role in Azure Active Directory](../roles/custom-create.md) </br> [Securing privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md) </br> [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md) </br> [Best practices for all isolation architectures](../fundamentals/secure-with-azure-ad-best-practices.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)| ++## 7.2 Access to system components and data is appropriately defined and assigned. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**7.2.1** An access control model is defined and includes granting access as follows: </br> Appropriate access depending on the entity's business and access needs. </br> Access to system components and data resources that is based on users' job classification and functions. </br> The least privileges required (for example, user, administrator) to perform a job function.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. [Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)| +|**7.2.2** Access is assigned to users, including privileged users, based on: </br> Job classification and function. </br> Least privileges necessary to perform job responsibilities.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. 
[Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)| +|**7.2.3** Required privileges are approved by authorized personnel.|Entitlement management supports approval workflows to grant access to resources, and periodic access reviews. [Approve or deny access requests in entitlement management](../governance/entitlement-management-request-approve.md) </br> [Review access of an access package in entitlement management](../governance/entitlement-management-access-reviews-review-access.md) </br> PIM supports approval workflows to activate Azure AD directory roles, Azure roles, and cloud groups. [Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) </br> [Approve activation requests for group members and owners](../privileged-identity-management/groups-approval-workflow.md)| +|**7.2.4** All user accounts and related access privileges, including third-party/vendor accounts, are reviewed as follows: </br> At least once every six months. </br> To ensure user accounts and access remain appropriate based on job function. </br> Any inappropriate access is addressed. Management acknowledges that access remains appropriate.|If you grant access to applications using direct assignment or with group membership, configure Azure AD access reviews. If you grant access to applications using entitlement management, enable access reviews at the access package level. [Create an access review of an access package in entitlement management](../governance/entitlement-management-access-reviews-create.md) </br> Use Azure AD External Identities for third-party and vendor accounts. You can perform access reviews targeting external identities, for instance, third-party or vendor accounts. [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md)| +|**7.2.5** All application and system accounts and related access privileges are assigned and managed as follows: </br> Based on the least privileges necessary for the operability of the system or application. </br> Access is limited to the systems, applications, or processes that specifically require their use.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. 
[Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)| +|**7.2.5.1** All access by application and system accounts and related access privileges are reviewed as follows: </br> Periodically (at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1). </br> The application/system access remains appropriate for the function being performed. </br> Any inappropriate access is addressed. </br> Management acknowledges that access remains appropriate.|Learn best practices for reviewing service account permissions. [Governing Azure AD service accounts](../fundamentals/service-accounts-governing-azure.md) </br> [Govern on-premises service accounts](../fundamentals/service-accounts-govern-on-premises.md)| +|**7.2.6** All user access to query repositories of stored cardholder data is restricted as follows: </br> Via applications or other programmatic methods, with access and allowed actions based on user roles and least privileges. </br> Only the responsible administrator(s) can directly access or query repositories of stored card-holder data (CHD).|Modern applications enable programmatic methods that restrict access to data repositories.</br> Integrate applications with Azure AD using modern authentication protocols such as OAuth and OpenID Connect (OIDC). [OAuth 2.0 and OIDC protocols on the Microsoft identity platform](../develop/active-directory-v2-protocols.md) </br> Define application-specific roles to model privileged and nonprivileged user access. Assign users or groups to roles. [Add app roles to your application and receive them in the token](../develop/howto-add-app-roles-in-azure-ad-apps.md) </br> For APIs exposed by your application, define OAuth scopes to enable user and administrator consent. [Scopes and permissions in the Microsoft identity platform](../develop/scopes-oidc.md) </br> Model privileged and nonprivileged access to the repositories with the following approach and avoid direct repository access. If administrators and operators require access, grant it per the underlying platform. For instance, ARM IAM assignments in Azure, Access Control Lists (ACLs) in Windows, etc. </br> See architecture guidance that includes securing application platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) in Azure. [Azure Architecture Center](/azure/architecture/)| ++## 7.3 Access to system components and data is managed via an access control system(s). ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**7.3.1** An access control system(s) is in place that restricts access based on a user's need to know and covers all system components.|Integrate access to applications in the CDE with Azure AD as the access control system for authentication and authorization. Conditional Access policies, with application assignments, control access to applications. 
[What is Conditional Access?](../conditional-access/overview.md) </br> [Assign users and groups to an application](../manage-apps/assign-user-or-group-access-portal.md)| +|**7.3.2** The access control system(s) is configured to enforce permissions assigned to individuals, applications, and systems based on job classification and function.|Integrate access to applications in the CDE with Azure AD as the access control system for authentication and authorization. Conditional Access policies, with application assignments, control access to applications. [What is Conditional Access?](../conditional-access/overview.md) </br> [Assign users and groups to an application](../manage-apps/assign-user-or-group-access-portal.md)| +|**7.3.3** The access control system(s) is set to "deny all" by default.|Use Conditional Access to block access based on access request conditions such as group membership, applications, network location, credential strength, etc. [Conditional Access: Block access](../conditional-access/howto-conditional-access-policy-block-access.md) </br> A misconfigured block policy might contribute to unintentional lockouts. Design an emergency access strategy. [Manage emergency access admin accounts in Azure AD](../roles/security-emergency-access.md)| ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) (You're here) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
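For requirement **7.1.2** in the article above, which lists application assignments with Microsoft Graph, a hedged sketch follows. It uses `az rest`, which attaches the signed-in CLI token to Microsoft Graph requests; `<user-object-id>` is a placeholder.

```azurecli
# A minimal sketch: list the app roles assigned to a user via Microsoft Graph.
# <user-object-id> is a placeholder for the user's object ID.
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/users/<user-object-id>/appRoleAssignments"
```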
active-directory | Pci Requirement 8 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-8.md | + + Title: Azure Active Directory and PCI-DSS Requirement 8 +description: Learn PCI-DSS defined approach requirements to identify users and authenticate access to system components +++++++++ Last updated : 04/18/2023+++++# Azure Active Directory and PCI-DSS Requirement 8 ++**Requirement 8: Identify Users and Authenticate Access to System Components** +</br>**Defined approach requirements** ++## 8.1 Processes and mechanisms for identifying users and authenticating access to system components are defined and understood. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.1.1** All security policies and operational procedures that are identified in Requirement 8 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| +|**8.1.2** Roles and responsibilities for performing activities in Requirement 8 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.| ++## 8.2 User identification and related accounts for users and administrators are strictly managed throughout an account's lifecycle. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.2.1** All users are assigned a unique ID before access to system components or cardholder data is allowed.|For CDE applications that rely on Azure AD, the unique user ID is the user principal name (UPN) attribute. [Azure AD UserPrincipalName population](../hybrid/plan-connect-userprincipalname.md)| +|**8.2.2** Group, shared, or generic accounts, or other shared authentication credentials are only used when necessary on an exception basis, and are managed as follows: </br> Account use is prevented unless needed for an exceptional circumstance. </br> Use is limited to the time needed for the exceptional circumstance. </br> Business justification for use is documented. </br> Use is explicitly approved by management. </br> Individual user identity is confirmed before access to an account is granted. </br> Every action taken is attributable to an individual user.|Ensure CDEs using Azure AD for application access have processes to prevent shared accounts. Create shared accounts only as exceptions that require approval. </br> For CDE resources deployed in Azure, use Azure AD managed identities to represent the workload identity, instead of creating a shared service account. [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) </br> If you can't use managed identities and the resources accessed are using the OAuth protocol, use service principals to represent workload identities. Grant identities least privileged access through OAuth scopes. Administrators can restrict access and define approval workflows to create them. [What are workload identities?](../workload-identities/workload-identities-overview.md)| +|**8.2.3** *Additional requirement for service providers only*: Service providers with remote access to customer premises use unique authentication factors for each customer premises.|Azure AD has on-premises connectors to enable hybrid capabilities. Connectors are identifiable and use uniquely generated credentials. 
[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md) </br> [Cloud sync deep dive](../cloud-sync/concept-how-it-works.md) </br> [Azure AD on-premises application provisioning architecture](../app-provisioning/on-premises-application-provisioning-architecture.md) </br> [Plan cloud HR application to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md) </br> [Install the Azure AD Connect Health agents](../hybrid/how-to-connect-health-agent-install.md)| +|**8.2.4** Addition, deletion, and modification of user IDs, authentication factors, and other identifier objects are managed as follows: </br> Authorized with the appropriate approval. </br> Implemented with only the privileges specified on the documented approval.|Azure AD has automated user account provisioning from HR systems. Use this feature to create an account lifecycle. [What is HR driven provisioning?](../app-provisioning/what-is-hr-driven-provisioning.md) </br> Azure AD has lifecycle workflows to enable customized logic for joiner, mover, and leaver processes. [What are Lifecycle Workflows?](../governance/what-are-lifecycle-workflows.md) </br> Azure AD has a programmatic interface to manage authentication methods with Microsoft Graph. Some authentication methods, such as Windows Hello for Business and FIDO2 keys, require user intervention to register. [Get started with the Graph authentication methods API](/graph/authenticationmethods-get-started) </br> Administrators and/or automation generate the Temporary Access Pass credential using Graph API. Use this credential for passwordless onboarding. [Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)| +|**8.2.5** Access for terminated users is immediately revoked.|To revoke access to an account, disable the on-premises account for hybrid accounts synchronized from on-premises Active Directory, disable the account in Azure AD, and revoke tokens. [Revoke user access in Azure AD](../enterprise-users/users-revoke-access.md) </br> Use Continuous Access Evaluation (CAE) for compatible applications to have a two-way conversation with Azure AD. Apps can be notified of events, such as account termination, and can reject tokens. [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md)| +|**8.2.6** Inactive user accounts are removed or disabled within 90 days of inactivity.|For hybrid accounts, administrators check activity in Active Directory and Azure AD every 90 days. For Azure AD, use Microsoft Graph to find the last sign-in date (a CLI sketch follows this article's Next steps). [How to: Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)| +|**8.2.7** Accounts used by third parties to access, support, or maintain system components via remote access are managed as follows: </br> Enabled only during the time period needed and disabled when not in use. </br> Use is monitored for unexpected activity.|Azure AD has external identity management capabilities. </br> Use governed guest lifecycle with entitlement management. External users are onboarded in the context of apps, resources, and access packages, which you can grant for a limited period and require periodic access reviews. Reviews can result in account removal or disablement. [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md) </br> Azure AD generates risk events at the user and session level. 
Learn to protect, detect, and respond to unexpected activity. [What is risk?](../identity-protection/concept-identity-protection-risks.md)| +|**8.2.8** If a user session has been idle for more than 15 minutes, the user is required to reauthenticate to reactivate the terminal or session.|Use endpoint management policies with Intune and Microsoft Endpoint Manager. Then use Conditional Access to allow access from compliant devices. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> If your CDE environment relies on group policy objects (GPO), configure GPO to set an idle timeout. Configure Azure AD to allow access from hybrid Azure AD joined devices. [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)| ++## 8.3 Strong authentication for users and administrators is established and managed. ++For more information about Azure AD authentication methods that meet PCI requirements, see: [Information Supplement: Multi-Factor Authentication](azure-ad-pci-dss-mfa.md). ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.3.1** All user access to system components for users and administrators is authenticated via at least one of the following authentication factors: </br> Something you know, such as a password or passphrase. </br> Something you have, such as a token device or smart card. </br> Something you are, such as a biometric element.|Azure AD requires passwordless methods to meet PCI requirements; see [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md). </br> For a holistic passwordless deployment, see [Plan a passwordless authentication deployment in Azure AD](../authentication/howto-authentication-passwordless-deployment.md)| +|**8.3.2** Strong cryptography is used to render all authentication factors unreadable during transmission and storage on all system components.|Cryptography used by Azure AD is compliant with [PCI definition of Strong Cryptography](https://www.pcisecuritystandards.org/glossary/#glossary-s). [Azure AD Data protection considerations](../fundamentals/data-protection-considerations.md)| +|**8.3.3** User identity is verified before modifying any authentication factor.|Azure AD requires users to authenticate to update their authentication methods using self-service, such as the My Security Info portal and the self-service password reset (SSPR) portal. [Set up security info from a sign-in page](https://support.microsoft.com/en-us/topic/28180870-c256-4ebf-8bd7-5335571bf9a8) </br> [Common Conditional Access policy: Securing security info registration](../conditional-access/howto-conditional-access-policy-registration.md) </br> [Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md) </br> Administrators with privileged roles can modify authentication factors: Global, Password, User, Authentication, and Privileged Authentication. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md). Microsoft recommends you enable JIT access and governance for privileged access using [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md)| +|**8.3.4** Invalid authentication attempts are limited by: </br> Locking out the user ID after not more than 10 attempts. 
</br> Setting the lockout duration to a minimum of 30 minutes or until the user's identity is confirmed.|Deploy Windows Hello for Business for Windows devices that support hardware Trusted Platform Modules (TPM) 2.0 or higher. </br> For Windows Hello for Business, lockout relates to the device. The gesture, PIN, or biometric unlocks access to the local TPM. Administrators configure the lockout behavior with GPO or Intune policies. [TPM Group Policy settings](/windows/security/information-protection/tpm/trusted-platform-module-services-group-policy-settings) </br> [Manage Windows Hello for Business on devices at the time devices enroll with Intune](/mem/intune/protect/windows-hello) </br> [TPM fundamentals](/windows/security/information-protection/tpm/tpm-fundamentals) </br> Windows Hello for Business works for on-premises authentication to Active Directory and cloud resources on Azure AD. </br> For FIDO2 security keys, brute-force protection is related to the key. The gesture, PIN, or biometric unlocks access to the local key storage. Administrators configure Azure AD to allow registration of FIDO2 security keys from manufacturers that align to PCI requirements. [Enable passwordless security key sign-in](../authentication/howto-authentication-passwordless-security-key.md) </br></br> **Microsoft Authenticator App** </br> To mitigate brute-force attacks using Microsoft Authenticator app passwordless sign-in, enable number matching and additional context. </br> Azure AD generates a random number in the authentication flow. The user types it in the authenticator app. The mobile app authentication prompt shows the location, the request IP address, and the request application. [How to use number matching in MFA notifications](../authentication/how-to-mfa-number-match.md) </br> [How to use additional context in Microsoft Authenticator notifications](../authentication/how-to-mfa-additional-context.md)| +|**8.3.5** If passwords/passphrases are used as authentication factors to meet Requirement 8.3.1, they're set and reset for each user as follows: </br> Set to a unique value for first-time use and upon reset. </br> Forced to be changed immediately after the first use.|Not applicable to Azure AD.| +|**8.3.6** If passwords/passphrases are used as authentication factors to meet Requirement 8.3.1, they meet the following minimum level of complexity: </br> A minimum length of 12 characters (or IF the system doesn't support 12 characters, a minimum length of eight characters). </br> Contain both numeric and alphabetic characters.|Not applicable to Azure AD.| +|**8.3.7** Individuals aren't allowed to submit a new password/passphrase that is the same as any of the last four passwords/passphrases used.|Not applicable to Azure AD.| +|**8.3.8** Authentication policies and procedures are documented and communicated to all users including: </br> Guidance on selecting strong authentication factors. </br> Guidance for how users should protect their authentication factors. </br> Instructions not to reuse previously used passwords/passphrases. </br> Instructions to change passwords/passphrases if there's any suspicion or knowledge that the password/passphrases have been compromised and how to report the incident.|Document policies and procedures, then communicate to users per this requirement. 
Microsoft provides customizable templates in the [Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=57600).| +|**8.3.9** If passwords/passphrases are used as the only authentication factor for user access (that is, in any single-factor authentication implementation) then either: Passwords/passphrases are changed at least once every 90 days, </br> OR </br> The security posture of accounts is dynamically analyzed, and real-time access to resources is automatically determined accordingly.|Not applicable to Azure AD.| +|**8.3.10** *Additional requirement for service providers only*: If passwords/passphrases are used as the only authentication factor for customer user access to cardholder data (that is, in any single-factor authentication implementation), then guidance is provided to customer users including: </br> Guidance for customers to change their user passwords/passphrases periodically. </br> Guidance as to when, and under what circumstances, passwords/passphrases are to be changed.|Not applicable to Azure AD.| +|**8.3.10.1** *Additional requirement for service providers only*: If passwords/passphrases are used as the only authentication factor for customer user access (that is, in any single-factor authentication implementation) then either: </br> Passwords/passphrases are changed at least once every 90 days, </br> OR </br> The security posture of accounts is dynamically analyzed, and real-time access to resources is automatically determined accordingly.|Not applicable to Azure AD.| +|**8.3.11** Where authentication factors such as physical or logical security tokens, smart cards, or certificates are used: </br> Factors are assigned to an individual user and not shared among multiple users. </br> Physical and/or logical controls ensure only the intended user can use that factor to gain access.|Use passwordless authentication methods such as Windows Hello for Business, FIDO2 security keys, and Microsoft Authenticator app for phone sign-in. Use smart cards based on public-private key pairs associated with users to prevent reuse.| ++## 8.4 Multi-factor authentication (MFA) is implemented to secure access into the cardholder data environment (CDE) ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.4.1** MFA is implemented for all nonconsole access into the CDE for personnel with administrative access.|Use Conditional Access to require strong authentication to access CDE resources. Define policies to target an administrative role (Global Administrator) or a security group representing administrative access to an application. </br> For administrative access, use Azure AD Privileged Identity Management (PIM) to enable just-in-time (JIT) activation of privileged roles. [What is Conditional Access?](../conditional-access/overview.md) </br> [CA templates](/azure/active-directory/conditional-access/concept-conditional-access-policy-common) </br> [Start using PIM](../privileged-identity-management/pim-getting-started.md)| +|**8.4.2** MFA is implemented for all access into the CDE.|Block access to legacy protocols that don't support strong authentication. 
[Block legacy authentication with Azure AD with Conditional Access](../conditional-access/block-legacy-authentication.md)| +|**8.4.3** MFA is implemented for all remote network access originating from outside the entity's network that could access or impact the CDE as follows: </br> All remote access by all personnel, both users and administrators, originating from outside the entity's network. </br> All remote access by third parties and vendors.|Integrate access technologies like virtual private network (VPN), remote desktop, and network access points with Azure AD for authentication and authorization. Use Conditional Access to require strong authentication to access remote access applications. [CA templates](/azure/active-directory/conditional-access/concept-conditional-access-policy-common)| ++## 8.5 Multi-factor authentication (MFA) systems are configured to prevent misuse. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.5.1** MFA systems are implemented as follows: </br> The MFA system isn't susceptible to replay attacks. </br> MFA systems can't be bypassed by any users, including administrative users unless specifically documented, and authorized by management on an exception basis, for a limited time period. </br> At least two different types of authentication factors are used. </br> Success of all authentication factors is required before access is granted.|The recommended Azure AD authentication methods use nonces or challenges. These methods resist replay attacks because Azure AD detects replayed authentication transactions. </br> Windows Hello for Business, FIDO2, and Microsoft Authenticator app for passwordless phone sign-in use a nonce to identify the request and detect replay attempts. Use passwordless credentials for users in the CDE. </br> Certificate-based authentication uses challenges to detect replay attempts. </br> [NIST authenticator assurance level 2 with Azure AD](nist-authenticator-assurance-level-2.md) </br> [NIST authenticator assurance level 3 by using Azure AD](nist-authenticator-assurance-level-3.md)| ++## 8.6 Use of application and system accounts and associated authentication factors is strictly managed. ++|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations| +|-|-| +|**8.6.1** If accounts used by systems or applications can be used for interactive login, they're managed as follows: </br> Interactive use is prevented unless needed for an exceptional circumstance. </br> Interactive use is limited to the time needed for the exceptional circumstance. </br> Business justification for interactive use is documented. </br> Interactive use is explicitly approved by management. </br> Individual user identity is confirmed before access to account is granted. </br> Every action taken is attributable to an individual user.|For CDE applications with modern authentication, and for CDE resources deployed in Azure that use modern authentication, Azure AD has two service account types for applications: Managed Identities and service principals. </br> Learn about Azure AD service account governance: planning, provisioning, lifecycle, monitoring, access reviews, etc. [Governing Azure AD service accounts](../fundamentals/service-accounts-governing-azure.md) </br> To secure Azure AD service accounts, see: 
[Securing managed identities in Azure AD](../fundamentals/service-accounts-managed-identities.md) </br> [Securing service principals in Azure AD](../fundamentals/service-accounts-principal.md) </br> For CDEs with resources outside Azure that require access, configure workload identity federations without managing secrets or interactive sign-in. [Workload identity federation](../develop/workload-identity-federation.md) </br> To enable approval and tracking processes to fulfill requirements, orchestrate workflows using IT Service Management (ITSM) and configuration management databases (CMDB). These tools use MS Graph API to interact with Azure AD and manage the service account. </br> For CDEs that require service accounts compatible with on-premises Active Directory, use group managed service accounts (gMSAs), standalone managed service accounts (sMSAs), computer accounts, or user accounts. [Securing on-premises service accounts](../fundamentals/service-accounts-on-premises.md)| +|**8.6.2** Passwords/passphrases for any application and system accounts that can be used for interactive login aren't hard coded in scripts, configuration/property files, or bespoke and custom source code.|Use modern service accounts such as Azure Managed Identities and service principals that don't require passwords. </br> Azure AD Managed Identities credentials are provisioned and rotated in the cloud, which prevents using shared secrets such as passwords and passphrases. When using system-assigned managed identities, the lifecycle is tied to the underlying Azure resource lifecycle. </br> For service principals, use certificates as credentials, which prevents use of shared secrets such as passwords and passphrases. If certificates aren't feasible, use Azure Key Vault to store service principal client secrets. [Best practices for using Azure Key Vault](/azure/key-vault/general/best-practices#using-service-principals-with-key-vault) </br> For CDEs with resources outside Azure that require access, configure workload identity federations without managing secrets or interactive sign-in. [Workload identity federation](../workload-identities/workload-identity-federation.md) </br> Deploy Conditional Access for workload identities to control authorization based on location and/or risk level. [CA for workload identities](../conditional-access/workload-identity.md) </br> In addition to the previous guidance, use code analysis tools to detect hard-coded secrets in code and configuration files. [Detect exposed secrets in code](/azure/defender-for-cloud/detect-exposed-secrets) </br> [Security rules](/dotnet/fundamentals/code-analysis/quality-rules/security-warnings)| +|**8.6.3** Passwords/passphrases for any application and system accounts are protected against misuse as follows: </br> Passwords/passphrases are changed periodically (at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1) and upon suspicion or confirmation of compromise. </br> Passwords/passphrases are constructed with sufficient complexity appropriate for how frequently the entity changes the passwords/passphrases.|Use modern service accounts such as Azure Managed Identities and service principals that don't require passwords. </br> For exceptions that require service principals with secrets, abstract the secret lifecycle with workflows and automation that set random passwords for service principals, rotate them regularly, and react to risk events. 
</br> Security operations teams can review and remediate reports generated by Azure AD such as Risky workload identities. [Securing workload identities with Identity Protection](../identity-protection/concept-workload-identity-risk.md) | ++## Next steps ++PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf). ++To configure Azure AD to comply with PCI-DSS, see the following articles. ++* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) +* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) +* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) +* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) +* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) +* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) +* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) (You're here) +* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) +* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) +* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) |
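For requirement **8.2.6** in the article above, a hedged sketch for finding inactive accounts follows. It assumes the signed-in principal has the `AuditLog.Read.All` and `User.Read.All` Graph permissions; the cutoff date is a placeholder value, not a prescribed threshold.

```azurecli
# A minimal sketch: list users whose last sign-in is older than a placeholder cutoff date.
# The backslashes before $ keep bash from expanding them; %20 encodes the spaces the
# OData filter needs.
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/users?\$filter=signInActivity/lastSignInDateTime%20le%202023-01-01T00:00:00Z&\$select=displayName,userPrincipalName,signInActivity"
```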
advisor | Advisor Reference Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md | Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance). +## Web ++### Right-size underutilized App Service plans ++We've analyzed the usage patterns of your App Service plan over the past 7 days and identified low CPU usage. While certain scenarios can result in low utilization by design, you can often save money by choosing a less expensive SKU while retaining the same features. ++> [!NOTE] +> - Currently, this recommendation only works for App Service plans running on Windows on a SKU that allows you to downscale to less expensive tiers without losing any features, like from P3v2 to P2v2 or from P2v2 to P1v2. +> - CPU bursts that last only a few minutes might not be correctly detected. Please perform a careful analysis in your App Service plan metrics blade before downscaling your SKU (a CLI sketch follows this article). ++Learn more about [App Service plans](../app-service/overview-hosting-plans.md). + ## Azure Monitor For Azure Monitor cost optimization suggestions, please see [Optimize costs in Azure Monitor](../azure-monitor/best-practices-cost.md). |
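As a hedged sketch of the check described above, the commands below inspect plan-level CPU and then downscale; the subscription ID, resource group, plan name, and target SKU are placeholders, and downscaling should follow your own analysis of the metrics.

```azurecli
# A minimal sketch: review average CPU for an App Service plan, then downscale
# within the same family. <sub-id>, <rg>, <plan-name>, and the SKU are placeholders.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan-name>" \
    --metric CpuPercentage \
    --interval PT1H \
    --aggregation Average

az appservice plan update \
    --resource-group <rg> \
    --name <plan-name> \
    --sku P1V2
```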
aks | Custom Certificate Authority | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md | -Custom certificate authorities (CAs) allow you to establish trust between your Azure Kubernetes Service (AKS) cluster and your workloads, such as private registries, proxies, and firewalls. A Kubernetes secret is used to store the certificate authority's information, then it's passed to all nodes in the cluster. +AKS generates and uses the following certificates, Certificate Authorities (CAs), and Service Accounts (SAs): -This feature is applied per nodepool, so new and existing node pools must be configured to enable this feature. +* The AKS API server creates a CA called the Cluster CA. +* The API server has a Cluster CA, which signs certificates for one-way communication from the API server to kubelets. +* Each kubelet also creates a Certificate Signing Request (CSR), which is signed by the Cluster CA, for communication from the kubelet to the API server. +* The API aggregator uses the Cluster CA to issue certificates for communication with other APIs. The API aggregator can also have its own CA for issuing those certificates, but it currently uses the Cluster CA. +* Each node uses an SA token, which is signed by the Cluster CA. +* The `kubectl` client has a certificate for communicating with the AKS cluster. ++You can also create custom certificate authorities, which allow you to establish trust between your Azure Kubernetes Service (AKS) clusters and workloads, such as private registries, proxies, and firewalls. A Kubernetes secret stores the certificate authority's information, and then it's passed to all nodes in the cluster. This feature is applied per node pool, so you need to enable it on new and existing node pools. ++This article shows you how to create custom CAs and apply them to your AKS clusters. ## Prerequisites -* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). +* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free). * [Azure CLI installed][azure-cli-install] (version 2.43.0 or greater). * A base64 encoded certificate string or a text file with certificate. ## Limitations -This feature isn't currently supported for Windows node pools. +* This feature currently isn't supported for Windows node pools. -## Install the aks-preview Azure CLI extension +## Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] -To install the aks-preview extension, run the following command: +1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command. -```azurecli -az extension add --name aks-preview -``` + ```azurecli + az extension add --name aks-preview + ``` -Run the following command to update to the latest version of the extension released: +2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command. -```azurecli -az extension update --name aks-preview -``` + ```azurecli + az extension update --name aks-preview + ``` -## Register the 'CustomCATrustPreview' feature flag +## Register the `CustomCATrustPreview` feature flag -Register the `CustomCATrustPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: +1. 
Register the `CustomCATrustPreview` feature flag using the [`az feature register`][az-feature-register] command. -```azurecli -az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview" -``` + ```azurecli + az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview" + ``` -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: + It takes a few minutes for the status to show *Registered*. -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview" -``` +2. Verify the registration status using the [`az feature show`][az-feature-show] command. -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: + ```azurecli + az feature show --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview" + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. -## Two ways for custom CA installation on AKS node pools + ```azurecli + az provider register --namespace Microsoft.ContainerService + ``` -Two ways of installing custom CAs on your AKS cluster are available. They're intended for different use cases, which are outlined below. +## Custom CA installation on AKS node pools -### Install CAs during node pool boot up -If your environment requires your custom CAs to be added to node trust store for correct provisioning, -text file containing up to 10 blank line separated certificates needs to be passed during -[az aks create][az-aks-create] or [az aks update][az-aks-update] operations. +### Install CAs on AKS node pools -Example command: -```azurecli -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --node-count 2 \ - --enable-custom-ca-trust \ - --custom-ca-trust-certificates pathToFileWithCAs -``` +* If your environment requires your custom CAs to be added to the node's trust store for correct provisioning, you need to pass a text file containing up to 10 blank-line-separated certificates during [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] operations. Example text file: -Example file: -``` -----BEGIN CERTIFICATE----- cert1 -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- cert2 -----END CERTIFICATE----- -``` + ```txt + -----BEGIN CERTIFICATE----- + cert1 + -----END CERTIFICATE----- + -----BEGIN CERTIFICATE----- + cert2 + -----END CERTIFICATE----- + ``` -CAs will be added to node's trust store during node boot up process, allowing the node to, for example access a private registry. +#### Install CAs during node pool creation -#### CA rotation for availability during node pool boot up -To update CAs passed to cluster during boot up [az aks update][az-aks-update] operation has to be used. -```azurecli -az aks update \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --custom-ca-trust-certificates pathToFileWithCAs -``` +* Install CAs during node pool creation using the [`az aks create`][az-aks-create] command and specifying your text file for the `--custom-ca-trust-certificates` parameter. 
++ ```azurecli + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 2 \ + --enable-custom-ca-trust \ + --custom-ca-trust-certificates pathToFileWithCAs + ``` -> [!NOTE] -> Running this operation will trigger a model update, to ensure that new nodes added during for example scale up operation have the newest CAs required for correct provisioning. -> This means that AKS will create additional nodes, drain currently existing ones, delete them and then replace them with nodes that have the new set of CAs installed. +#### CA rotation for availability during node pool boot up +* Update CAs passed to your cluster during boot up using the [`az aks update`][az-aks-update] command and specifying your text file for the `--custom-ca-trust-certificates` parameter. -### Install CAs once node pool is up and running -If your environment can be successfully provisioned without your custom CAs, you can provide the CAs using a secret deployed in the kube-system namespace. -This approach allows for certificate rotation without the need for node recreation. + ```azurecli + az aks update \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --custom-ca-trust-certificates pathToFileWithCAs + ``` -Create a [Kubernetes secret][kubernetes-secrets] YAML manifest with your base64 encoded certificate string in the `data` field. Data from this secret is used to update CAs on all nodes. + > [!NOTE] + > This operation triggers a model update, ensuring new nodes have the newest CAs required for correct provisioning. AKS creates additional nodes, drains existing ones, deletes them, and replaces them with nodes that have the new set of CAs installed. -You must ensure that: -* The secret is named `custom-ca-trust-secret`. -* The secret is created in the `kube-system` namespace. +### Install CAs after node pool creation -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: custom-ca-trust-secret - namespace: kube-system -type: Opaque -data: - ca1.crt: | - {base64EncodedCertStringHere} - ca2.crt: | - {anotherBase64EncodedCertStringHere} -``` +If your environment can be successfully provisioned without your custom CAs, you can provide the CAs by deploying a secret in the `kube-system` namespace. This approach allows for certificate rotation without the need for node recreation. -To update or remove a CA, edit and apply the secret's YAML manifest. The cluster will poll for changes and update the nodes accordingly. This process may take a couple of minutes before changes are applied. +* Create a [Kubernetes secret][kubernetes-secrets] YAML manifest with your base64 encoded certificate string in the `data` field. -Sometimes containerd restart on the node might be required for the CAs to be picked up properly. If it appears like CAs aren't added correctly to your node's trust store, you can trigger such restart using the following command from node's shell: + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: custom-ca-trust-secret + namespace: kube-system + type: Opaque + data: + ca1.crt: | + {base64EncodedCertStringHere} + ca2.crt: | + {anotherBase64EncodedCertStringHere} + ``` -```systemctl restart containerd``` + Data from this secret is used to update CAs on all nodes. Make sure the secret is named `custom-ca-trust-secret` and is created in the `kube-system` namespace. Installing CAs using the secret in the `kube-system` namespace allows for CA rotation without the need for node recreation. To update or remove a CA, you can edit and apply the YAML manifest. 
The cluster polls for changes and updates the nodes accordingly. It may take a couple of minutes before changes are applied. -> [!NOTE] -> Installing CAs using the secret in the kube-system namespace will allow for CA rotation without need for node recreation. + > [!NOTE] + > + > containerd restart on the node might be required for the CAs to be picked up properly. If it appears like CAs aren't correctly added to your node's trust store, you can trigger a restart using the following command from the node's shell: + > + > ```systemctl restart containerd``` 

## Configure a new AKS cluster to use a custom CA 

-To configure a new AKS cluster to use a custom CA, run the [az aks create][az-aks-create] command with the `--enable-custom-ca-trust` parameter. +* Configure a new AKS cluster to use a custom CA using the [`az aks create`][az-aks-create] command with the `--enable-custom-ca-trust` parameter. ++ ```azurecli + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 2 \ + --enable-custom-ca-trust + ``` 

-```azurecli -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --node-count 2 \ - --enable-custom-ca-trust -``` +## Configure a new AKS cluster to use a custom CA with CAs installed before the node boots up 

-To configure a new AKS cluster to use custom CA with CAs installed before node boots up, run the [az aks create][az-aks-create] command with the `--enable-custom-ca-trust` and `--custom-ca-trust-certificates` parameters. +* Configure a new AKS cluster to use a custom CA with CAs installed before the node boots up using the [`az aks create`][az-aks-create] command with the `--enable-custom-ca-trust` and `--custom-ca-trust-certificates` parameters. 

-```azurecli -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --node-count 2 \ - --enable-custom-ca-trust \ - --custom-ca-trust-certificates pathToFileWithCAs -``` + ```azurecli + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 2 \ + --enable-custom-ca-trust \ + --custom-ca-trust-certificates pathToFileWithCAs + ``` 

## Configure an existing AKS cluster to have custom CAs installed before the node boots up 

-To configure an existing AKS cluster to have your custom CAs added to node's trust store before it boots up, run [az aks update][az-aks-update] command with the `--custom-ca-trust-certificates` parameter. +* Configure an existing AKS cluster to have your custom CAs added to the node's trust store before it boots up using the [`az aks update`][az-aks-update] command with the `--custom-ca-trust-certificates` parameter. 

-```azurecli -az aks update \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --custom-ca-trust-certificates pathToFileWithCAs -``` + ```azurecli + az aks update \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --custom-ca-trust-certificates pathToFileWithCAs + ``` 

## Configure a new node pool to use a custom CA 

-To configure a new node pool to use a custom CA, run the [az aks nodepool add][az-aks-nodepool-add] command with the `--enable-custom-ca-trust` parameter. --```azurecli -az aks nodepool add \ - --cluster-name myAKSCluster \ - --resource-group myResourceGroup \ - --name myNodepool \ - --enable-custom-ca-trust \ - --os-type Linux -``` --If there are currently no other node pools with the feature enabled, cluster will have to reconcile its settings for -the changes to take effect. Before that happens, daemonset and pods, which install CAs won't appear on the cluster. 
-This operation will happen automatically as a part of AKS's reconcile loop. -You can trigger reconcile operation immediately by running the [az aks update][az-aks-update] command: +* Configure a new node pool to use a custom CA using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-custom-ca-trust` parameter. 

-```azurecli -az aks update \ - --resource-group myResourceGroup \ - --name cluster-name -``` + ```azurecli + az aks nodepool add \ + --cluster-name myAKSCluster \ + --resource-group myResourceGroup \ + --name myNodepool \ + --enable-custom-ca-trust \ + --os-type Linux + ``` 

-Once completed, the daemonset and pods will appear in the cluster. + If no other node pools with the feature enabled exist, the cluster has to reconcile its settings for the changes to take effect. This operation happens automatically as a part of AKS's reconcile loop. Before the operation, the daemon set and pods don't appear on the cluster. You can trigger an immediate reconcile operation using the [`az aks update`][az-aks-update] command. The daemon set and pods appear after the update completes. 

## Configure an existing node pool to use a custom CA 

-To configure an existing node pool to use a custom CA, run the [az aks nodepool update][az-aks-nodepool-update] command with the `--enable-custom-trust-ca` parameter. +* Configure an existing node pool to use a custom CA using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--enable-custom-ca-trust` parameter. 

-```azurecli -az aks nodepool update \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name myNodepool \ - --enable-custom-ca-trust -``` + ```azurecli + az aks nodepool update \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name myNodepool \ + --enable-custom-ca-trust + ``` 

-If there are currently no other node pools with the feature enabled, cluster will have to reconcile its settings for -the changes to take effect. Before that happens, daemon set and pods, which install CAs won't appear on the cluster. -This operation will happen automatically as a part of AKS's reconcile loop. -You can trigger reconcile operation by running the following command: --```azurecli -az aks update -g myResourceGroup --name cluster-name -``` --Once complete, the daemonset and pods will appear in the cluster. + If no other node pools with the feature enabled exist, the cluster has to reconcile its settings for the changes to take effect. This operation happens automatically as a part of AKS's reconcile loop. Before the operation, the daemon set and pods don't appear on the cluster. You can trigger an immediate reconcile operation using the [`az aks update`][az-aks-update] command. The daemon set and pods appear after the update completes. 

## Troubleshooting 

### Feature is enabled and secret with CAs is added, but operations are failing with X.509 Certificate Signed by Unknown Authority error+ 
#### Incorrectly formatted certs passed in the secret+ 
AKS requires certs passed in the user-created secret to be properly formatted and base64 encoded. Make sure the CAs you passed are properly base64 encoded and that files with CAs don't have CRLF line breaks.-Certificates passed to ```--custom-ca-trust-certificates``` option shouldn't be base64 encoded. -#### Containerd hasn't picked up new certs -From node's shell, run ```systemctl restart containerd```, once containerd is restarted, new certs will be properly picked up by the container runtime. 
+Certificates passed to ```--custom-ca-trust-certificates``` shouldn't be base64 encoded. ++#### containerd hasn't picked up new certs ++From the node's shell, run ```systemctl restart containerd```. Once containerd restarts, the new certs are properly picked up by the container runtime. 

## Next steps 

For more information on AKS security best practices, see [Best practices for cluster security and upgrades in Azure Kubernetes Service (AKS)][aks-best-practices-security-upgrades]. 

-<!-- LINKS EXTERNAL --> -[kubernetes-secrets]:https://kubernetes.io/docs/concepts/configuration/secret/ - <!-- LINKS INTERNAL --> [aks-best-practices-security-upgrades]: operator-best-practices-cluster-security.md [azure-cli-install]: /cli/azure/install-azure-cli |
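For the custom CA trust article above: the updated text says you can trigger an immediate reconcile with `az aks update` but no longer shows the call. A minimal sketch, reusing this article's sample names (`myResourceGroup`, `myAKSCluster`) and passing no other flags so only the reconcile runs:

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster
```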
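Also for the article above: instead of hand-editing base64 strings into the secret manifest, you could let `kubectl create secret` do the encoding, since Kubernetes stores `data` values base64 encoded. A sketch assuming two hypothetical local PEM files, `ca1.crt` and `ca2.crt`:

```console
kubectl create secret generic custom-ca-trust-secret \
    --namespace kube-system \
    --from-file=ca1.crt=./ca1.crt \
    --from-file=ca2.crt=./ca2.crt

# Confirm both keys landed in the secret's data field.
kubectl get secret custom-ca-trust-secret --namespace kube-system -o jsonpath='{.data}'
```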
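The troubleshooting section above calls out CRLF line breaks in CA files. A quick pre-flight check with standard Unix tools (these are not part of the AKS tooling); `pathToFileWithCAs` is the article's own placeholder:

```console
file pathToFileWithCAs               # reports "with CRLF line terminators" if present
sed -i 's/\r$//' pathToFileWithCAs   # strips the carriage returns in place (GNU sed)
```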
aks | Custom Node Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md | Title: Customize the node configuration for Azure Kubernetes Service (AKS) node description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools. Previously updated : 12/03/2020 Last updated : 04/24/2023 # Customize node configuration for Azure Kubernetes Service (AKS) node pools -Customizing your node configuration allows you to configure or tune your operating system (OS) settings or the kubelet parameters to match the needs of the workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility). +Customizing your node configuration allows you to adjust operating system (OS) settings or kubelet parameters to match the needs of your workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, you can [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility). ## Create an AKS cluster with a customized node configuration [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -### Prerequisites for Windows kubelet custom configuration (Preview) +### Prerequisites for Windows kubelet custom configuration (preview) -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +Before you begin, make sure you have an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You also need to register the feature flag using the following steps: -First, install the aks-preview extension by running the following command: +1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command. -```azurecli -az extension add --name aks-preview -``` + ```azurecli + az extension add --name aks-preview + ``` -Run the following command to update to the latest version of the extension released: +2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command. -```azurecli -az extension update --name aks-preview -``` + ```azurecli + az extension update --name aks-preview + ``` -Then register the `WindowsCustomKubeletConfigPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example: +3. Register the `WindowsCustomKubeletConfigPreview` feature flag using the [`az feature register`][az-feature-register] command. -```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview" -``` + ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview" + ``` -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command: + It takes a few minutes for the status to show *Registered*. 
-```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview" -``` +4. Verify the registration status using the [`az feature show`][az-feature-show] command. -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command: + ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview" + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. ++ ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` -### Create config files for kubelet configuration, OS configuration, or both +### Create config files -Create a `linuxkubeletconfig.json` file with the following contents (for Linux node pools): +#### Kubelet configuration ++### [Linux node pools](#tab/linux-node-pools) ++Create a `linuxkubeletconfig.json` file with the following contents: ```json { Create a `linuxkubeletconfig.json` file with the following contents (for Linux n "failSwapOn": false } ```++### [Windows node pools](#tab/windows-node-pools) + > [!NOTE]-> Windows kubelet custom configuration only supports the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`. The json file contents above should be modified to remove any unsupported parameters. +> Windows kubelet custom configuration only supports the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`. -Create a `windowskubeletconfig.json` file with the following contents (for Windows node pools): +Create a `windowskubeletconfig.json` file with the following contents: ```json { Create a `windowskubeletconfig.json` file with the following contents (for Windo } ``` -Create a `linuxosconfig.json` file with the following contents (for Linux node pools only): +++#### OS configuration ++### [Linux node pools](#tab/linux-node-pools) ++Create a `linuxosconfig.json` file with the following contents: ```json { Create a `linuxosconfig.json` file with the following contents (for Linux node p } ``` +### [Windows node pools](#tab/windows-node-pools) ++Not currently supported. +++ ### Create a new cluster using custom configuration files -When creating a new cluster, you can use the customized configuration files created in the previous step to specify the kubelet configuration, OS configuration, or both. Since the first node pool created with az aks create is a linux node pool in all cases, you should use the `linuxkubeletconfig.json` and `linuxosconfig.json` files. +When creating a new cluster, you can use the customized configuration files created in the previous steps to specify the kubelet configuration, OS configuration, or both. > [!NOTE]-> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomLinuxOsConfig isn't supported for OS type: Windows. +> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. 
Any settings not configured in the JSON file will retain the default value. `CustomLinuxOsConfig` isn't supported for OS type: Windows. ++Create a new cluster using custom configuration files using the [`az aks create`][az-aks-create] command and specifying your configuration files. The following example command creates a new cluster with the custom `./linuxkubeletconfig.json` and `./linuxosconfig.json` files: ```azurecli az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json --linux-os-config ./linuxosconfig.json ```+ ### Add a node pool using custom configuration files -When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. CustomKubeletConfig is supported for Linux and Windows node pools. +When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. `CustomKubeletConfig` is supported for Linux and Windows node pools. > [!NOTE] > When you add a Linux node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. When you add a Windows node pool to an existing cluster, you can only specify the kubelet configuration. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. -For Linux node pools +### [Linux node pools](#tab/linux-node-pools) ```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json ```-For Windows node pools (Preview) ++### [Windows node pools](#tab/windows-node-pools) ```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --os-type Windows --kubelet-config ./windowskubeletconfig.json ``` ++ ### Other configurations -These settings can be used to modify other operating system settings. +The following settings can be used to modify other operating system settings: #### Message of the Day Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation. -##### Cluster creation - ```azurecli az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ``` az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr ### Confirm settings have been applied -After you have applied custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem. +After you apply custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem. ## Custom node configuration supported parameters -## Kubelet custom configuration +### Kubelet custom configuration Kubelet custom configuration is supported for Linux and Windows node pools. Supported parameters differ and are documented below. -### Linux Kubelet custom configuration --The supported Kubelet parameters and accepted values for Linux node pools are listed below. 
+#### Linux Kubelet custom configuration | Parameter | Allowed values/interval | Default | Description | | | -- | - | -- | | `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |-| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. | +| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. | | `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. | -| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. | +| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. | | `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. | | `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |-| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. | -| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. | -| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. | +| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. | +| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. | +| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. | | `podMaxPids` | -1 to kernel PID limit | -1 (∞)| The maximum amount of process IDs that can be running in a Pod | -### Windows Kubelet custom configuration (Preview) --The supported Kubelet parameters and accepted values for Windows node pools are listed below. +#### Windows Kubelet custom configuration (preview) | Parameter | Allowed values/interval | Default | Description | | | -- | - | -- |-| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. | +| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. | | `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. 
|-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. | -| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. | +| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. | +| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. | -## Linux OS custom configuration --The supported OS settings and accepted values are listed below. +## Linux custom OS configuration settings ### File handle limits -When you're serving a lot of traffic, it's common that the traffic you're serving is coming from a large number of local files. You can tweak the below kernel settings and built-in limits to allow you to handle more, at the cost of some system memory. +When serving a lot of traffic, the traffic commonly comes from a large number of local files. You can adjust the below kernel settings and built-in limits to allow you to handle more, at the cost of some system memory. | Setting | Allowed values/interval | Default | Description | | - | -- | - | -- | | `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate, by increasing this value you can increase the maximum number of open files permitted. |-| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. | +| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. | | `fs.aio-max-nr` | 65536 - 6553500 | 65536 | The aio-nr shows the current system-wide number of asynchronous io requests. aio-max-nr allows you to change the maximum value aio-nr can grow to. | | `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. | ### Socket and network tuning -For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can use the subset of TCP and network options below that you can tweak per node pool. +For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can use the subset of TCP and network options below that you can tweak per node pool. | Setting | Allowed values/interval | Default | Description | | - | -- | - | -- | For agent nodes, which are expected to handle very large numbers of concurrent s | `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. | | `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. | | `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |-| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. | +| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. 
Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. | | `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. | | `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. | -| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. | +| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. | | `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. | | `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |-| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. | -| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. | +| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. | +| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. | ### Worker limits -Like file descriptor limits, the number of workers or threads that a process can create are limited by both a kernel setting and user limits. The user limit on AKS is unlimited. +Like file descriptor limits, the number of workers or threads that a process can create are limited by both a kernel setting and user limits. The user limit on AKS is unlimited. | Setting | Allowed values/interval | Default | Description | | - | -- | - | -- |-| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. 
The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. | +| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. | ### Virtual memory The settings below can be used to tune the operation of the virtual memory (VM) | Setting | Allowed values/interval | Default | Description | | - | -- | - | -- |-| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. | +| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. | | `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |-| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. | -| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. | -| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor’s memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. | -| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. | +| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. | +| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. 
| +| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor’s memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. | +| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. | > [!IMPORTANT] > For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions). The settings below can be used to tune the operation of the virtual memory (VM) - See the list of [Frequently asked questions about AKS](faq.md) to find answers to some common AKS questions. <!-- LINKS - internal -->-[aks-faq]: faq.md -[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group -[aks-multiple-node-pools]: use-multiple-node-pools.md -[aks-scale-apps]: tutorial-kubernetes-scale.md -[aks-support-policies]: support-policies.md -[aks-upgrade]: upgrade-cluster.md [node-access]: node-access.md-[aks-view-master-logs]: ../azure-monitor/containers/container-insights-log-query.md#enable-resource-logs -[autoscaler-profile-properties]: #using-the-autoscaler-profile -[azure-cli-install]: /cli/azure/install-azure-cli -[az-aks-show]: /cli/azure/aks#az-aks-show [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-create]: /cli/azure/aks#az-aks-create-[az-aks-update]: /cli/azure/aks#az-aks-update -[az-aks-scale]: /cli/azure/aks#az-aks-scale [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show-[az-feature-list]: /cli/azure/feature#az-feature-list [az-provider-register]: /cli/azure/provider#az-provider-register-[upgrade-cluster]: upgrade-cluster.md -[use-multiple-node-pools]: use-multiple-node-pools.md -[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade ---<!-- LINKS - external --> -[az-aks-update-preview]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview -[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool -[autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node -[autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca -[kubernetes-faq]: 
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why |
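For the custom node configuration article above, the "confirm settings have been applied" step can be made concrete: after connecting to a node, spot-check values with `sysctl`. The setting names below come from this article's tables; the expected values are whatever you set in your configuration files:

```console
sysctl fs.file-max
sysctl net.ipv4.tcp_fin_timeout
sysctl vm.max_map_count
cat /sys/kernel/mm/transparent_hugepage/enabled   # active THP mode is shown in brackets
```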
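A worked sketch of the camelCase convention from the important note above: a minimal, illustrative `linuxosconfig.json` with values chosen from the tables' allowed ranges (not recommendations), assuming sysctl settings nest under a `sysctls` object and that `az aks nodepool add` accepts the same `--linux-os-config` flag shown for `az aks create`, per the note that Linux node pools support OS configuration:

```azurecli
cat > ./linuxosconfig.json <<'EOF'
{
  "transparentHugePageEnabled": "madvise",
  "sysctls": {
    "netIpv4TcpFinTimeout": 30,
    "vmMaxMapCount": 262144
  }
}
EOF

az aks nodepool add --name mynodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup --linux-os-config ./linuxosconfig.json
```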
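Similarly, a minimal `linuxkubeletconfig.json` sketch using only parameters that appear in this article's example file and Linux kubelet table, with illustrative values inside the allowed ranges, passed with the `--kubelet-config` flag the article already uses:

```azurecli
cat > ./linuxkubeletconfig.json <<'EOF'
{
  "imageGcHighThreshold": 90,
  "imageGcLowThreshold": 70,
  "containerLogMaxSizeMB": 20,
  "containerLogMaxFiles": 4,
  "failSwapOn": false
}
EOF

az aks nodepool add --name mynodepool3 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json
```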
aks | Outbound Rules Control Egress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md | There are two options to provide access to Azure Monitor for containers: ### Required FQDN / application rules -| FQDN | Port | Use | +| FQDN | Port               | Use | |--|--|-| | **`<region>.dp.kubernetesconfiguration.azure.com`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service.| | **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on AKS cluster.|+|**`arcmktplaceprod.azurecr.io`**|**`HTTPS:443`**|This address is required to pull container images for installing marketplace extensions on AKS cluster.| +|**`*.ingestion.msftcloudes.com, *.microsoftmetrics.com`**|**`HTTPS:443`**|This address is used to send agent metrics data to Azure.| +|**`marketplaceapi.microsoft.com`**|**`HTTPS:443`**|This address is used to send custom meter-based usage to the commerce metering API.| #### Azure US Government required FQDN / application rules |
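For the egress rules above, one hypothetical smoke test is to probe a newly allowed FQDN from a node or debug pod; a completed TLS handshake in the verbose output is enough to show egress works, even if the endpoint rejects the unauthenticated request (`/v2/` is the standard OCI registry endpoint on the container registry host):

```console
curl -sv --max-time 10 https://arcmktplaceprod.azurecr.io/v2/ -o /dev/null
curl -sv --max-time 10 https://marketplaceapi.microsoft.com -o /dev/null
```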
aks | Start Stop Nodepools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md | Title: Start and stop a node pool on Azure Kubernetes Service (AKS) description: Learn how to start or stop a node pool on Azure Kubernetes Service (AKS). Previously updated : 10/25/2021 Last updated : 04/25/2023 # Start and stop an Azure Kubernetes Service (AKS) node pool -Your AKS workloads may not need to run continuously, for example a development cluster that has node pools running specific workloads. To optimize your costs, you can completely turn off (stop) your node pools in your AKS cluster, allowing you to save on compute costs. +You might not need to continuously run your AKS workloads. For example, you might have a development cluster that has node pools running specific workloads. To optimize your compute costs, you can completely stop your node pools in your AKS cluster. ++## Features and limitations ++* You can't stop system pools. +* Spot node pools are supported. +* Stopped node pools can be upgraded. +* The cluster and node pool must be running. ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. +This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. ## Stop an AKS node pool -> [!IMPORTANT] -> When using node pool start/stop, the following is expected behavior: -> -> * You can't stop system pools. -> * Spot node pools are supported. -> * Stopped node pools can be upgraded. -> * The cluster and node pool must be running. --Use `az aks nodepool stop` to stop a running AKS node pool. The following example stops the *testnodepool* node pool: --```azurecli-interactive -az aks nodepool stop --nodepool-name testnodepool --resource-group myResourceGroup --cluster-name myAKSCluster -``` --You can verify when your node pool is stopped by using the [az aks show][az-aks-show] command and confirming the `powerState` shows as `Stopped` as on the below output: --```json -{ -[...] - "osType": "Linux", - "podSubnetId": null, - "powerState": { - "code": "Stopped" - }, - "provisioningState": "Succeeded", - "proximityPlacementGroupId": null, -[...] -} -``` --> [!NOTE] -> If the `provisioningState` shows `Stopping`, your node pool hasn't fully stopped yet. +1. Stop a running AKS node pool using the [`az aks nodepool stop`][az-aks-nodepool-stop] command. ++ ```azurecli-interactive + az aks nodepool stop --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool + ``` ++2. Verify your node pool stopped using the [`az aks nodepool show`][az-aks-nodepool-show] command. ++ ```azurecli-interactive + az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool + ``` ++ The following condensed example output shows the `powerState` as `Stopped`: ++ ```output + { + [...] + "osType": "Linux", + "podSubnetId": null, + "powerState": { + "code": "Stopped" + }, + "provisioningState": "Succeeded", + "proximityPlacementGroupId": null, + [...] + } + ``` ++ > [!NOTE] + > If the `provisioningState` shows `Stopping`, your node pool is still in the process of stopping. 
++ ## Start a stopped AKS node pool -Use `az aks nodepool start` to start a stopped AKS node pool. The following example starts the stopped node pool named *testnodepool*: +1. Restart a stopped node pool using the [`az aks nodepool start`][az-aks-nodepool-start] command. -```azurecli-interactive -az aks nodepool start --nodepool-name testnodepool --resource-group myResourceGroup --cluster-name myAKSCluster -``` + ```azurecli-interactive + az aks nodepool start --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool + ``` -You can verify your node pool has started using [az aks show][az-aks-show] and confirming the `powerState` shows `Running`. For example: +2. Verify your node pool started using the [`az aks nodepool show`][az-aks-nodepool-show] command. -```json -{ -[...] - "osType": "Linux", - "podSubnetId": null, - "powerState": { - "code": "Running" - }, - "provisioningState": "Succeeded", - "proximityPlacementGroupId": null, -[...] -} -``` + ```azurecli-interactive + az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool + ``` -> [!NOTE] -> If the `provisioningState` shows `Starting`, your node pool hasn't fully started yet. + The following condensed example output shows the `powerState` as `Running`: ++ ```output + { + [...] + "osType": "Linux", + "podSubnetId": null, + "powerState": { + "code": "Running" + }, + "provisioningState": "Succeeded", + "proximityPlacementGroupId": null, + [...] + } + ``` ++ > [!NOTE] + > If the `provisioningState` shows `Starting`, your node pool is still in the process of starting. ## Next steps -- To learn how to scale `User` pools to 0, see [Scale `User` pools to 0](scale-cluster.md#scale-user-node-pools-to-0).-- To learn how to stop your cluster, see [Cluster start/stop](start-stop-cluster.md).-- To learn how to save costs using Spot instances, see [Add a spot node pool to AKS](spot-node-pool.md).-- To learn more about the AKS support policies, see [AKS support policies](support-policies.md).--<!-- LINKS - external --> +* To learn how to scale `User` pools to 0, see [scale `User` pools to 0](scale-cluster.md#scale-user-node-pools-to-0). +* To learn how to stop your cluster, see [cluster start/stop](start-stop-cluster.md). +* To learn how to save costs using Spot instances, see [add a spot node pool to AKS](spot-node-pool.md). +* To learn more about the AKS support policies, see [AKS support policies](support-policies.md). 
<!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md-[install-azure-cli]: /cli/azure/install-azure-cli -[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-provider-register]: /cli/azure/provider#az_provider_register -[az-aks-show]: /cli/azure/aks#az_aks_show -[kubernetes-walkthrough-powershell]: kubernetes-walkthrough-powershell.md -[stop-azakscluster]: /powershell/module/az.aks/stop-azakscluster -[get-azakscluster]: /powershell/module/az.aks/get-azakscluster -[start-azakscluster]: /powershell/module/az.aks/start-azakscluster +[az-aks-nodepool-stop]: /cli/azure/aks/nodepool#az_aks_nodepool_stop +[az-aks-nodepool-start]: /cli/azure/aks/nodepool#az_aks_nodepool_start +[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show |
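For the start/stop article above, a compact variant of the verification step that returns only the power state, using the Azure CLI's standard `--query` (JMESPath) and `--output` options; it prints `Running` or `Stopped` once the operation settles:

```azurecli
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool --query powerState.code --output tsv
```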
aks | Use Pod Security Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md | Title: Use pod security policies in Azure Kubernetes Service (AKS) -description: Learn how to control pod admissions by using PodSecurityPolicy in Azure Kubernetes Service (AKS) +description: Learn how to control pod admissions using PodSecurityPolicy in Azure Kubernetes Service (AKS) Previously updated : 03/25/2021 Last updated : 04/25/2023 -# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) +# Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview) -> [!Important] -> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023 and remove it in version 1.25. You can migrate pod security policy to pod security admission controller before the deprecation deadline. --After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support. +> [!IMPORTANT] +> +> The pod security policy feature will be deprecated starting with Kubernetes version *1.21* and will be removed in version *1.25*. +> +> The AKS API will mark the pod security policy as `Deprecated` on 06-01-2023 and remove it in version *1.25*. We recommend you migrate to pod security admission controller before the deprecation deadline to stay within Azure support. ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. --You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal]. +* You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [install Azure CLI][install-azure-cli]. -## Install the aks-preview Azure CLI extension +## Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] -To install the aks-preview extension, run the following command: +1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command. -```azurecli -az extension add --name aks-preview -``` + ```azurecli + az extension add --name aks-preview + ``` -Run the following command to update to the latest version of the extension released: +2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command. 
-```azurecli -az extension update --name aks-preview -``` + ```azurecli + az extension update --name aks-preview + ``` -## Register the 'PodSecurityPolicyPreview' feature flag +## Register the `PodSecurityPolicyPreview` feature flag -Register the `PodSecurityPolicyPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: +1. Register the `PodSecurityPolicyPreview` feature flag using the [`az feature register`][az-feature-register] command. -```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview" -``` + ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview" + ``` -It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: + It takes a few minutes for the status to show *Registered*. -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview" -``` +2. Verify the registration status using the [`az feature show`][az-feature-show] command. -When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: + ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview" + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. -## Overview of pod security policies + ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` -In a Kubernetes cluster, an admission controller is used to intercept requests to the API server when a resource is to be created. The admission controller can then *validate* the resource request against a set of rules, or *mutate* the resource to change deployment parameters. +## Overview of pod security policies -*PodSecurityPolicy* is an admission controller that validates a pod specification meets your defined requirements. These requirements may limit the use of privileged containers, access to certain types of storage, or the user or group the container can run as. When you try to deploy a resource where the pod specifications don't meet the requirements outlined in the pod security policy, the request is denied. This ability to control what pods can be scheduled in the AKS cluster prevents some possible security vulnerabilities or privilege escalations. +Kubernetes clusters use admission controllers to intercept requests to the API server when a resource is going to be created. The admission controller can then *validate* the resource request against a set of rules, or *mutate* the resource to change deployment parameters. -When you enable pod security policy in an AKS cluster, some default policies are applied. These default policies provide an out-of-the-box experience to define what pods can be scheduled. However, cluster users may run into problems deploying pods until you define your own policies. The recommended approach is to: +`PodSecurityPolicy` is an admission controller that validates a pod specification meets your defined requirements. 
These requirements may limit the use of privileged containers, access to certain types of storage, or the user or group the container can run as. When you try to deploy a resource where the pod specifications don't meet the requirements outlined in the pod security policy, the request is denied. This ability to control what pods can be scheduled in the AKS cluster prevents some possible security vulnerabilities or privilege escalations. -* Create an AKS cluster -* Define your own pod security policies -* Enable the pod security policy feature +When you enable pod security policy in an AKS cluster, some default policies are applied. These policies provide an out-of-the-box experience to define what pods can be scheduled. However, you might run into problems deploying your pods until you define your own policies. The recommended approach is to: -To show how the default policies limit pod deployments, in this article we first enable the pod security policies feature, then create a custom policy. +1. Create an AKS cluster. +2. Define your own pod security policies. +3. Enable the pod security policy feature. ### Behavior changes between pod security policy and Azure Policy -Below is a summary of behavior changes between pod security policy and Azure Policy. - |Scenario| Pod security policy | Azure Policy | |||| |Installation|Enable pod security policy feature |Enable Azure Policy Add-on Below is a summary of behavior changes between pod security policy and Azure Pol | Default policies | When pod security policy is enabled in AKS, default Privileged and Unrestricted policies are applied. | No default policies are applied by enabling the Azure Policy Add-on. You must explicitly enable policies in Azure Policy. | Who can create and assign policies | Cluster admin creates a pod security policy resource | Users must have a minimum role of 'owner' or 'Resource Policy Contributor' permissions on the AKS cluster resource group. - Through API, users can assign policies at the AKS cluster resource scope. The user should have minimum of 'owner' or 'Resource Policy Contributor' permissions on AKS cluster resource. - In the Azure portal, policies can be assigned at the Management group/subscription/resource group level. | Authorizing policies| Users and Service Accounts require explicit permissions to use pod security policies. | No additional assignment is required to authorize policies. Once policies are assigned in Azure, all cluster users can use these policies.-| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) see the same policies. There is no special casing based on users. Policy application can be excluded at the namespace level. -| Policy scope | Pod security policies are not namespaced | Constraint templates used by Azure Policy are not namespaced. -| Deny/Audit/Mutation action | Pod security policies support only deny actions. Mutation can be done with default values on create requests. Validation can be done during update requests.| Azure Policy supports both audit & deny actions. Mutation is not supported yet, but planned. -| Pod security policy compliance | There is no visibility on compliance of pods that existed before enabling pod security policy. Non-compliant pods created after enabling pod security policies are denied. | Non-compliant pods that existed before applying Azure policies would show up in policy violations. 
Non-compliant pods created after enabling Azure policies are denied if policies are set with a deny effect. +| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) see the same policies. There's no special casing based on users. Policy application can be excluded at the namespace level. +| Policy scope | Pod security policies aren't namespaced | Constraint templates used by Azure Policy aren't namespaced. +| Deny/Audit/Mutation action | Pod security policies support only deny actions. Mutation can be done with default values on create requests. Validation can be done during update requests.| Azure Policy supports both audit & deny actions. Mutation isn't yet supported. +| Pod security policy compliance | There's no visibility into compliance of pods that existed before enabling pod security policy. Non-compliant pods created after enabling pod security policies are denied. | Non-compliant pods that existed before applying Azure policies would show up in policy violations. Non-compliant pods created after enabling Azure policies are denied if policies are set with a deny effect. | How to view policies on the cluster | `kubectl get psp` | `kubectl get constrainttemplate` - All policies are returned.-| Pod security policy standard - Privileged | A privileged pod security policy resource is created by default when enabling the feature. | Privileged mode implies no restriction, as a result it is equivalent to not having any Azure Policy assignment. +| Pod security policy standard - Privileged | A privileged pod security policy resource is created by default when enabling the feature. | Privileged mode implies no restriction, as a result it's equivalent to not having any Azure Policy assignment. | [Pod security policy standard - Baseline/default](https://kubernetes.io/docs/concepts/security/pod-security-standards/#baseline-default) | User installs a pod security policy baseline resource. | Azure Policy provides a [built-in baseline initiative](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2Fa8640138-9b0a-4a28-b8cb-1666c838647d) which maps to the baseline pod security policy. | [Pod security policy standard - Restricted](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted) | User installs a pod security policy restricted resource. | Azure Policy provides a [built-in restricted initiative](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F42b8ef37-b724-4e24-bbc8-7a7708edfe00) which maps to the restricted pod security policy. ## Enable pod security policy on an AKS cluster -You can enable or disable pod security policy using the [az aks update][az-aks-update] command. The following example enables pod security policy on the cluster name *myAKSCluster* in the resource group named *myResourceGroup*. - > [!NOTE]-> For real-world use, don't enable the pod security policy until you have defined your own custom policies. In this article, you enable pod security policy as the first step to see how the default policies limit pod deployments. +> For real-world use, don't enable the pod security policy until you define your own custom policies. In this article, we enable pod security policy as the first step to see how the default policies limit pod deployments. 
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-pod-security-policy
-```
+* Enable the pod security policy using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-pod-security-policy
+ ```
 ## Default AKS policies
 When you enable pod security policy, AKS creates one default policy named *privileged*. Don't edit or remove the default policy. Instead, create your own policies that define the settings you want to control. Let's first look at what these default policies are and how they impact pod deployments.
-To view the policies available, use the [kubectl get psp][kubectl-get] command, as shown in the following example
+1. View the available policies using the [`kubectl get psp`][kubectl-get] command.
+
+ ```console
+ kubectl get psp
+ ```
+
+ Your output will look similar to the following example output:
-```console
-$ kubectl get psp
+ ```output
+ NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+ privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false * configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
+ ```
-NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
-privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false * configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
-```
+ The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by `ClusterRoles` and `ClusterRoleBindings`.
-The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by ClusterRoles and ClusterRoleBindings. Use the [kubectl get rolebindings][kubectl-get] command and search for the *default:privileged:* binding in the *kube-system* namespace:
+2. Search for the *default:privileged:* binding in the *kube-system* namespace using the [`kubectl get rolebindings`][kubectl-get] command.
-```console
-kubectl get rolebindings default:privileged -n kube-system -o yaml
-```
+ ```console
+ kubectl get rolebindings default:privileged -n kube-system -o yaml
+ ```
-As shown in the following condensed output, the *psp:privileged* ClusterRole is assigned to any *system:authenticated* users. This ability provides a basic level of privilege without your own policies being defined.
+ The following condensed example output shows the *psp:privileged* `ClusterRole` is assigned to any *system:authenticated* users. This ability provides a basic level of privilege without your own policies being defined.
-```
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
- [...]
- name: default:privileged
- [...]
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: psp:privileged
-subjects:
-- apiGroup: rbac.authorization.k8s.io- kind: Group
- name: system:masters
-```
+ ```output
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: RoleBinding
+ metadata:
+ [...]
+ name: default:privileged
+ [...]
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:privileged
+ subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:masters
+ ```
-It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies.
In the next few sections, let's schedule some pods to see these default policies in action. +It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies. In the next few sections, we schedule some pods to see the default policies in action. ## Create a test user in an AKS cluster -By default, when you use the [az aks get-credentials][az-aks-get-credentials] command, the *admin* credentials for the AKS cluster are added to your `kubectl` config. The admin user bypasses the enforcement of pod security policies. If you use Azure Active Directory integration for your AKS clusters, you could sign in with the credentials of a non-admin user to see the enforcement of policies in action. In this article, let's create a test user account in the AKS cluster that you can use. +When you use the [`az aks get-credentials`][az-aks-get-credentials] command, the *admin* credentials for the AKS cluster are added to your `kubectl` config by default. The admin user bypasses the enforcement of pod security policies. If you use Azure Active Directory integration for your AKS clusters, you can sign in with the credentials of a non-admin user to see the enforcement of policies in action. ++1. Create a sample namespace named *psp-aks* for test resources using the [`kubectl create namespace`][kubectl-create] command. ++ ```console + kubectl create namespace psp-aks + ``` -Create a sample namespace named *psp-aks* for test resources using the [kubectl create namespace][kubectl-create] command. Then, create a service account named *nonadmin-user* using the [kubectl create serviceaccount][kubectl-create] command: +2. Create a service account named *nonadmin-user* using the [`kubectl create serviceaccount`][kubectl-create] command. -```console -kubectl create namespace psp-aks -kubectl create serviceaccount --namespace psp-aks nonadmin-user -``` + ```console + kubectl create serviceaccount --namespace psp-aks nonadmin-user + ``` -Next, create a RoleBinding for the *nonadmin-user* to perform basic actions in the namespace using the [kubectl create rolebinding][kubectl-create] command: +3. Create a RoleBinding for the *nonadmin-user* to perform basic actions in the namespace using the [`kubectl create rolebinding`][kubectl-create] command. -```console -kubectl create rolebinding \ - --namespace psp-aks \ - psp-aks-editor \ - --clusterrole=edit \ - --serviceaccount=psp-aks:nonadmin-user -``` + ```console + kubectl create rolebinding \ + --namespace psp-aks \ + psp-aks-editor \ + --clusterrole=edit \ + --serviceaccount=psp-aks:nonadmin-user + ``` ### Create alias commands for admin and non-admin user -To highlight the difference between the regular admin user when using `kubectl` and the non-admin user created in the previous steps, create two command-line aliases: +When using `kubectl`, you can highlight the differences between the regular admin user and the non-admin user by creating two command-line aliases: -* The **kubectl-admin** alias is for the regular admin user, and is scoped to the *psp-aks* namespace. -* The **kubectl-nonadminuser** alias is for the *nonadmin-user* created in the previous step, and is scoped to the *psp-aks* namespace. +1. The **kubectl-admin** alias for the regular admin user, which is scoped to the *psp-aks* namespace. +2. The **kubectl-nonadminuser** alias for the *nonadmin-user* created in the previous step, which is scoped to the *psp-aks* namespace. 
-Create these two aliases as shown in the following commands:
+* Create the two aliases using the following commands.
-```console
-alias kubectl-admin='kubectl --namespace psp-aks'
-alias kubectl-nonadminuser='kubectl --as=system:serviceaccount:psp-aks:nonadmin-user --namespace psp-aks'
-```
+ ```console
+ alias kubectl-admin='kubectl --namespace psp-aks'
+ alias kubectl-nonadminuser='kubectl --as=system:serviceaccount:psp-aks:nonadmin-user --namespace psp-aks'
+ ```
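Before scheduling any pods, you can optionally check what the impersonated service account is permitted to do. The `kubectl auth can-i` commands below are standard kubectl; the expected answers are assumptions based on the *edit* RoleBinding created earlier and the *psp:privileged* binding shown previously:

```console
# The 'edit' ClusterRole bound in the psp-aks namespace should permit pod creation.
kubectl-nonadminuser auth can-i create pods

# No binding grants this service account 'use' on a pod security policy yet,
# so this check is expected to return no.
kubectl-nonadminuser auth can-i use podsecuritypolicy/privileged
```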
 ## Test the creation of a privileged pod
-Let's first test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. In the previous section that showed the default AKS pod security policies, the *privilege* policy should deny this request.
+Let's test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. The default *privileged* AKS security policy should deny this request.
-Create a file named `nginx-privileged.yaml` and paste the following YAML manifest:
+1. Create a file named `nginx-privileged.yaml` and paste in the contents of the following YAML manifest.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-privileged
-spec:
- containers:
- - name: nginx-privileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
- securityContext:
- privileged: true
-```
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-privileged
+ spec:
+ containers:
+ - name: nginx-privileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ securityContext:
+ privileged: true
+ ```
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-```console
-kubectl-nonadminuser apply -f nginx-privileged.yaml
-```
+ ```console
+ kubectl-nonadminuser apply -f nginx-privileged.yaml
+ ```
-The pod fails to be scheduled, as shown in the following example output:
+ The following example output shows the pod failed to be scheduled:
-```console
-$ kubectl-nonadminuser apply -f nginx-privileged.yaml
+ ```output
+ Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
+ ```
-Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
-```
--The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
 ## Test creation of an unprivileged pod
-In the previous example, the pod specification requested privileged escalation. This request is denied by the default *privilege* pod security policy, so the pod fails to be scheduled. Let's try now running that same NGINX pod without the privilege escalation request.
--Create a file named `nginx-unprivileged.yaml` and paste the following YAML manifest:
+In the previous example, the pod specification requested privileged escalation. This request is denied by the default *privileged* pod security policy, so the pod fails to be scheduled. Let's try running the same NGINX pod without the privilege escalation request.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-unprivileged
-spec:
- containers:
- - name: nginx-unprivileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
-```
+1. Create a file named `nginx-unprivileged.yaml` and paste in the contents of the following YAML manifest.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-unprivileged
+ spec:
+ containers:
+ - name: nginx-unprivileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ ```
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-The pod fails to be scheduled, as shown in the following example output:
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ ```
-```console
-$ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ The following example output shows the pod failed to be scheduled:
-Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
-```
+ ```output
+ Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
+ ```
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
 ## Test creation of a pod with a specific user context
-In the previous example, the container image automatically tried to use root to bind NGINX to port 80. This request was denied by the default *privilege* pod security policy, so the pod fails to start. Let's try now running that same NGINX pod with a specific user context, such as `runAsUser: 2000`.
--Create a file named `nginx-unprivileged-nonroot.yaml` and paste the following YAML manifest:
+In the previous example, the container image automatically tried to use root to bind NGINX to port 80. This request was denied by the default *privileged* pod security policy, so the pod failed to start. Let's try running the same NGINX pod with a specific user context, such as `runAsUser: 2000`.
+
+1. Create a file named `nginx-unprivileged-nonroot.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-unprivileged-nonroot
+ spec:
+ containers:
+ - name: nginx-unprivileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ securityContext:
+ runAsUser: 2000
+ ```
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-unprivileged-nonroot
-spec:
- containers:
- - name: nginx-unprivileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
- securityContext:
- runAsUser: 2000
-```
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-The pod fails to be scheduled, as shown in the following example output:
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
+ ```
-```console
-$ kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
+ The following example output shows the pod failed to be scheduled:
-Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
-```
+ ```output
+ Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
+ ```
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
 ## Create a custom pod security policy
 Now that you've seen the behavior of the default pod security policies, let's provide a way for the *nonadmin-user* to successfully schedule pods.
-Let's create a policy to reject pods that request privileged access. Other options, such as *runAsUser* or allowed *volumes*, aren't explicitly restricted. This type of policy denies a request for privileged access, but otherwise lets the cluster run the requested pods.
--Create a file named `psp-deny-privileged.yaml` and paste the following YAML manifest:
--```yaml
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
- name: psp-deny-privileged
-spec:
- privileged: false
- seLinux:
- rule: RunAsAny
- supplementalGroups:
- rule: RunAsAny
- runAsUser:
- rule: RunAsAny
- fsGroup:
- rule: RunAsAny
- volumes:
- - '*'
-```
--Create the policy using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
--```console
-kubectl apply -f psp-deny-privileged.yaml
-```
--To view the policies available, use the [kubectl get psp][kubectl-get] command, as shown in the following example. Compare the *psp-deny-privileged* policy with the default *privilege* policy that was enforced in the previous examples to create a pod. Only the use of *PRIV* escalation is denied by your policy. There are no restrictions on the user or group for the *psp-deny-privileged* policy.
--```console
-$ kubectl get psp
--NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
-privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false *
-psp-deny-privileged false RunAsAny RunAsAny RunAsAny RunAsAny false *
-```
+We'll create a policy to reject pods that request privileged access. Other options, such as *runAsUser* or allowed *volumes*, aren't explicitly restricted. This type of policy denies a request for privileged access, but otherwise allows the cluster to run the requested pods.
+
+1. Create a file named `psp-deny-privileged.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: psp-deny-privileged
+ spec:
+ privileged: false
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ volumes:
+ - '*'
+ ```
+
+2. Create the policy using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged.yaml
+ ```
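   Optionally, you can inspect the policy you just created before moving on. The command below is standard kubectl; only the policy name comes from the manifest above:

   ```console
   # Show the full specification of the custom policy, including the denied privileged setting
   kubectl describe podsecuritypolicy psp-deny-privileged
   ```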
+3. View the available policies using the [`kubectl get psp`][kubectl-get] command.
+
+ ```console
+ kubectl get psp
+ ```
+
+ In the following example output, compare the *psp-deny-privileged* policy with the default *privileged* policy that was enforced in the previous examples to create a pod. Only the use of *PRIV* escalation is denied by your policy. There are no restrictions on the user or group for the *psp-deny-privileged* policy.
+
+ ```output
+ NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+ privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false *
+ psp-deny-privileged false RunAsAny RunAsAny RunAsAny RunAsAny false *
+ ```
 ## Allow user account to use the custom pod security policy
-In the previous step, you created a pod security policy to reject pods that request privileged access. To allow the policy to be used, you create a *Role* or a *ClusterRole*. Then, you associate one of these roles using a *RoleBinding* or *ClusterRoleBinding*.
--For this example, create a ClusterRole that allows you to *use* the *psp-deny-privileged* policy created in the previous step. Create a file named `psp-deny-privileged-clusterrole.yaml` and paste the following YAML manifest:
--```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: psp-deny-privileged-clusterrole
-rules:
-- apiGroups:- - extensions
- resources:
- - podsecuritypolicies
- resourceNames:
- - psp-deny-privileged
- verbs:
- - use
-```
--Create the ClusterRole using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
--```console
-kubectl apply -f psp-deny-privileged-clusterrole.yaml
-```
--Now create a ClusterRoleBinding to use the ClusterRole created in the previous step. Create a file named `psp-deny-privileged-clusterrolebinding.yaml` and paste the following YAML manifest:
--```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
- name: psp-deny-privileged-clusterrolebinding
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: psp-deny-privileged-clusterrole
-subjects:
-- apiGroup: rbac.authorization.k8s.io- kind: Group
- name: system:serviceaccounts
-```
--Create a ClusterRoleBinding using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
--```console
-kubectl apply -f psp-deny-privileged-clusterrolebinding.yaml
-```
+In the previous step, you created a pod security policy to reject pods that request privileged access. To allow the policy to be used, you create a *Role* or a *ClusterRole*. Then, you associate one of these roles using a *RoleBinding* or *ClusterRoleBinding*. For this example, we'll create a ClusterRole that allows you to *use* the *psp-deny-privileged* policy created in the previous step.
+
+1. Create a file named `psp-deny-privileged-clusterrole.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ name: psp-deny-privileged-clusterrole
+ rules:
+ - apiGroups:
+ - extensions
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - psp-deny-privileged
+ verbs:
+ - use
+ ```
+
+2. Create the ClusterRole using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged-clusterrole.yaml
+ ```
+
+3. Create a file named `psp-deny-privileged-clusterrolebinding.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: psp-deny-privileged-clusterrolebinding
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp-deny-privileged-clusterrole
+ subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:serviceaccounts
+ ```
+
+4. Create the ClusterRoleBinding using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged-clusterrolebinding.yaml
+ ```
 > [!NOTE]
-> In the first step of this article, the pod security policy feature was enabled on the AKS cluster. The recommended practice was to only enable the pod security policy feature after you've defined your own policies. This is the stage where you would enable the pod security policy feature. One or more custom policies have been defined, and user accounts have been associated with those policies. Now you can safely enable the pod security policy feature and minimize problems caused by the default policies.
+> In the first step of this article, the pod security policy feature was enabled on the AKS cluster. The recommended practice was to only enable the pod security policy feature after you've defined your own policies. This is the stage where you would enable the pod security policy feature. One or more custom policies have been defined, and user accounts have been associated with those policies. You can now safely enable the pod security policy feature and minimize problems caused by the default policies.
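For real-world use, where you deferred enabling the feature as recommended, this is the point to run the same [`az aks update`][az-aks-update] command shown at the start of the article, repeated here for convenience:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-pod-security-policy
```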
 ## Test the creation of an unprivileged pod again
-With your custom pod security policy applied and a binding for the user account to use the policy, let's try to create an unprivileged pod again. Use the same `nginx-privileged.yaml` manifest to create the pod using the [kubectl apply][kubectl-apply] command:
+With your custom pod security policy applied and a binding for the user account to use the policy, let's try to create an unprivileged pod again.
-```console
-kubectl-nonadminuser apply -f nginx-unprivileged.yaml
-```
+This example shows how you can create custom pod security policies to define access to the AKS cluster for different users or groups. The default AKS policies provide tight controls on what pods can run, so create your own custom policies to correctly define the restrictions you need.
-The pod is successfully scheduled. When you check the status of the pod using the [kubectl get pods][kubectl-get] command, the pod is *Running*:
+1. Use the `nginx-unprivileged.yaml` manifest to create the pod using the [`kubectl apply`][kubectl-apply] command.
-```
-$ kubectl-nonadminuser get pods
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ ```
-NAME READY STATUS RESTARTS AGE
-nginx-unprivileged 1/1 Running 0 7m14s
-```
+2. Check the status of the pod using the [`kubectl get pods`][kubectl-get] command.
+
+ ```console
+ kubectl-nonadminuser get pods
+ ```
-Delete the NGINX unprivileged pod using the [kubectl delete][kubectl-delete] command and specify the name of your YAML manifest:
+ The following example output shows the pod was successfully scheduled and is *Running*:
-```console
-kubectl-nonadminuser delete -f nginx-unprivileged.yaml
-```
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ nginx-unprivileged 1/1 Running 0 7m14s
+ ```
+
+3. Delete the NGINX unprivileged pod using the [`kubectl delete`][kubectl-delete] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl-nonadminuser delete -f nginx-unprivileged.yaml
+ ```
 ## Clean up resources
-To disable pod security policy, use the [az aks update][az-aks-update] command again. The following example disables pod security policy on the cluster name *myAKSCluster* in the resource group named *myResourceGroup*:
+1. Disable pod security policy using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-pod-security-policy
+ ```
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --disable-pod-security-policy
-```
+2. Delete the ClusterRole using the [`kubectl delete`][kubectl-delete] command.
-Next, delete the ClusterRole and ClusterRoleBinding:
+ ```console
+ kubectl delete -f psp-deny-privileged-clusterrole.yaml
+ ```
-```console
-kubectl delete -f psp-deny-privileged-clusterrolebinding.yaml
-kubectl delete -f psp-deny-privileged-clusterrole.yaml
-```
+3. Delete the ClusterRoleBinding using the [`kubectl delete`][kubectl-delete] command.
-Delete the security policy using [kubectl delete][kubectl-delete] command and specify the name of your YAML manifest:
+ ```console
+ kubectl delete -f psp-deny-privileged-clusterrolebinding.yaml
+ ```
-```console
-kubectl delete -f psp-deny-privileged.yaml
-```
+4. Delete the security policy using the [`kubectl delete`][kubectl-delete] command and specify the name of your YAML manifest.
-Finally, delete the *psp-aks* namespace:
+ ```console
+ kubectl delete -f psp-deny-privileged.yaml
+ ```
-```console
-kubectl delete namespace psp-aks
-```
+5. Delete the *psp-aks* namespace using the [`kubectl delete`][kubectl-delete] command.
+
+ ```console
+ kubectl delete namespace psp-aks
+ ```
 ## Next steps
-This article showed you how to create a pod security policy to prevent the use of privileged access. There are lots of features that a policy can enforce, such as type of volume or the RunAs user. For more information on the available options, see the [Kubernetes pod security policy reference docs][kubernetes-policy-reference].
+This article showed you how to create a pod security policy to prevent the use of privileged access. Policies can enforce many settings, such as the type of volume or the RunAs user. For more information on the available options, see the [Kubernetes pod security policy reference docs][kubernetes-policy-reference]. For more information about limiting pod network traffic, see [Secure traffic between pods using network policies in AKS][network-policies].
For more information about limiting pod network traffic, see [Secure traffic bet [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe -[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs -[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ [kubernetes-policy-reference]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference <!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md For more information about limiting pod network traffic, see [Secure traffic bet [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-update]: /cli/azure/aks#az_aks_update [az-extension-add]: /cli/azure/extension#az_extension_add-[aks-support-policies]: support-policies.md -[aks-faq]: faq.md -[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update-[policy-samples]: ./policy-reference.md#microsoftcontainerservice -[azure-policy-add-on]: ../governance/policy/concepts/policy-for-kubernetes.md |
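For the Azure Policy path described in the comparison table above, installation is the Azure Policy Add-on rather than a feature flag. A minimal sketch, assuming the same *myAKSCluster* and *myResourceGroup* names used in the article:

```azurecli-interactive
# Install the Azure Policy add-on; policy assignments are then made separately in Azure Policy.
az aks enable-addons \
    --addons azure-policy \
    --name myAKSCluster \
    --resource-group myResourceGroup
```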
api-management | Api Management Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md | You can set a revision as current using the Azure portal. If you use PowerShell, ## Revision descriptions
-When you create a revision, you can set a description for your own tracking purposes. Descriptions aren't played to your API users.
+When you create a revision, you can set a description for your own tracking purposes. Descriptions aren't displayed to your API users.
 When you set a revision as current, you can also optionally specify a public change log note. The change log is included in the developer portal for your API users to view. You can modify your change log note using the `Update-AzApiManagementApiRelease` PowerShell cmdlet. |
api-management | How To Server Sent Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md | Follow these guidelines when using API Management to reach a backend API that im * **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validate-content-policy.md) can also buffer response content and shouldn't be used with APIs that implement SSE.
+* **Avoid logging request/response body for Azure Monitor and Application Insights** - You can configure API request logging for Azure Monitor or Application Insights using diagnostic settings. The diagnostic settings allow you to log the request/response body at various stages of the request execution. For APIs that implement SSE, logging the body can cause unexpected buffering, which can lead to problems. Diagnostic settings for Azure Monitor and Application Insights configured at the global/All APIs scope apply to all APIs in the service. You can override the settings for individual APIs as needed. For APIs that implement SSE, ensure you have disabled request/response body logging for Azure Monitor and Application Insights.
+
 * **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md).
 * **Test API under load** - Follow general practices to test your API under load to detect performance or configuration issues before going into production. Follow these guidelines when using API Management to reach a backend API that im ## Next steps
 * Learn more about [configuring policies](./api-management-howto-policies.md) in API Management.
-* Learn about API Management [capacity](api-management-capacity.md).
+* Learn about API Management [capacity](api-management-capacity.md). |
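One way to apply the guidance above for a single SSE API is to patch the API-scoped diagnostic so the logged payload size is zero. The sketch below is assumption-heavy, not an official procedure: the angle-bracket names are placeholders, it assumes an `applicationinsights` diagnostic already exists at the API scope, and it assumes REST API version 2022-08-01:

```console
# Set logged request/response body bytes to 0 for one API's Application Insights diagnostic.
az rest --method patch \
  --headers 'If-Match=*' \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/apis/<api-id>/diagnostics/applicationinsights?api-version=2022-08-01" \
  --body '{"properties": {"frontend": {"request": {"body": {"bytes": 0}}, "response": {"body": {"bytes": 0}}}, "backend": {"request": {"body": {"bytes": 0}}, "response": {"body": {"bytes": 0}}}}}'
```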
api-management | Mitigate Owasp Api Threats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md | The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Fo The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP. > [!NOTE]-> In addition to following the recommendations in this article, you can enable Defender for APIs (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md) +> In addition to following the recommendations in this article, you can enable [Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction) (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md) ## Broken object level authorization Learn more about: * [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline) * [Security controls by Azure policy](security-controls-policy.md) * [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)-* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction) +* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction) |
api-management | Set Body Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md | The `set-body` policy can be configured to use the [Liquid](https://shopify.gith > [!IMPORTANT]
 > In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set Content-Type to either application/xml, text/xml (or any type ending with +xml); for a JSON body, it must be application/json, text/json (or any type ending with +json).
+> [!IMPORTANT]
+> Liquid templates use the request/response body in the current execution pipeline as their input. For this reason, Liquid templates don't work when used inside a `return-response` policy. A `return-response` policy cancels the current execution pipeline and removes the request/response body. As a result, any Liquid template used inside the `return-response` policy receives an empty string as its input and won't produce the expected output.
+
 ### Supported Liquid filters The following Liquid filters are supported in the `set-body` policy. For filter examples, see the [Liquid documentation](https://shopify.github.io/liquid/). The following example uses the `AsFormUrlEncodedContent()` expression to access * [API Management transformation policies](api-management-transformation-policies.md) |
api-management | Validate Graphql Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md | This example applies the following validation and authorization rules to a Graph ## Related policies -* [API Management policies for GraphQL APIs](graphql-policies.md) +* [Validation policies](api-management-policies.md#validation-policies) |
application-gateway | Configuration Request Routing Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-request-routing-rules.md | When you create a rule, you choose between [*basic* and *path-based*](./applicat For the v1 and v2 SKU, pattern matching of incoming requests is processed in the order that the paths are listed in the URL path map of the path-based rule. If a request matches the pattern in two or more paths in the path map, the path that's listed first is matched. And the request is forwarded to the back end that's associated with that path. +If you have multiple listeners, it's even more important that rules are processed in the correct order so that client traffic is received by the correct listener. For more information about rules evaluation order, see [Request Routing rules evaluation order](multiple-site-overview.md#request-routing-rules-evaluation-order). + ## Associated listener Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the backend pool to route the request to. |
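Rule evaluation order on a v2 gateway is controlled by each rule's priority field, where lower values are evaluated first. A minimal sketch — the gateway and rule names are placeholders, and it assumes an Azure CLI version recent enough to expose the `--priority` argument:

```console
# Give the more specific listener's rule a lower (earlier) priority than any wildcard rule.
az network application-gateway rule update \
  --gateway-name <appgw-name> \
  --resource-group <resource-group> \
  --name <rule-name> \
  --priority 100
```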
application-gateway | Disabled Listeners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md | description: The article explains the details of a disabled listener and ways to Previously updated : 02/22/2022 Last updated : 04/25/2023
-It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. The action is triggered only for configuration errors. Transient connectivity problems do not have any impact on the listeners.
+It's important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. **The action is triggered only for configuration errors**. Any customer misconfigurations like deletion/disablement of certificates or prohibiting the application gateway's access through key vault's firewall or permissions cause the key vault-based HTTPS listener to get disabled. Transient connectivity problems don't have any impact on the listeners.
-A disabled listener doesn't affect the traffic for other operational listeners on your Application Gateway. For example, the HTTP listeners or HTTPS listeners for which PFX certificate file is directly uploaded on Application Gateway resource will never go in a disabled state.
+A disabled listener doesn't affect the traffic for other operational listeners on your Application Gateway. For example, the HTTP listeners or HTTPS listeners for which the PFX certificate file is directly uploaded on the Application Gateway resource are never disabled.
 [](../application-gateway/media/disabled-listeners/affected-listener.png#lightbox) Understanding the behavior of the Application Gateway's periodic check and its [  ](../application-gateway/media/disabled-listeners/client-error.png#lightbox)
-2. You can verify if the error is a result of a disabled listener on your gateway by checking your [Application Gateway's Resource Health page](../application-gateway/resource-health-overview.md). You will see an event as shown below.
+2. You can verify if the client error results from a disabled listener on your gateway by checking your [Application Gateway's Resource Health page](../application-gateway/resource-health-overview.md), as shown in the screenshot.
  You can narrow down to the exact cause and find steps to resolve the problem by 1. Sign in to your Azure portal 1. Select Advisor 1. Select Operational Excellence category from the left menu.
-1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selcted from the drop-down options above.
+1. Find the recommendation titled **Resolve Azure Key Vault issue for your Application Gateway** (shown only if your gateway is experiencing this issue). Ensure the correct subscription is selected.
 1. Select it to view the error details and the associated key vault resource along with the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue. > [!NOTE] |
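Because the listener is disabled only by configuration errors such as revoked vault access, restoring the gateway's permissions is usually the fix. A sketch using vault access policies — the names are placeholders, and `<identity-object-id>` is assumed to be the managed identity your gateway uses for Key Vault:

```console
# Re-grant the 'get' secret permission that certificate retrieval depends on.
az keyvault set-policy \
  --name <keyvault-name> \
  --object-id <identity-object-id> \
  --secret-permissions get
```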
application-gateway | Http Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md | For cases when mutual authentication is configured, several scenarios can lead t - OCSP Client Revocation check is enabled, but OCSP responder isn't provided in the certificate. For more information about troubleshooting mutual authentication, see [Error code troubleshooting](mutual-authentication-troubleshooting.md#solution-2).
+#### 401 – Unauthorized
-#### 403 – Forbidden
+An HTTP 401 unauthorized response can be returned when the backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication.
+There are several ways to resolve this:
+- Allow anonymous access on the backend pool.
+- Configure the probe to send the request to another "fake" site that doesn't require NTLM. This approach isn't recommended, because it won't tell you whether the actual site behind the application gateway is active.
+- Configure the application gateway to allow 401 responses as valid for the probes: [Probe matching conditions](/azure/application-gateway/application-gateway-probe-overview).
+ #### 403 – Forbidden
 HTTP 403 Forbidden is presented when customers are utilizing WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client is presented a 403 forbidden response. Azure application Gateway V2 SKU sent HTTP 504 errors if the backend response ti ## Next steps If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/). |
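For the 401/NTLM probe scenario above, the probe's allowed status codes can be widened so a 401 counts as healthy. A sketch with placeholder names — it assumes a custom probe already exists and that your CLI version exposes `--match-status-codes`:

```console
# Treat 200-399 and 401 as healthy responses for this probe.
az network application-gateway probe update \
  --gateway-name <appgw-name> \
  --resource-group <resource-group> \
  --name <probe-name> \
  --match-status-codes 200-399 401
```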
application-gateway | Multiple Site Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md | Title: Hosting multiple sites on Azure Application Gateway -description: This article provides an overview of the Azure Application Gateway multi-site support. Examples are provided of rule priority and the order of evaluation for rules applied to incoming requests. Conditions and limitations for using wildcard rules are described. +description: This article provides an overview of the Azure Application Gateway multi-site support. Examples are provided of rule priority and the order of evaluation for rules applied to incoming requests. Application Gateway rule priority evaluation order is described in detail. Conditions and limitations for using wildcard rules are provided. Previously updated : 04/07/2023 Last updated : 04/25/2023 |
application-gateway | Ssl Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md | For the TLS connection to work, you need to ensure that the TLS/SSL certificate - That the current date and time is within the "Valid from" and "Valid to" date range on the certificate. - That the certificate's "Common Name" (CN) matches the host header in the request. For example, if the client is making a request to `https://www.contoso.com/`, then the CN must be `www.contoso.com`. +If you have errors with the backend certificate common name (CN), see [Backend certificate invalid common name (CN)](application-gateway-backend-health-troubleshooting.md#backend-certificate-invalid-common-name-cn). + ### Certificates supported for TLS termination Application gateway supports the following types of certificates: |
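The certificate conditions above (validity window and a CN matching the host header) can be checked from a shell before traffic ever reaches the gateway. A sketch using standard OpenSSL tooling; `<backend-host>` is a placeholder, and `www.contoso.com` follows the article's example:

```console
# Print the subject (CN) and validity dates of the certificate the backend presents.
echo | openssl s_client -connect <backend-host>:443 -servername www.contoso.com 2>/dev/null | openssl x509 -noout -subject -dates
```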
applied-ai-services | Changelog Release History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/changelog-release-history.md | + + Title: Form Recognizer changelog and release history ++description: A version-based description of Form Recognizer feature and capability releases, changes, enhancements, and updates. +++++ Last updated : 04/24/2023++recommendations: false +++<!-- markdownlint-disable MD001 --> +<!-- markdownlint-disable MD033 --> +<!-- markdownlint-disable MD051 --> ++# Changelog and release history ++This reference article provides a version-based description of Form Recognizer feature and capability releases, changes, updates, and enhancements. ++#### Form Recognizer SDK April 2023 preview release ++This release includes the following updates: ++### [**C#**](#tab/csharp) ++* **Version 4.1.0-beta.1 (2023-04-13**) +* **Targets 2023-02-28-preview by default** +* **No breaking changes** ++[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#410-beta1-2023-04-13) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md) ++[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md) ++### [**Java**](#tab/java) ++* **Version 4.1.0-beta.1 (2023-04-12**) +* **Targets 2023-02-28-preview by default** +* **No breaking changes** ++[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) ++[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme) ++### [**JavaScript**](#tab/javascript) ++* **Version 4.1.0-beta.1 (2023-04-11**) +* **Targets 2023-02-28-preview by default** +* **No breaking changes** ++[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#410-beta1-2023-04-11) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/README.md) ++[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta) ++### [**Python**](#tab/python) ++* **Version 3.3.0b1 (2023-04-13**) +* **Targets 2023-02-28-preview by default** +* **No breaking changes** ++[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#330b1-2023-04-13) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/README.md) ++[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/samples) ++++#### Form Recognizer SDK September 2022 GA release ++This 
release includes the following updates: ++> [!IMPORTANT] +> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier. ++### [**C#**](#tab/csharp) ++* **Version 4.0.0 GA (2022-09-08)** +* **Supports REST API v3.0 and v2.0 clients** ++[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) ++[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md) ++[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md) ++### [**Java**](#tab/java) ++* **Version 4.0.0 GA (2022-09-08)** +* **Supports REST API v3.0 and v2.0 clients** ++[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) ++[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) ++[**Samples**](https://github.com/Azure/azure-sdk-for-jav) ++### [**JavaScript**](#tab/javascript) ++* **Version 4.0.0 GA (2022-09-08)** +* **Supports REST API v3.0 and v2.0 clients** ++[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) ++[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md) ++[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md) ++### [**Python**](#tab/python) ++> [!NOTE] +> Python 3.7 or later is required to use this package. 
++* **Version 3.2.0 GA (2022-09-08)** +* **Supports REST API v3.0 and v2.0 clients** ++[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/) ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) ++[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md) ++[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md) ++[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md) ++++#### Form Recognizer SDK beta August 2022 preview release ++This release includes the following updates: ++### [**C#**](#tab/csharp) ++**Version 4.0.0-beta.5 (2022-08-09)** +**Supports REST API 2022-06-30-preview clients** ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09) ++[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5) ++[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true) ++### [**Java**](#tab/java) ++**Version 4.0.0-beta.6 (2022-08-10)** +**Supports REST API 2022-06-30-preview and earlier clients** ++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10) ++ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) ++ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true) ++### [**JavaScript**](#tab/javascript) ++**Version 4.0.0-beta.6 (2022-08-09)** +**Supports REST API 2022-06-30-preview and earlier clients** ++ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) ++ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6) ++ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) ++### [**Python**](#tab/python) ++> [!IMPORTANT] +> Python 3.6 is no longer supported in this release. Use Python 3.7 or later. 
+
+**Version 3.2.0b6 (2022-08-09)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+++
### Form Recognizer SDK beta June 2022 preview release
++This release includes the following updates:
++### [**C#**](#tab/csharp)
++**Version 4.0.0-beta.4 (2022-06-08)**
++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
++[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
++[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
++### [**Java**](#tab/java)
++**Version 4.0.0-beta.5 (2022-06-07)**
++[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
++ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
++ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
++### [**JavaScript**](#tab/javascript)
++**Version 4.0.0-beta.4 (2022-06-07)**
++ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
++ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
++ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
++### [**Python**](#tab/python)
++**Version 3.2.0b5 (2022-06-07)**
++ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
++ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
++ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
++ |
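The listings above give package coordinates without install commands. Assuming the standard package managers for each ecosystem, a preview build can be pinned to the exact versions named in this release:

```console
# Pin exact preview versions so an upgrade doesn't silently change API behavior.
dotnet add package Azure.AI.FormRecognizer --version 4.0.0-beta.4
pip install azure-ai-formrecognizer==3.2.0b5
npm install @azure/ai-form-recognizer@4.0.0-beta.4
```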
applied-ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md | See how data, including customer information, vendor details, and line items, is | • French (fr) | France (fr) | | • Italian (it) | Italy (it)| | • Portuguese (pt) | Portugal (pt), Brazil (br)|-| • Dutch (de) | Netherlands (de)| +| • Dutch (nl) | Netherlands (nl)| ## Field extraction |
applied-ai-services | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md | Title: Form Recognizer SDKs + Title: Form Recognizer SDKs -description: The Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities, using C#, Java, JavaScript, or Python programming language. +description: Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities, using C#, Java, JavaScript, or Python programming language. Previously updated : 01/06/2023 Last updated : 04/25/2023 recommendations: false recommendations: false <!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 --> -# Form Recognizer SDKs +# Form Recognizer SDK (GA) [!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)] > [!IMPORTANT]-> The **2023-02-28-preview** version is currently only available through the [**Form Recognizer 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument). +> For more information on the latest public preview version (**2023-02-28-preview**), *see* [Form Recognizer SDK (preview)](sdk-preview.md) Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages. Azure Cognitive Services Form Recognizer is a cloud service that uses machine le Form Recognizer SDK supports the following languages and platforms: -| Language → SDK version | Package| Azure Form Recognizer SDK |Supported API version| Platform support | -|:-:|:-|:-| :-|--| -|[.NET/C# → 4.0.0 (latest GA release)](/dotnet/api/overview/azure/form-recognizer?view=azure-dotnet&preserve-view=true) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| -|[Java → 4.0.0 (latest GA release)](/java/api/overview/azure/form-recognizer?view=azure-java-stable&preserve-view=true) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| -|[JavaScript → 4.0.0 (latest GA 
release)](/javascript/api/overview/azure/form-recognizer?view=azure-node-latest&preserve-view=true)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html) | [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | -|[Python → 3.2.0 (latest GA release)](/python/api/overview/azure/form-recognizer?view=azure-python&preserve-view=true) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) +| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support | +|:-:|:-|:-| :-| +| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| +|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.6/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.6) |[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| +|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> 
[**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | +|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) ## Supported Clients | Language| SDK version | API version | Supported clients| | : | :--|:- | :--|-|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>4.0.0 (latest GA release)</li></ul>| <ul><li> v3.0 (default)</li></ul>| <ul><li> **DocumentAnalysisClient**</li><li>**DocumentModelAdministrationClient**</li></ul> | -|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>4.0.0 (latest GA release)</li></ul>| <ul><li> v2.1</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | -|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>3.1.x</li></ul> | <ul><li> v2.1 (default)</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | -|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>3.0.x</li></ul>| <ul><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | -|<ul><li> Python</li></ul>| <ul><li>3.2.0 (latest GA release)</li></ul> | <ul><li> v3.0 (default)</li></ul> | <ul><li> **DocumentAnalysisClient**</li><li>**DocumentModelAdministrationClient**</li></ul>| -|<ul><li> Python</li></ul>| <ul><li>3.2.0 (latest GA release)</li></ul> | <ul><li> v2.1</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | -|<ul><li> Python </li></ul>| <ul><li>3.1.x</li></ul> | <ul><li> v2.1 (default)</li><li>v2.0</li></ul> |<ul><li>**FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | -|<ul><li> Python</li></ul>| <ul><li>3.0.0</li></ul> | <ul><li>v2.0</li></ul>| <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**<br>**DocumentModelAdministrationClient** | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | +| **Python**| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**| +| **Python** | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | +| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | 
## Use Form Recognizer SDK in your applications The Form Recognizer SDK enables the use and management of the Form Recognizer se ### [C#/.NET](#tab/csharp) ```dotnetcli-dotnet add package Azure.AI.FormRecognizer --version 4.0.0-beta.5 +dotnet add package Azure.AI.FormRecognizer --version 4.0.0 ``` ```powershell-Install-Package Azure.AI.FormRecognizer -Version 4.0.0-beta.5 +Install-Package Azure.AI.FormRecognizer -Version 4.0.0 ``` ### [Java](#tab/java) ```xml- <dependency> - <groupId>com.azure</groupId> - <artifactId>azure-ai-formrecognizer</artifactId> - <version>4.0.0-beta.5</version> - </dependency> +<dependency> +<groupId>com.azure</groupId> +<artifactId>azure-ai-formrecognizer</artifactId> +<version>4.0.6</version> +</dependency> ``` ```kotlin-implementation("com.azure:azure-ai-formrecognizer:4.0.0-beta.5") +implementation("com.azure:azure-ai-formrecognizer:4.0.6") ``` ### [JavaScript](#tab/javascript) ```javascript-npm i @azure/ai-form-recognizer@4.0.0-beta.6 +npm i @azure/ai-form-recognizer ``` ### [Python](#tab/python) ```python-pip install azure-ai-formrecognizer==3.2.0b6 +pip install azure-ai-formrecognizer ``` For more information, *see* [Authenticate the client](https://github.com/Azure/a ### 4. Build your application -You'll create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice. --## Changelog and release history --#### Form Recognizer SDK September 2022 GA release --This release includes the following updates: --> [!IMPORTANT] -> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier. 
--### [**C#**](#tab/csharp) --* **Version 4.0.0 GA (2022-09-08)** -* **Supports REST API v3.0 and v2.0 clients** --[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0) --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) --[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md) --[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md) --[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md) --### [**Java**](#tab/java) --* **Version 4.0.0 GA (2022-09-08)** -* **Supports REST API v3.0 and v2.0 clients** --[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) --[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav) --[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) --[**Samples**](https://github.com/Azure/azure-sdk-for-jav) --### [**JavaScript**](#tab/javascript) --* **Version 4.0.0 GA (2022-09-08)** -* **Supports REST API v3.0 and v2.0 clients** --[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer) --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) --[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md) --[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md) --[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md) --### [Python](#tab/python) --> [!NOTE] -> Python 3.7 or later is required to use this package. 
--* **Version 3.2.0 GA (2022-09-08)** -* **Supports REST API v3.0 and v2.0 clients** --[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/) --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) --[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md) --[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md) --[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md) ----#### Form Recognizer SDK beta August 2022 preview release --This release includes the following updates: --### [**C#**](#tab/csharp) --**Version 4.0.0-beta.5 (2022-08-09)** -**Supports REST API 2022-06-30-preview clients** --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09) --[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5) --[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true) --### [**Java**](#tab/java) --**Version 4.0.0-beta.6 (2022-08-10)** -**Supports REST API 2022-06-30-preview and earlier clients** --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10) -- [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer) -- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true) --### [**JavaScript**](#tab/javascript) --**Version 4.0.0-beta.6 (2022-08-09)** -**Supports REST API 2022-06-30-preview and earlier clients** -- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) -- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6) -- [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true) --### [Python](#tab/python) --> [!IMPORTANT] -> Python 3.6 is no longer supported in this release. Use Python 3.7 or later. 
--**Version 3.2.0b6 (2022-08-09)** -**Supports REST API 2022-06-30-preview and earlier clients** -- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) -- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/) -- [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/) ----### Form Recognizer SDK beta June 2022 preview release --This release includes the following updates: --### [**C#**](#tab/csharp) --**Version 4.0.0-beta.4 (2022-06-08)** --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) --[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4) --[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true) --### [**Java**](#tab/java) --**Version 4.0.0-beta.5 (2022-06-07)** --[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) -- [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar) -- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true) --### [**JavaScript**](#tab/javascript) --**Version 4.0.0-beta.4 (2022-06-07)** -- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) -- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4) -- [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true) --### [Python](#tab/python) --**Version 3.2.0b5 (2022-06-07** -- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) -- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/) -- [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true) --+Create a client object from the Form Recognizer SDK, and then call methods on that client to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice. ## Help options The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf ## Next steps >[!div class="nextstepaction"]-> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) +> [**Explore Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) > [!div class="nextstepaction"]-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) +> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) |
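Step 4 of the GA article above stops at creating a client and calling methods; here's a minimal end-to-end sketch with the GA 4.0.0 .NET client using key authentication (the endpoint, key, and document URL are placeholders):

```csharp
using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(
    new Uri("<your-endpoint>"), new AzureKeyCredential("<your-key>"));

// Run the prebuilt layout model against a publicly reachable document URL.
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed, "prebuilt-layout", new Uri("<document-url>"));
AnalyzeResult result = operation.Value;

// Walk the extracted text page by page, line by line.
foreach (DocumentPage page in result.Pages)
{
    Console.WriteLine($"Page {page.PageNumber} has {page.Lines.Count} lines.");
    foreach (DocumentLine line in page.Lines)
    {
        Console.WriteLine($"  {line.Content}");
    }
}
```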
applied-ai-services | Sdk Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-preview.md | + + Title: Form Recognizer SDKs (preview) ++description: The preview Form Recognizer software development kits (SDKs) expose Form Recognizer models, features, and capabilities that are in active development for the C#, Java, JavaScript, and Python programming languages. +++++ Last updated : 04/25/2023++recommendations: false +++<!-- markdownlint-disable MD024 --> +<!-- markdownlint-disable MD036 --> +<!-- markdownlint-disable MD001 --> +<!-- markdownlint-disable MD051 --> ++# Form Recognizer SDK (public preview) ++**This article applies to:** **Form Recognizer version 2023-02-28-preview**. ++> [!IMPORTANT] +> +> * Form Recognizer public preview releases provide early access to features that are in active development. +> * Features, approaches, and processes may change prior to General Availability (GA) based on user feedback. +> * The public preview version of the Form Recognizer client libraries defaults to service version [**Form Recognizer 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument). ++Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages. ++## Supported languages ++Form Recognizer SDK supports the following languages and platforms: ++| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support | +|:-:|:-|:-| :-| +| [.NET/C# → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0-beta.1/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)|[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| +|[Java → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0-beta.1/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1) |[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> 
[**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| +|[JavaScript → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.1.0-beta.1/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | +|[Python → 3.3.0b1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0b1/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) ++## Supported Clients ++| Language| SDK version | API version (default) | Supported clients| +| : | :--|:- | :--| +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.1.0-beta.1 (preview)| 2023-02-28-preview|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** | +|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** | +| **Python**| 3.3.0bx (preview) | 2023-02-28-preview | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**| +| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**| +| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** | +| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** | ++## Use Form Recognizer SDK in your applications ++The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API, allowing you to use those APIs easily within your programming language of choice. 
Here's how you use the Form Recognizer SDK for your preferred language: ++### 1. Install the SDK client library ++### [C#/.NET](#tab/csharp) ++```dotnetcli +dotnet add package Azure.AI.FormRecognizer --version 4.1.0-beta.1 +``` ++```powershell +Install-Package Azure.AI.FormRecognizer -Version 4.1.0-beta.1 +``` ++### [Java](#tab/java) ++```xml + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-ai-formrecognizer</artifactId> + <version>4.1.0-beta.1</version> + </dependency> +``` ++```kotlin +implementation("com.azure:azure-ai-formrecognizer:4.1.0-beta.1") +``` ++### [JavaScript](#tab/javascript) ++```javascript +npm i @azure/ai-form-recognizer@4.1.0-beta.1 +``` ++### [Python](#tab/python) ++```python +pip install azure-ai-formrecognizer==3.3.0b1 +``` ++++### 2. Import the SDK client library into your application ++### [C#/.NET](#tab/csharp) ++```csharp +using Azure; +using Azure.AI.FormRecognizer.DocumentAnalysis; +``` ++### [Java](#tab/java) ++```java +import com.azure.ai.formrecognizer.*; +import com.azure.ai.formrecognizer.models.*; +import com.azure.ai.formrecognizer.DocumentAnalysisClient.*; ++import com.azure.core.credential.AzureKeyCredential; +``` ++### [JavaScript](#tab/javascript) ++```javascript +const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer"); +``` ++### [Python](#tab/python) ++```python +from azure.ai.formrecognizer import DocumentAnalysisClient +from azure.core.credentials import AzureKeyCredential +``` ++++### 3. Set up authentication ++There are two supported methods for authentication: ++* Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials. ++* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md). ++#### Use your API key ++Here's where to find your Form Recognizer API key in the Azure portal: +++### [C#/.NET](#tab/csharp) ++```csharp ++//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance +string key = "<your-key>"; +string endpoint = "<your-endpoint>"; +AzureKeyCredential credential = new AzureKeyCredential(key); +DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential); +``` ++### [Java](#tab/java) ++```java ++// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable +DocumentAnalysisClient client = new DocumentAnalysisClientBuilder() + .credential(new AzureKeyCredential("<your-key>")) + .endpoint("<your-endpoint>") + .buildClient(); +``` ++### [JavaScript](#tab/javascript) ++```javascript ++// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable +async function main() { + const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>")); + // call analysis methods on `client` here (see step 4) +} ++main().catch((error) => console.error(error)); +``` ++### [Python](#tab/python) ++```python ++# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable +document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>")) +``` ++++#### Use an Azure Active Directory (Azure AD) token credential ++> [!NOTE] +> Regional endpoints do not support AAD authentication. 
Create a [custom subdomain](../../cognitive-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication. ++Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios. ++### [C#/.NET](#tab/csharp) ++Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications: ++1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme): ++ ```console + dotnet add package Azure.Identity + ``` ++ ```powershell + Install-Package Azure.Identity + ``` ++1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). ++1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. ++1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. ++1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**: ++ ```csharp + string endpoint = "<your-endpoint>"; + var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential()); + ``` ++For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client) ++### [Java](#tab/java) ++Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications: ++1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true): ++ ```xml + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-identity</artifactId> + <version>1.5.3</version> + </dependency> + ``` ++1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). ++1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. ++1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. ++1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable: ++ ```java + TokenCredential credential = new DefaultAzureCredentialBuilder().build(); + DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder() + .endpoint("{your-endpoint}") + .credential(credential) + .buildClient(); + ``` ++For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client) ++### [JavaScript](#tab/javascript) ++Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications: ++1. 
Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true): ++ ```javascript + npm install @azure/identity + ``` ++1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). ++1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. ++1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. ++1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**: ++ ```javascript + const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer"); + const { DefaultAzureCredential } = require("@azure/identity"); ++ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential()); + ``` ++For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client). ++### [Python](#tab/python) ++Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications: ++1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true): ++ ```python + pip install azure-identity + ``` ++1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal). ++1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal. ++1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively. ++1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**: ++ ```python + from azure.identity import DefaultAzureCredential + from azure.ai.formrecognizer import DocumentAnalysisClient ++ credential = DefaultAzureCredential() + document_analysis_client = DocumentAnalysisClient( + endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/", + credential=credential + ) + ``` ++For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client) ++++### 4. Build your application ++Create a client object from the Form Recognizer SDK, and then call methods on that client to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice. ++## Help options ++The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure Form Recognizer and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. 
To make sure that we see your question, tag it with **`azure-form-recognizer`**. ++## Next steps ++> [!div class="nextstepaction"] +> [**Explore Form Recognizer REST API 2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) |
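The preview article's step 4 likewise ends at client creation; here's a minimal sketch under the same placeholder assumptions, this time analyzing a local file with the prebuilt general document model and reading the detected key-value pairs:

```csharp
using System;
using System.IO;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(
    new Uri("<your-endpoint>"), new AzureKeyCredential("<your-key>"));

// Analyze a local file with the prebuilt general document model.
using FileStream stream = File.OpenRead("<path-to-document>");
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed, "prebuilt-document", stream);

// Print each detected key-value pair; the value side can be empty.
foreach (DocumentKeyValuePair kvp in operation.Value.KeyValuePairs)
{
    Console.WriteLine($"{kvp.Key.Content}: {kvp.Value?.Content ?? "<none>"}");
}
```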
azure-app-configuration | Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md | Title: Azure App Configuration resiliency and disaster recovery -description: Lean how to implement resiliency and disaster recovery with Azure App Configuration. --+description: Learn how to implement resiliency and disaster recovery with Azure App Configuration. ++ Previously updated : 07/09/2020 Last updated : 04/20/2023 # Resiliency and disaster recovery -> [!IMPORTANT] -> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability. +Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region, and failover between regions isn't available by default. However, Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. Utilizing geo-replication is the recommended solution for high availability. -Currently, Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region. App Configuration doesn't offer automatic failover to another region. This article provides general guidance on how you can use multiple configuration stores across Azure regions to increase the geo-resiliency of your application. +This article provides general guidance on how you can use multiple replicas across Azure regions to increase the geo-resiliency of your application. ## High-availability architecture -To realize cross-region redundancy, you need to create multiple App Configuration stores in different regions. With this setup, your application has at least one additional configuration store to fall back on if the primary store becomes inaccessible. The following diagram illustrates the topology between your application and its primary and secondary configuration stores: +The original App Configuration store is also considered a replica, so to realize cross-region redundancy, you need to create at least one new replica in a different region. However, you can choose to create multiple App Configuration replicas in different regions based on your requirements. You can then use these replicas in your application in your preferred order. With this setup, your application has at least one additional replica to fall back on if the primary replica becomes inaccessible. - +The following diagram illustrates the topology between your application and two replicas: -Your application loads its configuration from both the primary and secondary stores in parallel. Doing this increases the chance of successfully getting the configuration data. You're responsible for keeping the data in both stores in sync. The following sections explain how you can build geo-resiliency into your application. -## Failover between configuration stores +Your application loads its configuration from the most preferred replica. 
If that replica isn't available, configuration is loaded from the next replica in your preference order. This increases the chance of successfully getting the configuration data. The data in all replicas is always in sync. -## Failover between configuration stores -Technically, your application isn't executing a failover. It's attempting to retrieve the same set of configuration data from two App Configuration stores simultaneously. Arrange your code so that it loads from the secondary store first and then the primary store. This approach ensures that the configuration data in the primary store takes precedence whenever it's available. The following code snippet shows how you can implement this arrangement in .NET Core: +## Failover between replicas -#### [.NET Core 2.x](#tab/core2x) +If you want to leverage automatic failover between replicas, follow [these instructions](./howto-geo-replication.md#use-replicas) to set up failover using App Configuration provider libraries. This is the recommended approach for building resiliency in your application. -```csharp -public static IWebHostBuilder CreateWebHostBuilder(string[] args) => - WebHost.CreateDefaultBuilder(args) - .ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(settings["ConnectionString_SecondaryStore"], optional: true) - .AddAzureAppConfiguration(settings["ConnectionString_PrimaryStore"], optional: true); - }) - .UseStartup<Startup>(); - -``` --#### [.NET Core 3.x](#tab/core3x) --```csharp -public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - webBuilder.ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(settings["ConnectionString_SecondaryStore"], optional: true) - .AddAzureAppConfiguration(settings["ConnectionString_PrimaryStore"], optional: true); - }) - .UseStartup<Startup>()); -``` ---Notice the `optional` parameter passed into the `AddAzureAppConfiguration` function. When set to `true`, this parameter prevents the application from failing to continue if the function can't load configuration data. --## Synchronization between configuration stores --It's important that your geo-redundant configuration stores all have the same set of data. There are two ways to achieve this: --### Backup manually using the Export function --You can use the **Export** function in App Configuration to copy data from the primary store to the secondary on demand. This function is available through both the Azure portal and the CLI. --From the Azure portal, you can push a change to another configuration store by following these steps. --1. Go to the **Import/Export** tab, and select **Export** > **App Configuration** > **Target** > **Select a resource**. --1. In the new blade that opens, specify the subscription, resource group, and resource name of your secondary store, then select **Apply**. --1. The UI is updated so that you can choose what configuration data you want to export to your secondary store. You can leave the default time value as is and set both **From label** and **Label** to the same value. Select **Apply**. Repeat this for all the labels in your primary store. --1. Repeat the previous steps whenever your configuration changes. --The export process can also be achieved using the Azure CLI. 
The following command shows how to export all configurations from the primary store to the secondary: --```azurecli - az appconfig kv export --destination appconfig --name {PrimaryStore} --dest-name {SecondaryStore} --label * --preserve-labels -y -``` --### Backup automatically using Azure Functions --The backup process can be automated by using Azure Functions. It leverages the integration with Azure Event Grid in App Configuration. Once set up, App Configuration will publish events to Event Grid for any changes made to key-values in a configuration store. Thus, an Azure Functions app can listen to these events and backup data accordingly. For details, see the tutorial on [how to backup App Configuration stores automatically](./howto-backup-config-store.md). +If the App Configuration provider libraries don't meet your requirements, you can still implement your own failover strategy. When geo-replication is enabled and one replica isn't accessible, you can let your application fail over to another replica to access your configuration. ## Next steps |
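For readers wiring up the provider-library failover described above, here's a minimal ASP.NET Core sketch. It assumes a provider package version that supports replica failover (Microsoft.Extensions.Configuration.AzureAppConfiguration 6.x or later exposes a `Connect` overload accepting multiple endpoints) and uses placeholder endpoint names:

```csharp
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddAzureAppConfiguration(options =>
{
    // List replica endpoints in order of preference; the provider falls back
    // to the next endpoint when the preferred one is unavailable.
    options.Connect(
        new[]
        {
            new Uri("https://<store-replica-1>.azconfig.io"),
            new Uri("https://<store-replica-2>.azconfig.io")
        },
        new DefaultAzureCredential());
});

var app = builder.Build();
app.Run();
```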
azure-app-configuration | Howto Backup Config Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md | - Title: Automatically back up key-values from Azure App Configuration stores -description: Learn how to set up an automatic backup of key-values between App Configuration stores. ----- Previously updated : 08/24/2022----#Customer intent: I want to back up all key-values to a secondary App Configuration store and keep them up to date with any changes in the primary store. ---# Back up App Configuration stores automatically --> [!IMPORTANT] -> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability. --In this article, you'll learn how to set up an automatic backup of key-values from a primary Azure App Configuration store to a secondary store. The automatic backup uses the integration of Azure Event Grid with App Configuration. --After you set up the automatic backup, App Configuration will publish events to Azure Event Grid for any changes made to key-values in a configuration store. Event Grid supports various Azure services from which users can subscribe to the events emitted whenever key-values are created, updated, or deleted. --## Overview --In this article, you'll use Azure Queue storage to receive events from Event Grid and use a timer-trigger of Azure Functions to process events in the queue in batches. --When a function is triggered, based on the events, it will fetch the latest values of the keys that have changed from the primary App Configuration store and update the secondary store accordingly. This setup helps combine multiple changes that occur in a short period in one backup operation, which avoids excessive requests made to your App Configuration stores. -- --## Resource provisioning --The motivation behind backing up App Configuration stores is to use multiple configuration stores across different Azure regions to increase the geo-resiliency of your application. To achieve this, your primary and secondary stores should be in different Azure regions. All other resources created in this tutorial can be provisioned in any region of your choice. This is because if primary region is down, there will be nothing new to back up until the primary region is accessible again. --In this tutorial, you'll create a secondary store in the `centralus` region and all other resources in the `westus` region. ---## Prerequisites --- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the Azure development workload.--- [.NET Core SDK](https://dotnet.microsoft.com/download).---- This tutorial requires version 2.3.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.--## Create a resource group --The resource group is a logical collection into which Azure resources are deployed and managed. --Create a resource group by using the [az group create](/cli/azure/group) command. --The following example creates a resource group named `<resource_group_name>` in the `westus` location. Replace `<resource_group_name>` with a unique name for your resource group. 
--```azurecli-interactive -resourceGroupName="<resource_group_name>" -az group create --name $resourceGroupName --location westus -``` --## Create App Configuration stores --Create your primary and secondary App Configuration stores in different regions. -Replace `<primary_appconfig_name>` and `<secondary_appconfig_name>` with unique names for your configuration stores. Each store name must be unique because it's used as a DNS name. --```azurecli-interactive -primaryAppConfigName="<primary_appconfig_name>" -secondaryAppConfigName="<secondary_appconfig_name>" -az appconfig create \ - --name $primaryAppConfigName \ - --location westus \ - --resource-group $resourceGroupName\ - --sku standard --az appconfig create \ - --name $secondaryAppConfigName \ - --location centralus \ - --resource-group $resourceGroupName\ - --sku standard -``` --## Create a queue --Create a storage account and a queue for receiving the events published by Event Grid. --```azurecli-interactive -storageName="<unique_storage_name>" -queueName="<queue_name>" -az storage account create -n $storageName -g $resourceGroupName -l westus --sku Standard_LRS -az storage queue create --name $queueName --account-name $storageName --auth-mode login -``` ---## Subscribe to your App Configuration store events --You subscribe to these two events from the primary App Configuration store: --- `Microsoft.AppConfiguration.KeyValueModified`-- `Microsoft.AppConfiguration.KeyValueDeleted`--The following command creates an Event Grid subscription for the two events sent to your queue. The endpoint type is set to `storagequeue`, and the endpoint is set to the queue ID. Replace `<event_subscription_name>` with the name of your choice for the event subscription. --```azurecli-interactive -storageId=$(az storage account show --name $storageName --resource-group $resourceGroupName --query id --output tsv) -queueId="$storageId/queueservices/default/queues/$queueName" -appconfigId=$(az appconfig show --name $primaryAppConfigName --resource-group $resourceGroupName --query id --output tsv) -eventSubscriptionName="<event_subscription_name>" -az eventgrid event-subscription create \ - --source-resource-id $appconfigId \ - --name $eventSubscriptionName \ - --endpoint-type storagequeue \ - --endpoint $queueId \ - --included-event-types Microsoft.AppConfiguration.KeyValueModified Microsoft.AppConfiguration.KeyValueDeleted -``` --## Create functions for handling events from Queue storage --### Set up with ready-to-use functions --In this article, you'll work with C# functions that have the following properties: --- Runtime stack .NET Core 3.1-- Azure Functions runtime version 3.x-- Function triggered by timer every 10 minutes--To make it easier for you to start backing up your data, we've [tested and published a function](https://github.com/Azure/AppConfiguration/tree/master/examples/ConfigurationStoreBackup) that you can use without making any changes to the code. Download the project files and [publish them to your own function app from Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). --> [!IMPORTANT] -> Don't make any changes to the environment variables in the code you've downloaded. You'll create the required app settings in the next section. -> --### Build your own function --If the sample code provided earlier doesn't meet your requirements, you can also create your own function. 
Your function must be able to perform the following tasks in order to complete the backup: --- Periodically read contents of your queue to see if it contains any notifications from Event Grid. Refer to the [Storage Queue SDK](../storage/queues/storage-quickstart-queues-dotnet.md) for implementation details.-- If your queue contains [event notifications from Event Grid](./concept-app-configuration-event.md#event-schema), extract all the unique `<key, label>` information from event messages. The combination of key and label is the unique identifier for key-value changes in the primary store.-- Read all settings from the primary store. Update only those settings in the secondary store that have a corresponding event in the queue. Delete all settings from the secondary store that were present in the queue but not in the primary store. You can use the [App Configuration SDK](https://github.com/Azure/AppConfiguration#sdks) to access your configuration stores programmatically.-- Delete messages from the queue if there were no exceptions during processing.-- Implement error handling according to your needs. Refer to the preceding code sample to see some common exceptions that you might want to handle.--To learn more about creating a function, see: [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) and [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md). --> [!IMPORTANT] -> Use your best judgement to choose the timer schedule based on how often you make changes to your primary configuration store. Running the function too often might end up throttling requests for your store. -> --## Create function app settings --If you're using a function that we've provided, you need the following app settings in your function app: --- `PrimaryStoreEndpoint`: Endpoint for the primary App Configuration store. An example is `https://{primary_appconfig_name}.azconfig.io`.-- `SecondaryStoreEndpoint`: Endpoint for the secondary App Configuration store. An example is `https://{secondary_appconfig_name}.azconfig.io`.-- `StorageQueueUri`: Queue URI. An example is `https://{unique_storage_name}.queue.core.windows.net/{queue_name}`.--The following command creates the required app settings in your function app. Replace `<function_app_name>` with the name of your function app. --```azurecli-interactive -functionAppName="<function_app_name>" -primaryStoreEndpoint="https://$primaryAppConfigName.azconfig.io" -secondaryStoreEndpoint="https://$secondaryAppConfigName.azconfig.io" -storageQueueUri="https://$storageName.queue.core.windows.net/$queueName" -az functionapp config appsettings set --name $functionAppName --resource-group $resourceGroupName --settings StorageQueueUri=$storageQueueUri PrimaryStoreEndpoint=$primaryStoreEndpoint SecondaryStoreEndpoint=$secondaryStoreEndpoint -``` --## Grant access to the managed identity of the function app --Use the following command or the [Azure portal](../app-service/overview-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned managed identity for your function app. --```azurecli-interactive -az functionapp identity assign --name $functionAppName --resource-group $resourceGroupName -``` --> [!NOTE] -> To perform the required resource creation and role management, your account needs `Owner` permissions at the appropriate scope (your subscription or resource group). 
If you need assistance with role assignment, learn [how to assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). --Use the following commands or the [Azure portal](./howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration) to grant the managed identity of your function app access to your App Configuration stores. Use these roles: --- Assign the `App Configuration Data Reader` role in the primary App Configuration store.-- Assign the `App Configuration Data Owner` role in the secondary App Configuration store.--```azurecli-interactive -functionPrincipalId=$(az functionapp identity show --name $functionAppName --resource-group $resourceGroupName --query principalId --output tsv) -primaryAppConfigId=$(az appconfig show -n $primaryAppConfigName --query id --output tsv) -secondaryAppConfigId=$(az appconfig show -n $secondaryAppConfigName --query id --output tsv) --az role assignment create \ - --role "App Configuration Data Reader" \ - --assignee $functionPrincipalId \ - --scope $primaryAppConfigId --az role assignment create \ - --role "App Configuration Data Owner" \ - --assignee $functionPrincipalId \ - --scope $secondaryAppConfigId -``` --Use the following command or the [Azure portal](../storage/blobs/assign-azure-role-data-access.md#assign-an-azure-role) to grant the managed identity of your function app access to your queue. Assign the `Storage Queue Data Contributor` role in the queue. --```azurecli-interactive -az role assignment create \ - --role "Storage Queue Data Contributor" \ - --assignee $functionPrincipalId \ - --scope $queueId -``` --## Trigger an App Configuration event --To test that everything works, you can create, update, or delete a key-value from the primary store. You should automatically see this change in the secondary store a few seconds after the timer triggers Azure Functions. --```azurecli-interactive -az appconfig kv set --name $primaryAppConfigName --key Foo --value Bar --yes -``` --You've triggered the event. In a few moments, Event Grid will send the event notification to your queue. *After the next scheduled run of your function*, view configuration settings in your secondary store to see if it contains the updated key-value from the primary store. --> [!NOTE] -> You can [trigger your function manually](../azure-functions/functions-manually-run-non-http.md) during the testing and troubleshooting without waiting for the scheduled timer-trigger. --After you make sure that the backup function ran successfully, you can see that the key is now present in your secondary store. --```azurecli-interactive -az appconfig kv show --name $secondaryAppConfigName --key Foo -``` --```json -{ - "contentType": null, - "etag": "eVgJugUUuopXnbCxg0dB63PDEJY", - "key": "Foo", - "label": null, - "lastModified": "2020-04-27T23:25:08+00:00", - "locked": false, - "tags": {}, - "value": "Bar" -} -``` --## Troubleshooting --If you don't see the new setting in your secondary store: --- Make sure the backup function was triggered *after* you created the setting in your primary store.-- It's possible that Event Grid couldn't send the event notification to the queue in time. Check if your queue still contains the event notification from your primary store. 
If it does, trigger the backup function again.-- Check [Azure Functions logs](../azure-functions/functions-create-scheduled-function.md#test-the-function) for any errors or warnings.-- Use the [Azure portal](../azure-functions/functions-how-to-use-azure-function-app-settings.md#get-started-in-the-azure-portal) to ensure that the Azure function app contains correct values for the application settings that the Azure function is trying to read.-- You can also set up monitoring and alerting for Azure Functions by using [Azure Application Insights](../azure-functions/functions-monitoring.md?tabs=cmd).--## Clean up resources --If you plan to continue working with this App Configuration and event subscription, you might want to leave these resources in place. If you don't plan to continue, use the [az group delete](/cli/azure/group#az-group-delete) command, which deletes the resource group and the resources in it. --```azurecli-interactive -az group delete --name $resourceGroupName -``` --## Next steps --Now that you know how to set up automatic backup of your key-values, learn more about how you can increase the geo-resiliency of your application: --> [!div class="nextstepaction"] -> [Resiliency and disaster recovery](concept-disaster-recovery.md) |
azure-app-configuration | Howto Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md | You may encounter the following error messages when importing or exporting App C ## Next steps > [!div class="nextstepaction"]-> [Back up App Configuration stores automatically](./howto-backup-config-store.md) +> [Integrate with a CI/CD pipeline](./integrate-ci-cd-pipeline.md) |
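For a concrete sense of the import/export flows this article covers, here's a minimal Azure CLI sketch; the store names and file path are placeholders.

```azurecli-interactive
# Export all key-values from one store to a local JSON file, then import them into another.
az appconfig kv export --name <STORE-NAME> --destination file --path ./appconfig.json --format json --yes
az appconfig kv import --name <OTHER-STORE-NAME> --source file --path ./appconfig.json --format json --yes
```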
azure-app-configuration | Pull Key Value Devops Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md | The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemNam ## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- App Configuration store - create one for free in the [Azure portal](https://portal.azure.com).+- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store) - Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881) - Azure App Configuration task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task). -- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents. +- [Azure Pipelines agent version 2.206.1](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.206.1) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents. ## Create a service connection |
azure-app-configuration | Push Kv Devops Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md | The [Azure App Configuration Push](https://marketplace.visualstudio.com/items?it ## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- App Configuration resource - create one for free in the [Azure portal](https://portal.azure.com).+- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store) - Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881) - Azure App Configuration Push task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task-push).-- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents.+- [Azure Pipelines agent version 2.206.1](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.206.1) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents. ## Create a service connection |
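For self-hosted agents, a quick sanity check of the Node.js requirement in both task prerequisites might look like the following; the installed agent's own version is visible under the agent pool's capabilities in Azure DevOps.

```bash
# On the self-hosted agent machine, confirm the Node.js runtime meets the minimum.
node --version   # expect v16.x or later
```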
azure-arc | Azcmagent Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-check.md | + + Title: azcmagent check CLI reference +description: Syntax for the azcmagent check command line tool + Last updated : 04/20/2023+++# azcmagent check ++Run a series of network connectivity checks to see if the agent can successfully communicate with required network endpoints. The command outputs a table showing connectivity test results for each required endpoint, including whether the agent used a private endpoint and/or proxy server. ++## Usage ++``` +azcmagent check [flags] +``` ++## Examples ++Check connectivity with the agent's currently configured cloud and region. ++``` +azcmagent check +``` ++Check connectivity with the East US region using public endpoints. ++``` +azcmagent check --location "eastus" +``` ++Check connectivity with the Central India region using private endpoints. ++``` +azcmagent check --location "centralindia" --enable-pls-check +``` ++## Flags ++`--cloud` ++Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is "AzureCloud". ++Supported values: ++* AzureCloud (public regions) +* AzureUSGovernment (Azure US Government regions) +* AzureChinaCloud (Azure China regions) ++`-l`, `--location` ++The Azure region to check connectivity with. If the machine is already connected to Azure Arc, the current region is selected as the default. ++Sample value: westeurope ++`-p`, `--enable-pls-check` ++Checks if supported Azure Arc endpoints resolve to private IP addresses. This flag should be used when you intend to connect the server to Azure using an Azure Arc private link scope. + |
azure-arc | Azcmagent Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-config.md | + + Title: azcmagent config CLI reference +description: Syntax for the azcmagent config command line tool + Last updated : 04/20/2023+++# azcmagent config ++Configure settings for the Azure connected machine agent. Configurations are stored locally and are unique to each machine. Available configuration properties vary by agent version. Use [azcmagent config info](#azcmagent-config-info) to see all available configuration properties and supported values for the currently installed agent. ++## Commands ++| Command | Purpose | +| - | - | +| [azcmagent config clear](#azcmagent-config-clear) | Clear a configuration property's value | +| [azcmagent config get](#azcmagent-config-get) | Get a configuration property's value | +| [azcmagent config info](#azcmagent-config-info) | Describe all available configuration properties and supported values | +| [azcmagent config list](#azcmagent-config-list) | List all configuration properties and values | +| [azcmagent config set](#azcmagent-config-set) | Set a value for a configuration property | ++## azcmagent config clear ++Clear a configuration property's value and reset it to its default state. ++### Usage ++``` +azcmagent config clear [property] [flags] +``` ++### Examples ++Clear the proxy server URL property. ++``` +azcmagent config clear proxy.url +``` ++### Flags +++## azcmagent config get ++Get a configuration property's value. ++### Usage ++``` +azcmagent config get [property] [flags] +``` ++### Examples ++Get the agent mode. ++``` +azcmagent config get config.mode +``` ++### Flags +++## azcmagent config info ++Describes available configuration properties and supported values. When run without specifying a property, the command describes all available properties and their supported values. ++### Usage ++``` +azcmagent config info [property] [flags] +``` ++### Examples ++Describe all available configuration properties and supported values. ++``` +azcmagent config info +``` ++Learn more about the extensions allowlist property and its supported values. ++``` +azcmagent config info extensions.allowlist +``` ++### Flags +++## azcmagent config list ++Lists all configuration properties and their current values. ++### Usage ++``` +azcmagent config list [flags] +``` ++### Examples ++List the current agent configuration. ++``` +azcmagent config list +``` ++### Flags +++## azcmagent config set ++Set a value for a configuration property. ++### Usage ++``` +azcmagent config set [property] [value] [flags] +``` ++### Examples ++Configure the agent to use a proxy server. ++``` +azcmagent config set proxy.url "http://proxy.contoso.corp:8080" +``` ++Append an extension to the extension allowlist. ++``` +azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" --add +``` ++### Flags ++`-a`, `--add` ++Append the value to the list of existing values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used with the `--remove` flag. ++`-r`, `--remove` ++Remove the specified value from the list, retaining all other values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used in conjunction with the `--add` flag. + |
azure-arc | Azcmagent Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-connect.md | + + Title: azcmagent connect CLI reference +description: Syntax for the azcmagent connect command line tool + Last updated : 04/20/2023+++# azcmagent connect ++Connects the server to Azure Arc by creating a metadata representation of the server in Azure and associating the Azure connected machine agent with it. The command requires information about the tenant, subscription, and resource group where you want to represent the server in Azure and valid credentials with permissions to create Azure Arc-enabled server resources in that location. ++## Usage ++``` +azcmagent connect [authentication] --subscription-id [subscription] --resource-group [resourcegroup] --location [region] [flags] +``` ++## Examples ++Connect a server using the default login method (interactive browser or device code). ++``` +azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "eastus" +``` ++Connect a server using a service principal. ++``` +azcmagent connect --subscription "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "australiaeast" --service-principal-id "ID" --service-principal-secret "SECRET" --tenant-id "TENANT" +``` ++Connect a server using a private endpoint and device code login method. ++``` +azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "koreacentral" --use-device-code --private-link-scope "/subscriptions/.../Microsoft.HybridCompute/privateLinkScopes/ScopeName" +``` ++## Authentication options ++There are 4 ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags. ++### Interactive browser login (Windows-only) ++This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines. ++No flag is required to use the interactive browser login. ++### Device code login ++This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the connect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow. ++To authenticate with a device code, use the `--use-device-code` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`. ++### Service principal ++Service principals allow you to authenticate non-interactively and are often used for at-scale deployments where the same script is run across multiple servers. It's recommended that you provide service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs. The service principal should also be dedicated for Arc onboarding and have as few permissions as possible, to limit the impact of a stolen credential. 
++To authenticate with a service principal, provide the service principal's application ID, secret, and tenant ID: `--service-principal-id [appid] --service-principal-secret [secret] --tenant-id [tenantid]` ++### Access token ++Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions onboarding several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Azure Active Directory client. ++To authenticate with an access token, use the `--access-token [token]` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`. ++## Flags ++`--access-token` ++Specifies the Azure Active Directory access token used to create the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options). ++`--automanage-profile` ++Resource ID of an Azure Automanage best practices profile that will be applied to the server once it's connected to Azure. ++Sample value: /providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction ++`--cloud` ++Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is "AzureCloud". ++Supported values: ++* AzureCloud (public regions) +* AzureUSGovernment (Azure US Government regions) +* AzureChinaCloud (Azure China regions) ++`--correlation-id` ++Identifies the mechanism being used to connect the server to Azure Arc. For example, scripts generated in the Azure portal include a GUID that helps Microsoft track usage of that experience. This flag is optional and only used for telemetry purposes to improve your experience. ++`--ignore-network-check` ++Instructs the agent to continue onboarding even if the network check for required endpoints fails. You should only use this option if you're sure that the network check results are incorrect. In most cases, a failed network check indicates that the Arc agent won't function correctly on the server. ++`-l`, `--location` ++The Azure region where you want to create the Azure Arc-enabled server resource. ++Sample value: westeurope ++`--private-link-scope` ++Specifies the resource ID of the Azure Arc private link scope to associate with the server. This flag is required if you're using private endpoints to connect the server to Azure. ++`-g`, `--resource-group` ++Name of the Azure resource group where you want to create the Azure Arc-enabled server resource. ++Sample value: HybridServers ++`-n`, `--resource-name` ++Name for the Azure Arc-enabled server resource. By default, the resource name is: ++* The AWS instance ID, if the server is on AWS +* The hostname for all other machines ++You can override the default name with a name of your own choosing to avoid naming conflicts. Once chosen, the name of the Azure resource can't be changed without disconnecting and re-connecting the agent. ++If you want to force AWS servers to use the hostname instead of the instance ID, pass in `$(hostname)` to have the shell evaluate the current hostname and pass that in as the new resource name. 
++Sample value: FileServer01 ++`-i`, `--service-principal-id` ++Specifies the application ID of the service principal used to create the Azure Arc-enabled server resource in Azure. Must be used with the `--service-principal-secret` and `--tenant-id` flags. For more information, see [authentication options](#authentication-options). ++`-p`, `--service-principal-secret` ++Specifies the service principal secret. Must be used with the `--service-principal-id` and `--tenant-id` flags. To avoid exposing the secret in console logs, it's recommended to pass in the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options). ++`-s`, `--subscription-id` ++The subscription name or ID where you want to create the Azure Arc-enabled server resource. ++Sample values: Production, aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee ++`--tags` ++Comma-delimited list of tags to apply to the Azure Arc-enabled server resource. Each tag should be specified in the format: TagName=TagValue. If the tag name or value contains a space, use single quotes around the name or value. ++Sample value: Datacenter=NY3,Application=SharePoint,Owner='Shared Infrastructure Services' ++`-t`, `--tenant-id` ++The tenant ID for the subscription where you want to create the Azure Arc-enabled server resource. This flag is required when authenticating with a service principal. For all other authentication methods, the home tenant of the account used to authenticate with Azure is used for the resource as well. If the tenants for the account and subscription are different (guest accounts, Lighthouse), you must specify the tenant ID to clarify the tenant where the subscription is located. ++`--use-device-code` ++Generate an Azure Active Directory device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). + |
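Putting several of the flags documented above together, a hypothetical onboarding call might look like the following; every value shown is a placeholder drawn from the sample values in this reference.

```bash
# Connect interactively, overriding the default resource name and applying tags.
azcmagent connect \
  --subscription "Production" \
  --resource-group "HybridServers" \
  --location "westeurope" \
  --resource-name "FileServer01" \
  --tags "Datacenter=NY3,Owner='Shared Infrastructure Services'"
```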
azure-arc | Azcmagent Disconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-disconnect.md | + + Title: azcmagent disconnect CLI reference +description: Syntax for the azcmagent disconnect command line tool + Last updated : 04/20/2023+++# azcmagent disconnect ++Deletes the Azure Arc-enabled server resource in the cloud and resets the configuration of the local agent. For detailed information on removing extensions and disconnecting and uninstalling the agent, see [uninstall the agent](manage-agent.md#uninstall-the-agent). ++## Usage ++``` +azcmagent disconnect [authentication] [flags] +``` ++## Examples ++Disconnect a server using the default login method (interactive browser or device code). ++``` +azcmagent disconnect +``` ++Disconnect a server using a service principal. ++``` +azcmagent disconnect --service-principal-id "ID" --service-principal-secret "SECRET" +``` ++Disconnect a server if the corresponding resource in Azure has already been deleted. ++``` +azcmagent disconnect --force-local-only +``` ++## Authentication options ++There are 4 ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags. ++> [!NOTE] +> The account used to disconnect a server must be from the same tenant as the subscription where the server is registered. ++### Interactive browser login (Windows-only) ++This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines. ++No flag is required to use the interactive browser login. ++### Device code login ++This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the disconnect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow. ++To authenticate with a device code, use the `--use-device-code` flag. ++### Service principal ++Service principals allow you to authenticate non-interactively and are often used for at-scale operations where the same script is run across multiple servers. It's recommended that you provide service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs. ++To authenticate with a service principal, provide the service principal's application ID and secret: `--service-principal-id [appid] --service-principal-secret [secret]` ++### Access token ++Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions operating on several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Azure Active Directory client. ++To authenticate with an access token, use the `--access-token [token]` flag. ++## Flags ++`--access-token` ++Specifies the Azure Active Directory access token used to delete the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options). ++`-f`, `--force-local-only` ++Disconnects the server without deleting the resource in Azure. 
Primarily used if the Azure resource has already been deleted and the local agent configuration needs to be cleaned up. ++`-i`, `--service-principal-id` ++Specifies the application ID of the service principal used to disconnect the server from Azure Arc. Must be used with the `--service-principal-secret` flag. For more information, see [authentication options](#authentication-options). ++`-p`, `--service-principal-secret` ++Specifies the service principal secret. Must be used with the `--service-principal-id` flag. To avoid exposing the secret in console logs, it's recommended to pass in the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options). ++`--use-device-code` ++Generate an Azure Active Directory device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). + |
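As a sketch of the access-token path described above, assuming the Azure CLI is available on the server and you're logged in with an account from the right tenant:

```bash
# Fetch a short-lived Azure AD token, then disconnect non-interactively.
# Run elevated (sudo on Linux, an administrator console on Windows).
token=$(az account get-access-token --query accessToken --output tsv)
sudo azcmagent disconnect --access-token "$token"
```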
azure-arc | Azcmagent Genkey | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-genkey.md | + + Title: azcmagent genkey CLI reference +description: Syntax for the azcmagent genkey command line tool + Last updated : 04/20/2023+++# azcmagent genkey ++Generates a private-public key pair that can be used to onboard a machine asynchronously. This command is used when connecting a server to an Azure Arc-enabled virtual machine offering (for example, [Azure Arc-enabled VMware vSphere VMs](../vmware-vsphere/overview.md)). You should normally use [azcmagent connect](azcmagent-connect.md) to configure the agent. ++## Usage ++``` +azcmagent genkey [flags] +``` ++## Examples ++Generate a key pair and print the public key to the console. ++``` +azcmagent genkey +``` ++## Flags + |
azure-arc | Azcmagent Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-help.md | + + Title: azcmagent help CLI reference +description: Syntax for the azcmagent help command line tool + Last updated : 04/20/2023+++# azcmagent help ++Prints usage information and a list of all available commands for the Azure Connected Machine agent CLI. For help with a particular command, use `azcmagent COMMANDNAME --help`. ++## Usage ++``` +azcmagent help [flags] +``` ++## Examples ++Show all available commands for the command line interface. ++``` +azcmagent help +``` ++## Flags + |
azure-arc | Azcmagent License | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-license.md | + + Title: azcmagent license CLI reference +description: Syntax for the azcmagent license command line tool + Last updated : 04/20/2023+++# azcmagent license ++Show the license agreement for the Azure Connected Machine agent. ++## Usage ++``` +azcmagent license [flags] +``` ++## Examples ++Show the license agreement. ++``` +azcmagent license +``` ++## Flags + |
azure-arc | Azcmagent Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-logs.md | + + Title: azcmagent logs CLI reference +description: Syntax for the azcmagent logs command line tool + Last updated : 04/20/2023+++# azcmagent logs ++Collects log files for the Azure connected machine agent and extensions into a ZIP archive. ++## Usage ++``` +azcmagent logs [flags] +``` ++## Examples ++Collect the most recent log files and store them in a ZIP archive in the current directory. ++``` +azcmagent logs +``` ++Collect all log files and store them in a specific location. ++``` +azcmagent logs --full --output "/tmp/azcmagent-logs.zip" +``` ++## Flags ++`-f`, `--full` ++Collect all log files on the system instead of just the most recent. Useful when troubleshooting older problems. ++`-o`, `--output` ++Specifies the path and name for the ZIP file. If this flag isn't specified, the ZIP is saved to the console's current directory with the name "azcmagent-_TIMESTAMP_-_COMPUTERNAME_.zip" ++Sample value: custom-logname.zip + |
azure-arc | Azcmagent Show | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-show.md | + + Title: azcmagent show CLI reference +description: Syntax for the azcmagent show command line tool + Last updated : 04/20/2023+++# azcmagent show ++Displays the current state of the Azure Connected Machine agent, including whether or not it's connected to Azure, the Azure resource information, and the status of dependent services. ++## Usage ++``` +azcmagent show [flags] +``` ++## Examples ++Check the status of the agent. ++``` +azcmagent show +``` ++Check the status of the agent and save it in a JSON file in the current directory. ++``` +azcmagent show -j > "agent-status.json" +``` ++## Flags ++`--os` ++Outputs additional information about the operating system. + |
azure-arc | Azcmagent Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-version.md | + + Title: azcmagent version CLI reference +description: Syntax for the azcmagent version command line tool + Last updated : 04/20/2023+++# azcmagent version ++Shows the version of the currently installed agent. ++## Usage ++``` +azcmagent version [flags] +``` ++## Examples ++Show the agent version. ++``` +azcmagent version +``` ++## Flags + |
azure-arc | Azcmagent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent.md | + + Title: azcmagent CLI reference +description: Reference documentation for the Azure Connected Machine agent command line tool + Last updated : 04/20/2023+++# azcmagent CLI reference ++The Azure Connected Machine agent command line tool, azcmagent, helps you configure, manage, and troubleshoot a server's connection with Azure Arc. The azcmagent CLI is installed with the Azure Connected Machine agent and controls actions specific to the server where it's running. Once the server is connected to Azure Arc, you can use the [Azure CLI](/cli/azure/connectedmachine) or [Azure PowerShell](/powershell/module/az.connectedmachine/) module to enable extensions, manage tags, and perform other operations on the server resource. ++Unless otherwise specified, the command syntax and flags represent available options in the most recent release of the Azure Connected Machine agent. For more information, see [What's new with the Azure Arc-enabled servers agent](agent-release-notes.md). ++## Commands ++| Command | Purpose | +| - | - | +| [azcmagent check](azcmagent-check.md) | Run network connectivity checks for Azure Arc endpoints | +| [azcmagent config](azcmagent-config.md) | Manage agent settings | +| [azcmagent connect](azcmagent-connect.md) | Connect the server to Azure Arc | +| [azcmagent disconnect](azcmagent-disconnect.md) | Disconnect the server from Azure Arc | +| [azcmagent genkey](azcmagent-genkey.md) | Generate a public-private key pair for asynchronous onboarding | +| [azcmagent help](azcmagent-help.md) | Get help for commands | +| [azcmagent license](azcmagent-license.md) | Display the end-user license agreement | +| [azcmagent logs](azcmagent-logs.md) | Collect logs to troubleshoot agent issues | +| [azcmagent show](azcmagent-show.md) | Display the agent status | +| [azcmagent version](azcmagent-version.md) | Display the agent version | ++## Frequently asked questions ++### How can I install the azcmagent CLI? ++The azcmagent CLI is bundled with the Azure Connected Machine agent. Review your [deployment options](deployment-options.md) for Azure Arc to learn how to install and configure the agent. ++### Where is the CLI installed? ++On Windows operating systems, the CLI is installed at `%PROGRAMFILES%\AzureConnectedMachineAgent\azcmagent.exe`. This path is automatically added to the system PATH variable during the installation process. You may need to close and reopen your console to refresh the PATH variable and be able to run `azcmagent` without specifying the full path. ++On Linux operating systems, the CLI is installed at `/opt/azcmagent/bin/azcmagent` ++### What's the difference between the azcmagent CLI and the Azure CLI for Azure Arc-enabled servers? ++The azcmagent CLI is used to configure the local agent. It's responsible for connecting the agent to Azure, disconnecting it, and configuring local settings like proxy URLs and security features. ++The Azure CLI and other management experiences are used to interact with the Azure Arc resource in Azure once the agent is connected. These tools help you manage extensions, move the resource to another subscription or resource group, and change certain settings of the Arc server remotely. |
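To make that division of labor concrete, a hypothetical follow-up with the Azure CLI, after the agent is connected, might be; the machine and resource group names are placeholders.

```azurecli-interactive
# Manage the Arc resource from Azure; assumes the "connectedmachine" CLI extension.
az extension add --name connectedmachine
az connectedmachine show --name "FileServer01" --resource-group "HybridServers"
```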
azure-functions | Develop Python Worker Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/develop-python-worker-extensions.md | Title: Develop Python worker extensions for Azure Functions description: Learn how to create and publish worker extensions that let you inject middleware behavior into Python functions running in Azure. Previously updated : 6/1/2021 Last updated : 04/13/2023 # Develop Python worker extensions for Azure Functions -Azure Functions lets you integrate custom behaviors as part of Python function execution. This feature enables you to create business logic that customers can easily use in their own function apps. To learn more, see the [Python developer reference](functions-reference-python.md#python-worker-extensions). +Azure Functions lets you integrate custom behaviors as part of Python function execution. This feature enables you to create business logic that customers can easily use in their own function apps. To learn more, see the [Python developer reference](functions-reference-python.md#python-worker-extensions). Worker extensions are supported in both the v1 and v2 Python programming models. In this tutorial, you'll learn how to: > [!div class="checklist"] In this tutorial, you'll learn how to: Before you start, you must meet these requirements: -* [Python 3.6.x or above](https://www.python.org/downloads/release/python-374/). To check the full list of supported Python versions in Azure Functions, see the [Python developer guide](functions-reference-python.md#python-version). +* [Python 3.7 or above](https://www.python.org/downloads). To check the full list of supported Python versions in Azure Functions, see the [Python developer guide](functions-reference-python.md#python-version). -* The [Azure Functions Core Tools](functions-run-local.md#v2), version 3.0.3568 or later. +* The [Azure Functions Core Tools](functions-run-local.md#v2), version 4.0.5095 or later, which supports using the extension with the [v2 Python programming model](./functions-reference-python.md). Check your version with `func --version`. * [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). The folder for your extension project should be like the following structure: | **.venv/** | (Optional) Contains a Python virtual environment used for local development. | | **python_worker_extension/** | Contains the source code of the Python worker extension. This folder contains the main Python module to be published into PyPI. | | **setup.py** | Contains the metadata of the Python worker extension package. |-| **readme.md** | (Optional) Contains the instruction and usage of your extension. This content is displayed as the description in the home page in your PyPI project. | +| **readme.md** | Contains the instructions and usage of your extension. This content is displayed as the description on the home page of your PyPI project. | ### Configure project metadata The `pre_invocation_app_level` method is called by the Python worker before the Similarly, the `post_invocation_app_level` is called after function execution. This example calculates the elapsed time based on the start time and current time. It also overwrites the return value of the HTTP response. +### Create a readme.md ++Create a readme.md file in the root of your extension project. This file contains the instructions and usage of your extension. 
The readme.md content is displayed as the description on the home page of your PyPI project. ++```markdown +# Python Worker Extension Timer ++In this file, tell your customers when they need to call `Extension.configure()`. ++The readme should also document the extension capabilities, possible configuration, +and usage of your extension. +``` + ## Consume your extension locally Now that you've created an extension, you can use it in an app project to verify it works as intended. Now that you've created an extension, you can use it in an app project to verify pip install -e <PYTHON_WORKER_EXTENSION_ROOT> ``` - In this example, replace `<PYTHON_WORKER_EXTENSION_ROOT>` with the file location of your extension project. + In this example, replace `<PYTHON_WORKER_EXTENSION_ROOT>` with the root file location of your extension project. + When a customer uses your extension, they'll instead add your extension package location to the requirements.txt file, as in the following examples: # [PyPI](#tab/pypi) Now that you've created an extension, you can use it in an app project to verify When running in Azure, you instead add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to the [app settings in the function app](functions-how-to-use-azure-function-app-settings.md#settings). -1. Add following two lines before the `main` function in \_\_init.py\_\_: +1. Add the following two lines before the `main` function in the *\_\_init\_\_.py* file for the v1 programming model, or in the *function_app.py* file for the v2 programming model: ```python from python_worker_extension_timer import TimerExtension Now that you've created an extension, you can use it in an app project to verify 1. In the browser, send a GET request to `https://localhost:7071/api/HttpTrigger`. You should see a response like the following, with the **TimeElapsed** data for the request appended. - <pre> + ``` This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response. (TimeElapsed: 0.0009996891021728516 sec)- </pre> + ``` ## Publish your extension To publish your extension to PyPI: twine upload dist/* ``` - You may need to provide your PyPI account credentials during upload. + You may need to provide your PyPI account credentials during upload. You can also test your package upload with `twine upload -r testpypi dist/*`. For more information, see the [Twine documentation](https://twine.readthedocs.io/en/stable/). After these steps, customers can use your extension by including your package name in their requirements.txt. |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | curl --request POST -H "Content-Type:application/json" --data "{'input':'sample ``` +The administrator endpoint also provides a list of all functions (HTTP triggered and non-HTTP triggered) at `http://localhost:{port}/admin/functions/`. + When you call an administrator endpoint on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). ## <a name="publish"></a>Publish to Azure |
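For example, with the host running locally on the default port, you could list the functions from the endpoint mentioned above:

```bash
# Enumerate all functions (HTTP and non-HTTP) on a locally running Functions host.
curl http://localhost:7071/admin/functions/
```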
azure-government | Documentation Government Csp List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md | Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[People Services Inc. DBA CATCH Intelligence](https://catchintelligence.com)| |[Perizer Corp.](https://perizer.com)| |[Perrygo Consulting Group, LLC](https://perrygo.com)|-|[Perspecta](https://perspecta.com/)| |[Phacil (By Light)](https://www.bylight.com/phacil/)| |[Pharicode LLC](https://pharicode.com)| |Philistin & Heller Group, Inc.| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[TestPros Inc.](https://www.testpros.com)| |[The Cram Group LLC](https://aeccloud.com/)| |[The Informatics Application Group Inc.](https://tiag.net)|-|[The Porter Group, LLC](https://www.thepottergroupllc.com/)| |[Thundercat Technology](https://www.thundercattech.com/)| |[TIC Business Consultants, Ltd.](https://www.ticbiz.com/)| |[Tier1, Inc.](https://www.tier1inc.com)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Novetta](https://www.novetta.com)| |[PAX 8](https://www.pax8.com)| |[Permuta Technologies, Inc.](http://www.permuta.com/)|-|[Perspecta](https://perspecta.com)| |[Planet Technologies, Inc.](https://go-planet.com)| |[Progeny Systems](https://www.progeny.net/)| |[Project Hosts](https://www.projecthosts.com)| |
azure-maps | How To Use Indoor Module Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md | -> [!NOTE] -> The iOS SDK will support *dynamic styling* in a future release, coming soon! - ## Prerequisites -1. Be sure to complete the steps in the [Quickstart: Create an iOS app](quick-ios-app.md). Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`. -1. [Create a Creator resource](how-to-manage-creator.md) -1. Get a `tilesetId` by completing the [tutorial for creating Indoor maps](tutorial-creator-indoor-maps.md). You'll use this identifier to render indoor maps with the Azure Maps iOS SDK. +1. Complete the steps in the [Quickstart: Create an iOS app]. Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`. +1. A [Creator resource] +1. Get a `tilesetId` by completing the [Tutorial: Use Creator to create indoor maps]. The tileset ID is used to render indoor maps with the Azure Maps iOS SDK. ## Instantiate the indoor manager func indoorManager( ## Example -The screenshot below shows the above code displaying an indoor map. +The following screenshot shows the above code displaying an indoor map.  ## Additional information -- [Creator for indoor maps](creator-indoor-maps.md)-- [Drawing package requirements](drawing-requirements.md)+- [Creator for indoor maps] +- [Drawing package requirements] ++[Quickstart: Create an iOS app]: quick-ios-app.md +[Creator resource]: how-to-manage-creator.md +[Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md +[Creator for indoor maps]: creator-indoor-maps.md +[Drawing package requirements]: drawing-requirements.md |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 03/22/2023 Last updated : 04/24/2023 # Application Insights overview To understand the number of Application Insights resources required to cover you ## How do I use Application Insights? -Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application). +Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application). The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights. Leave product feedback for the engineering team in the [Feedback Community](http - [Autoinstrumentation overview](codeless-overview.md) - [Overview dashboard](overview-dashboard.md) - [Availability overview](availability-overview.md)-- [Application Map](app-map.md)+- [Application Map](app-map.md) |
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | description: Monitor ASP.NET Core web applications for availability, performance ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/24/2023 # Application Insights for ASP.NET Core applications |
azure-monitor | Asp Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md | Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 03/22/2023 Last updated : 04/24/2023 ms.devlang: csharp To add Application Insights to your ASP.NET website, you need to: ## Add Application Insights automatically -This section will guide you through automatically adding Application Insights to a template-based ASP.NET web app. From within your ASP.NET web app project in Visual Studio: +This section guides you through automatically adding Application Insights to a template-based ASP.NET web app. From within your ASP.NET web app project in Visual Studio: 1. Select **Project** > **Add Application Insights Telemetry** > **Application Insights Sdk (local)** > **Next** > **Finish** > **Close**. 2. Open the *ApplicationInsights.config* file. This section will guide you through automatically adding Application Insights to ``` 4. Select **Project** > **Manage NuGet Packages** > **Updates**. Then update each `Microsoft.ApplicationInsights` NuGet package to the latest stable release. -5. Run your application by selecting **IIS Express**. A basic ASP.NET app opens. As you browse through the pages on the site, telemetry will be sent to Application Insights. +5. Run your application by selecting **IIS Express**. A basic ASP.NET app opens. As you browse through the pages on the site, telemetry is sent to Application Insights. ## Add Application Insights manually -This section will guide you through manually adding Application Insights to a template-based ASP.NET web app. This section assumes that you're using a web app based on the standard MVC web app template for the ASP.NET Framework. +This section guides you through manually adding Application Insights to a template-based ASP.NET web app. This section assumes that you're using a web app based on the standard MVC web app template for the ASP.NET Framework. 1. Add the following NuGet packages and their dependencies to your project: This section will guide you through manually adding Application Insights to a te 2. In some cases, the *ApplicationInsights.config* file is created for you automatically. If the file is already present, skip to step 4. - If it's not created automatically, you'll need to create it yourself. In the root directory of an ASP.NET application, create a new file called *ApplicationInsights.config*. + If it's not created automatically, you need to create it yourself. In the root directory of an ASP.NET application, create a new file called *ApplicationInsights.config*. 3. Copy the following XML configuration into your newly created file: You have now successfully configured server-side application monitoring. If you The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript snippet](javascript.md#snippet-based-setup) before the closing `</head>` tag of the page's HTML. -Although it's possible to manually add the snippet to the header of each HTML page, we recommend that you instead add the snippet to a primary page. That action will inject the snippet into all pages of a site. 
+Although it's possible to manually add the snippet to the header of each HTML page, we recommend that you instead add the snippet to a primary page. That action injects the snippet into all pages of a site. For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [snippet-based setup instructions](javascript.md#snippet-based-setup) from the article about client-side JavaScript SDK configuration. |
azure-monitor | Availability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md | Title: Application Insights availability tests description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 03/22/2023 Last updated : 04/24/2023 |
azure-monitor | Java Standalone Sampling Overrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md | Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Previously updated : 11/15/2022 Last updated : 04/24/2023 ms.devlang: java |
azure-monitor | Javascript Framework Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md | You can also run custom queries to divide Application Insights data to generate ### Use React Hooks -[React Hooks](https://reactjs.org/docs/hooks-reference.html) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach. +[React Hooks](https://react.dev/reference/react) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach. #### Use React Context -The React Hooks for Application Insights are designed to use [React Context](https://reactjs.org/docs/context.html) as a containing aspect for it. To use Context, initialize Application Insights, and then import the Context object: +The React Hooks for Application Insights are designed to use [React Context](https://react.dev/learn/passing-data-deeply-with-context) as a containing aspect for it. To use Context, initialize Application Insights, and then import the Context object: ```javascript import React from "react"; When the Hook is used, a data payload can be provided to it to add more data to ### React error boundaries -[Error boundaries](https://reactjs.org/docs/error-boundaries.html) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs. +[Error boundaries](https://react.dev/reference/react/Component#catching-rendering-errors-with-an-error-boundary) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs. ```javascript import React from "react"; |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | Title: Monitor applications running on Azure Functions with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems. Previously updated : 02/09/2023 Last updated : 04/24/2023 |
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 03/22/2023 Last updated : 04/24/2023 ms.devlang: python |
azure-monitor | Sdk Support Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md | Title: Application Insights SDK support guidance description: Support guidance for Application Insights legacy and preview SDKs Previously updated : 11/15/2022 Last updated : 04/24/2023 |
azure-monitor | Tutorial Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md | description: Application Insights SDK tutorial to monitor ASP.NET Core web appli ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/24/2023 |
azure-monitor | Worker Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md | description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito ms.devlang: csharp Previously updated : 01/24/2023 Last updated : 04/24/2023 |
azure-monitor | Data Platform Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md | Azure Monitor collects metrics from the following sources. After these metrics a For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md). +## REST API +Azure Monitor provides REST APIs that allow you to get data in and out of Azure Monitor Metrics. +- **Custom metrics API** - [Custom metrics](./metrics-custom-overview.md) allow you to load your own metrics into the Azure Monitor Metrics database. Those metrics can then be used by the same analysis tools that process Azure Monitor platform metrics. +- **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md). +- **Azure Monitor Metrics Data plane REST API** - [Azure Monitor Metrics data plane API](/rest/api/monitor/metrics-data-plane/) is a high-volume API designed for customers with large volume metrics queries. It's similar to the existing standard Azure Monitor Metrics REST API, but provides the capability to retrieve metric data for up to 50 resource IDs in the same subscription and region in a single batch API call. This improves query throughput and reduces the risk of throttling. ++ ## Metrics Explorer Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](./rest-api-walkthrough.md). |
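As a small illustration of pulling platform metrics out (the Azure CLI wraps the Metrics REST API), the resource ID below is a placeholder:

```azurecli-interactive
# Retrieve recent "Percentage CPU" values for a virtual machine.
az monitor metrics list \
  --resource "/subscriptions/<SUB-ID>/resourceGroups/<RG>/providers/Microsoft.Compute/virtualMachines/<VM-NAME>" \
  --metric "Percentage CPU"
```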
azure-monitor | Prometheus Remote Write Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-active-directory.md | This step is only required if you didn't enable Azure Key Vault Provider for Sec | Value | Description | |:|:| | `<CLUSTER-NAME>` | Name of your AKS cluster |- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221103.1`<br>This is the remote write container image version. | + | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. | | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<APP-REGISTRATION -CLIENT-ID> ` | Client ID of your application | | `<TENANT-ID> ` | Tenant ID of the Azure Active Directory application | |
azure-monitor | Prometheus Remote Write Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md | This step isn't required if you're using an AKS identity since it will already h | Value | Description | |:|:| | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221103.1`<br>This is the remote write container image version. | + | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. | | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity | | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on | |
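If you prefer the CLI to the portal for looking up the `<MANAGED-IDENTITY-CLIENT-ID>` value in the table above, a sketch like the following works for a user-assigned identity; the identity name and resource group are placeholders.

```azurecli-interactive
# Print the client ID of a user-assigned managed identity.
az identity show --name "<IDENTITY-NAME>" --resource-group "<RESOURCE-GROUP>" --query clientId --output tsv
```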
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | Daily caps are typically used by organizations that are particularly cost consci When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can [create an alert rule](#alert-when-daily-cap-is-reached) to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources. ## Application Insights-You shouldn't create a daily cap for workspace-based Application Insights resources but instead create a daily cap for their workspace. You do need to create a separate daily cap for any classic Application Insights resources since their data doesn't reside in a Log Analytics workspace. +You should configure the daily cap setting for both Application Insights and Log Analytics to limit the amount of telemetry data ingested by your service. For workspace-based Application Insights resources, the effective daily cap is the minimum of the two settings. For classic Application Insights resources, only the Application Insights daily cap applies since their data doesn't reside in a Log Analytics workspace. > [!TIP] > If you're concerned about the amount of billable data collected by Application Insights, you should configure [sampling](../app/sampling.md) to tune its data volume to the level you want. Use the daily cap as a safety method in case your application unexpectedly begins to send much higher volumes of telemetry. The maximum cap for an Application Insights classic resource is 1,000 GB/day unless you request a higher maximum for a high-traffic application. When you create a resource in the Azure portal, the daily cap is set to 100 GB/day. When you create a resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production. -We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day. +> [!NOTE] +> If you are using connection strings to send data to Application Insights using [regional ingestion endpoints](../app/ip-addresses.md#outgoing-ports), then the Application Insights and Log Analytics daily cap settings are effective per region. If you are using only an instrumentation key (ikey) to send data to Application Insights using the [global ingestion endpoint](../app/ip-addresses.md#outgoing-ports), then the Application Insights daily cap setting may not be effective across regions, but the Log Analytics daily cap setting will still apply. +We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day. ## Determine your daily cap To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. 
You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md), which provides methods to analyze your workspace usage in more detail. |
azure-monitor | Snapshot Debugger App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md | |
azure-netapp-files | Application Volume Group Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-considerations.md | This article describes the requirements and considerations you need to be aware Application volume group for SAP HANA will create multiple IP addresses, up to six for larger-sized estates. Ensure that the delegated subnet has sufficient free IP addresses. It's recommended that you use a delegated subnet with a minimum of 59 IP addresses with a subnet size of /26. See [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations). +>[!IMPORTANT] +>The use of application volume group for SAP HANA for applications other than SAP HANA is not supported. Reach out to your Azure NetApp Files specialist for guidance on using Azure NetApp Files multi-volume layouts with other database applications. + ## Best practices about proximity placement groups To deploy SAP HANA volumes using the application volume group, you need to use your HANA database VMs as an anchor for a proximity placement group (PPG). It's recommended that you create an availability set per database and use the **[SAP HANA VM pinning request form](https://aka.ms/HANAPINNING)** to pin the availability set to a dedicated compute cluster. After pinning, you need to add a PPG to the availability set and then deploy all hosts of an SAP HANA database using that availability set. Doing so ensures that all virtual machines are at the same location. If the virtual machines are started, the PPG has its anchor. |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [Attach Azure NetApp Files to Azure VMware Solution VMs - Guest OS Mounts](../azure-vmware/netapp-files-with-azure-vmware-solution.md) * [Disaster Recovery with Azure NetApp Files, JetStream DR and Azure VMware Solution](../azure-vmware/deploy-disaster-recovery-using-jetstream.md#disaster-recovery-with-azure-netapp-files-jetstream-dr-and-azure-vmware-solution) * [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) - Jetstream+* [Enable App Volume Replication for Horizon VDI on Azure VMware Solution using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-migration-and/enable-app-volume-replication-for-horizon-vdi-on-azure-vmware/ba-p/3798178) ## Virtual Desktop Infrastructure solutions |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t * [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md) - [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL). + [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). The public preview of v5.1 brings the following new capabilities to AzAcSnap: * Oracle Database support Azure NetApp Files is updated regularly. This article provides a summary about t * Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available. - AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool. + AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool. * [Support for capacity pool billing tags](manage-billing-tags.md) Azure NetApp Files is updated regularly. This article provides a summary about t * [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview) - Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL). + Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). AzAcSnap leverages the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits: |
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { ## Next steps -- For a list of the Bicep date types, see [Data types](./data-types.md).+- For a list of the Bicep data types, see [Data types](./data-types.md). |
azure-resource-manager | Deploy Marketplace App Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-marketplace-app-quickstart.md | + + Title: Deploy an Azure Marketplace managed application +description: Describes how to deploy an Azure Marketplace managed application using Azure portal. +++ Last updated : 04/25/2023+++# Quickstart: Deploy an Azure Marketplace managed application ++In this quickstart, you deploy an Azure Marketplace managed application and verify the resource deployments in Azure. A Marketplace managed application publisher charges a fee to maintain the application, and during the deployment, the publisher is given permissions to your application's managed resource group. As a customer, you have limited access to the deployed resources, but can delete the managed application from your Azure subscription. ++To avoid unnecessary costs for the managed application's Azure resources, go to [clean up resources](#clean-up-resources) when you're finished. ++## Prerequisites ++An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin. ++## Find a managed application ++To get a managed application from the Azure portal, use the following steps. ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Search for _Marketplace_ and select it from the available options. Or if you've recently used **Marketplace**, select it from the list. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-marketplace.png" alt-text="Screenshot of the Azure portal home page to search for Marketplace or select it from the list of Azure services."::: ++1. On the **Marketplace** page, search for _Microsoft community training_. +1. Select **Microsoft Community Training (Preview)**. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-marketplace-app.png" alt-text="Screenshot of the Azure Marketplace that shows the managed application to select for deployment."::: ++1. Select the **Basic** plan and then select **Create**. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-plan.png" alt-text="Screenshot that shows the Basic plan is selected and the create button is highlighted."::: ++## Deploy the managed application ++1. On the **Basics** tab, enter the required information. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-basics.png" alt-text="Screenshot that shows the form's Basics tab to deploy the managed application."::: ++ - **Subscription**: Select your Azure subscription. + - **Resource group**: Create a new resource group. For this example use _demo-marketplace-app_. + - **Region**: Select a region, like _West US_. + - **Application Name**: Enter a name, like _demotrainingapp_. + - **Managed Resource Group**: Use the default name for this example. The format is `mrg-microsoft-community-training-<dateTime>`. But you can change the name if you want. ++1. Select **Next: Setup your portal**. +1. On the **Setup your portal** tab, enter the required information. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-setup.png" alt-text="Screenshot that shows the form's Setup your portal tab to deploy the managed application."::: ++ - **Website name**: Enter a name that meets the criteria specified on the form, like _demotrainingsite_. 
Your website name should be globally unique across Azure. + - **Organization name**: Enter your organization's name. + - **Contact email addresses**: Enter at least one valid email address. ++1. Select **Next: Setup your login type**. +1. On the **Setup your login type** tab, enter the required information. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-setup-login.png" alt-text="Screenshot that shows the form's Setup your login type tab to deploy the managed application."::: ++ - **Login type**: For this example, select **Mobile**. + - **Org admin's mobile number**: Enter a valid mobile phone number including the country/region code, in the format _+1 1234567890_. The phone number is used to sign in to the training site. ++1. Select **Next: Review + create**. +1. After **Validation passed** is displayed, verify the information is correct. +1. Read **Co-Admin Access Permission** and check the box to agree to the terms. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/create-app.png" alt-text="Screenshot that shows the validation passed, the co-admin permission box is selected, and the create button is highlighted."::: ++1. Select **Create**. ++The deployment begins, and because many resources are created, it takes about 20 minutes to finish. You can verify the Azure deployments before the website becomes available. ++## Verify the managed application deployment ++After the managed application deployment is finished, you can verify the resources. ++1. Go to resource group **demo-marketplace-app** and select the managed application. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/app-resource-group.png" alt-text="Screenshot of the resource group where the managed application is installed that highlights the application name."::: ++1. Select the **Overview** tab to display the managed application and the link to the managed resource group. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/managed-app.png" alt-text="Screenshot of the managed application that highlights the link to the managed resource group."::: ++1. The managed resource group shows the resources that were deployed and the deployments that created the resources. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/mrg-apps.png" alt-text="Screenshot of the managed resource group that highlights the deployments and list of deployed resources."::: ++1. To review the publisher's permissions in the managed resource group, select **Access Control (IAM)** > **Role assignments**. ++ You can also verify the **Deny assignments**. ++For this example, the website's availability isn't necessary. The article's purpose is to show how to deploy an Azure Marketplace managed application and verify the resources. To avoid unnecessary costs, go to [clean up resources](#clean-up-resources) when you're finished. ++### Launch the website (optional) ++After the deployment is completed, from the managed resource group, you can go to the App Service resource and launch your website. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/app-service.png" alt-text="Screenshot of the App Service with the website link highlighted."::: ++The site might respond with a page indicating that the deployment is still processing.
++ :::image type="content" source="media/deploy-marketplace-app-quickstart/deployment-message.png" alt-text="Screenshot that shows the website deployment is in progress."::: ++When your website is available, a default sign-in page is displayed. You can sign in with the mobile phone number that you used during the deployment, and you'll receive a text message confirmation. When you're finished, be sure to sign out of your training website. ++## Clean up resources ++When you're finished with the managed application, you can delete the resource groups, which removes all the Azure resources you created. For example, in this quickstart you created the resource groups _demo-marketplace-app_ and a managed resource group with the prefix _mrg-microsoft-community-training_. ++When you delete the **demo-marketplace-app** resource group, the managed application, managed resource group, and all the Azure resources are deleted. ++1. Go to the **demo-marketplace-app** resource group and **Delete resource group**. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/delete-resource-group.png" alt-text="Screenshot of the highlighted delete resource group button."::: ++1. To confirm the deletion, enter the resource group name and select **Delete**. ++ :::image type="content" source="media/deploy-marketplace-app-quickstart/confirm-delete-resource-group.png" alt-text="Screenshot that shows the delete resource group confirmation."::: +++## Next steps ++- To learn how to create and publish the definition files for a managed application, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md). +- To learn how to deploy a managed application, go to [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md). +- To use your own storage to create and publish the definition files for a managed application, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md). |
azure-resource-manager | Deploy Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-python.md | + + Title: Deploy resources with Python and template +description: Use Azure Resource Manager and Python to deploy resources to Azure. The resources are defined in an Azure Resource Manager template. + Last updated : 04/24/2023++++# Deploy resources with ARM templates and Python ++This article explains how to use Python with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md). +++## Prerequisites ++* A template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. ++* Python 3.7 or later installed. To install the latest version, see [Python.org](https://www.python.org/downloads/). ++* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}`. + * azure-identity + * azure-mgmt-resource ++ If you have older versions of these packages already installed in your virtual environment, you may need to update them with `pip install --upgrade {package-name}`. ++* The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate. +++## Deployment scope ++You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different methods. ++* To deploy to a **resource group**, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). ++* To deploy to a **subscription**, use [ResourceManagementClient.deployments.begin_create_or_update_at_subscription_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-subscription-scope). ++ For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md). ++* To deploy to a **management group**, use [ResourceManagementClient.deployments.begin_create_or_update_at_management_group_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-management-group-scope). ++ For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md). ++* To deploy to a **tenant**, use [ResourceManagementClient.deployments.begin_create_or_update_at_tenant_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-tenant-scope).
++ For more information about tenant level deployments, see [Create resources at the tenant level](deploy-to-tenant.md). ++For every scope, the user deploying the template must have the required permissions to create resources. ++## Deployment name ++When deploying an ARM template, you can give the deployment a name. This name can help you retrieve the deployment from the deployment history. If you don't provide a name for the deployment, the name of the template file is used. For example, if you deploy a template named `azuredeploy.json` and don't specify a deployment name, the deployment is named `azuredeploy`. ++Every time you run a deployment, an entry is added to the resource group's deployment history with the deployment name. If you run another deployment and give it the same name, the earlier entry is replaced with the current deployment. If you want to maintain unique entries in the deployment history, give each deployment a unique name. ++To create a unique name, you can assign a random number. ++```python +import random ++suffix = random.randint(1, 1000) +deployment_name = f"ExampleDeployment{suffix}" +``` ++Or, add a date value. ++```python +from datetime import datetime ++today = datetime.now().strftime("%m-%d-%Y") +deployment_name = f"ExampleDeployment{today}" +``` ++If you run concurrent deployments to the same resource group with the same deployment name, only the last deployment is completed. Any deployments with the same name that haven't finished are replaced by the last deployment. For example, if you run a deployment named `newStorage` that deploys a storage account named `storage1`, and at the same time run another deployment named `newStorage` that deploys a storage account named `storage2`, you deploy only one storage account. The resulting storage account is named `storage2`. ++However, if you run a deployment named `newStorage` that deploys a storage account named `storage1`, and immediately after it completes you run another deployment named `newStorage` that deploys a storage account named `storage2`, then you have two storage accounts. One is named `storage1`, and the other is named `storage2`. But, you only have one entry in the deployment history. ++When you specify a unique name for each deployment, you can run them concurrently without conflict. If you run a deployment named `newStorage1` that deploys a storage account named `storage1`, and at the same time run another deployment named `newStorage2` that deploys a storage account named `storage2`, then you have two storage accounts and two entries in the deployment history. ++To avoid conflicts with concurrent deployments and to ensure unique entries in the deployment history, give each deployment a unique name. ++## Deploy local template ++You can deploy a template from your local machine or one that is stored externally. This section describes deploying a local template. ++If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parentheses. It can be up to 90 characters. The name can't end in a period.
++```python +import os +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++resource_client = ResourceManagementClient(credential, subscription_id) ++rg_result = resource_client.resource_groups.create_or_update( + "exampleGroup", + { + "location": "Central US" + } +) ++print(f"Provisioned resource group with ID: {rg_result.id}") +``` ++To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example requires a local template named `storage.json`. ++```python +import os +import json +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient +from azure.mgmt.resource.resources.models import DeploymentMode ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++resource_client = ResourceManagementClient(credential, subscription_id) ++with open("storage.json", "r") as template_file: + template_body = json.load(template_file) ++rg_deployment_result = resource_client.deployments.begin_create_or_update( + "exampleGroup", + "exampleDeployment", + { + "properties": { + "template": template_body, + "parameters": { + "storagePrefix": { + "value": "demostore" + }, + }, + "mode": DeploymentMode.incremental + } + } +) +``` ++The deployment can take several minutes to complete. ++## Deploy remote template ++Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization. ++If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parentheses. It can be up to 90 characters. The name can't end in a period. ++```python +import os +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++resource_client = ResourceManagementClient(credential, subscription_id) ++rg_result = resource_client.resource_groups.create_or_update( + "exampleGroup", + { + "location": "Central US" + } +) ++print(f"Provisioned resource group with ID: {rg_result.id}") +``` ++To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example deploys a [remote template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-account-create). That template creates a storage account.
++```python +import os +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient +from azure.mgmt.resource.resources.models import DeploymentMode ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++resource_client = ResourceManagementClient(credential, subscription_id) ++resource_group_name = "exampleGroup" +location = "westus" +template_uri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json" ++rg_deployment_result = resource_client.deployments.begin_create_or_update( + resource_group_name, + "exampleDeployment", + { + "properties": { + "templateLink": { + "uri": template_uri + }, + "parameters": { + "location": { + "value": location + } + }, + "mode": DeploymentMode.incremental + } + } +) +``` ++The preceding example requires a publicly accessible URI for the template, which works for most scenarios because your template shouldn't include sensitive data. If you need to specify sensitive data (like an admin password), pass that value as a secure parameter. If you keep your templates in a storage account that doesn't allow anonymous access, you need to provide a SAS token. ++```python +import os +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient +from azure.mgmt.resource.resources.models import DeploymentMode ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] +sas_token = os.environ["SAS_TOKEN"] ++resource_client = ResourceManagementClient(credential, subscription_id) ++resource_group_name = "exampleGroup" +location = "westus" +template_uri = f"https://stage20230425.blob.core.windows.net/templates/storage.json?{sas_token}" ++rg_deployment_result = resource_client.deployments.begin_create_or_update( + resource_group_name, + "exampleDeployment", + { + "properties": { + "templateLink": { + "uri": template_uri + }, + "parameters": { + "location": { + "value": location + } + }, + "mode": DeploymentMode.incremental + } + } +) +``` ++For more information, see [Use relative path for linked templates](./linked-templates.md#linked-template). ++## Deploy template spec ++Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. ++The following examples show how to create and deploy a template spec. ++First, create the template spec by providing the ARM template. 
++```python +import os +import json +from azure.identity import AzureCliCredential +from azure.mgmt.resource.templatespecs import TemplateSpecsClient +from azure.mgmt.resource.templatespecs.models import TemplateSpecVersion, TemplateSpec ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++template_specs_client = TemplateSpecsClient(credential, subscription_id) ++template_spec = TemplateSpec( + location="westus2", + description="Storage Spec" +) ++template_specs_client.template_specs.create_or_update( + "templateSpecsRG", + "storageSpec", + template_spec +) ++with open("storage.json", "r") as template_file: + template_body = json.load(template_file) ++version = TemplateSpecVersion( + location="westus2", + description="Storage Spec", + main_template=template_body +) ++template_spec_result = template_specs_client.template_spec_versions.create_or_update( + "templateSpecsRG", + "storageSpec", + "1.0.0", + version +) ++print(f"Provisioned template spec with ID: {template_spec_result.id}") +``` ++Then, get the ID for the template spec and deploy it. ++```python +import os +from azure.identity import AzureCliCredential +from azure.mgmt.resource import ResourceManagementClient +from azure.mgmt.resource.resources.models import DeploymentMode +from azure.mgmt.resource.templatespecs import TemplateSpecsClient ++credential = AzureCliCredential() +subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"] ++resource_client = ResourceManagementClient(credential, subscription_id) +template_specs_client = TemplateSpecsClient(credential, subscription_id) ++template_spec = template_specs_client.template_spec_versions.get( + "templateSpecsRG", + "storageSpec", + "1.0.0" +) ++rg_deployment_result = resource_client.deployments.begin_create_or_update( + "exampleGroup", + "exampleDeployment", + { + "properties": { + "template_link": { + "id": template_spec.id + }, + "mode": DeploymentMode.incremental + } + } +) +``` ++For more information, see [Azure Resource Manager template specs](template-specs.md). ++## Preview changes ++Before deploying your template, you can preview the changes the template will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the template makes the changes that you expect. What-if also validates the template for errors. ++## Next steps ++- To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md). +- To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md). +- To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](./syntax.md). +- For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](secure-template-with-sas-token.md). |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | Title: Azure Video Indexer release notes | Microsoft Docs description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 04/17/2023 Last updated : 04/25/2023 To stay up-to-date with the most recent Azure Video Indexer developments, this a ## April 2023 +### Resource Health support ++Azure Video Indexer is now integrated with Azure Resource Health, enabling you to see the health and availability of each of your Video Indexer resources and, if needed, to help diagnose and solve problems. You can also set alerts to be notified when your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md). + ### The animation character recognition model has been retired The **animation character recognition** model was retired on March 1st, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). |
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | description: Learn about the platform updates to Azure VMware Solution. Previously updated : 4/20/2023 Last updated : 4/24/2023 # What's new in Azure VMware Solution Introducing Run Commands for VMware HCX on Azure VMware Solution. You can use th All new Azure VMware Solution private clouds are being deployed with VMware NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023. -**HCX Enterprise Edition - Default** +**VMware HCX Enterprise Edition - Default** VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html), like Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md). -**Log analytics - monitor Azure VMware Solution** +**Azure Log Analytics - monitor Azure VMware Solution** The data in Azure Log Analytics offers insights into issues by searching using Kusto Query Language. All new Azure VMware Solution private clouds are now deployed with NSX-T Data Ce You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. -For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html). +For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center 3.1.1 Release Notes](https://docs.vmware.com/en/VMware-NSX/3.1/rn/VMware-NSX-T-Data-Center-311-Release-Notes.html). ## May 2021 |
backup | Archive Tier Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md | Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 04/15/2023 Last updated : 04/25/2023 When you restore from recovery point in Archive tier in primary region, the reco The recovery points for Virtual Machines meet the eligibility criteria. So, there are archivable recovery points. However, the churn in the Virtual Machine may be low, thus there are no recommendations. In this scenario, you can still move the archivable recovery points to archive tier, but it may increase the overall backup storage costs. -### I have stopped protection and retained data for my workload. Can I move the recovery points to archive tier? --No. Once protection is stopped for a particular workload, the corresponding recovery points can't be moved to the archive tier. To move recovery points to archive tier, you need to resume the protection on the data source. - ### How do I ensure that all recovery points are moved to Archive tier, if moved via Azure portal? To ensure that all recovery points are moved to Archive tier, |
backup | Azure Kubernetes Service Cluster Manage Backups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md | Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 03/27/2023 Last updated : 04/21/2023 This section provides the set of Azure CLI commands to perform create, update, o To install the Backup Extension, run the following command: ```azurecli-interactive- az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid + az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid> ``` ### View Backup Extension installation status To install the Backup Extension, run the following command: To view the progress of Backup Extension installation, use the following command: ```azurecli-interactive- az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg + az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> ``` ### Update resources in Backup Extension To view the progress of Backup Extension installation, use the following command To update blob container, CPU, and memory in the Backup Extension, use the following command: ```azurecli-interactive- az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi] + az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings [blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid>] [cpuLimit=1] [memoryLimit=1Gi] []: denotes the 3 different sub-groups of updates possible (discard the brackets while using the command) To update blob container, CPU, and memory in the Backup Extension, use the follo To stop the Backup Extension install operation, use the following command: ```azurecli-interactive- az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg + az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> ``` ### Grant permission on storage account To stop the Backup 
Extension install operation, use the following command: To provide *Storage Account Contributor Permission* to the Extension Identity on storage account, run the following command: ```azurecli-interactive- az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname + az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name <aksclustername> --resource-group <aksclusterrg> --cluster-type managedClusters --query identity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/<subscriptionid>/resourceGroups/<storageaccountrg>/providers/Microsoft.Storage/storageAccounts/<storageaccountname> ``` To enable Trusted Access between Backup vault and AKS cluster, use the following ```azurecli-interactive az aks trustedaccess rolebinding create \- -g $myResourceGroup \ - --cluster-name $myAKSCluster - –n <randomRoleBindingName> \ - -s <vaultID> \ + --resource-group <backupvaultrg> \ + --cluster-name <aksclustername> \ + --name <randomRoleBindingName> \ + --source-resource-id /subscriptions/<subscriptionid>/resourcegroups/<backupvaultrg>/providers/Microsoft.DataProtection/BackupVaults/<backupvaultname> \ --roles Microsoft.DataProtection/backupVaults/backup-operator- ``` Learn more about [other commands related to Trusted Access](../aks/trusted-access-feature.md#trusted-access-feature-overview). Learn more about [other commands related to Trusted Acces - [Back up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup.md) - [Restore Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-restore.md)-- [Supported scenarios for backing up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup-support-matrix.md)+- [Supported scenarios for backing up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup-support-matrix.md) |
backup | Backup Azure Diagnostic Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostic-events.md | Title: Use diagnostics settings for Recovery Services vaults description: 'This article describes how to use the old and new diagnostics events for Azure Backup.'- Previously updated : 03/31/2023+ Last updated : 04/18/2023 + # Use diagnostics settings for Recovery Services vaults |
backup | Backup Mabs Whats New Mabs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md | Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 03/02/2023 Last updated : 04/25/2023 + # What's new in Microsoft Azure Backup Server (MABS)? |
bastion | Kerberos Authentication Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md | This article shows you how to configure Azure Bastion to use Kerberos authentica ## Considerations -* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only. +* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only, not with the native client. * VMs migrated from on-premises to Azure are not currently supported for Kerberos.  * Cross-realm authentication is not currently supported for Kerberos.  * Changes to the DNS server are not currently supported for Kerberos. After making any changes to the DNS server, you will need to delete and re-create the Bastion resource. |
bastion | Shareable Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md | By default, users in your org will have only read access to shared links. If a u ## Considerations -* Shareable Links isn't currently supported for peered VNets that aren't in the same subscription. * Shareable Links isn't currently supported for peered VNets across tenants. -* Shareable Links isn't currently supported for peered VNets that aren't in the same region. * Shareable Links isn't supported for national clouds during preview. * The Standard SKU is required for this feature. |
cdn | Cdn Manage Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md | Title: Manage Azure CDN with PowerShell | Microsoft Docs description: Use this tutorial to learn how to use PowerShell to manage aspects of your Azure Content Delivery Network endpoint profiles and endpoints. - Previously updated : 02/27/2023 Last updated : 04/24/2023 - # Manage Azure CDN with PowerShell PS C:\> Get-Command -Module Az.Cdn CommandType Name Version Source -- - - -Cmdlet Confirm-AzCdnEndpointProbeURL 1.4.0 Az.Cdn -Cmdlet Disable-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet Disable-AzCdnCustomDomainHttps 1.4.0 Az.Cdn -Cmdlet Enable-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet Enable-AzCdnCustomDomainHttps 1.4.0 Az.Cdn -Cmdlet Get-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet Get-AzCdnEdgeNode 1.4.0 Az.Cdn -Cmdlet Get-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet Get-AzCdnEndpointNameAvailability 1.4.0 Az.Cdn -Cmdlet Get-AzCdnEndpointResourceUsage 1.4.0 Az.Cdn -Cmdlet Get-AzCdnOrigin 1.4.0 Az.Cdn -Cmdlet Get-AzCdnProfile 1.4.0 Az.Cdn -Cmdlet Get-AzCdnProfileResourceUsage 1.4.0 Az.Cdn -Cmdlet Get-AzCdnProfileSsoUrl 1.4.0 Az.Cdn -Cmdlet Get-AzCdnProfileSupportedOptimizationType 1.4.0 Az.Cdn -Cmdlet Get-AzCdnSubscriptionResourceUsage 1.4.0 Az.Cdn -Cmdlet New-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet New-AzCdnDeliveryPolicy 1.4.0 Az.Cdn -Cmdlet New-AzCdnDeliveryRule 1.4.0 Az.Cdn -Cmdlet New-AzCdnDeliveryRuleAction 1.4.0 Az.Cdn -Cmdlet New-AzCdnDeliveryRuleCondition 1.4.0 Az.Cdn -Cmdlet New-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet New-AzCdnProfile 1.4.0 Az.Cdn -Cmdlet Publish-AzCdnEndpointContent 1.4.0 Az.Cdn -Cmdlet Remove-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet Remove-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet Remove-AzCdnProfile 1.4.0 Az.Cdn -Cmdlet Set-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet Set-AzCdnOrigin 1.4.0 Az.Cdn -Cmdlet Set-AzCdnProfile 1.4.0 Az.Cdn -Cmdlet Start-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet Stop-AzCdnEndpoint 1.4.0 Az.Cdn -Cmdlet Test-AzCdnCustomDomain 1.4.0 Az.Cdn -Cmdlet Unpublish-AzCdnEndpointContent 1.4.0 Az.Cdn +Cmdlet Confirm-AzCdnEndpointProbeURL 2.1.0 Az.Cdn +Cmdlet Disable-AzCdnCustomDomain 2.1.0 Az.Cdn +Cmdlet Disable-AzCdnCustomDomainHttps 2.1.0 Az.Cdn +Cmdlet Enable-AzCdnCustomDomain 2.1.0 Az.Cdn +Cmdlet Enable-AzCdnCustomDomainHttps 2.1.0 Az.Cdn +Cmdlet Get-AzCdnCustomDomain 2.1.0 Az.Cdn +Cmdlet Get-AzCdnEdgeNode 2.1.0 Az.Cdn +Cmdlet Get-AzCdnEndpoint 2.1.0 Az.Cdn +Cmdlet Get-AzCdnEndpointResourceUsage 2.1.0 Az.Cdn +Cmdlet Get-AzCdnOrigin 2.1.0 Az.Cdn +Cmdlet Get-AzCdnProfile 2.1.0 Az.Cdn +Cmdlet Get-AzCdnProfileResourceUsage 2.1.0 Az.Cdn +Cmdlet Get-AzCdnProfileSupportedOptimizationType 2.1.0 Az.Cdn +Cmdlet Get-AzCdnSubscriptionResourceUsage 2.1.0 Az.Cdn +Cmdlet New-AzCdnCustomDomain 2.1.0 Az.Cdn +Cmdlet New-AzCdnDeliveryPolicy 2.1.0 Az.Cdn +Cmdlet New-AzCdnDeliveryRule 2.1.0 Az.Cdn +Cmdlet New-AzCdnDeliveryRuleAction 2.1.0 Az.Cdn +Cmdlet New-AzCdnDeliveryRuleCondition 2.1.0 Az.Cdn +Cmdlet New-AzCdnEndpoint 2.1.0 Az.Cdn +Cmdlet New-AzCdnProfile 2.1.0 Az.Cdn +Cmdlet Remove-AzCdnCustomDomain 2.1.0 Az.Cdn +Cmdlet Remove-AzCdnEndpoint 2.1.0 Az.Cdn +Cmdlet Remove-AzCdnProfile 2.1.0 Az.Cdn +Cmdlet Set-AzCdnProfile 2.1.0 Az.Cdn +Cmdlet Start-AzCdnEndpoint 2.1.0 Az.Cdn +Cmdlet Stop-AzCdnEndpoint 2.1.0 Az.Cdn ``` ## Getting help DESCRIPTION RELATED LINKS+ https://docs.microsoft.com/powershell/module/az.cdn/get-azcdnprofile REMARKS To see the examples, type: "get-help Get-AzCdnProfile -examples". For more information, type: "get-help Get-AzCdnProfile -detailed". 
For technical information, type: "get-help Get-AzCdnProfile -full".-+ For online help, type: "get-help Get-AzCdnProfile -online" ``` ## Listing existing Azure CDN profiles This output can be piped to cmdlets for enumeration. ```powershell # Output the name of all profiles on this subscription. Get-AzCdnProfile | ForEach-Object { Write-Host $_.Name }--# Return only **Azure CDN from Verizon** profiles. -Get-AzCdnProfile | Where-Object { $_.Sku.Name -eq "Standard_Verizon" } ``` You can also return a single profile by specifying the profile name and resource group. Get-AzCdnProfile -ProfileName CdnDemo -ResourceGroupName CdnDemoRG > [!TIP] > It is possible to have multiple CDN profiles with the same name, so long as they are in different resource groups. Omitting the `ResourceGroupName` parameter returns all profiles with a matching name. > -> ## Listing existing CDN endpoints `Get-AzCdnEndpoint` can retrieve an individual endpoint or all the endpoints on a profile. Get-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointNam # Get all of the endpoints on a given profile. Get-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG--# Return all of the endpoints on all of the profiles. -Get-AzCdnProfile | Get-AzCdnEndpoint --# Return all of the endpoints in this subscription that are currently running. -Get-AzCdnProfile | Get-AzCdnEndpoint | Where-Object { $_.ResourceState -eq "Running" } ``` ## Creating CDN profiles and endpoints Get-AzCdnProfile | Get-AzCdnEndpoint | Where-Object { $_.ResourceState -eq "Runn ```powershell # Create a new profile-New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Akamai -Location "Central US" +New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Microsoft -Location "Central US" # Create a new endpoint-New-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Location "Central US" -EndpointName cdnposhdoc -OriginName "Contoso" -OriginHostName "www.contoso.com" --# Create a new profile and endpoint (same as above) in one line -New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Akamai -Location "Central US" | New-AzCdnEndpoint -EndpointName cdnposhdoc -OriginName "Contoso" -OriginHostName "www.contoso.com" +$origin = @{ + Name = "Contoso" + HostName = "www.contoso.com" +}; -``` --## Checking endpoint name availability -`Get-AzCdnEndpointNameAvailability` returns an object indicating if an endpoint name is available. --```powershell -# Retrieve availability -$availability = Get-AzCdnEndpointNameAvailability -EndpointName "cdnposhdoc" --# If available, write a message to the console. -If($availability.NameAvailable) { Write-Host "Yes, that endpoint name is available." } -Else { Write-Host "No, that endpoint name is not available." } +New-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Location "Central US" -EndpointName cdnposhdoc -Origin $origin ``` ## Adding a custom domain Else { Write-Host "No, that endpoint name is not available." 
} > ```powershell-# Get an existing endpoint -$endpoint = Get-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc --# Check the mapping -$result = Test-AzCdnCustomDomain -CdnEndpoint $endpoint -CustomDomainHostName "cdn.contoso.com" - # Create the custom domain on the endpoint-If($result.CustomDomainValidated){ New-AzCdnCustomDomain -CustomDomainName Contoso -HostName "cdn.contoso.com" -CdnEndpoint $endpoint } +New-AzCdnCustomDomain -ResourceGroupName CdnDemoRG -ProfileName CdnPoshDemo -Name contoso -HostName "cdn.contoso.com" -EndpointName cdnposhdoc ``` ## Modifying an endpoint-`Set-AzCdnEndpoint` modifies an existing endpoint. +`Update-AzCdnEndpoint` modifies an existing endpoint. ```powershell-# Get an existing endpoint -$endpoint = Get-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc --# Set up content compression -$endpoint.IsCompressionEnabled = $true -$endpoint.ContentTypesToCompress = "text/javascript","text/css","application/json" --# Save the changed endpoint and apply the changes -Set-AzCdnEndpoint -CdnEndpoint $endpoint +# Update endpoint with compression settings +Update-AzCdnEndpoint -Name cdnposhdoc -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -IsCompressionEnabled -ContentTypesToCompress "text/javascript","text/css","application/json" ``` -## Purging/Pre-loading CDN assets -`Unpublish-AzCdnEndpointContent` purges cached assets, while `Publish-AzCdnEndpointContent` pre-loads assets on supported endpoints. +## Purging +`Clear-AzCdnEndpointContent` purges cached assets. ```powershell # Purge some assets.-Unpublish-AzCdnEndpointContent -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo -PurgeContent "/images/kitten.png","/video/rickroll.mp4" +Clear-AzCdnEndpointContent -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc -ContentFilePath @("/images/kitten.png","/video/rickroll.mp4") +``` ++## Pre-load some assets -# Pre-load some assets. -Publish-AzCdnEndpointContent -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo -LoadContent "/images/kitten.png","/video/rickroll.mp4" +> [!NOTE] +> Pre-loading is only available on Azure CDN from Verizon profiles. -# Purge everything in /images/ on all endpoints. -Get-AzCdnProfile | Get-AzCdnEndpoint | Unpublish-AzCdnEndpointContent -PurgeContent "/images/*" +`Import-AzCdnEndpointContent` pre-loads assets into the CDN cache. ++```powershell +Import-AzCdnEndpointContent -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc -ContentFilePath @("/images/kitten.png","/video/rickroll.mp4") ``` ## Starting/Stopping CDN endpoints `Start-AzCdnEndpoint` and `Stop-AzCdnEndpoint` can be used to start and stop individual endpoints or groups of endpoints.
```powershell-# Stop the cdndocdemo endpoint -Stop-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo --# Stop all endpoints -Get-AzCdnProfile | Get-AzCdnEndpoint | Stop-AzCdnEndpoint +# Stop the CdnPoshDemo endpoint +Stop-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Name cdnposhdoc -# Start all endpoints -Get-AzCdnProfile | Get-AzCdnEndpoint | Start-AzCdnEndpoint +# Start the CdnPoshDemo endpoint +Start-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Name cdnposhdoc ``` ## Creating Standard Rules engine policy and applying to an existing CDN endpoint-`New-AzCdnDeliveryRule`, `New-AzCdnDeliveryRuleCondition`, and `New-AzCdnDeliveryRuleAction` can be used to configure the Azure CDN Standard Rules engine on Azure CDN from Microsoft profiles. -```powershell -# Create a new http to https redirect rule -$Condition=New-AzCdnDeliveryRuleCondition -MatchVariable RequestProtocol -Operator Equal -MatchValue HTTP -$Action=New-AzCdnDeliveryRuleAction -RedirectType Found -DestinationProtocol HTTPS -$HttpToHttpsRedirectRule=New-AzCdnDeliveryRule -Name "HttpToHttpsRedirectRule" -Order 2 -Condition $Condition -Action $Action +The following list of cmdlets can be used to create a Standard Rules engine policy and apply it to an existing CDN endpoint. ++Conditions: ++* [New-AzCdnDeliveryRuleCookiesConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulecookiesconditionobject) +* [New-AzCdnDeliveryRuleHttpVersionConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulehttpversionconditionobject) +* [New-AzCdnDeliveryRuleIsDeviceConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleisdeviceconditionobject) +* [New-AzCdnDeliveryRulePostArgsConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulepostargsconditionobject) +* [New-AzCdnDeliveryRuleQueryStringConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulequerystringconditionobject) +* [New-AzCdnDeliveryRuleRemoteAddressConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleremoteaddressconditionobject) +* [New-AzCdnDeliveryRuleRequestBodyConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestbodyconditionobject) +* [New-AzCdnDeliveryRuleRequestHeaderConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestheaderconditionobject) +* [New-AzCdnDeliveryRuleRequestMethodConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestmethodconditionobject) +* [New-AzCdnDeliveryRuleRequestSchemeConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestschemeconditionobject) +* [New-AzCdnDeliveryRuleRequestUriConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequesturiconditionobject) +* [New-AzCdnDeliveryRuleUrlFileExtensionConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlfileextensionconditionobject) +* [New-AzCdnDeliveryRuleUrlFileNameConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlfilenameconditionobject) +* [New-AzCdnDeliveryRuleUrlPathConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlpathconditionobject) ++Actions: ++* [New-AzCdnDeliveryRuleRequestHeaderActionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestheaderactionobject) +* [New-AzCdnDeliveryRuleResponseHeaderActionObject](/powershell/module/az.cdn/new-azcdndeliveryruleresponseheaderactionobject)
+* [New-AzCdnUrlRedirectActionObject](/powershell/module/az.cdn/new-azcdnurlredirectactionobject) +* [New-AzCdnUrlRewriteActionObject](/powershell/module/az.cdn/new-azcdnurlrewriteactionobject) +* [New-AzCdnUrlSigningActionObject](/powershell/module/az.cdn/new-azcdnurlsigningactionobject) ++```powershell # Create a path based Response header modification rule. -$Cond1=New-AzCdnDeliveryRuleCondition -MatchVariable UrlPath -Operator BeginsWith -MatchValue "/images/" -$Action1=New-AzCdnDeliveryRuleAction -HeaderActionType ModifyResponseHeader -Action Overwrite -HeaderName "Access-Control-Allow-Origin" -Value "*" -$PathBasedCacheOverrideRule=New-AzCdnDeliveryRule -Name "PathBasedCacheOverride" -Order 1 -Condition $Cond1 -Action $action1 +$cond1 = New-AzCdnDeliveryRuleUrlPathConditionObject -Name UrlPath -ParameterOperator BeginsWith -ParameterMatchValue "/images/" +$action1 = New-AzCdnDeliveryRuleResponseHeaderActionObject -Name ModifyResponseHeader -ParameterHeaderAction Overwrite -ParameterHeaderName "Access-Control-Allow-Origin" -ParameterValue "*" +$rule1 = New-AzCdnDeliveryRuleObject -Name "PathBasedCacheOverride" -Order 1 -Condition $cond1 -Action $action1 -# Create a delivery policy with above deliveryRules. -$Policy = New-AzCdnDeliveryPolicy -Description "DeliveryPolicy" -Rule $HttpToHttpsRedirectRule,$UrlRewriteRule +# Create a new http to https redirect rule +$cond2 = New-AzCdnDeliveryRuleRequestSchemeConditionObject -Name RequestScheme -ParameterMatchValue HTTP +$action2 = New-AzCdnUrlRedirectActionObject -Name UrlRedirect -ParameterRedirectType Found -ParameterDestinationProtocol Https +$rule2 = New-AzCdnDeliveryRuleObject -Name "HttpToHttpsRedirect" -Order 2 -Condition $cond2 -Action $action2 -# Update existing endpoint with created delivery policy -$ep = Get-AzCdnEndpoint -EndpointName cdndocdemo -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -$ep.DeliveryPolicy = $Policy -Set-AzCdnEndpoint -CdnEndpoint $ep +# Update existing endpoint with new rules +Update-AzCdnEndpoint -Name cdnposhdoc -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -DeliveryPolicyRule $rule1,$rule2 ``` ## Deleting CDN resources Set-AzCdnEndpoint -CdnEndpoint $ep # Remove a single endpoint Remove-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc -# Remove all the endpoints on a profile and skip confirmation (-Force) -Get-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG | Get-AzCdnEndpoint | Remove-AzCdnEndpoint -Force - # Remove a single profile Remove-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG ``` ## Next Steps-Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md). -To learn about CDN features, see [CDN Overview](cdn-overview.md). +* Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md). ++* To learn about CDN features, see [CDN Overview](cdn-overview.md). |
communication-services | End Of Call Survey Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/end-of-call-survey-concept.md | + + Title: Azure Communication Services End of Call Survey overview ++description: Learn about the End of Call Survey. +++++ Last updated : 4/03/2023++++++++# End of Call Survey overview +++++> [!NOTE] +> End of Call Survey is currently supported only for our JavaScript / Web SDK. +++The End of Call Survey collects user feedback that Azure Communication Services uses to improve the overall Calling SDK. ++<!-- provides you with a tool to understand how your end users perceive the overall quality and reliability of your JavaScript / Web SDK calling solution. --> +<!-- +## Purpose of the End of Call Survey +It's difficult to determine a customer's perceived calling experience and how well your calling solution is performing without gathering subjective feedback from customers. ++You can use the End of Call Survey to collect and analyze customers' **subjective** opinions on their calling experience as opposed to relying only on **objective** measurements such as audio and video bitrate, jitter, and latency, which may not indicate if a customer had a poor calling experience. ++After publishing survey data, you can view the survey results through Azure for analysis and improvements. Azure Communication Services uses these survey results to monitor and improve quality and reliability. --> +++## Survey structure ++The survey is designed to answer two questions from a user's point of view. ++- **Question 1:** How did the users perceive their overall call quality experience? ++- **Question 2:** Did the user perceive any Audio, Video, or Screen Share issues in the call? ++The API allows applications to gather data points that describe user-perceived ratings of their Overall Call, Audio, Video, and Screen Share experiences. Microsoft analyzes survey API results according to the following goals. ++### End of Call Survey API goals +++| API Rating Categories | Question Goal | +| -- | -- | +| Overall Call | Responses indicate how a call participant perceived their overall call quality. | +| Audio | Responses indicate if the user perceived any Audio issues. | +| Video | Responses indicate if the user perceived any Video issues. | +| Screenshare | Responses indicate if the user perceived any Screen Share issues. | ++++## Survey capabilities ++++### Default survey API configuration ++| API Rating Categories | Cutoff Value* | Input Range | Comments | +| -- | -- | -- | -- | +| Overall Call | 2 | 1 - 5 | Surveys a calling participant's overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. | +| Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. | +| Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. | +| Screenshare | 2 | 1 - 5 | A response of 1 indicates an imperfect screen share experience and 5 indicates no screen share issues were experienced. | ++++> [!NOTE] +>A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. 
When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization. ++### More survey tags +| Rating Categories | Optional Tags | +| -- | -- | +| Overall Call | `CallCannotJoin` `CallCannotInvite` `HadToRejoin` `CallEndedUnexpectedly` `OtherIssues` | +| Audio | `NoLocalAudio` `NoRemoteAudio` `Echo` `AudioNoise` `LowVolume` `AudioStoppedUnexpectedly` `DistortedSpeech` `AudioInterruption` `OtherIssues` | +| Video | `NoVideoReceived` `NoVideoSent` `LowQuality` `Freezes` `StoppedUnexpectedly` `DarkVideoReceived` `AudioVideoOutOfSync` `OtherIssues` | +| Screenshare | `NoContentLocal` `NoContentRemote` `CannotPresent` `LowQuality` `Freezes` `StoppedUnexpectedly` `LargeDelay` `OtherIssues` | ++++### End of Call Survey customization +++You can choose to collect each of the four API values or only the ones +you find most important. For example, you can choose to only ask +customers about their overall call experience instead of asking them +about their audio, video, and screen share experience. You can also +customize input ranges to suit your needs. The default input range is 1 +to 5 for Overall Call, Audio, Video, and +Screenshare. However, each API value can be customized from a minimum of +0 to a maximum of 100. ++### Customization options +++| API Rating Categories | Cutoff Value* | Input Range | +| -- | -- | -- | +| Overall Call | 0 - 100 | 0 - 100 | +| Audio | 0 - 100 | 0 - 100 | +| Video | 0 - 100 | 0 - 100 | +| Screenshare | 0 - 100 | 0 - 100 | ++ > [!NOTE] + > A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization. ++<!-- ## Store and view survey data: ++> [!IMPORTANT] +> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send survey data to one of these options your survey data will not be stored and will be lost. To enable these logs for your Communications Services, see: **[Enable logging in Diagnostic Settings](../analytics/enable-logging.md)** ++You can only view your survey data if you have enabled a Diagnostic Setting to capture your survey data. --> ++## Next Steps ++<!-- - Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../azure-monitor/logs/log-analytics-tutorial.md) ++- Create your own queries in Log Analytics, see: [Get Started Queries](../../../azure-monitor/logs/get-started-queries.md) --> +To learn how to use the End of Call Survey, see our tutorial: [Use the End of Call Survey to collect user feedback](../../tutorials/end-of-call-survey-tutorial.md) |
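To make the customization options above concrete, here is a minimal JavaScript sketch that mirrors the `submitSurvey` usage shown in the companion tutorial later in this digest; the specific score, bounds, and threshold values are illustrative only, and `call` is assumed to be an established Call object.

```javascript
import { Features } from '@azure/communication-calling';

// Illustrative only: submit an overall rating on a custom 0-100 scale.
// Scores at or below lowScoreThreshold are treated as "low" when Microsoft analyzes the data.
call.feature(Features.CallSurvey).submitSurvey({
    overallRating: {
        score: 80, // value collected from your survey UI
        scale: { lowerBound: 0, upperBound: 100, lowScoreThreshold: 40 }
    }
}).then(() => console.log('survey submitted successfully'))
  .catch((e) => console.log('error when submitting survey: ' + e));
```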
communication-services | Simulcast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md | -Simulcast is provided as a preview for developers and may change based on feedback that we receive. To use this feature, use 1.9.1-beta.1+ release of Azure Communication Services Calling Web SDK. Currently, we support simulcast send from desktop chrome and desktop edge. Simulcast send from mobile devices will be available shortly in the future. ++Simulcast is supported starting with the 1.9.1-beta.1 release of the Azure Communication Services Calling Web SDK. Currently, simulcast on the sender side is supported on the following desktop browsers: Chrome and Edge. Simulcast on the receiver side is supported on all platforms that Azure Communication Services Calling supports. +Support for sender-side simulcast from mobile browsers will be added in the future. Simulcast is a technique by which an endpoint encodes the same video feed using different qualities, sends these video feeds of multiple qualities to a selective forwarding unit (SFU) that decides which of the receivers gets which quality. The lack of simulcast support leads to a degraded video experience in calls with three or more participants. If a video receiver with poor network conditions joins the conference, it will impact the quality of video received from the sender without simulcast support for all other participants. This is because the video sender will optimize its video feed against the lowest common denominator. With simulcast, the impact of the lowest common denominator will be minimized. That is because the video sender will produce specialized low-fidelity video encoding for a subset of receivers that run on poor networks (or are otherwise constrained). |
communication-services | End Of Call Survey Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md | + + Title: Azure Communication Services End of Call Survey ++description: Learn how to use the End of Call Survey to collect user feedback. +++++ Last updated : 4/03/2023++++++++# Use the End of Call Survey to collect user feedback ++++++> [!NOTE] +> End of Call Survey is currently supported only for our JavaScript / Web SDK. ++This tutorial shows you how to use the Azure Communication Services End of Call Survey for the JavaScript / Web SDK. +++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended. ++- An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to a single Communication Services resource. +- An active Log Analytics Workspace, also known as Azure Monitor Logs. [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md). +++<!-- - An active Log Analytics Workspace, also known as Azure Monitor Logs, to ensure you don't lose your survey results. [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md). --> ++> [!IMPORTANT] +> End of Call Survey is available starting with version [1.13.0-beta.4](https://www.npmjs.com/package/@azure/communication-calling/v/1.13.0-beta.4) of the Calling SDK. Make sure to use that version or later when trying the instructions. ++## Sample of API usage +++The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call: 1:1, group, meeting, outgoing, and incoming. Once a user's call ends, your application can show a UI to the end user allowing them to choose a rating score and, if needed, pick issues they've encountered during the call from our predefined list. ++The following code snippets show an example of a one-to-one call. After the end of the call, your application can show a survey UI and once the user chooses a rating, your application should call the feature API to submit the survey with the user choices. ++We encourage you to use the default rating scale. However, you can submit a survey with a custom rating scale. You can check out the [sample application](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/blob/main/Project/src/MakeCall/CallSurvey.js) for the sample API usage. +++### Rate call only - no custom scale ++```javascript +call.feature(Features.CallSurvey).submitSurvey({ + overallRating: { score: 5 }, // issues are optional +}).then(() => console.log('survey submitted successfully')); +``` ++OverallRating is a required category for all surveys. 
+++### Rate call only - with custom scale and issues ++```javascript +call.feature(Features.CallSurvey).submitSurvey({ + overallRating: { + score: 1, // my score + scale: { // my custom scale + lowerBound: 0, + upperBound: 1, + lowScoreThreshold: 0 + }, + issues: ['HadToRejoin'] // my issues, check the table below for all available issues + } +}).then(() => console.log('survey submitted successfully')); +``` ++### Rate overall, audio, and video with a sample issue ++```javascript +call.feature(Features.CallSurvey).submitSurvey({ + overallRating: { score: 3 }, + audioRating: { score: 4 }, + videoRating: { score: 3, issues: ['Freezes'] } +}).then(() => console.log('survey submitted successfully')) +``` ++### Handle errors the SDK can send +```javascript +call.feature(Features.CallSurvey).submitSurvey({ + overallRating: { score: 3 } +}).catch((e) => console.log('error when submitting survey: ' + e)) +``` ++++<!-- ## Find different types of errors ++### Failures while submitting survey: ++The API returns error messages when data validation fails or the survey can't be submitted. +- At least one survey rating is required. +- In the default scale, X should be 1 to 5, where X is either of: +- overallRating.score +- audioRating.score +- videoRating.score +- ScreenshareRating.score +- ${propertyName}: ${rating.score} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ; +- ${propertyName}: ${rating.scale?.lowScoreThreshold} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ; +- ${propertyName} lowerBound: ${rating.scale?.lowerBound} and upperBound: ${rating.scale?.upperBound} should be between 0 and 100. ; +- event discarded [ACS failed to submit survey, due to network or other error] --> ++## All possible values ++### Default survey API configuration ++| API Rating Categories | Cutoff Value* | Input Range | Comments | +| -- | -- | -- | -- | +| Overall Call | 2 | 1 - 5 | Surveys a calling participant's overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. | +| Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. | +| Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. | +| Screenshare | 2 | 1 - 5 | A response of 1 indicates an imperfect screen share experience and 5 indicates no screen share issues were experienced. | ++++> [!NOTE] +>A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization. 
+++### More survey tags +| Rating Categories | Optional Tags | +| -- | -- | +| Overall Call | `CallCannotJoin` `CallCannotInvite` `HadToRejoin` `CallEndedUnexpectedly` `OtherIssues` | +| Audio | `NoLocalAudio` `NoRemoteAudio` `Echo` `AudioNoise` `LowVolume` `AudioStoppedUnexpectedly` `DistortedSpeech` `AudioInterruption` `OtherIssues` | +| Video | `NoVideoReceived` `NoVideoSent` `LowQuality` `Freezes` `StoppedUnexpectedly` `DarkVideoReceived` `AudioVideoOutOfSync` `OtherIssues` | +| Screenshare | `NoContentLocal` `NoContentRemote` `CannotPresent` `LowQuality` `Freezes` `StoppedUnexpectedly` `LargeDelay` `OtherIssues` | +++### Customization options ++You can choose to collect each of the four API values or only the ones +you find most important. For example, you can choose to only ask +customers about their overall call experience instead of asking them +about their audio, video, and screen share experience. You can also +customize input ranges to suit your needs. The default input range is 1 +to 5 for Overall Call, Audio, Video, and +Screenshare. However, each API value can be customized from a minimum of +0 to a maximum of 100. ++### Customization examples +++| API Rating Categories | Cutoff Value* | Input Range | +| -- | -- | -- | +| Overall Call | 0 - 100 | 0 - 100 | +| Audio | 0 - 100 | 0 - 100 | +| Video | 0 - 100 | 0 - 100 | +| Screenshare | 0 - 100 | 0 - 100 | ++ > [!NOTE] + > A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization. ++<!-- +## Collect survey data ++> [!IMPORTANT] +> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send survey data to one of these options your survey data will not be stored and will be lost. To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md) + ++### View survey data with a Log Analytics workspace ++You need to enable a Log Analytics Workspace to both store the log data of your surveys and access survey results. To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md). Follow the steps to add a diagnostic setting. Select the "ACSCallSurvey" data source when choosing category details. Also, choose "Send to Log Analytics workspace" as your destination detail. ++- You can also integrate your Log Analytics workspace with Power BI, see: [Integrate Log Analytics with Power BI](../../../articles/azure-monitor/logs/log-powerbi.md) + --> ++## Best practices +Here are our recommended survey flows and suggested question prompts for consideration. Your development team can use our recommendations or customized question prompts and flows for your visual interface. ++**Question 1:** How did the users perceive their overall call quality experience? +We recommend you start the survey by only asking about the participants' overall quality. If you separate the first and second questions, it helps to only collect responses to Audio, Video, and Screen Share issues if a survey participant indicates they experienced call quality issues. 
+++- Suggested prompt: "How was the call quality?" +- API Question Values: Overall Call ++**Question 2:** Did the user perceive any Audio, Video, or Screen Sharing issues in the call? +If a survey participant responded to Question 1 with a score at or below the cutoff value for the overall call, then present the second question. ++- Suggested prompt: "What could have been better?" +- API Question Values: Audio, Video, and Screenshare ++Surveying guidelines +- Avoid survey burnout; don't survey all call participants. +- The order of your questions matters. We recommend you randomize the sequence of optional tags in Question 2 in case respondents focus most of their feedback on the first prompt they visually see. +<!-- - Consider using surveys for separate Azure Communication Services Resources in controlled experiments to identify release impacts. --> +++## Next steps ++- To learn more about the End of Call Survey, see: [End of Call Survey overview](../concepts/voice-video-calling/end-of-call-survey-concept.md) ++<!-- - Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../articles/azure-monitor/logs/log-analytics-tutorial.md) ++- Create your own queries in Log Analytics, see: [Get Started Queries](../../../articles/azure-monitor/logs/get-started-queries.md) --> + |
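Putting the recommended two-question flow above into code, a hedged sketch might look like the following. `promptForScore` and `promptForIssues` are hypothetical UI helpers in your application, and the `screenshareRating` property name is assumed to follow the pattern of the other rating categories (the validation messages above suggest it exists).

```javascript
// Sketch of the two-question survey flow; run inside an async handler after the call ends.
const overallScore = await promptForScore('How was the call quality?'); // hypothetical helper, returns 1-5
const survey = { overallRating: { score: overallScore } };

if (overallScore <= 2) { // 2 is the default cutoff value
    // Second prompt: "What could have been better?"
    const picked = await promptForIssues(); // hypothetical helper, e.g. { audio: ['Echo'], video: [], screenshare: [] }
    // score: 1 here simply marks the category as problematic; collect a real
    // per-category score instead if your UI asks for one.
    if (picked.audio.length) { survey.audioRating = { score: 1, issues: picked.audio }; }
    if (picked.video.length) { survey.videoRating = { score: 1, issues: picked.video }; }
    if (picked.screenshare.length) { survey.screenshareRating = { score: 1, issues: picked.screenshare }; }
}

call.feature(Features.CallSurvey).submitSurvey(survey)
    .then(() => console.log('survey submitted successfully'))
    .catch((e) => console.log('error when submitting survey: ' + e));
```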
communication-services | Proxy Calling Support Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md | In certain situations, it might be useful to have all your client traffic proxie Many times, establishing a network connection between two peers isn't straightforward. A direct connection might not work because of many reasons: firewalls with strict rules, peers sitting behind a private network, or computers are running in a NAT environment. To solve these network connection issues, you can use a TURN server. The term stands for Traversal Using Relays around NAT, and it's a protocol for relaying network traffic. STUN and TURN servers are the relay servers here. Learn more about how ACS [mitigates](../concepts/network-traversal.md) network challenges by utilizing STUN and TURN. ### Provide your TURN servers details to the SDK-To provide the details of your TURN servers, you need to pass details of what TURN server to use as part of `CallClientOptions` while initializing the `CallClient`. For more information how to setup a call, see [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web)) for the Quickstart on how to setup Voice and Video. +To provide the details of your TURN servers, you need to pass details of what TURN server to use as part of `CallClientOptions` while initializing the `CallClient`. For more information about how to set up a call, see [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) for the quickstart on how to set up voice and video. ```js import { CallClient } from '@azure/communication-calling'; const callClient = new CallClient({ ``` > [!IMPORTANT]-> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers i.e. only 'relay' candidates will be considered. To learn more about different types of Ice candidates can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type). +> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers, i.e. only 'relay' candidates will be considered. To learn more about the different types of ICE candidates, see [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type). > [!NOTE]-> If the '?transport' query parameter is not present as part of the TURN url or is not one of these values - 'udp', 'tcp', 'tls', the default will behaviour will be UDP. +> If the '?transport' query parameter is not present as part of the TURN URL or is not one of these values - 'udp', 'tcp', 'tls', the default behavior will be UDP. > [!NOTE] > If any of the URLs provided are invalid or don't have one of these schemas - 'turn:', 'turns:', 'stun:', the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown should help you troubleshoot if you run into issues. |
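The `CallClient` snippet quoted above is truncated after `new CallClient({`. For orientation only, here is a hedged sketch of what the TURN configuration might look like; the option shape (`networkConfiguration.turn.iceServers`) and field names are assumptions, not confirmed by the quoted article, so verify them against the Calling SDK reference for your SDK version.

```javascript
import { CallClient } from '@azure/communication-calling';

// Illustrative only: the option shape below is an assumption; all values are placeholders.
const callClient = new CallClient({
    networkConfiguration: {
        turn: {
            iceServers: [
                {
                    urls: ['turn:20.202.100.100:3478?transport=udp'], // placeholder TURN URL
                    username: '<turn-username>',   // placeholder credential
                    credential: '<turn-password>'  // placeholder credential
                }
            ]
        }
    }
});
```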
confidential-computing | Multi Party Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/multi-party-data.md | ++ + Title: Multi-party Data Analytics ++description: Data cleanroom and multi-party data confidential computing solutions +++++++++++++ Last updated : 04/20/2023++++++# Cleanroom and Multi-party Data Analytics ++Azure confidential computing (ACC) provides a foundation for solutions that enable multiple parties to collaborate on data. There are various approaches to solutions, and a growing ecosystem of partners to help enable Azure customers, researchers, data scientists and data providers to collaborate on data while preserving privacy. This overview covers some of the approaches and existing solutions that can be used, all running on ACC. ++## What are the data and model protections? ++Data cleanroom solutions typically offer a means for one or more data providers to combine data for processing. There's typically agreed-upon code, queries, or models that are created by one of the providers or another participant, such as a researcher or solution provider. In many cases, the data can be considered sensitive and undesirable to share directly with other participants - whether another data provider, a researcher, or a solution vendor. To help ensure security and privacy on both the data and models used within data cleanrooms, confidential computing can be used to cryptographically verify that participants don't have access to the data or models, including during processing. By using ACC, the solutions can bring protections on the data and model IP from the cloud operator, solution provider, and data collaboration participants. ++## What are examples of industry use cases? ++With ACC, customers and partners build privacy-preserving multi-party data analytics solutions, sometimes referred to as "confidential cleanrooms" - both net new solutions uniquely confidential, and existing cleanroom solutions made confidential with ACC. ++1. **Royal Bank of Canada** - [Virtual clean room](https://aka.ms/RBCstory) solution combining merchant data with bank data in order to provide personalized offers, using Azure confidential computing VMs and Azure SQL AE in secure enclaves. +2. **Scotiabank** - Proved the use of AI on cross-bank money flows to identify money laundering and flag human trafficking instances, using Azure confidential computing and a solution partner, Opaque. +3. **Novartis Biome** - Used a partner solution from [BeeKeeperAI](https://aka.ms/ACC-BeeKeeperAI) running on ACC in order to find candidates for clinical trials for rare diseases. +4. **Leading payment providers** connecting data across banks for fraud and anomaly detection. +5. **Data analytic services** and clean room solutions using ACC to increase data protection and meet EU customer compliance needs and privacy regulation. +++## Why confidential computing? ++Data cleanrooms aren't a brand-new concept; however, with advances in confidential computing, there are more opportunities to take advantage of cloud scale with broader datasets, securing IP of AI models, and the ability to better meet data privacy regulations. In previous cases, certain data might be inaccessible for reasons such as: ++- Competitive disadvantages or regulation preventing the sharing of data across industry companies. +- Anonymization reducing the quality of insights on data, or being too costly and time-consuming. 
+- Data being bound to certain locations and restricted from being processed in the cloud due to security concerns. +- Costly or lengthy legal processes to cover liability if data is exposed or abused. ++These realities could lead to incomplete or ineffective datasets that result in weaker insights, or more time needed in training and using AI models. ++## What are considerations when building a cleanroom solution? ++_Batch analytics vs. real-time data pipelines:_ The size of the datasets and speed of insights should be considered when designing or using a cleanroom solution. When data is available "offline", it can be loaded into a verified and secured compute environment for data analytic processing on large portions of data, if not the entire dataset. This batch approach allows large datasets to be evaluated with models and algorithms that aren't expected to provide an immediate result. For example, batch analytics work well when doing ML inferencing across millions of health records to find best candidates for a clinical trial. Other solutions require real-time insights on data, such as when algorithms and models aim to identify fraud on near real-time transactions between multiple entities. ++_Zero-trust participation:_ A major differentiator in confidential cleanrooms is the ability to have no trusted party involved - from data providers, code and model developers, and solution providers to infrastructure operator admins. Solutions can be provided where both the data and model IP can be protected from all parties. When onboarding or building a solution, participants should consider both what is desired to protect, and from whom to protect each of the code, models, and data. ++_Federated learning:_ Federated learning involves creating or using a solution where models are processed in the data owner's tenant, and insights are aggregated in a central tenant. In some cases, the models can even be run on data outside of Azure, with model aggregation still occurring in Azure. Federated learning often iterates on data many times as the parameters of the model improve after insights are aggregated. The iteration costs and quality of the model should be factored into the solution and expected outcomes. ++_Data residency and sources:_ Customers have data stored in multiple clouds and on-premises. Collaboration can include data and models from different sources. Cleanroom solutions can facilitate data and models coming to Azure from these other locations. When data can't move to Azure from an on-premises data store, some cleanroom solutions can run on site where the data resides. Management and policies can be powered by a common solution provider, where available. ++_Code integrity and confidential ledgers:_ With distributed ledger technology (DLT) running on Azure confidential computing, solutions can be built that run on a network across organizations. The code logic and analytic rules can be added only when there's consensus across the various participants. All updates to the code are recorded for auditing via tamper-proof logging enabled with Azure confidential computing. ++## What are options to get started? ++### ACC platform offerings that help enable confidential cleanrooms +Roll up your sleeves and build a data clean room solution directly on these confidential computing service offerings. 
++[Confidential containers](./confidential-containers.md) on Azure Container Instances (ACI) and Intel SGX VMs with application enclaves provide a container solution for building confidential cleanroom solutions. ++[Confidential Virtual Machines (VMs)](./confidential-vm-overview.md) provide a VM platform for confidential cleanroom solutions. ++[Azure SQL AE in secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) provides a platform service for encrypting data and queries in SQL that can be used in multi-party data analytics and confidential cleanrooms. ++[Confidential Consortium Framework](https://ccf.microsoft.com/) is an open-source framework for building highly available stateful services that use centralized compute for ease of use and performance, while providing decentralized trust. It enables multiple parties to execute auditable compute over confidential data without trusting each other or a privileged operator. ++### ACC partner solutions that enable confidential cleanrooms +Use a partner that has built a multi-party data analytics solution on top of the Azure confidential computing platform. ++- [**Anjuna**](https://www.anjuna.io/use-case-solutions) provides a confidential computing platform to enable various use cases, including secure clean rooms, for organizations to share data for joint analysis, such as calculating credit risk scores or developing machine learning models, without exposing sensitive information. +- [**BeeKeeperAI**](https://www.beekeeperai.com/) enables healthcare AI through a secure collaboration platform for algorithm owners and data stewards. BeeKeeperAI™ uses privacy-preserving analytics on multi-institutional sources of protected data in a confidential computing environment. The solution supports end-to-end encryption, secure computing enclaves, and Intel's latest SGX-enabled processors to protect the data and the algorithm IP. +- [**Decentriq**](https://www.decentriq.com/) provides Software as a Service (SaaS) data clean rooms to enable companies to collaborate with other organizations on their most sensitive datasets and create value for their clients. The technologies help prevent anyone, including Decentriq, from seeing the sensitive data. +- [**Fortanix**](https://www.fortanix.com/platform/confidential-ai) provides a confidential computing platform that can enable confidential AI, including multiple organizations collaborating for multi-party analytics. +- [**Mithril Security**](https://www.mithrilsecurity.io/) provides tooling to help SaaS vendors serve AI models inside secure enclaves, providing an on-premises level of security and control to data owners. Data owners can use their SaaS AI solutions while remaining compliant and in control of their data. +- [**Opaque**](https://opaque.co/) provides a confidential computing platform for collaborative analytics and AI, giving the ability to perform collaborative scalable analytics while protecting data end-to-end and enabling organizations to comply with legal and regulatory mandates. +- [**SafeLiShare**](https://safelishare.com/solution/encrypted-data-clean-room/) provides policy-driven encrypted data clean rooms where access to data is auditable, trackable, and visible, while keeping data protected during multi-party data sharing. |
confidential-computing | Tdx Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/tdx-confidential-vm-overview.md | + + Title: DCesv5 and ECesv5 series confidential VMs +description: Learn about Azure DCesv5 and ECesv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements. +++++ Last updated : 4/25/2023+++# DCesv5 and ECesv5 series confidential VMs ++Starting with the 4th Gen Intel® Xeon® Scalable processors, Azure has begun supporting VMs backed by an all-new hardware-based Trusted Execution Environment called [Intel® Trust Domain Extensions (TDX)](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html#inpage-nav-2). Organizations can use these VMs to seamlessly bring confidential workloads to the cloud without any code changes to their applications. ++Intel TDX helps harden the virtualized environment to deny the hypervisor and other host management code access to VM memory and state, including the cloud operator. Intel TDX helps assure workload integrity and confidentiality by mitigating a wide range of software and hardware attacks, including intrusion or inspection by software running in other VMs. ++> [!IMPORTANT] +> DCesv5 and ECesv5 are now available in preview. Customers can sign up [today](https://aka.ms/TDX-signup). ++## Benefits ++Some of the benefits of Confidential VMs with Intel TDX include: ++- Support for general-purpose and memory-optimized virtual machines. +- Improved performance for compute, memory, IO, and network-intensive workloads. +- Ability to retrieve raw hardware evidence and submit it to an attestation provider for judgment, including open-sourcing our client application. +- Support for [Microsoft Azure Attestation](https://learn.microsoft.com/azure/attestation) (coming soon) backed by high-availability zonal capabilities and disaster recovery capabilities. +- Support for operator-independent remote attestation with [Intel Project Amber](http://projectamber.intel.com/). +- Support for Ubuntu 22.04, SUSE Linux Enterprise Server 15 SP5, and SUSE Linux Enterprise Server for SAP 15 SP5. ++## See also ++- [Read our product announcement](https://aka.ms/tdx-blog) +- [Try Ubuntu confidential VMs with Intel TDX today: limited preview now available on Azure](https://canonical.com/blog/ubuntu-confidential-vms-intel-tdx-microsoft-azure-confidential-computing) |
cosmos-db | How To Configure Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md | description: Learn how to set up Azure Private Link to access an Azure Cosmos DB Previously updated : 03/03/2023 Last updated : 04/24/2023 When you have an approved Private Link for an Azure Cosmos DB account, in the Az ## <a id="private-zone-name-mapping"></a>API types and private zone names -The following table shows the mapping between different Azure Cosmos DB account API types, supported subresources, and the corresponding private zone names. You can also access the Gremlin and API for Table accounts through the API for NoSQL, so there are two entries for these APIs. There's also an extra entry for the API for NoSQL for accounts using the [dedicated gateway](./dedicated-gateway.md). +Review [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md) for a more detailed explanation of private zones and DNS configuration for private endpoints. The following table shows the mapping between different Azure Cosmos DB account API types, supported subresources, and the corresponding private zone names. You can also access the Gremlin and API for Table accounts through the API for NoSQL, so there are two entries for these APIs. There's also an extra entry for the API for NoSQL for accounts using the [dedicated gateway](./dedicated-gateway.md). |Azure Cosmos DB account API type |Supported subresources or group IDs |Private zone name | |||| |
cosmos-db | Feature Support 36 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md | Azure Cosmos DB for MongoDB supports the following database commands: | `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |-| `Hashed Index` | Yes | +| `Hashed Index` | No | ### Index properties |
cosmos-db | Feature Support 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md | We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure | `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |-| `Hashed Index` | Yes | +| `Hashed Index` | No | ### Index properties |
cosmos-db | Feature Support 42 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md | We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure | `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |-| `Hashed Index` | Yes | +| `Hashed Index` | No | ### Index properties |
cosmos-db | Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md | Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope | `Multikey Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `Text Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | | `Geospatial Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |-| `Hashed Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | +| `Hashed Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | ### Index properties |
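All four corrected tables above now list `Hashed Index` as unsupported. For context, this is the standard MongoDB hashed-index syntax those rows refer to; against Azure Cosmos DB for MongoDB (RU and vCore) you should expect the hashed form to be rejected, so use a regular single-field index instead. The collection and field names below are illustrative.

```javascript
// mongosh examples, illustrative names only.
db.orders.createIndex({ customerId: "hashed" }); // hashed index - not supported per the tables above

db.orders.createIndex({ customerId: 1 });        // regular single-field index - supported
```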
cosmos-db | Tune Connection Configurations Java Sdk V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-java-sdk-v4.md | As a first step, use the following recommended configuration settings. The | maxConnectionsPerEndpoint | "130" | "130" | This represents the upper bound size of the *connection pool* for an endpoint/backend node (representing a replica). SDK creates connections to endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will create maximum 130 connections to an endpoint/backend node. (NOTE: SDK doesn't create these 130 connections upfront). | | maxRequestsPerConnection | "30" | "30" | This represents the upper bound size of the maximum number of requests that can be queued on a *single connection* for a specific endpoint/backend node (representing a replica). SDK queues requests to a single connection to an endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will queue maximum 30 requests to a single connection for a specific endpoint/backend node. (NOTE: SDK doesn't queue these 30 requests to a single connection upfront). | | connectTimeout | "PT5S" | "~PT1S" | This represents the connection establishment timeout duration for a *single connection* to be established with an endpoint/backend node. By default SDK will wait for maximum 5 seconds for connection establishment before throwing an error. TCP connection establishment uses [multi-step handshake](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation) which increases latency of the connection establishment time, hence, customers are recommended to set this value according to their network bandwidth and environment settings. NOTE: This recommendation of ~PT1S is only for applications deployed in colocated regions of their Cosmos DB accounts. |-| networkRequestTimeout | "PT5S" | "PT5S" | This represents the network timeout duration for a *single request*. SDK will wait maximum for this duration to consume a service response after the request has been written to the network connection. SDK only allows values between 5 seconds (min) and 10 seconds (max). Setting a value too high can result in fewer retries and reduce chances of success by retries. | +| networkRequestTimeout | "PT5S" | "PT5S" | This represents the network timeout duration for a *single request*. The SDK waits at most this duration to consume a service response after the request has been written to the network connection. The SDK only allows values between 1 second (min) and 10 seconds (max). Setting a value too high can result in fewer retries and reduce the chances of success from retries. | ### Gateway Connection mode |
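As a sketch of how the recommended direct-mode values in the table above map onto the Java SDK v4 builder; the endpoint and key are placeholders:

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.DirectConnectionConfig;
import java.time.Duration;

public class DirectModeTuning {
    public static void main(String[] args) {
        // Apply the recommended values from the table above. The ~PT1S connectTimeout
        // is the recommendation only for apps colocated with their Azure Cosmos DB account.
        DirectConnectionConfig directConfig = DirectConnectionConfig.getDefaultConfig()
            .setMaxConnectionsPerEndpoint(130)
            .setMaxRequestsPerConnection(30)
            .setConnectTimeout(Duration.ofSeconds(1))
            .setNetworkRequestTimeout(Duration.ofSeconds(5));

        CosmosAsyncClient client = new CosmosClientBuilder()
            .endpoint("<account-endpoint>") // placeholder
            .key("<account-key>")           // placeholder
            .directMode(directConfig)
            .buildAsyncClient();
    }
}
```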
cost-management-billing | Direct Ea Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md | Title: EA Billing administration on the Azure portal description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 04/18/2023 Last updated : 04/25/2023 This article explains the common tasks that an Enterprise Agreement (EA) adminis > As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal. > > This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.+> +> As of April 24, 2023, EA customers can no longer manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage them from the [Azure Government portal](https://portal.azure.us). ## Manage your enrollment |
cost-management-billing | Understand Ea Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md | The following sections describe the limitations and capabilities of each role. - ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.-- ⁶ The Enterprise Administrator (read only) role doesn't allow reservation purchases. However, if the EA Admin (read only) is also a subscription owner or subscription reservation purchaser, they can purchase a reservation.+- ⁶ A subscription owner or reservation purchaser may purchase and manage reservations and savings plans within the subscription, but only if permitted by the reservation purchase enabled flag. Enterprise administrators may purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) may view all purchased reservations and savings plans. Neither EA administrator role is governed by the reservation purchase enabled flag. Although the Enterprise Administrator (read-only) role itself doesn't permit purchases, a user who holds that role along with a subscription owner or reservation purchaser permission may purchase reservations and savings plans even if the reservation purchase enabled flag is set to false. ## Add a new enterprise administrator |
cost-management-billing | Tutorial Azure Hybrid Benefits Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/tutorial-azure-hybrid-benefits-sql.md | Title: Tutorial - Optimize centrally managed Azure Hybrid Benefit for SQL Server description: This tutorial guides you through proactively assigning SQL Server licenses in Azure to manage and optimize Azure Hybrid Benefit. Previously updated : 04/20/2022 Last updated : 04/25/2023 The preceding section discusses ongoing monitoring. We also recommend that you e - Monitor usage and adjust on the fly, as needed. - Repeat the process every year or at whatever frequency best suits your needs. +### License assignment review date ++After you assign licenses and set a review date, Microsoft later sends you email notifications to let you know that the license assignment will expire. ++Email notifications are sent: ++- 90 days before expiration +- 30 days before expiration +- 7 days before expiration ++No notification is sent on the review date. The license assignment becomes inactive and no longer applies 90 days after expiration. + ## Example walkthrough In the following example, assume that you're the billing administrator for the Contoso Insurance company. You manage Contoso's Azure Hybrid Benefit for SQL Server. |
data-factory | Airflow Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-pricing.md | Managed Airflow supports either small (D2v4) or large (D4v4) node sizing. Small ## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Changing password for Airflow environments](password-change-airflow.md) |
data-factory | Concept Managed Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md | -[Apache Airflow](https://airflow.apache.org) is an open-source platform used to programmatically create, schedule, and monitor complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. Airflow enables you to execute these DAGs on a schedule or in response to an event, monitor the progress of workflows, and provide visibility into the state of each task. It is widely used in data engineering and data science to orchestrate data pipelines, and is known for its flexibility, extensibility, and ease of use. +[Apache Airflow](https://airflow.apache.org) is an open-source platform used to programmatically create, schedule, and monitor complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. Airflow enables you to execute these DAGs on a schedule or in response to an event, monitor the progress of workflows, and provide visibility into the state of each task. It's widely used in data engineering and data science to orchestrate data pipelines, and is known for its flexibility, extensibility, and ease of use. :::image type="content" source="media/concept-managed-airflow/data-integration.png" alt-text="Screenshot shows data integration."::: You can install any provider package by editing the airflow environment from the ## Limitations -* Managed Airflow in other regions will be available by GA (Tentative GA is Q2 2023 ). +* Managed Airflow in other regions will be available at GA. * Data sources connecting through Airflow should be publicly accessible. -* Blob Storage behind VNet are not supported during the public preview (Tentative GA is Q2 2023 +* Blob Storage behind a VNet isn't supported during the public preview. * DAGs that are inside Blob Storage in a VNet or behind a firewall are currently not supported.-* Azure Key Vault is not supported in LinkedServices to import dags.(Tentative GA is Q2 2023) +* Azure Key Vault isn't supported in LinkedServices to import DAGs. * Airflow officially supports Blob Storage and ADLS, with some limitations. ## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md) - [How to change the password for Managed Airflow environments](password-change-airflow.md) |
data-factory | Format Delta | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md | Delta will only read 2 partitions where **part_col == 5 and 8** from the target In the Settings tab, you'll find three more options to optimize the delta sink transformation. -* When **Merge schema** option is enabled, any columns that are present in the previous stream, but not in the Delta table, are automatically added on to the end of the schema. +* When the **Merge schema** option is enabled, it allows schema evolution; that is, any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods. * When **Auto compact** is enabled, after an individual write, transformation checks if files can further be compacted, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1GB) to further compact files for partitions that have the most number of small files. Auto compaction helps in coalescing a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table, and writes a new file containing the data of several previous files in a compact compressed form. |
data-factory | How Does Managed Airflow Work | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md | If you're using Airflow version 1.x, delete DAGs that are deployed on any Airflo ## Next steps -* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) -* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) -* [Managed Airflow pricing](airflow-pricing.md) -* [How to change the password for Managed Airflow environments](password-change-airflow.md) +- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) +- [Managed Airflow pricing](airflow-pricing.md) +- [How to change the password for Managed Airflow environments](password-change-airflow.md) |
data-factory | Password Change Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/password-change-airflow.md | We recommend using **Azure AD** authentication in Managed Airflow environments. ## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md) |
data-factory | Tutorial Refresh Power Bi Dataset With Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-refresh-power-bi-dataset-with-airflow.md | - Title: Refresh a Power BI dataset with Managed Airflow -description: This tutorial provides step-by-step instructions for refreshing a Power BI dataset with Managed Airflow. ---- Previously updated : 01/24/2023----# Refresh a Power BI dataset with Managed Airflow ---> [!NOTE] -> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages. --This tutorial shows you how to refresh a Power BI dataset with Managed Airflow in Azure Data Factory. --## Prerequisites --* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. -* **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.* -* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example ΓÇô contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service PrincipalΓÇÖs Client ID and Client Secret (API Key). --## Steps --1. Create a new Python file **pbi-dataset-refresh.py** with the below contents: - ```python - from airflow import DAG - from airflow.operators.python_operator import PythonOperator - from datetime import datetime, timedelta - from powerbi.datasets import Datasets -- # Default arguments for the DAG - default_args = { - 'owner': 'me', - 'start_date': datetime(2022, 1, 1), - 'depends_on_past': False, - 'retries': 1, - 'retry_delay': timedelta(minutes=5), - } -- # Create the DAG - dag = DAG( - 'refresh_power_bi_dataset', - default_args=default_args, - schedule_interval=timedelta(hours=1), - ) -- # Define a function to refresh the dataset - def refresh_dataset(**kwargs): - # Create a Power BI client - datasets = Datasets(client_id='your_client_id', - client_secret='your_client_secret', - tenant_id='your_tenant_id') - - # Refresh the dataset - dataset_name = 'your_dataset_name' - datasets.refresh(dataset_name) - print(f'Successfully refreshed dataset: {dataset_name}') -- # Create a PythonOperator to run the dataset refresh - refresh_dataset_operator = PythonOperator( - task_id='refresh_dataset', - python_callable=refresh_dataset, - provide_context=True, - dag=dag, - ) -- refresh_dataset_operator - ``` -- You will have to fill in your **client_id**, **client_secret**, **tenant_id**, and **dataset_name** with your own values. -- Also, you will need to install the **powerbi** python package to use the above code using Managed Airflow requirements. Edit a Managed Airflow environment and add the **powerbi** python package under **Airflow requirements**. --1. Upload the **pbi-dataset-refresh.py** file to the blob storage within a folder named **DAG**. -1. [Import the **DAG** folder into your Airflow environment](). 
If you do not have one, [create a new one](). - :::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected."::: --## Next Steps --* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) -* [Managed Airflow pricing](airflow-pricing.md) -* [Changing password for Managed Airflow environments](password-change-airflow.md) |
data-factory | Tutorial Run Existing Pipeline With Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md | Data Factory pipelines provide 100+ data source connectors that provide scalable * **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. * **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.*-* **Azure Data Factory pipeline**. You can follow any of the tutorials and create a new data factory pipeline in case you do not already have one, or create one with one click in [Get started and try out your first data factory pipeline](quickstart-get-started.md). -* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example - contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service Principal's Client ID and Client Secret (API Key). +* **Azure Data Factory pipeline**. You can follow any of the tutorials and create a new data factory pipeline in case you don't already have one, or create one with one selection in [Get started and try out your first data factory pipeline](quickstart-get-started.md). +* **Set up a Service Principal**. You'll need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (for example, a contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You'll need to get the Service Principal's Client ID and Client Secret (API Key). ## Steps Data Factory pipelines provide 100+ data source connectors that provide scalable # run_pipeline2 >> pipeline_run_sensor ``` - You will have to create the connection using the Airflow UI (Admin -> Connections -> '+' -> Choose 'Connection type' as 'Azure Data Factory', then fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**. + You'll have to create the connection using the Airflow UI (Admin -> Connections -> '+'). Choose 'Connection type' as 'Azure Data Factory', then fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**. 1. Upload the **adf.py** file to your blob storage within a folder called **DAGS**.-1. [Import the **DAGS** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). If you do not have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment) +1. [Import the **DAGS** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). 
If you don't have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment). :::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected."::: ## Next steps -* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) -* [Managed Airflow pricing](airflow-pricing.md) -* [Changing password for Managed Airflow environments](password-change-airflow.md) +- [Managed Airflow pricing](airflow-pricing.md) +- [Changing password for Managed Airflow environments](password-change-airflow.md) |
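The **adf.py** DAG is only excerpted above. As a rough sketch of the pattern the tutorial describes, the following Python DAG uses the `apache-airflow-providers-microsoft-azure` package; the connection ID `azure_data_factory_conn` and the pipeline name are placeholder assumptions, so match them to the connection you created in the Airflow UI and to your own pipeline.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import XComArg
from airflow.providers.microsoft.azure.operators.data_factory import (
    AzureDataFactoryRunPipelineOperator,
)
from airflow.providers.microsoft.azure.sensors.data_factory import (
    AzureDataFactoryPipelineRunStatusSensor,
)

with DAG(
    dag_id="adf_run_existing_pipeline",
    start_date=datetime(2023, 4, 1),
    schedule_interval=None,  # trigger manually from the Airflow UI
    catchup=False,
    default_args={
        "retries": 1,
        "retry_delay": timedelta(minutes=3),
        # Name of the connection created in the Airflow UI (Admin -> Connections)
        "azure_data_factory_conn_id": "azure_data_factory_conn",
    },
) as dag:
    # Kick off the Data Factory pipeline without holding the worker slot
    run_pipeline = AzureDataFactoryRunPipelineOperator(
        task_id="run_pipeline",
        pipeline_name="my-existing-pipeline",  # placeholder: your pipeline's name
        wait_for_termination=False,
    )

    # Poll the run ID the operator pushed to XCom until it reaches a terminal state
    pipeline_run_sensor = AzureDataFactoryPipelineRunStatusSensor(
        task_id="pipeline_run_sensor",
        run_id=XComArg(run_pipeline, key="run_id"),
        poke_interval=30,
    )

    run_pipeline >> pipeline_run_sensor
```

Splitting the work between a non-blocking operator and a status sensor keeps the worker slot free while the Data Factory pipeline runs.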
deployment-environments | Concept Common Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-common-components.md | |
deployment-environments | Concept Environments Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md | |
deployment-environments | Concept Environments Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-scenarios.md | |
deployment-environments | Configure Catalog Item | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md | |
deployment-environments | How To Configure Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md | |
deployment-environments | How To Configure Deployment Environments User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-deployment-environments-user.md | |
deployment-environments | How To Configure Devcenter Environment Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-devcenter-environment-types.md | |
deployment-environments | How To Configure Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md | |
deployment-environments | How To Configure Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-admin.md | |
deployment-environments | How To Configure Project Environment Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md | |
deployment-environments | How To Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md | |
deployment-environments | How To Install Devcenter Cli Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md | |
deployment-environments | How To Manage Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md | |
deployment-environments | Overview What Is Azure Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md | |
deployment-environments | Quickstart Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md | |
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | |
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | |
dev-box | Concept Common Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-common-components.md | |
dev-box | Concept Dev Box Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md | |
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | |
dev-box | How To Configure Network Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md | |
dev-box | How To Configure Stop Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md | |
dev-box | How To Create Dev Boxes Developer Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md | |
dev-box | How To Customize Devbox Azure Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md | |
dev-box | How To Dev Box User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md | |
dev-box | How To Get Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md | |
dev-box | How To Install Dev Box Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md | |
dev-box | How To Manage Dev Box Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md | |
dev-box | How To Manage Dev Box Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md | |
dev-box | How To Manage Dev Box Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md | |
dev-box | How To Manage Dev Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md | |
dev-box | How To Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md | |
dev-box | Overview What Is Microsoft Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md | |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | |
dev-box | Quickstart Create Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md | |
dev-box | Tutorial Connect To Dev Box With Remote Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md | |
devtest-labs | Devtest Lab Auto Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md | description: Learn how to set auto shutdown schedules and policies for Azure Dev Previously updated : 12/18/2021 Last updated : 04/24/2023 # Configure auto shutdown for labs and VMs in DevTest Labs After you update auto shutdown settings, you can see the activity logged in the ## Auto shutdown notifications -When you enable notifications in auto shutdown configuration, lab users receive a notification 30 minutes before auto shutdown if any of their VMs will be affected. The notification gives users a chance to save their work before the shutdown. If the auto shutdown settings specify an email address, the notification sends to that email address. If the settings specify a webhook, the notification sends to the webhook URL. +When you enable notifications in auto shutdown configuration, lab users receive a notification 30 minutes before auto shutdown affects any of their VMs. The notification gives users a chance to save their work before the shutdown. If the auto shutdown settings specify an email address, the notification is sent to that email address. If the settings specify a webhook, the notification is sent to the webhook URL. The notification can also provide links that allow the following actions for each VM if someone needs to keep working: To get started, create a logic app in Azure with the following steps: 1. At the top of the **Logic apps** page, select **Add**. 1. On the **Create Logic App** page:+ + |Name |Value | + ||| + |Subscription |Select your Azure Subscription. | + |Resource group |Select a resource group or create a new one. | + |Logic app name |Enter a descriptive name for your logic app. | + |Publish | Workflow | + |Region |Select a region near you or near other services your logic app accesses. | + |Plan type |Consumption. A consumption plan allows you to use the logic app designer to create your app. | + |Windows Plan |Accept the default App Service Plan (ASP). | + |Pricing plan |Accept the default Workflow Standard WS1 (210 total ACU, 3.5 GB memory, 1 vCPU). | + |Zone redundancy |Accept the default: Disabled. | - - Select your Azure **Subscription**. - - Select a **Resource Group** or create a new one. - - Enter a **Logic App name**. - - Select a **Region** for the logic app. - - Select a **Plan type** for the logic app. - - Select a **Windows Plan** for the logic app. - - Select a **Pricing plan** for the logic app. - - Enabled **Zone redundancy** if necessary. :::image type="content" source="media/devtest-lab-auto-shutdown/new-logic-app-page.png" alt-text="Screenshot showing the Create Logic App page."::: |
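Because auto shutdown notifications can target a webhook URL, a receiving endpoint only needs to accept the JSON POST and return a 200 response. The sketch below uses only the Python standard library and is a hypothetical receiver; the payload field names (`vmName`, `labName`, `skipUrl`, `delayUrl60`) are assumptions for illustration and should be confirmed against a real notification.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShutdownNotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        # Field names below are assumptions for illustration only;
        # inspect a real notification payload to confirm the schema.
        vm_name = payload.get("vmName", "<unknown VM>")
        lab_name = payload.get("labName", "<unknown lab>")
        print(f"Auto shutdown pending for {vm_name} in lab {lab_name}")
        print(f"  skip link: {payload.get('skipUrl')}")
        print(f"  delay 60 minutes link: {payload.get('delayUrl60')}")

        # Return 200 so the notification delivery is treated as successful
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ShutdownNotificationHandler).serve_forever()
```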
devtest-labs | Devtest Lab Integrate Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md | $labVmRgName = (Get-AzResource -Id $labVmComputeId).ResourceGroupName $labVmName = (Get-AzResource -Id $labVmId).Name # Get lab VM public IP address-$labVMIpAddress = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName - -Name $labVmName).IpAddress +$labVMIpAddress = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).IpAddress # Get lab VM FQDN-$labVMFqdn = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName - -Name $labVmName).DnsSettings.Fqdn +$labVMFqdn = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).DnsSettings.Fqdn # Set a variable labVmRgName to store the lab VM resource group name Write-Host "##vso[task.setvariable variable=labVmRgName;]$labVmRgName" The next step creates a golden image VM to use for future deployments. This step - **Virtual Machine Name**: the variable you specified for your virtual machine name: *$vmName*. - **Template**: Browse to and select the template file you checked in to your project repository. - **Parameters File**: If you checked a parameters file into your repository, browse to and select it.- - **Parameter Overrides**: Enter `-newVMName '$(vmName)' -userName '$(userName)' -password (ConvertTo-SecureString -String '$(password)' -AsPlainText -Force)`. - - Drop down **Output Variables**, and under **Reference name**, enter the variable for the created lab VM ID. If you use the default *labVmId*, you can refer to the variable in subsequent tasks as **$(labVmId)**. + - **Parameter Overrides**: Enter `-newVMName '$(vmName)' -userName '$(userName)' -password '$(password)'`. + - Drop down **Output Variables**, and under **Reference name**, enter the variable for the created lab VM ID. Let's enter *vm* for **Reference name** for simplicity. **labVmId** will be an attribute of this variable and will be referred to later as *$vm.labVmId*. If you use any other name, then remember to use it accordingly in the subsequent tasks. - You can create a name other than the default, but remember to use the correct name in subsequent tasks. You can write the Lab VM ID in the following form: `/subscriptions/{subscription Id}/resourceGroups/{resource group Name}/providers/Microsoft.DevTestLab/labs/{lab name}/virtualMachines/{vmName}`. + The lab VM ID is in the following form: `/subscriptions/{subscription Id}/resourceGroups/{resource group Name}/providers/Microsoft.DevTestLab/labs/{lab name}/virtualMachines/{vmName}`. ### Collect the details of the DevTest Labs VM Next, the pipeline runs the script you created to collect the details of the Dev - **Azure Subscription**: Select your service connection or subscription. - **Script Type**: Select **Script File Path**. - **Script Path**: Browse to and select the PowerShell script that you checked in to your source code repository. You can use built-in properties to simplify the path, for example: `$(System.DefaultWorkingDirectory)/Scripts/GetLabVMParams.ps1`. - **Script Arguments**: Enter the value as **-labVmId $(vm.labVmId)**. The script collects the required values and stores them in environment variables within the release pipeline, so you can refer to them in later steps. The next task creates an image of the newly deployed VM in your lab. You can use - **Lab**: Select your lab. 
- **Custom Image Name**: Enter a name for the custom image. - **Description**: Enter an optional description to make it easy to select the correct image.- - **Source Lab VM**: The source **labVmId**. If you changed the default name of the **labVmId** variable, enter it here. The default value is **$(labVmId)**. + - **Source Lab VM**: The source **labVmId**. Enter the value as **$(vm.labVmId)**. - **Output Variables**: You can edit the name of the default Custom Image ID variable if necessary. ### Deploy your app to the DevTest Labs VM (optional) The final task is to delete the VM that you deployed in your lab. You'd ordinari 1. Configure the task as follows: - **Azure RM Subscription**: Select your service connection or subscription. - **Lab**: Select your lab.- - **Virtual Machine**: Select the VM you want to delete. + - **Virtual Machine**: Enter the value as **$(vm.labVmId)**. - **Output Variables**: Under **Reference name**, if you changed the default name of the **labVmId** variable, enter it here. The default value is **$(labVmId)**. ### Save the release pipeline |
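The `##vso[task.setvariable]` command shown in the PowerShell snippet earlier is a plain stdout protocol, so a script in any language can set release variables the same way. A minimal Python equivalent, where the variable value is a placeholder:

```python
# Azure Pipelines scans stdout for "##vso" logging commands; emitting a
# task.setvariable command creates or updates a pipeline variable that
# later tasks can read, for example as $(labVmRgName).
lab_vm_rg_name = "my-lab-rg"  # placeholder: normally derived from the lab VM ID

print(f"##vso[task.setvariable variable=labVmRgName]{lab_vm_rg_name}")
```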
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up ## April 2023 +### Enabled monitoring of OSDU service logs ++Now you can configure diagnostic settings of your Azure Data Manager for Energy Preview to export OSDU service logs to Azure Monitor. You can access, query, and analyze the logs in a Log Analytics workspace. You can archive them in a storage account for later use. + ### Monitoring and investigating actions with Audit logs Knowing who is taking what action on which item is critical in helping organizations meet regulatory compliance and record management requirements. Azure Data Manager for Energy captures audit logs for data plane APIs of OSDU services and audit events listed [here](https://community.opengroup.org/osdu/documentation/-/wikis/Releases/R3.0/GCP/GCP-Operation/Logging/Audit-Logging-Status). Learn more about [audit logging in Azure Data Manager for Energy](how-to-manage-audit-logs.md). |
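Once diagnostic settings route the logs to a Log Analytics workspace, they can also be queried programmatically. A minimal sketch using the `azure-monitor-query` and `azure-identity` Python packages follows; the workspace ID is a placeholder, and the `AzureDiagnostics` table is an assumption, so check your workspace's table list for the table the service actually writes to.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# AzureDiagnostics is the generic diagnostics table; the exact table the
# service writes to may differ -- check your workspace before relying on it.
QUERY = """
AzureDiagnostics
| where TimeGenerated > ago(1d)
| take 20
"""

response = client.query_workspace(
    workspace_id="<your-workspace-guid>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```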
governance | Assign Policy Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md | If there are any existing resources that aren't compliant with this new assignme under **Non-compliant resources**. For more information, see-[How compliance works](./how-to/get-compliance-data.md#how-compliance-works). +[How compliance works](./concepts/compliance-states.md). ## Clean up resources |
governance | Assign Policy Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md | -template) to create a policy assignment to identify virtual machines that aren't using managed -disks. At the end of this process, you'll successfully identify virtual machines that aren't using -managed disks. They're _non-compliant_ with the policy assignment. +template) to create a policy assignment that identifies virtual machines that aren't using managed +disks, and flags them as _non-compliant_ with the policy assignment. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)] The resource defined in the template is: | Resource group | Select **Create new**, specify a name, and then select **OK**. In the screenshot, the resource group name is _mypolicyquickstart\<Date in MMDD\>rg_. | | Location | Select a region. For example, **Central US**. | | Policy Assignment Name | Specify a policy assignment name. You can use the policy definition display name if you want. For example, _Audit VMs that do not use managed disks_. |- | Rg Name | Specify a resource group name where you want to assign the policy to. In this quickstart, use the default value **[resourceGroup().name]**. **[resourceGroup()](../../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)** is a template function that retrieves the resource group. | + | Resource Group Name | Specify the resource group name where you want to assign the policy. In this quickstart, use the default value **[resourceGroup().name]**. **[resourceGroup()](../../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)** is a template function that retrieves the resource group. | | Policy Definition ID | Specify **/providers/Microsoft.Authorization/policyDefinitions/0a914e76-4921-4c19-b460-a2d36003525a**. | | I agree to the terms and conditions stated above | (Select) | If there are any existing resources that aren't compliant with this new assignme under **Non-compliant resources**. For more information, see-[How compliance works](./how-to/get-compliance-data.md#how-compliance-works). +[How compliance works](./concepts/compliance-states.md). ## Clean up resources |
governance | Compliance States | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md | + + Title: Azure Policy compliance states +description: This article describes the concept of compliance states in Azure Policy. Last updated : 04/05/2023++++# Azure Policy compliance states ++## How compliance works ++When initiative or policy definitions are assigned, Azure Policy determines which resources are [applicable](./policy-applicability.md), and then evaluates those that haven't been [excluded](./assignment-structure.md#excluded-scopes) or [exempted](./exemption-structure.md). Evaluation yields **compliance states** based on conditions in the policy rule and each resource's adherence to those requirements. ++## Available compliance states ++### Non-compliant ++Policy assignments with `audit`, `auditIfNotExists`, or `modify` effects are considered non-compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **TRUE**. ++Policy assignments with `append`, `deny`, and `deployIfNotExists` effects are considered non-compliant for _existing_ resources when the conditions of the policy rule evaluate to **TRUE**. _New_ and _updated_ resources are automatically remediated or denied at request time to enforce compliance. When a previously existing non-compliant resource is updated, the compliance state remains non-compliant until the resource deployment and Policy evaluation complete. ++> [!NOTE] +> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the +> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers +> evaluation of the existence condition for the related resources. ++Policy assignments with `manual` effects are considered non-compliant under two circumstances: +1. The policy definition has a default compliance state of non-compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise. +1. The resource has been attested as non-compliant. ++To determine +the reason a resource is non-compliant or to find the change responsible, see +[Determine non-compliance](../how-to/determine-non-compliance.md). To [remediate](./remediation-structure.md) non-compliant resources for `deployIfNotExists` and `modify` policies, see [Remediate non-compliant resources with Azure Policy](../how-to/remediate-resources.md). ++### Compliant ++Policy assignments with `append`, `audit`, `auditIfNotExists`, `deny`, `deployIfNotExists`, or `modify` effects are considered compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **FALSE**. ++Policy assignments with `manual` effects are considered compliant under two circumstances: +1. The policy definition has a default compliance state of compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise. +1. The resource has been attested as compliant. ++### Error ++The error compliance state is given to policy assignments that generate a system error, such as a template or evaluation error. ++### Conflicting ++A policy assignment is considered conflicting when two or more policy assignments exist in the same scope with contradicting or conflicting rules. For example, two definitions that append the same tag with different values. 
++### Exempt ++An applicable resource has a compliance state of exempt for a policy assignment when it is in the scope of an [exemption](./exemption-structure.md). ++> [!NOTE] +> _Exempt_ is different from _excluded_. For more details, see [scope](./scope.md). ++### Unknown (preview) ++ Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect. ++### Not registered ++This compliance state is visible in the portal when the Azure Policy Resource Provider hasn't been registered, or when the account logged in doesn't have permission to read compliance data. ++> [!NOTE] +> If compliance state is being reported as **Not registered**, verify that the +> **Microsoft.PolicyInsights** Resource Provider is registered and that the user has the appropriate Azure role-based access control (Azure RBAC) permissions as described in +> [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy). +> To register Microsoft.PolicyInsights, [follow these steps](../../../azure-resource-manager/management/resource-providers-and-types.md). ++### Not started ++This compliance state indicates that the evaluation cycle hasn't started for the policy or resource. ++## Example ++Now that you have an understanding of what compliance states exist and what each one means, let's look at an example using compliant and non-compliant states. ++Suppose you have a resource group, ContosoRG, with some storage accounts +(highlighted in red) that are exposed to public networks. ++ Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red. ++In this example, you need to be wary of security risks. Assume you assign a policy definition that audits for storage accounts that are exposed to public networks, and that no exemptions are created for this assignment. The policy checks for applicable resources (which includes all storage accounts in the ContosoRG resource group), then evaluates those resources that aren't excluded from evaluation. It audits the three storage accounts exposed to public networks, changing their compliance states to **Non-compliant**. The remainder are marked **Compliant**. ++ Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them. ++## Compliance rollup ++Compliance state is determined per resource, per policy assignment. However, we often need a big-picture view of the state of the environment, which is where aggregate compliance comes into play. 
++There are several ways to view aggregated compliance results in the portal: ++| Aggregate compliance view | Factors determining compliance state | +| | | +| Scope | All policies within the selected scope | +| Initiative | All policies within the initiative | +| Initiative group or control | All policies within the group or control | +| Policy | All applicable resources | +| Resource | All applicable policies | ++### Comparing different compliance states ++So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? Azure Policy ranks each compliance state so that one "wins" over another in this situation. The rank order is: +1. Non-compliant +1. Compliant +1. Error +1. Conflicting +1. Exempted +1. Unknown (preview) ++> [!NOTE] +> [Not started](#not-started) and [not registered](#not-registered) aren't considered in compliance rollup calculations. ++With this ranking, if there are both non-compliant and compliant states, then the rolled up aggregate would be non-compliant, and so on. Let's look at an example: ++Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant to the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource only shows as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, a resource that is non-compliant to at least one applicable policy in the initiative has an overall compliance state of non-compliant, regardless of the remaining applicable policies. ++### Compliance percentage ++The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total resources_. _Total resources_ include **Compliant**, **Non-compliant**, +**Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct +resources that are **Compliant**, **Exempt**, and **Unknown** divided by the sum of all distinct resources. ++```text +overall compliance % = (compliant + exempt + unknown) / (compliant + non-compliant + exempt + conflicting) +``` ++In the image shown, there are 20 distinct resources that are applicable and only one is **Non-compliant**. +The overall resource compliance is 95% (19 out of 20). +++## Next steps ++- Learn how to [get compliance data](../how-to/get-compliance-data.md) +- Learn how to [determine causes of non-compliance](../how-to/determine-non-compliance.md) +- Get compliance data through [ARG query samples](../samples/resource-graph-samples.md) |
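The rollup ranking and the compliance percentage formula above can be expressed directly in code. This short Python sketch restates the article's own rules (it isn't how Azure Policy is implemented) and reproduces the two worked examples: the exempt-plus-nine-compliant rollup and the 19-of-20 (95%) percentage.

```python
# Rank order from the article: lower index wins during aggregation.
RANK = ["non-compliant", "compliant", "error", "conflicting", "exempt", "unknown"]

def rollup(states):
    """Aggregate per-policy states into one state using the documented ranking."""
    applicable = [s for s in states if s in RANK]  # 'not started'/'not registered' are ignored
    return min(applicable, key=RANK.index) if applicable else None

def compliance_percentage(counts):
    """(compliant + exempt + unknown) / (compliant + non-compliant + exempt + conflicting)"""
    c = counts.get
    numerator = c("compliant", 0) + c("exempt", 0) + c("unknown", 0)
    denominator = c("compliant", 0) + c("non-compliant", 0) + c("exempt", 0) + c("conflicting", 0)
    return 100.0 * numerator / denominator if denominator else 100.0

# Exempt from one policy, compliant with the other nine -> rolls up as compliant:
print(rollup(["exempt"] + ["compliant"] * 9))                        # compliant
# 19 of 20 applicable resources compliant -> 95%:
print(compliance_percentage({"compliant": 19, "non-compliant": 1}))  # 95.0
```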
governance | Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md | The following table is a comparison of the scope options: |**Resource Manager object** | - | - | ✔ | |**Requires modifying policy assignment object** | ✔ | ✔ | - | +So how do you choose whether to use an exclusion or exemption? Typically, exclusions are recommended to permanently bypass evaluation for a broad scope, such as a test environment, that doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. + ## Next steps - Learn about the [policy definition structure](./definition-structure.md). |
governance | Get Compliance Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md | wrong location, enforce common and consistent tag usage, or audit existing resou appropriate configurations and settings. In all cases, data is generated by Azure Policy to enable you to understand the compliance state of your environment. +Before reviewing compliance data, it is important to [understand compliance states](../concepts/compliance-states.md) in Azure Policy. + There are several ways to access the compliance information generated by your policy and initiative assignments: - Using the [Azure portal](#portal) - Through [command line](#command-line) scripting+- By viewing [Azure Monitor logs](#azure-monitor-logs) +- Through [Azure Resource Graph](#azure-resource-graph) queries Before looking at the methods to report on compliance, let's look at when compliance information is updated and the frequency and events that trigger an evaluation cycle. -> [!WARNING] -> If compliance state is being reported as **Not registered**, verify that the -> **Microsoft.PolicyInsights** Resource Provider is registered and that the user has the appropriate -> Azure role-based access control (Azure RBAC) permissions as described in -> [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy). - ## Evaluation triggers The results of a completed evaluation cycle are available in the `Microsoft.PolicyInsights` Resource with the status: #### On-demand evaluation scan - Visual Studio Code -The Azure Policy extension for Visual Studio code is capable of running an evaluation scan for a +The Azure Policy extension for Visual Studio Code is capable of running an evaluation scan for a specific resource. This scan is a synchronous process, unlike the Azure PowerShell and REST methods. For details and steps, see [On-demand evaluation with the VS Code extension](./extension-for-vscode.md#on-demand-evaluation-scan). -## How compliance works --When initiative or policy definitions are assigned and evaluated, resulting compliance states are determined based on conditions in the policy rule and resources' adherence to those requirements. --Azure Policy supports the following compliance states: -- Non-compliant-- Compliant-- Conflict-- Exempted-- Unknown (preview)--### Compliant and non-compliant states --In an assignment, a resource is **non-compliant** if it's applicable to the policy assignment and doesn't adhere to conditions in the policy rule. The following table shows how different policy effects work with the condition evaluation for the resulting compliance state: --| Resource State | Effect | Policy Evaluation | Compliance State | -| | | | | -| New or Updated | Audit, Modify, AuditIfNotExist | True | Non-Compliant | -| New or Updated | Audit, Modify, AuditIfNotExist | False | Compliant | -| Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | True | Non-Compliant | -| Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | False | Compliant | --> [!NOTE] -> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the -> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers -> evaluation of the existence condition for the related resources. --#### Example --For example, assume that you have a resource group - ContsoRG, with some storage accounts -(highlighted in red) that are exposed to public networks. 
-- Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red. --In this example, you need to be wary of security risks. Now that you've created a policy assignment, -it's evaluated for all included and non-exempt storage accounts in the ContosoRG resource group. It -audits the three non-compliant storage accounts, changing their states to -**Non-compliant.** -- Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them. --#### Understand non-compliance --When a resource is determined to be **non-compliant**, there are many possible reasons. To determine -the reason a resource is **non-compliant** or to find the change responsible, see -[Determine non-compliance](./determine-non-compliance.md). --### Other compliance states --Besides **Compliant** and **Non-compliant**, policies and resources have four other states: --- **Exempt**: The resource is in scope of an assignment, but has a- [defined exemption](../concepts/exemption-structure.md). -- **Conflicting**: Two or more policy definitions exist with conflicting rules. For example, two- definitions append the same tag with different values. -- **Not started**: The evaluation cycle hasn't started for the policy or resource.-- **Not registered**: The Azure Policy Resource Provider hasn't been registered or the account- logged in doesn't have permission to read compliance data. --Azure Policy relies on several factors to determine whether a resource is considered [applicable](../concepts/policy-applicability.md), then to determine its compliance state. --The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total -resources_. _Total resources_ include **Compliant**, **Non-compliant**, -**Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct -resources that are **Compliant**, **Exempt**, and **Unknown** divided by the sum of all distinct resources. In the -image below, there are 20 distinct resources that are applicable and only one is **Non-compliant**. -The overall resource compliance is 95% (19 out of 20). ---> [!NOTE] -> Regulatory Compliance in Azure Policy is a Preview feature. Compliance properties from SDK and -> pages in portal are different for enabled initiatives. For more information, see -> [Regulatory Compliance](../concepts/regulatory-compliance.md) --### Compliance rollup --There are several ways to view aggregated compliance results: --| Aggregate scope | Factors determining resulting compliance state | -| | | -| Initiative | All policies within | -| Initiative group or control | All policies within | -| Policy | All applicable resources | -| Resource | All applicable policies | --So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? This is done by ranking each compliance state so that one "wins" over another in this situation. The rank order is: -1. Non-compliant -1. Compliant -1. Conflict -1. Exempted -1. Unknown (preview) --This means that if there are both non-compliant and compliant states, the rolled up aggregate would be non-compliant, and so on. Let's look at an example. 
--Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant to the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource will only show as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, if the resource is non-compliant to at least one applicable policy in the initiative, it will have an overall compliance state of non-compliant, regardless of the remaining applicable policies. - ## Portal The Azure portal showcases a graphical experience of visualizing and understanding the state of logs, alerts can be configured to watch for non-compliance. :::image type="content" source="../media/getting-compliance-data/compliance-loganalytics.png" alt-text="Screenshot of Azure Monitor logs showing Azure Policy actions in the AzureActivity table." border="false"::: +## Azure Resource Graph ++Compliance records are stored in Azure Resource Graph (ARG). Data can be exported from ARG queries to form customized dashboards based on the scopes and policies of interest. Review our [sample queries](../samples/resource-graph-samples.md) for exporting compliance data through ARG. + ## Next steps - Review examples at [Azure Policy samples](../samples/index.md). |
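As a sketch of what an ARG export might look like in code, the following uses the `azure-mgmt-resourcegraph` and `azure-identity` Python packages. The subscription ID is a placeholder, and the query mirrors the pattern in the public Azure Policy ARG samples; verify table and column names against the linked sample queries.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# Count compliance states across a subscription. Table and column names
# follow the public Azure Policy samples for Azure Resource Graph.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "policyresources "
        "| where type == 'microsoft.policyinsights/policystates' "
        "| summarize count() by tostring(properties.complianceState)"
    ),
)

result = client.resources(request)
for row in result.data:
    print(row)
```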
hdinsight | Apache Domain Joined Configure Using Azure Adds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md | description: Learn how to set up and configure an HDInsight cluster integrated w Previously updated : 04/01/2022 Last updated : 04/25/2023 # Configure HDInsight clusters for Azure Active Directory integration with Enterprise Security Package |
hdinsight | Apache Hadoop Use Hive Dotnet Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-dotnet-sdk.md | description: Learn how to submit Apache Hadoop jobs to Azure HDInsight Apache Ha Previously updated : 12/24/2019 Last updated : 04/24/2023 # Run Apache Hive queries using HDInsight .NET SDK |
hdinsight | Apache Hadoop Use Sqoop Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-curl.md | Title: Use Curl to export data with Apache Sqoop in Azure HDInsight description: Learn how to remotely submit Apache Sqoop jobs to Azure HDInsight using Curl. Previously updated : 01/06/2020 Last updated : 04/25/2023 # Run Apache Sqoop jobs in HDInsight with Curl |
hdinsight | Using Json In Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/using-json-in-hive.md | description: Learn how to use JSON documents and analyze them by using Apache Hi Previously updated : 04/01/2022 Last updated : 04/24/2023 # Process and analyze JSON documents by using Apache Hive in Azure HDInsight |
hdinsight | Hdinsight Capacity Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-capacity-planning.md | description: Identify key questions for capacity and performance planning of an Previously updated : 09/08/2022 Last updated : 04/25/2023 # Capacity planning for HDInsight clusters You're charged for a cluster's lifetime. If there are only specific times that y Sometimes errors can occur because of the parallel execution of multiple maps and reduce components on a multi-node cluster. To help isolate the issue, try distributed testing. Run multiple jobs concurrently on a single worker node cluster. Then expand this approach to run multiple jobs concurrently on clusters containing more than one node. To create a single-node HDInsight cluster in Azure, use the *`Custom(size, settings, apps)`* option and use a value of 1 for *Number of Worker nodes* in the **Cluster size** section when provisioning a new cluster in the portal. +## View quota management for HDInsight ++View a granular breakdown and categorization of quota at the VM family level, including the current quota and how much quota remains for a region. ++> [!NOTE] +> This feature is currently available on HDInsight 4.x and 5.x for the East US EUAP region. Other regions will follow. ++1. View current quota: ++ See the current quota and how much quota is remaining for a region at a VM family level. + + 1. From the Azure portal, in the top search bar, search and select **Quotas**. + 1. From the Quota page, select **Azure HDInsight**. + + :::image type="content" source="./media/hdinsight-capacity-planning/hdinsight-search-quota.png" alt-text="Screenshot showing how to search quotas." lightbox="./media/hdinsight-capacity-planning/hdinsight-search-quota.png"::: + + 1. From the dropdown box, select your **Subscription** and **Region**. + + :::image type="content" source="./media/hdinsight-capacity-planning/select-cluster-and-region.png" alt-text="Screenshot showing how to select cluster and region for quota allocation." lightbox="./media/hdinsight-capacity-planning/select-cluster-and-region.png"::: ++ :::image type="content" source="./media/hdinsight-capacity-planning/view-and-manage-quota.png" alt-text="Screenshot showing how to view and manage quota." lightbox="./media/hdinsight-capacity-planning/view-and-manage-quota.png"::: + +1. View quota details: + + 1. Select the row for which you want to view quota details. + + :::image type="content" source="./media/hdinsight-capacity-planning/quota-details.png" alt-text="Screenshot showing the quota details." lightbox="./media/hdinsight-capacity-planning/quota-details.png"::: + + ## Quotas For more information on managing subscription quotas, see [Requesting quota increases](quota-increase-request.md). |
hdinsight | Hdinsight Hadoop Manage Ambari | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari.md | description: Learn how to use Apache Ambari UI to monitor and manage HDInsight c Previously updated : 04/01/2022 Last updated : 04/25/2023 # Manage HDInsight clusters by using the Apache Ambari Web UI |
hdinsight | Hdinsight Upload Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upload-data.md | description: Learn how to upload and access data for Apache Hadoop jobs in HDIns Previously updated : 04/27/2020 Last updated : 04/25/2023 # Upload data for Apache Hadoop jobs in HDInsight |
hdinsight | Hdinsight Connect Hive Zeppelin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hdinsight-connect-hive-zeppelin.md | description: In this quickstart, you learn how to use Apache Zeppelin to run Apa Previously updated : 12/28/2012 Last updated : 04/25/2023 #Customer intent: As a Hive user, I want learn Zeppelin so that I can run queries. |
hdinsight | Interactive Query Troubleshoot View Time Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-view-time-out.md | Title: Apache Hive View times out from query result - Azure HDInsight description: Apache Hive View times out when fetching a query result in Azure HDInsight Previously updated : 02/11/2022 Last updated : 04/24/2023 # Scenario: Apache Hive View times out when fetching a query result in Azure HDInsight This article describes troubleshooting steps and possible resolutions for issues ## Issue -When running certain queries from the Apache Hive view, the following error may be encountered: +When you run certain queries from the Apache Hive view, the following error may be encountered: ``` ERROR [ambari-client-thread-1] [HIVE 2.0.0 AUTO_HIVE20_INSTANCE] NonPersistentCursor:131 - Result fetch timed out java.util.concurrent.TimeoutException: deadline passed ## Cause -The Hive View default timeout value may not be suitable for the query you are running. The specified time period is too short for the Hive View to fetch the query result. +The Hive View default timeout value may not be suitable for the query you're running. The specified time period is too short for the Hive View to fetch the query result. ## Resolution The Hive View default timeout value may not be suitable for the query you are ru ``` Confirm the Hive View instance name `AUTO_HIVE20_INSTANCE` by going to YOUR_USERNAME > Manage Ambari > Views. Get the instance name from the Name column. If it doesn't match, then replace this value. **Do not use the URL Name column**. -2. Restart the active Ambari server by running the following. If you get an error message saying it's not the active Ambari server, just ssh into the next headnode and repeat this step. Note down the PID of the current Ambari server process. +2. Restart the active Ambari server by running the following commands. If you get an error message saying it's not the active Ambari server, ssh into the next headnode and repeat this step. Note down the PID of the current Ambari server process. ``` sudo ambari-server status sudo systemctl restart ambari-server ```-3. Confirm Ambari server actually restarted. If you followed the steps, you will notice the PID has changed. +3. Confirm the Ambari server restarted. If you followed the steps, you can see that the PID has changed. ``` sudo ambari-server status ``` ## Notes-If you get a 502 error, then that is coming from the HDI gateway. You can confirm by opening web inspector, go to network tab, then re-submit query. You'll see a request fail, returning a 502 status code, and the time will show 2 mins elapsed. +If you get a 502 error, it's coming from the HDI gateway. You can confirm by opening the web inspector, going to the network tab, and resubmitting the query. You see a request fail, returning a 502 status code, and the time shows two minutes elapsed. -The query is not suited for Hive View. It is recommended that you either try the following instead: +The query isn't suited for Hive View. It's recommended that you try one of the following instead: - Use beeline-- Re-write the query to be more optimal+- Rewrite the query to be more optimal ## Next steps |
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md | Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser > [!Note] > Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/). +## **April 2023** ++**Fixed transient issues associated with loading custom search parameters** +This bug fix addresses an issue where the FHIR service wouldn't load the latest SearchParameter status in the event of a failure. +For more details, visit [#3222](https://github.com/microsoft/fhir-server/pull/3222). + ## **November 2022** **Fixed the Error generated when resource is updated using if-match header and PATCH** |
healthcare-apis | Dicom Cast Access Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-access-request.md | - Title: DICOM access request reference guide - Azure Health Data Services -description: This reference guide provides information about to create an Azure support ticket to request DICOMcast access. ---- Previously updated : 06/03/2022----# DICOMcast access request --This article describes how to request DICOMcast access. --## Create Azure support ticket --To enable DICOMcast for your Azure subscription, please request access for DICOMcast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). --> [!IMPORTANT] -> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket. --### Basics tab --1. In the **Summary** field, enter "Access request for DICOMcast". - - [  ](media/new-support-request-basic-tab.png#lightbox) --1. Select the **Issue type** drop-down list, and then select **Technical**. -1. Select the **Subscription** drop-down list, and then select your Azure subscription. -1. Select the **Service type** drop-down list, and then select **Azure Health Data Services**. -1. Select the **Resource** drop-down list, and then select your resource. -1. Select the **Problem** drop-down list, and then select **DICOM service**. -1. Select the **Problem subtype** drop-down list, and then select **About the DICOM service**. -1. Select **Next Solutions**. -1. From the **Solutions** tab, select **Next Details**. --### Details tab --1. Under the **Problem details** section, select today's date to submit your support request. You may keep the default time as 12:00AM. -- [  ](media/new-support-request-details-tab.png#lightbox) --1. In the **Description** box, ensure to include the Resource IDs of your FHIR service and DICOM service. -- > [!NOTE] - > To obtain your DICOM service and FHIR service resource IDs, select your DICOM service instance in the Azure portal, and select the **Properties** blade that's listed under **Settings**. --1. File upload isn't required, so you may omit this option. -1. Under the **Support method** section, select the **Severity** and the **Preferred contact method** options. -1. Select **Next: Review + Create >>**. -1. In the **Review + create** tab, select **Create** to submit your Azure support ticket. ---## Next steps --This article described the steps for creating an Azure support ticket to request DICOMcast access. For more information about using the DICOM service, see -->[!div class="nextstepaction"] ->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md) --For more information about DICOMcast, see -->[!div class="nextstepaction"] ->[DICOMcast overview](dicom-cast-overview.md) --FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Dicom Cast Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md | +> [!NOTE] +> On **July 31, 2023** DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration). + DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning. ## Architecture DICOM has different date time VR types. Some tags (like Study and Series) have t ## Summary -In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available on demand. To enable DICOMcast for your Azure subscription, please request access for DICOMcast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). For more information about requesting access to DICOMcast, see [DICOMcast request access](dicom-cast-access-request.md). +In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md). > [!IMPORTANT] > Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket. |
healthcare-apis | Dicom Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md | The DICOM service is a managed service within [Azure Health Data Services](../he - **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data. - **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding the list of tags that are already specified within [DICOM Conformance Statement](dicom-services-conformance-statement.md). - **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace.-- **DICOMcast**: Via DICOMcast, DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. This feature is available on demand. To enable DICOMcast for your Azure subscription, please request access for DICOMcast via opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket.+- **DICOMcast**: Via DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. This feature is available as an open-source feature that can be self-hosted in Azure. Learn more about [deploying DICOMcast](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md). - **Region availability**: DICOM service has a wide range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection and is continuously expanding. - **Scalability**: DICOM service is designed out-of-the-box to support different workload levels at a hospital, region, country and global scale without sacrificing any performance spec by using autoscaling features. - **Role-based access**: You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment. |
healthcare-apis | Get Started With Dicom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md | You can find more details on DICOMweb standard APIs and change feed in the [DICO #### DICOMcast -DICOMcast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project, and it's under private preview as a managed service. To enable DICOMcast as a managed service for your Azure subscription, request access by creating an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/) by following the guidance in the article [DICOMcast access request](dicom-cast-access-request.md). +DICOMcast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project. ## Next steps |
healthcare-apis | Deploy Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md | + + Title: Deploy the MedTech service using an Azure Resource Manager template - Azure Health Data Services +description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template. +++++ Last updated : 04/14/2023++++# Quickstart: Deploy the MedTech service using an Azure Resource Manager template ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a [JavaScript Object Notation (JSON)](https://www.json.org/) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. ++In this quickstart, you'll learn how to: ++- Open an ARM template in the Azure portal. +- Configure the ARM template for your deployment. +- Deploy the ARM template. ++> [!TIP] +> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) ++## Prerequisites ++To begin your deployment and complete the quickstart, you must have the following prerequisites: ++- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). ++- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) ++- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). ++When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button. ++## Review the ARM template - Optional ++The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub). ++## Use the Deploy to Azure button ++To begin deployment in the Azure portal, select the **Deploy to Azure** button: ++ [](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json). ++## Configure the deployment ++1. In the Azure portal, on the Basics tab of the Azure Quickstart Template, select or enter the following information for your deployment: ++ - **Subscription** - The Azure subscription to use for the deployment. ++ - **Resource group** - An existing resource group, or you can create a new resource group. 
++ - **Region** - The Azure region of the resource group that's used for the deployment. Region autofills by using the resource group region. ++ - **Basename** - A value that's appended to the name of the Azure resources and services that are deployed. ++ - **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your resource group). ++ - **Device Mapping** - Don't change the default values for this quickstart. + + - **Destination Mapping** - Don't change the default values for this quickstart. ++ :::image type="content" source="media\deploy-new-arm\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-new-arm\iot-deploy-quickstart-options.png"::: ++2. To validate your configuration, select **Review + create**. ++ :::image type="content" source="media\deploy-new-arm\iot-review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal."::: ++3. In **Review + create**, check the template validation status. If validation is successful, the template displays **Validation Passed**. If validation fails, fix the detail that's indicated in the error message, and then select **Review + create** again. ++ :::image type="content" source="media\deploy-new-arm\iot-validation-completed.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message."::: ++4. After a successful validation, to begin the deployment, select **Create**. ++ :::image type="content" source="media\deploy-new-arm\iot-create-button.png" alt-text="Screenshot that shows the highlighted Create button."::: ++5. In a few minutes, the Azure portal displays the message that your deployment is completed. ++ :::image type="content" source="media\deploy-new-arm\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete."::: ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++## Review deployed resources and access permissions ++When deployment is completed, the following resources and access roles are created in the ARM template deployment: ++- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. ++ - An event hub consumer group. In this deployment, the consumer group is named *$Default*. ++ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). 
To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). ++- A Health Data Services workspace. ++- A Health Data Services FHIR service. ++- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: ++ - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. ++ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. ++> [!IMPORTANT] +> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. +> +> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties). ++## Post-deployment mappings ++After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. ++ - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). ++ - To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md). ++## Next steps ++In this quickstart, you learned how to deploy an instance of the MedTech service in the Azure portal using an ARM template with a **Deploy to Azure** button. ++To learn about other methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
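As an optional check after the portal deployment completes, you can confirm what the template created from a command line. The following is a minimal Azure CLI sketch, not part of the quickstart itself; `<ResourceGroupName>` is the resource group you selected on the Basics tab.

```azurecli
# List everything the quickstart template deployed into your resource group.
az resource list --resource-group <ResourceGroupName> --output table
```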
healthcare-apis | Deploy Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md | + + Title: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI - Azure Health Data Services +description: In this article, you'll learn how to deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI. +++++ Last updated : 04/14/2023++++# Quickstart: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure. ++In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. ++> [!TIP] +> To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep) ++## Prerequisites ++To begin your deployment and complete the quickstart, you must have the following prerequisites: ++- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). ++- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) ++- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). ++- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally. + - For Azure PowerShell, you'll also need to install [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart. ++When you have these prerequisites, you're ready to deploy the Bicep file. ++## Review the Bicep file - Optional ++The Bicep file used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *main.bicep* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). ++## Save the Bicep file locally ++Save the Bicep file locally as *main.bicep*. You'll need to have the working directory of your Azure PowerShell or Azure CLI console pointing to the location where this file is saved. ++## Deploy the MedTech service with the Bicep file and Azure PowerShell ++Complete the following five steps to deploy the MedTech service using Azure PowerShell: ++1. Sign in to Azure. ++ ```azurepowershell + Connect-AzAccount + ``` ++2. 
Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). ++ ```azurepowershell + Set-AzContext <AzureSubscriptionId> + ``` ++ For example: `Set-AzContext abcdef01-2345-6789-0abc-def012345678` + +3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available. ++ You can also review the **location** section of the locally saved *main.bicep* file. ++ If you need a list of the Azure region location names, you can use this code to display a list: ++ ```azurepowershell + Get-AzLocation | Format-Table -Property DisplayName,Location + ``` ++4. If you don't already have a resource group created for this quickstart, you can use this code to create one: ++ ```azurepowershell + New-AzResourceGroup -name <ResourceGroupName> -location <AzureRegion> + ``` ++ For example: `New-AzResourceGroup -name BicepTestDeployment -location southcentralus` ++ > [!IMPORTANT] + > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. The minimum basename requirement is three characters with a maximum of 16 characters. ++5. Use the following code to deploy the MedTech service using the Bicep file: ++ ```azurepowershell + New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroupName> -TemplateFile main.bicep -basename <BaseName> -location <AzureRegion> + ``` ++ For example: `New-AzResourceGroupDeployment -ResourceGroupName BicepTestDeployment -TemplateFile main.bicep -basename abc123 -location southcentralus` ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++## Deploy the MedTech service with the Bicep file and the Azure CLI ++Complete the following five steps to deploy the MedTech service using the Azure CLI: ++1. Sign in to Azure. ++ ```azurecli + az login + ``` ++2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). ++ ```azurecli + az account set --subscription <AzureSubscriptionId> + ``` ++ For example: `az account set --subscription abcdef01-2345-6789-0abc-def012345678` ++3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available. ++ You can also review the **location** section of the locally saved *main.bicep* file. 
++ If you need a list of the Azure region location names, you can use this code to display a list: ++ ```azurecli + az account list-locations -o table + ``` ++4. If you don't already have a resource group created for this quickstart, you can use this code to create one: ++ ```azurecli + az group create --resource-group <ResourceGroupName> --location <AzureRegion> + ``` ++ For example: `az group create --resource-group BicepTestDeployment --location southcentralus` ++ > [!IMPORTANT] + > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. ++5. Use the following code to deploy the MedTech service using the Bicep file: ++ ```azurecli + az deployment group create --resource-group <ResourceGroupName> --template-file main.bicep --parameters basename=<BaseName> location=<AzureRegion> + ``` ++ For example: `az deployment group create --resource-group BicepTestDeployment --template-file main.bicep --parameters basename=abc location=southcentralus` ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++## Review deployed resources and access permissions ++When deployment is completed, the following resources and access roles are created in the Bicep file deployment: ++- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. ++ - An event hub consumer group. In this deployment, the consumer group is named *$Default*. ++ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). ++- A Health Data Services workspace. ++- A Health Data Services FHIR service. ++- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: ++ - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. ++ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. + +> [!IMPORTANT] +> In this quickstart, the Bicep file configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. +> +> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties). 
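After the deployment command returns, you can optionally confirm that it succeeded before moving on to the mappings. The following is a minimal sketch, assuming the example resource group `BicepTestDeployment` and the default deployment name `main` (the Azure CLI names a deployment after the template file unless you pass `--name`):

```azurecli
# Check the provisioning state of the Bicep deployment; expect "Succeeded".
az deployment group show --resource-group BicepTestDeployment --name main --query properties.provisioningState --output tsv

# List the resources the deployment created.
az resource list --resource-group BicepTestDeployment --output table
```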
++## Post-deployment mappings ++After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. ++- To learn about the device mapping, see [Overview of the device mapping](overview-of-device-mapping.md). ++- To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ++## Clean up Azure PowerShell deployed resources ++When your resource group and deployed Bicep file resources are no longer needed, delete the resource group, which deletes the resources in the resource group. ++```azurepowershell +Remove-AzResourceGroup -Name <ResourceGroupName> +``` ++For example: `Remove-AzResourceGroup -Name BicepTestDeployment` ++## Clean up the Azure CLI deployed resources ++When your resource group and deployed Bicep file resources are no longer needed, delete the resource group, which deletes the resources in the resource group. ++```azurecli +az group delete --name <ResourceGroupName> +``` ++For example: `az group delete --name BicepTestDeployment` ++> [!TIP] +> For a step-by-step tutorial that guides you through the process of creating a Bicep file, see [Build your first Bicep template](/training/modules/build-first-bicep-template/). ++## Next steps ++In this quickstart, you learned how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. ++To learn about other methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Choose Method | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md | + + Title: Choose a deployment method for the MedTech service - Azure Health Data Services +description: In this article, learn about the different methods for deploying the MedTech service. +++++ Last updated : 04/25/2023++++# Quickstart: Choose a deployment method for the MedTech service ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++The MedTech service provides multiple methods for deployment into Azure. Each deployment method has different advantages that allow you to customize your deployment to suit your needs and use cases. ++In this quickstart, learn about these deployment methods: ++* Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button. +* ARM template using the **Deploy to Azure** button. +* ARM template using Azure PowerShell or the Azure CLI. +* Bicep file using Azure PowerShell or the Azure CLI. +* Manually in the Azure portal. ++## Deployment overview ++The following diagram outlines the basic steps of the MedTech service deployment. These steps may help you analyze the deployment options and determine which deployment method is best for you. +++## ARM template including an Azure IoT Hub using the Deploy to Azure button ++Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service and Azure IoT Hub are fully functional including conforming and valid device and FHIR destination mappings. Use the Azure IoT Hub to create devices and send device messages to the MedTech service. ++[Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json) ++To learn more about deploying the MedTech service including an Azure IoT Hub using an ARM template and the **Deploy to Azure** button, see [Receive device messages through Azure IoT Hub](device-messages-through-iot-hub.md). ++## ARM template using the Deploy to Azure button ++Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional. ++[Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json) ++To learn more about deploying the MedTech service using an ARM template and the **Deploy to Azure** button, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-arm-template.md). ++## ARM template using Azure PowerShell or the Azure CLI ++Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. 
The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional. ++To learn more about deploying the MedTech service using an ARM template and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-json-powershell-cli.md). ++## Bicep file using Azure PowerShell or the Azure CLI ++Using a Bicep file with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional. ++To learn more about deploying the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI](deploy-bicep-powershell-cli.md). ++## Manually in the Azure portal ++Using the Azure portal manual deployment allows you to see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service. ++To learn more about deploying the MedTech service manually using the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-manual-prerequisites.md). ++> [!IMPORTANT] +> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. +> +> Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). +> +> Examples: +> +> * Two MedTech services accessing the same device message event hub. +> +> * A MedTech service and a storage writer application accessing the same device message event hub. ++## Next steps ++In this quickstart, you learned about the different types of deployment methods for the MedTech service. ++To learn about the MedTech service, see ++> [!div class="nextstepaction"] +> [What is the MedTech service?](overview.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Json Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md | + + Title: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI - Azure Health Data Services +description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI +++++ Last updated : 04/14/2023++++# Quickstart: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. ++In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). ++> [!TIP] +> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) ++## Prerequisites ++To begin your deployment and complete the quickstart, you must have the following prerequisites: ++- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). ++- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) ++- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). ++- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally. ++When you have these prerequisites, you're ready to deploy the ARM template. ++## Review the ARM template - Optional ++The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). ++## Deploy the MedTech service with the Azure Resource Manager template and Azure PowerShell ++Complete the following five steps to deploy the MedTech service using Azure PowerShell: ++1. Sign in to Azure. ++ ```azurepowershell + Connect-AzAccount + ``` ++2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). 
++ ```azurepowershell + Set-AzContext <AzureSubscriptionId> + ``` ++ For example: `Set-AzContext abcdef01-2345-6789-0abc-def012345678` ++3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available. ++ You can also review the **location** section of the *azuredeploy.json* file. ++ If you need a list of the Azure region location names, you can use this code to display a list: ++ ```azurepowershell + Get-AzLocation | Format-Table -Property DisplayName,Location + ``` ++4. If you don't already have a resource group created for this quickstart, you can use this code to create one: ++ ```azurepowershell + New-AzResourceGroup -name <ResourceGroupName> -location <AzureRegion> + ``` ++ For example: `New-AzResourceGroup -name ArmTestDeployment -location southcentralus` ++ > [!IMPORTANT] + > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. The minimum basename requirement is three characters with a maximum of 16 characters. ++5. Use the following code to deploy the MedTech service using the ARM template: ++ ```azurepowershell + New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroupName> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename <BaseName> -location <AzureRegion> + ``` ++ For example: `New-AzResourceGroupDeployment -ResourceGroupName ArmTestDeployment -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename abc123 -location southcentralus` ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++## Deploy the MedTech service with the Azure Resource Manager template and the Azure CLI ++Complete the following five steps to deploy the MedTech service using the Azure CLI: ++1. Sign in to Azure. ++ ```azurecli + az login + ``` ++2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md). ++ ```azurecli + az account set --subscription <AzureSubscriptionId> + ``` ++ For example: `az account set --subscription abcdef01-2345-6789-0abc-def012345678` ++3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available. ++ You can also review the **location** section of the *azuredeploy.json* file. 
++ If you need a list of the Azure region location names, you can use this code to display a list: ++ ```azurecli + az account list-locations -o table + ``` ++4. If you don't already have a resource group created for this quickstart, you can use this code to create one: ++ ```azurecli + az group create --resource-group <ResourceGroupName> --location <AzureRegion> + ``` ++ For example: `az group create --resource-group ArmTestDeployment --location southcentralus` ++ > [!IMPORTANT] + > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. ++5. Use the following code to deploy the MedTech service using the ARM template: ++ ```azurecli + az deployment group create --resource-group <ResourceGroupName> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=<BaseName> location=<AzureRegion> + ``` ++ For example: `az deployment group create --resource-group ArmTestDeployment --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=abc123 location=southcentralus` ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++## Review deployed resources and access permissions ++When deployment is completed, the following resources and access roles are created in the ARM template deployment: ++- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. ++ - An event hub consumer group. In this deployment, the consumer group is named *$Default*. ++ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). ++- A Health Data Services workspace. ++- A Health Data Services FHIR service. ++- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: ++ - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. ++ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. ++> [!IMPORTANT] +> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. 
A Patient resource and a Device resource are created for each device that sends data to your FHIR service. +> +> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties). ++## Post-deployment mappings ++After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. ++ - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). ++ - To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ++## Clean up Azure PowerShell resources ++When your resource group and deployed ARM template resources are no longer needed, delete the resource group, which deletes the resources in the resource group. ++```azurepowershell +Remove-AzResourceGroup -Name <ResourceGroupName> +``` ++For example: `Remove-AzResourceGroup -Name ArmTestDeployment` ++## Clean up the Azure CLI resources ++When your resource group and deployed ARM template resources are no longer needed, delete the resource group, which deletes the resources in the resource group. ++```azurecli +az group delete --name <ResourceGroupName> +``` ++For example: `az group delete --name ArmTestDeployment` ++> [!TIP] +> For a step-by-step tutorial that guides you through the process of creating an ARM template, see [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md). ++## Next steps ++In this quickstart, you learned how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an ARM template. ++To learn about other methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
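If you want to preview the changes the template would make before running the deployment in step 5 above, the Azure CLI also supports a what-if operation against the same template URI. The following is a minimal sketch using this quickstart's example values:

```azurecli
# Preview the changes the ARM template would make, without deploying anything.
az deployment group what-if \
    --resource-group ArmTestDeployment \
    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json \
    --parameters basename=abc123 location=southcentralus
```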
healthcare-apis | Deploy Manual Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-config.md | + + Title: Configure the MedTech service for deployment using the Azure portal - Azure Health Data Services +description: In this article, you'll learn how to configure the MedTech service for manual deployment using the Azure portal. ++++ Last updated : 04/14/2023++++# Quickstart: Part 2: Configure the MedTech service for manual deployment using the Azure portal ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++Before you can manually deploy the MedTech service, you must complete the following configuration tasks: ++## Set up the MedTech service configuration ++Start with these three steps to begin configuring the MedTech service so it will be ready to accept your tabbed configuration input: ++1. Start by going to the Health Data Services workspace you created in the manual deployment [Prerequisites](deploy-new-manual.md#part-1-prerequisites) section. Select the **Create MedTech service** box. ++2. This step will take you to the **Add MedTech service** button. Select the button. ++3. This step will take you to the **Create MedTech service** page. This page has five tabs you need to fill out: ++- Basics +- Device mapping +- Destination mapping +- Tags (optional) +- Review + create ++## Configure the Basics tab ++Follow these six steps to fill in the Basics tab configuration: ++1. Enter the **MedTech service name**. ++ The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we'll name the MedTech service `mt-azuredocsdemo`. ++2. Enter the **Event Hubs Namespace**. ++ The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you previously deployed. For this example, we'll use `eh-azuredocsdemo` for our MedTech service device messages. ++ > [!TIP] + > For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). + > + > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document. ++3. Enter the **Event Hubs name**. ++ The Event Hubs name is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` for our MedTech service device messages. ++ > [!TIP] + > For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub). ++4. Enter the **Consumer group**. ++ The Consumer group name is located by going to the **Overview** page of the Event Hubs Namespace and selecting the event hub to be used for the MedTech service device messages. In this example, the event hub is named `devicedata`. ++5. When you're inside the event hub, select the **Consumer groups** button under **Entities** to display the name of the consumer group to be used by your MedTech service. ++6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment. 
++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > - Two MedTech services accessing the same device message event hub. + > + > - A MedTech service and a storage writer application accessing the same device message event hub. ++The Basics tab should now look like this after you've filled it out: ++ :::image type="content" source="media\deploy-new-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-new-config\select-device-mapping-button.png"::: ++You're now ready to select the Device mapping tab and begin setting up the device mappings for your MedTech service. ++## Configure the Device mapping tab ++You need to configure device mappings so that your instance of the MedTech service can normalize the incoming device data. The device data will first be sent to your event hub instance and then picked up by the MedTech service. ++The easiest way to configure the Device mapping tab is to use the Internet of Medical Things (IoMT) Connector Data Mapper tool to visualize, edit, and test your device mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper). ++To begin configuring the device mapping tab, go to the Create MedTech service page and select the **Device mapping** tab. Then follow these three steps: ++1. Go to the IoMT Connector Data Mapper and get the appropriate JSON code. ++2. Return to the Create MedTech service page. Enter the JSON code for the template you want to use into the **Device mapping** tab. After you enter the template code, the Device mapping code will be displayed on the screen. ++3. If the Device code is correct, select the **Next: Destination >** tab to enter the destination properties you want to use with your MedTech service. Your device configuration data will be saved for this session. ++For more information regarding device mappings, see the relevant GitHub open source documentation at [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping). ++For Azure docs information about the device mapping, see [How to configure the MedTech service device mapping](how-to-configure-device-mappings.md). ++## Configure the Destination tab ++In order to configure the **Destination** tab, you can use the [Mapping debugger](how-to-use-mapping-debugger.md) tool to create, edit, and test the FHIR destination mapping. You need to configure FHIR destination mapping so that your instance of the MedTech service can send transformed device data to the FHIR service. ++To begin configuring FHIR destination mapping, go to the **Create** MedTech service page and select the **Destination mapping** tab. There are two parts of the tab you must fill out: ++ 1. Destination properties + 2. 
JSON template request ++### Destination properties ++Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance: ++- First, enter the name of your **FHIR server** using the following four steps: ++ 1. The **FHIR Server** name (also known as the **FHIR service**) can be located by using the **Search** bar at the top of the screen. + 1. To connect to your FHIR service instance, enter the name of the FHIR service you used in the manual deploy configuration article at [Deploy the FHIR service](deploy-new-manual.md#deploy-the-fhir-service). + 1. Then select the **Properties** button. + 1. Next, copy and paste the **Name** string into the **FHIR Server** text field. In this example, the **FHIR Server** name is `fs-azuredocsdemo`. ++- Next, enter the **Destination Name**. ++ The **Destination Name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination Name** is ++ `fs-azuredocsdemo`. ++- Then, select the **Resolution Type**. ++ **Resolution Type** specifies how the MedTech service will resolve missing data when reading from the FHIR service. The MedTech service reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier). ++ Missing data can be resolved by choosing a **Resolution Type** of **Create** or **Lookup**: ++ - **Create** ++ If **Create** was selected, and device or patient resources are missing when you're reading data, new resources will be created, containing just the identifier. ++ - **Lookup** + + If **Lookup** was selected, and device or patient resources are missing, an error will occur, and the data won't be processed. A **DeviceNotFoundException** or a **PatientNotFoundException** error will be generated, depending on the type of resource not found. ++For more information regarding destination mapping, see the FHIR service GitHub documentation at [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping). ++For Azure docs information about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). ++### JSON template request ++Before you can complete the FHIR destination mapping, you must get a FHIR destination mapping code. Follow these four steps: ++1. Go to the [Mapping debugger](how-to-use-mapping-debugger.md) and get the JSON template for your FHIR destination. +1. Go back to the Destination tab of the Create MedTech service page. +1. Go to the large box below the boxes for FHIR server name, Destination name, and Resolution type. Enter the JSON template request in that box. +1. You'll then receive the FHIR Destination mapping code, which will be saved as part of your configuration. ++## Configure the Tags tab (optional) ++Before you complete your configuration in the **Review + create** tab, you may want to configure tags. You can do this step by selecting the **Next: Tags >** tab. ++Tags are name and value pairs used for categorizing resources. This step is optional and is useful when you have many resources and want to sort them. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md). ++Follow these steps if you want to create tags: ++1. 
Under the **Tags** tab, enter the tag properties associated with the MedTech service. ++ - Enter a **Name**. + - Enter a **Value**. ++2. Once you've entered your tag(s), you're ready to do the last step of your configuration. ++## Select the Review + create tab to validate your deployment request ++To begin the validation process of your MedTech service deployment, select the **Review + create** tab. There will be a short delay and then you should see a screen that displays a **Validation success** message. Below the message, you should see the following values for your deployment. ++**Basics** +- MedTech service name +- Event Hubs name +- Consumer group +- Event Hubs namespace ++++**Destination** +- FHIR server +- Destination name +- Resolution type ++Your validation screen should look something like this: ++ :::image type="content" source="media\deploy-new-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-new-config\validate-and-review-medtech-service.png"::: ++If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured. Go back and try again. ++## Continue on to Part 3: Deployment and post-deployment ++After your configuration is successfully completed, you can go on to Part 3: Deployment and post deployment. See **Next steps**. ++## Next steps ++When you're ready to begin Part 3 of Manual Deployment, see ++> [!div class="nextstepaction"] +> [Part 3: Manual deployment and post-deployment of MedTech service](deploy-new-deploy.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
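For reference, here's what a device mapping like the one entered on the **Device mapping** tab of this article can look like. This is a minimal, hypothetical sketch in the CollectionContent/JsonPathContent form documented in the iomt-fhir project; the payload fields (`deviceId`, `endDate`, `heartRate`) are illustrative assumptions and must match the shape of your actual device messages.

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.endDate",
        "values": [
          {
            "required": "true",
            "valueExpression": "$.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```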
healthcare-apis | Deploy Manual Post | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-post.md | + + Title: Manual deployment and post-deployment of the MedTech service using the Azure portal - Azure Health Data Services +description: In this article, you'll learn how to manually create a deployment and post-deployment of the MedTech service in the Azure portal. ++++ Last updated : 03/10/2023++++# Quickstart: Part 3: Manual deployment and post-deployment of the MedTech service ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++When you're satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process. ++## Create your manual deployment ++1. Select the **Create** button to begin the deployment. ++2. The deployment process may take several minutes. The screen will display a message saying that your deployment is in progress. ++3. When Azure has finished deploying, a message will appear saying "Your deployment is complete" and will also display the following information: ++- Deployment name +- Subscription +- Resource group +- Deployment details ++Your screen should look something like this: ++ :::image type="content" source="media\deploy-new-deploy\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-new-deploy\created-medtech-service.png"::: ++## Manual post-deployment requirements ++There are two post-deployment steps you must perform. Without them, the MedTech service can't: ++1. Read device data from the device message event hub. +2. Read or write to the FHIR service. ++These steps are: ++1. Grant access to the device message event hub. +2. Grant access to the FHIR service. ++These two steps are needed because the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets. ++### Grant access to the device message event hub ++Follow these steps to grant access to the device message event hub: ++1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages. ++2. Select the **Event Hubs** button under **Entities**. ++3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named **devicedata**. ++4. Select the **Access control (IAM)** button. ++5. Select the **Add role assignment** button. ++6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role. The Azure Event Hubs Data Receiver role allows the MedTech service to receive device message data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md). ++7. Select the **Select role** button. ++8. Select the **Next** button. ++9. In the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**. ++10. 
When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service**, and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button. ++ The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service, using the format: **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**. For example: **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**. ++11. On the **Add role assignment** page, select the **Review + assign** button. ++12. On the **Add role assignment** confirmation page, select the **Review + assign** button. ++13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this: ++ :::image type="content" source="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png"::: ++For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md). ++### Grant access to the FHIR service ++The process for granting your MedTech service system-assigned managed identity access to your **FHIR service** requires the same 13 steps that you used to grant access to your device message event hub. There are two exceptions. The first is that, instead of navigating to the **Access Control (IAM)** menu from within your event hub (as outlined in steps 1-4), you should navigate to the equivalent **Access Control (IAM)** menu from within your **FHIR service**. The second exception is that, in step 6, your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**. ++The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized. ++For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md). ++For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md). ++Now that you have granted access to the device message event hub and the FHIR service, your manual deployment is complete. Your MedTech service is now ready to receive data from a device and process it into a FHIR Observation resource. 
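If you'd rather verify the two role assignments from a command line instead of the portal notifications, a sketch like the following Azure CLI snippet can help. The resource ID path (`.../workspaces/<WorkspaceName>/iotconnectors/<MedTechServiceName>`) is an assumption based on the identity format shown in step 10; substitute your own subscription, resource group, workspace, and MedTech service names.

```azurecli
# Look up the MedTech service's system-assigned managed identity (object) ID.
# The resource ID path below is an assumed format; adjust it to your environment.
PRINCIPAL_ID=$(az resource show \
    --ids "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.HealthcareApis/workspaces/<WorkspaceName>/iotconnectors/<MedTechServiceName>" \
    --query identity.principalId --output tsv)

# List the roles granted to that identity; expect to see
# Azure Event Hubs Data Receiver and FHIR Data Writer after both grants.
az role assignment list --assignee $PRINCIPAL_ID --all --output table
```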
++## Next steps ++In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service. ++To learn about other methods for deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-new-choose.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Manual Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-prerequisites.md | + + Title: Deploy the MedTech service manually using the Azure portal - Azure Health Data Services +description: In this article, you'll learn how to deploy the MedTech service manually using the Azure portal. ++++ Last updated : 04/19/2022++++# Quickstart: Deploy the MedTech service manually using the Azure portal ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++You may prefer to manually deploy the MedTech service if you need to track every step of the developmental process. Manual deployment might be necessary if you have to customize or troubleshoot your deployment. Manual deployment will help you by providing all the details for implementing each task. ++The explanation of the MedTech service manual deployment using the Azure portal is divided into three parts that cover each of the key tasks required: ++- Part 1: Prerequisites (see Prerequisites below) +- Part 2: Configuration (see [Configure for manual deployment](deploy-new-config.md)) +- Part 3: Deployment and Post Deployment (see [Manual deployment and post-deployment](deploy-new-deploy.md)) ++If you need a diagram with information on the MedTech service deployment, there's an overview at [Choose a deployment method](deploy-new-choose.md#deployment-overview). This diagram shows the steps of deployment and how the MedTech service processes device data into FHIR Observations. ++## Part 1: Prerequisites ++Before you can begin configuring and deploying the MedTech service, you need the following five prerequisites: ++- A valid Azure subscription +- A resource group deployed in the Azure portal +- A workspace deployed in Azure Health Data Services +- An event hub deployed in a namespace +- FHIR service deployed in Azure Health Data Services ++## Open your Azure account ++The first thing you need to do is determine if you have a valid Azure subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). ++## Deploy a resource group in the Azure portal ++When you sign in to your Azure account, go to the Azure portal and select the **Create a resource** button. Enter "Azure Health Data Services" in the "Search services and marketplace" box. This step should take you to the Azure Health Data Services page. ++## Deploy a workspace in Azure Health Data Services ++The first resource you must create is a workspace to contain your Azure Health Data Services resources. Start by selecting Create from the Azure Health Data Services resource page. This step will take you to the first page of Create Azure Health Data Services workspace, where you need to do the following eight steps: ++1. Fill in the resource group you want to use or create a new one. ++2. Give the workspace a unique name. ++3. Select the region you want to use. ++4. Select the Networking button at the bottom to continue. ++5. Choose whether you want a public or private endpoint. ++6. Create tags if you want to use them. They're optional. ++7. When you're ready to continue, select the Review + create tab. ++8. Select the Create button to deploy your workspace. ++After a short delay, you'll start to see information about your new workspace. Make sure you wait until all parts of the screen are displayed. 
If your initial deployment was successful, you should see: ++- "Your deployment is complete" +- Deployment name +- Subscription name +- Resource group name ++## Deploy an event hub in the Azure portal using a namespace ++An event hub is the next prerequisite you need to create. It's an important step because the event hub receives the data flow from a device and stores it until the MedTech service picks up the device data. Once the MedTech service picks up the device data, it can begin transforming the device data into a FHIR Observation resource. Because Internet propagation times are indeterminate, the event hub is needed to buffer the data and store it for as much as 24 hours before expiring. ++Before you can create an event hub, you must create a namespace in the Azure portal to contain it. For more information on how to create a namespace and an event hub, see [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md). ++## Deploy the FHIR service ++The last prerequisite before you can configure and deploy the MedTech service is to deploy the FHIR service. ++There are three ways to deploy the FHIR service: ++1. Using the Azure portal. See [Deploy a FHIR service within Azure Health Data Services - using portal](../fhir/fhir-portal-quickstart.md). ++2. Using Bicep. See [Deploy a FHIR service within Azure Health Data Services using Bicep](../fhir/fhir-service-bicep.md). ++3. Using an ARM template. See [Deploy a FHIR service within Azure Health Data Services - using ARM template](../fhir/fhir-service-resource-manager-template.md). ++After you deploy the FHIR service, it's ready to receive the data processed by the MedTech service and persist it as a FHIR Observation. ++## Continue on to Part 2: Configuration ++After your prerequisites are successfully completed, you can go on to Part 2: Configuration. See **Next steps**. ++## Next steps ++When you're ready to begin Part 2 of the manual deployment, see ++> [!div class="nextstepaction"] +> [Part 2: Configure the MedTech service for manual deployment using the Azure portal](deploy-new-config.md) ++FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md | Title: Get started with the MedTech service - Azure Health Data Services -description: This article describes how to get started with the MedTech service. +description: This article describes the basic steps for deploying the MedTech service. Previously updated : 04/21/2023 Last updated : 04/25/2023 -This article and diagram outlines the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These basic steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you. +This article and diagram outline the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you. -As a prerequisite, you need an Azure subscription and have been granted proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, and REST API scripts. +As a prerequisite, you need an Azure subscription and must be granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts. :::image type="content" source="media/get-started/get-started-with-medtech-service.png" alt-text="Diagram showing the MedTech service deployment overview." lightbox="media/get-started/get-started-with-medtech-service.png"::: > [!TIP]-> See the MedTech service article, [Quickstart: Choose a deployment method for the MedTech service](deploy-new-choose.md), for a description of the different deployment methods that can help to simply and automate the deployment of the MedTech service. +> See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service. ## Deploy resources Deploy a [resource group](../../azure-resource-manager/management/manage-resourc ### Deploy an Event Hubs namespace and event hub -Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md). +Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md). 
### Deploy a workspace Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource gr ### Deploy a MedTech service -If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-new-manual.md) using your workspace. +If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-manual-prerequisites.md) using your workspace. ## Next steps |
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md | +## April 2023 +#### FHIR service ++**Fixed performance for search queries with identifiers** +This bug fix addresses timeout issues observed for search queries with identifiers by leveraging the OPTIMIZE clause. +For more details, visit [#3207](https://github.com/microsoft/fhir-server/pull/3207) ++**Fixed transient issues associated with loading custom search parameters** +This bug fix addresses the issue where the FHIR service wouldn't load the latest SearchParameter status in the event of a failure. +For more details, visit [#3222](https://github.com/microsoft/fhir-server/pull/3222) + ## March 2023 #### Azure Health Data Services Azure Health Data Services is a set of managed API services based on open standa General availability (GA) of Azure Health Data Services in the Japan East region. - ## February 2023 #### FHIR service |
internet-peering | Howto Subscription Association Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-powershell.md | Title: Associate peer ASN to Azure subscription - PowerShell -description: Associate peer ASN to Azure subscription using PowerShell. +description: Learn how to associate peer ASN to Azure subscription using PowerShell. Previously updated : 01/23/2023 Last updated : 04/24/2023 -Before you submit a peering request, you should first associate your ASN with Azure subscription using the steps below. +Before you submit a peering request, you should first associate your ASN with your Azure subscription using the steps in this article. If you prefer, you can complete this guide using the [Azure portal](howto-subscription-association-portal.md). If you prefer, you can complete this guide using the [Azure portal](howto-subscr [!INCLUDE [Account](./includes/account-powershell.md)] ### Register for peering resource provider-Register for peering resource provider in your subscription using the command below. If you don't execute this, then Azure resources required to set up peering aren't accessible. +Register the peering resource provider in your subscription using [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider). If you don't complete this step, the Azure resources required to set up peering aren't accessible. ```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.Peering ``` -You can check the registration status using the commands below: +You can check the registration status using [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider): ```powershell Get-AzResourceProvider -ProviderNamespace Microsoft.Peering ``` Get-AzResourceProvider -ProviderNamespace Microsoft.Peering The following example shows how to update peer information. ```powershell-New-AzPeerAsn ` - -Name "Contoso_1234" ` - -PeerName "Contoso" ` - -PeerAsn 1234 ` - -Email noc@contoso.com, support@contoso.com ` - -Phone "+1 (555) 555-5555" +$contactDetails = New-AzPeerAsnContactDetail -Role Noc -Email "noc@contoso.com" -Phone "+1 (555) 555-5555" +New-AzPeerAsn -Name "Contoso_1234" -PeerName "Contoso" -PeerAsn 1234 -ContactDetail $contactDetails ``` > [!NOTE] A subscription can have multiple ASNs. Update the peering information for each ASN. Peers are expected to have a complete and up-to-date profile on [PeeringDB](https://www.peeringdb.com). We use this information during registration to validate the peer's details, such as NOC information, technical contact information, and their presence at the peering facilities. -Note that in place of **{subscriptionId}** in the output above, actual subscription ID will be displayed. +In place of **{subscriptionId}** in the output, the actual subscription ID is displayed. ## View status of a PeerASN -Check for ASN Validation state using the command below: +Check for ASN Validation state using [Get-AzPeerAsn](/powershell/module/az.peering/get-azpeerasn): ```powershell Get-AzPeerAsn |
iot-central | Tutorial Use Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md | The tutorial uses a predefined Postman collection that includes some scripts to ## Import the Postman collection -To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link** and paste in the following [URL](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/postman-collection/IoT%20Central.postman_collection.json), <!-- TODO: Add link here --> Select **Continue**. +To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link**, paste in the following [URL](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/postman-collection/IoT%20Central%20REST%20tutorial.postman_collection.json), and then select **Continue**. Your workspace now contains the **IoT Central REST tutorial** collection. This collection includes all the APIs you use in the tutorial. |
iot-develop | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md | The following diagram shows the key elements of an IoT Plug and Play solution: ## Model repository -The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). +The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). The web UI lets you manage the models and interfaces. |
iot-develop | Concepts Convention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md | IoT Plug and Play devices should follow a set of conventions when they exchange A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime. -You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) _model_. There are two types of model referred to in this article: +You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) _model_. There are two types of model referred to in this article: - **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. - **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands. A read-only property is set by the device and reported to the back-end applicati ### Sample no component read-only property -A device or module can send any valid JSON that follows the DTDL V2 rules. +A device or module can send any valid JSON that follows the DTDL rules. DTDL that defines a property on an interface: The device responds with an acknowledgment that looks like the following example When a device receives multiple desired properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. -A device or module can send any valid JSON that follows the DTDL V2 rules. +A device or module can send any valid JSON that follows the DTDL rules. DTDL: On a device or module, multiple component interfaces use command names with the Now that you've learned about IoT Plug and Play conventions, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
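The acknowledgment payload that the conventions article above refers to follows the documented IoT Plug and Play pattern: the device reports the accepted value together with `ac` (a status code), `av` (the acknowledged version), and an optional `ad` (a description). As a hedged illustration only — the property name and values here are hypothetical, not the article's exact sample — a device acknowledging a writable `targetTemperature` desired property might report:

```json
{
  "targetTemperature": {
    "value": 23.0,
    "ac": 200,
    "av": 3,
    "ad": "successfully updated"
  }
}
```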
iot-develop | Concepts Developer Guide Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md | This guide describes the basic steps required to create a device, module, or IoT To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.-1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). +1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). 1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md) Once your device or module implementation is ready, use the [Azure IoT explorer] Now that you've learned about IoT Plug and Play device development, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md) |
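As a concrete illustration of announcing the `model-id` as part of the device connection (described in the device guide steps above), here's a minimal C# sketch. It assumes the `Microsoft.Azure.Devices.Client` device SDK; the model ID and connection string are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Program
{
    static async Task Main()
    {
        // Placeholder connection string for an existing IoT hub device.
        string deviceConnectionString = "<device connection string>";

        // IoT Plug and Play requires MQTT or MQTT over WebSockets.
        // Setting ModelId announces the device's DTDL model to IoT Hub on connect.
        var options = new ClientOptions
        {
            ModelId = "dtmi:com:example:Thermostat;1", // placeholder model ID
        };

        using var deviceClient = DeviceClient.CreateFromConnectionString(
            deviceConnectionString, TransportType.Mqtt, options);

        await deviceClient.OpenAsync();
    }
}
```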
iot-develop | Concepts Developer Guide Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md | The service SDKs let you access device information from a solution component suc Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
iot-develop | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md | Title: Understand IoT Plug and Play digital twins description: Understand how IoT Plug and Play uses digital twins Previously updated : 11/17/2022 Last updated : 04/25/2023 -An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have. --IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) specification on GitHub. +An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have. > [!NOTE] > DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks. |
iot-develop | Concepts Model Parser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md | Title: Understand the Azure Digital Twins model parser | Microsoft Docs description: As a developer, learn how to use the DTDL parser to validate models. Previously updated : 11/17/2022 Last updated : 04/25/2023 -The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. +The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. ## Install the DTDL model parser -The parser is available in NuGet.org with the ID: [Microsoft.Azure.DigitalTwins.Parser](https://www.nuget.org/packages/Microsoft.Azure.DigitalTwins.Parser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI. +The parser is available in NuGet.org with the ID: [DTDLParser](https://www.nuget.org/packages/DTDLParser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI. ```bash-dotnet add package Microsoft.Azure.DigitalTwins.Parser +dotnet add package DTDLParser ``` > [!NOTE]-> At the time of writing, the parser version is `3.12.7`. --## Use the parser to validate a model --A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files in a given folder and then validate all the files as a whole, including any references between the files: --1. Create an `IEnumerable<string>` with a list of all model contents: -- ```csharp - using System.IO; -- string folder = @"c:\myModels\"; - string filespec = "*.json"; -- List<string> modelJson = new List<string>(); - foreach (string filename in Directory.GetFiles(folder, filespec)) - { - using StreamReader modelReader = new StreamReader(filename); - modelJson.Add(modelReader.ReadToEnd()); - } - ``` --1. Instantiate the `ModelParser` and call `ParseAsync`: -- ```csharp - using Microsoft.Azure.DigitalTwins.Parser; -- ModelParser modelParser = new ModelParser(); - IReadOnlyDictionary<Dtmi, DTEntityInfo> parseResult = await modelParser.ParseAsync(modelJson); - ``` --1. Check for validation errors. If the parser finds any errors, it throws an `ParsingException` with a list of errors: -- ```csharp - try - { - IReadOnlyDictionary<Dtmi, DTEntityInfo> parseResult = await modelParser.ParseAsync(modelJson); - } - catch (ParsingException pex) - { - Console.WriteLine(pex.Message); - foreach (var err in pex.Errors) - { - Console.WriteLine(err.PrimaryID); - Console.WriteLine(err.Message); - } - } - ``` --1. Inspect the `Model`. If the validation succeeds, you can use the model parser API to inspect the model. The following code snippet shows how to iterate over all the models parsed and display the existing properties: -- ```csharp - foreach (var item in parseResult) - { - Console.WriteLine($"\t{item.Key}"); - Console.WriteLine($"\t{item.Value.DisplayName?.Values.FirstOrDefault()}"); - } - ``` +> At the time of writing, the parser version is `1.0.52`. 
++## Use the parser to validate and inspect a model ++The DTDLParser is a library that you can use to: ++- Determine whether one or more models are valid according to the language v2 or v3 specifications. +- Identify specific modeling errors. +- Inspect model contents. ++A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files that define a model and then validate all the files as a whole, including any references between the files. ++The [DTDLParser for .NET](https://github.com/digitaltwinconsortium/DTDLParser) repository includes the following samples that illustrate the use of the parser: ++- [DTDLParserResolveSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserResolveSample) shows how to parse an interface with external references and resolve the dependencies using the `Azure.IoT.ModelsRepository` client. +- [DTDLParserJSInteropSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserJSInteropSample) shows how to use the DTDL Parser from JavaScript running in the browser, using .NET JSInterop. ++The DTDLParser for .NET repository also includes a [collection of tutorials](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/tutorials/README.md) that show you how to use the parser to validate and inspect models. ## Next steps |
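For orientation, a minimal validate-and-inspect sketch using the `DTDLParser` package might look like the following. This is an illustrative sketch, not a copy of the linked samples — the folder path is hypothetical, and you should check the exact API surface against the repository's tutorials:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using DTDLParser;
using DTDLParser.Models;

// Load every interface definition that makes up the model.
var modelJson = new List<string>();
foreach (var filename in Directory.GetFiles(@"c:\myModels", "*.json"))
{
    modelJson.Add(File.ReadAllText(filename));
}

var parser = new ModelParser();
try
{
    // Parse and validate the files as a whole, including cross-file references.
    IReadOnlyDictionary<Dtmi, DTEntityInfo> objectModel = parser.Parse(modelJson);
    foreach (var entity in objectModel)
    {
        Console.WriteLine($"{entity.Key}: {entity.Value.EntityKind}");
    }
}
catch (ParsingException ex)
{
    // Each parsing error identifies a specific modeling problem.
    foreach (var error in ex.Errors)
    {
        Console.WriteLine(error);
    }
}
```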
iot-develop | Concepts Model Repository | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md | -The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). +The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON-LD documents defined using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). The DMR defines a pattern to store DTDL interfaces in a folder structure based on the device twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json). |
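Because the DTMI-to-path convention described above is mechanical, it's easy to implement locally. Here's a small illustrative C# helper (a convenience sketch, not part of any SDK):

```csharp
using System;

// DMR convention: lowercase the DTMI, turn ':' into '/' and ';' into '-',
// then append ".json".
static string DtmiToPath(string dtmi)
{
    if (!dtmi.StartsWith("dtmi:", StringComparison.Ordinal) || !dtmi.Contains(';'))
    {
        throw new ArgumentException($"Not a valid DTMI: {dtmi}");
    }

    return "/" + dtmi.ToLowerInvariant().Replace(':', '/').Replace(';', '-') + ".json";
}

// Prints: /dtmi/com/example/thermostat-1.json
Console.WriteLine(DtmiToPath("dtmi:com:example:Thermostat;1"));
```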
iot-develop | Concepts Modeling Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md | At the core of IoT Plug and Play is a device _model_ that describes a device's To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). -To define a model, you use the Digital Twins Definition Language (DTDL) V2. DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that: +To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that: - Has a unique model ID: `dtmi:com:example:Thermostat;1`. - Sends temperature telemetry. The thermostat model has a single interface. Later examples in this article show This article describes how to design and author your own models and covers topics such as data types, model structure, and tools. -To learn more, see the [Digital Twins Definition Language V2](https://github.com/Azure/opendigitaltwins-dtdl) specification. +To learn more, see the [Digital Twins Definition Language](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) specification. ## Model structure There's a DTDL authoring extension for VS Code. To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code. -When you've installed the extension, use it to help you author DTDL model files in VS code: +When you've installed the extension, use it to help you author DTDL model files in VS Code: - The extension provides syntax validation in DTDL model files, highlighting errors as shown in the following screenshot: The following list summarizes some key constraints and limits on models: Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language V2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [Model repositories](./concepts-model-repository.md) |
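The thermostat snippet itself is elided from this change. For a feel for the shape of a DTDL interface, here's a representative sketch consistent with the description above (model ID plus temperature telemetry); treat it as illustrative rather than the article's exact model:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "temperature",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "targetTemperature",
      "schema": "double",
      "writable": true
    }
  ]
}
```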
iot-develop | Howto Convert To Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md | In summary, the sample implements the following capabilities: ## Design a model -Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) to describe the device capabilities. +Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) to describe the device capabilities. For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements. |
iot-develop | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md | IoT Plug and Play supports **Get digital twin** and **Update digital twin** oper ## Update a digital twin -An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin. +An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Solution developers can use the **Update Digital Twin API** to update the state of a component and the properties of the digital twin. The IoT Plug and Play device used as an example in this article implements the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) with [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components. The following JSON Patch sample shows how to add, replace, or remove a property **Name** -The name of a component or property must be valid DTDL V2 name. +The name of a component or property must be a valid DTDL name. Allowed characters are a-z, A-Z, 0-9 (not as the first character), and underscore (not as the first or last character). A name can be 1-64 characters long. **Property value** -The value must be a valid [DTDL V2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). +The value must be a valid [DTDL Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#property). -All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL V2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#schema). +All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#schema). Properties don't support arrays or any complex schema with an array. A maximum depth of five levels is supported for a complex object. -All field names within complex object should be valid DTDL V2 names. +All field names within a complex object should be valid DTDL names. -All map keys should be valid DTDL V2 names. +All map keys should be valid DTDL names. ## Troubleshoot update digital twin API errors |
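The JSON Patch sample referenced above is elided from this change. As a hedged illustration of the documented pattern — the component and property names here are hypothetical — a patch that adds a property on one component and removes it from another might look like:

```json
[
  {
    "op": "add",
    "path": "/thermostat1/targetTemperature",
    "value": 21.5
  },
  {
    "op": "remove",
    "path": "/thermostat2/targetTemperature"
  }
]
```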
iot-develop | Overview Iot Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md | IoT Plug and Play enables solution builders to integrate IoT devices with their You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development. -To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling. +To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling. There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same. |
iot-develop | Tutorial Migrate Device To Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md | This tutorial shows you how to connect a generic IoT Plug and Play [module](../i A device is an IoT Plug and Play device if it: * Publishes its model ID when it connects to an IoT hub.-* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID. +* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way. |
iot-develop | Tutorial Use Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md | if (rc != MOSQ_ERR_SUCCESS) printf("Publish returned OK\r\n"); ``` -To learn more, see [Sending device-to-cloud messages](../iot-hub/iot-hub-mqtt-support.md#sending-device-to-cloud-messages). +To learn more, see [Sending device-to-cloud messages](../iot/iot-mqtt-connect-to-iot-hub.md#sending-device-to-cloud-messages). ## Receive a cloud-to-device message void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_ } ``` -To learn more, see [Use MQTT to receive cloud-to-device messages](../iot-hub/iot-hub-mqtt-support.md#receiving-cloud-to-device-messages). +To learn more, see [Use MQTT to receive cloud-to-device messages](../iot/iot-mqtt-connect-to-iot-hub.md#receiving-cloud-to-device-messages). ## Update a device twin void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_ } ``` -To learn more, see [Use MQTT to update a device twin reported property](../iot-hub/iot-hub-mqtt-support.md#update-device-twins-reported-properties) and [Use MQTT to retrieve a device twin property](../iot-hub/iot-hub-mqtt-support.md#retrieving-a-device-twins-properties). +To learn more, see [Use MQTT to update a device twin reported property](../iot/iot-mqtt-connect-to-iot-hub.md#update-device-twins-reported-properties) and [Use MQTT to retrieve a device twin property](../iot/iot-mqtt-connect-to-iot-hub.md#retrieving-a-device-twins-properties). ## Clean up resources To learn more, see [Use MQTT to update a device twin reported property](../iot-h Now that you've learned how to use the Mosquitto MQTT library to communicate with IoT Hub, a suggested next step is to review: > [!div class="nextstepaction"]-> [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md) +> [Communicate with your IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md) |
iot-dps | Quick Create Simulated Device Symm Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md | Title: Quickstart - Provision a simulated symmetric key device to Microsoft Azur description: Learn how to provision a device that authenticates with a symmetric key in the Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 09/29/2021 Last updated : 04/06/2023 zone_pivot_groups: iot-dps-set1-#Customer intent: As a new IoT developer, I want to connect a device to an IoT Hub using the SDK, to learn how secure provisioning works with symmetric keys. +#Customer intent: As a new IoT developer, I want to connect a device to an IoT hub using the SDK, to learn how secure provisioning works with symmetric keys. # Quickstart: Provision a simulated symmetric key device -In this quickstart, you'll create a simulated device on your Windows machine. The simulated device will be configured to use the [symmetric key attestation](concepts-symmetric-key-attestation.md) mechanism for authentication. After you've configured your device, you'll then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. +In this quickstart, you create a simulated device on your Windows machine. The simulated device is configured to use the [symmetric key attestation](concepts-symmetric-key-attestation.md) mechanism for authentication. After you've configured your device, you then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. -This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: provision for geolatency](how-to-provision-multitenant.md). +This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: provision for geo latency](how-to-provision-multitenant.md). ## Prerequisites Once you create the individual enrollment, a **primary key** and **secondary key 1. Copy the value of the generated **Primary key**. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-device-enrollment-primary-key.png" alt-text="Copy the primary key of the device enrollment"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-device-enrollment-primary-key.png" alt-text="Screenshot showing the enrollment details, highlighting the Copy button for the primary key of the device enrollment"::: <a id="firstbootsequence"></a> To update and run the provisioning sample with your device information: 2. Copy the **ID Scope** value. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the ID Scope value for the instance."::: 3. In Visual Studio, open the *azure_iot_sdks.sln* solution file that was generated by running CMake. 
The solution file should be in the following location: To update and run the provisioning sample with your device information: static const char* id_scope = "0ne00002193"; ``` -6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown below: +6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown in the following example: ```c SECURE_DEVICE_TYPE hsm_type; To update and run the provisioning sample with your device information: 2. Copy the **ID Scope** value. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the ID Scope value for the instance."::: 3. Open a command prompt and go to the *SymmetricKeySample* in the cloned sdk repository: To update and run the provisioning sample with your device information: cd '.\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample\' ``` -4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters are used in this article when running the sample. Review the code in this file. No changes are needed. +4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the available parameters for the sample. Only the first three required parameters are used in this article when running the sample. Review the code in this file. No changes are needed. | Parameter | Required | Description | | :-- | :- | :-- | To update and run the provisioning sample with your device information: 2. Copy the **ID Scope** and **Global device endpoint** values. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance."::: 3. Open a command prompt for executing Node.js commands, and go to the following directory: To update and run the provisioning sample with your device information: provisioningClient.setProvisioningPayload({a: 'b'}); ``` - You may comment out this code, as it is not needed with for this quick start. A custom payload would be required you wanted to use a custom allocation function to assign your device to an IoT Hub. For more information, see [Tutorial: Use custom allocation policies](tutorial-custom-allocation-policies.md). + You may comment out this code, as it's not needed for this quickstart. A custom payload would be required if you wanted to use a custom allocation function to assign your device to an IoT hub. For more information, see [Tutorial: Use custom allocation policies](tutorial-custom-allocation-policies.md). The `provisioningClient.register()` method attempts the registration of your device. 
To update and run the provisioning sample with your device information: 7. You should now see something similar to the following output. A "Hello World" string is sent to the hub as a test message. ```output- D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ== -- Initializing the device provisioning client... - Initialized for registration Id symm-key-csharp-device-01. - Registering with the device provisioning service... - Registration status: Assigned. - Device csharp-device-01 registered to ExampleIoTHub.azure-devices.net. - Creating symmetric key authentication for IoT Hub... - Testing the provisioned device with IoT Hub... - Sending a telemetry message... - Finished. - Enter any key to exit. + D:\azure-iot-samples-csharp\provisioning\device\samples>node register_symkey.js + registration succeeded + assigned hub=ExampleIoTHub.azure-devices.net + deviceId=nodejs-device-01 + payload=undefined + Client connected + send status: MessageEnqueued ``` ::: zone-end To update and run the provisioning sample with your device information: 2. Copy the **ID Scope** and **Global device endpoint** values. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance."::: 3. Open a command prompt and go to the directory where the sample file, _provision_symmetric_key.py_, is located. To update and run the provisioning sample with your device information: 1. In the main menu of your Device Provisioning Service, select **Overview**. -2. Copy the **ID Scope** and **Global device endpoint** values. These are your `SCOPE_ID` and `GLOBAL_ENDPOINT` respectively. +2. Copy the **ID Scope** and **Global device endpoint** values. These values are your `SCOPE_ID` and `GLOBAL_ENDPOINT` parameters, respectively. - :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance."::: 3. Open the Java device sample code for editing. The full path to the device sample code is: To update and run the provisioning sample with your device information: 3. Select the IoT hub to which your device was assigned. -4. In the **Explorers** menu, select **IoT Devices**. +4. In the **Device management** menu, select **Devices**. -5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh** at the top of the page. +5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *Enabled*. If you don't see your device, select **Refresh** at the top of the page. 
:::zone pivot="programming-language-ansi-c" - :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration.png" alt-text="Device is registered with the IoT hub"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the C example."::: ::: zone-end :::zone pivot="programming-language-csharp" - :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-csharp.png" alt-text="CSharp device is registered with the IoT hub"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-csharp.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the C# example."::: ::: zone-end :::zone pivot="programming-language-nodejs" - :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-nodejs.png" alt-text="Node.js device is registered with the IoT hub"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-nodejs.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Node.js example."::: ::: zone-end :::zone pivot="programming-language-python" - :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-python.png" alt-text="Python device is registered with the IoT hub"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-python.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Python example."::: ::: zone-end ::: zone pivot="programming-language-java" - :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-java.png" alt-text="Java device is registered with the IoT hub"::: + :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-java.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Java example."::: ::: zone-end |
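For reference, the core registration flow that the quickstart's C# sample performs can be sketched in a few lines. This is an illustrative sketch using the `Microsoft.Azure.Devices.Provisioning.Client` SDK; the ID scope, registration ID, and key are placeholders copied from your own instance and enrollment, and error handling is omitted:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

class Program
{
    static async Task Main()
    {
        // Placeholders: copy these values from the portal and your enrollment.
        const string globalDeviceEndpoint = "global.azure-devices-provisioning.net";
        const string idScope = "<your ID scope>";
        const string registrationId = "<your registration ID>";
        const string primaryKey = "<enrollment primary key>";

        using var security = new SecurityProviderSymmetricKey(registrationId, primaryKey, null);
        using var transport = new ProvisioningTransportHandlerMqtt();

        ProvisioningDeviceClient provClient = ProvisioningDeviceClient.Create(
            globalDeviceEndpoint, idScope, security, transport);

        // Register the device; DPS assigns it to the linked IoT hub.
        DeviceRegistrationResult result = await provClient.RegisterAsync();
        Console.WriteLine($"Status: {result.Status}, hub: {result.AssignedHub}, device: {result.DeviceId}");
    }
}
```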
iot-dps | Quick Create Simulated Device X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md | Title: Quickstart - Provision an X.509 certificate simulated device to Microsoft description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service Previously updated : 11/01/2022 Last updated : 04/06/2023 zone_pivot_groups: iot-dps-set1 # Quickstart: Provision an X.509 certificate simulated device -In this quickstart, you'll create a simulated device on your Windows machine. The simulated device will be configured to use the [X.509 certificate attestation](concepts-x509-attestation.md) mechanism for authentication. After you've configured your device, you'll then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. +In this quickstart, you create a simulated device on your Windows machine. The simulated device is configured to use the [X.509 certificate attestation](concepts-x509-attestation.md) mechanism for authentication. After you've configured your device, you then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service. If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing. -This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md). +This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geo latency](how-to-provision-multitenant.md). ## Prerequisites The following prerequisites are for a Windows development environment. For Linux * Open both a Windows command prompt and a Git Bash prompt. - The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell. + The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell. ## Prepare your development environment ::: zone pivot="programming-language-ansi-c" -In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device, during the device's boot sequence. +In this section, you prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device during the device's boot sequence. 1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest). 
git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive ## Create a self-signed X.509 device certificate -In this section, you'll use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate will be uploaded to your provisioning service instance and verified by the service. +In this section, you use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate is uploaded to your provisioning service instance and verified by the service. > [!CAUTION] > Use certificates created with OpenSSL in this quickstart for development testing only. Perform the steps in this section in your Git Bash prompt. 7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`. -Keep the Git Bash prompt open. You'll need it later in this quickstart. +Keep the Git Bash prompt open. You need it later in this quickstart. ::: zone-end ::: zone pivot="programming-language-csharp" -The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart. +The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart. 1. To generate the PKCS12 formatted file expected by the sample, enter the following command: The C# sample code is set up to use X.509 certificates that are stored in a pass cp certificate.pfx ./azure-iot-sdk-csharp/provisioning/device/samples/"Getting Started"/X509Sample ``` -You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. +You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. ::: zone-end You won't need the Git Bash prompt for the rest of this quickstart. However, you cp unencrypted-device-key.pem ./azure-iot-sdk-node/provisioning/device/samples ``` -You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. +You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. ::: zone-end You won't need the Git Bash prompt for the rest of this quickstart. However, you cp device-key.pem ./azure-iot-sdk-python/samples/async-hub-scenarios ``` -You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. +You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps. ::: zone-end ::: zone pivot="programming-language-java" You won't need the Git Bash prompt for the rest of this quickstart. However, you 7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`. -Keep the Git Bash prompt open. 
You'll need it later in this quickstart. +Keep the Git Bash prompt open. You need it later in this quickstart. ::: zone-end In this section, you update the sample code with your Device Provisioning Servic ### Configure the custom HSM stub code -The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart will be hardcoded in the custom Hardware Security Module (HSM) stub code. +The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart are hardcoded in the custom Hardware Security Module (HSM) stub code. To update the custom HSM stub code to simulate the identity of the device with ID `my-x509-device`: To update the custom HSM stub code to simulate the identity of the device with I "--END CERTIFICATE--"; ``` - Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output. + Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `CERTIFICATE` string constant value and writes it to the output. ```Bash sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' device-cert.pem To update the custom HSM stub code to simulate the identity of the device with I "--END RSA PRIVATE KEY--"; ``` - Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output. + Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `PRIVATE_KEY` string constant value and writes it to the output. ```Bash sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' unencrypted-device-key.pem To update the custom HSM stub code to simulate the identity of the device with I ::: zone pivot="programming-language-csharp" -In this section, you'll use your Windows command prompt. +In this section, you use your Windows command prompt. 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service. In this section, you'll use your Windows command prompt. 3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer. -4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section. The certificate file will default to *./certificate.pfx* and prompt for the .pfx password. +4. Enter the following command to build and run the X.509 device provisioning sample. Replace the `<IDScope>` value with the ID Scope that you copied in the previous section. 
The certificate file defaults to *./certificate.pfx*, and the sample prompts for the .pfx password.

 ```cmd
 dotnet run -- -s <IDScope>
 
In this section, you'll use your Windows command prompt.    dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234
 ```

-5. The device connects to DPS and is assigned to an IoT hub. Then, the device will send a telemetry message to the IoT hub. +5. The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub.

 ```output
 Loading the certificate...
 
In this section, you'll use your Windows command prompt. ::: zone-end ::: zone pivot="programming-language-nodejs" -In this section, you'll use your Windows command prompt. +In this section, you use your Windows command prompt. 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service. In this section, you'll use your Windows command prompt. ::: zone pivot="programming-language-python" -In this section, you'll use your Windows command prompt. +In this section, you use your Windows command prompt. 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service. In this section, you'll use your Windows command prompt. 1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/v2/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes. -1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub. +1. Run the sample. The sample connects to DPS, which provisions the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.

 ```cmd
 $ python azure-iot-sdk-python/samples/async-hub-scenarios/provision_x509.py
 
In this section, you use both your Windows command prompt and your Git Bash prom "--END CERTIFICATE--"; ``` - Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output. + Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPublicPem` string constant value and writes it to the output.

 ```Bash
 sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' device-cert.pem
 
In this section, you use both your Windows command prompt and your Git Bash prom "--END PRIVATE KEY--"; ``` - Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output. + Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPrivateKey` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' unencrypted-device-key.pem In this section, you use both your Windows command prompt and your Git Bash prom java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar ``` - The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub. + The sample connects to DPS, which provisions the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub. ```output Starting... |
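Several steps in this quickstart suggest keeping the Git Bash prompt open so you can check your certificate if a later step fails. As a minimal sketch of what that check might look like, assuming the `device-cert.pem` file and the `my-x509-device` registration ID from the steps above (the filenames come from this quickstart, not from the sample code):

```Bash
# Dump the full contents of the self-signed device certificate
openssl x509 -in device-cert.pem -text -noout

# Print only the subject; the CN should match the registration ID my-x509-device
openssl x509 -in device-cert.pem -noout -subject
```

If the subject CN doesn't match the registration ID in your individual enrollment entry, provisioning fails authentication.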
iot-dps | Quick Setup Auto Provision Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-cli.md | -The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart details using the Azure CLI to create an IoT hub and an IoT Hub Device Provisioning Service, and to link the two services together. +The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart details using the Azure CLI to create an IoT hub and an IoT Hub Device Provisioning Service instance, and to link the two services together. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] az group create --name my-sample-resource-group --location westus Create an IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. -The following example creates an IoT hub named *my-sample-hub* in the *westus* location. An IoT hub name must be globally unique in Azure, so you may want to add a unique prefix or suffix to the example name, or choose a new name altogether. Make sure your name follows proper naming conventions for an IoT hub: it should be 3-50 characters in length, and can contain only upper or lower case alphanumeric characters or hyphens ('-'). +The following example creates an IoT hub named *my-sample-hub* in the *westus* location. An IoT hub name must be globally unique in Azure, so either add a unique prefix or suffix to the example name or choose a new name altogether. Make sure your name follows proper naming conventions for an IoT hub: it should be 3-50 characters in length, and can contain only upper or lower case alphanumeric characters or hyphens ('-'). ```azurecli-interactive az iot hub create --name my-sample-hub --resource-group my-sample-resource-group --location westus ``` -## Create a Device Provisioning Service +## Create a Device Provisioning Service instance -Create a Device Provisioning Service with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command. +Create a Device Provisioning Service instance with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command. -The following example creates a provisioning service named *my-sample-dps* in the *westus* location. You'll also choose a globally unique name for your own provisioning service. Make sure it follows proper naming conventions for an IoT Hub Device Provisioning Service: it should be 3-64 characters in length and can contain only upper or lower case alphanumeric characters or hyphens ('-'). +The following example creates a Device Provisioning Service instance named *my-sample-dps* in the *westus* location. You must also choose a globally unique name for your own instance. Make sure it follows proper naming conventions for an IoT Hub Device Provisioning Service: it should be 3-64 characters in length and can contain only upper or lower case alphanumeric characters or hyphens ('-'). ```azurecli-interactive az iot dps create --name my-sample-dps --resource-group my-sample-resource-group --location westus az iot dps create --name my-sample-dps --resource-group my-sample-resource-group ## Get the connection string for the IoT hub -You need your IoT hub's connection string to link it with the Device Provisioning Service. 
Use the [az iot hub show-connection-string](/cli/azure/iot/hub#az-iot-hub-show-connection-string) command to get the connection string and use its output to set a variable that you'll use when you link the two resources. +You need your IoT hub's connection string to link it with the Device Provisioning Service. Use the [az iot hub show-connection-string](/cli/azure/iot/hub#az-iot-hub-show-connection-string) command to get the connection string and use its output to set a variable that's used later when you link the two resources. The following example sets the *hubConnectionString* variable to the value of the connection string for the primary key of the hub's *iothubowner* policy (the `--policy-name` parameter can be used to specify a different policy). Replace *my-sample-hub* with the unique IoT hub name you chose earlier. The command uses the Azure CLI [query](/cli/azure/query-azure-cli) and [output](/cli/azure/format-output-azure-cli#tsv-output-format) options to extract the connection string from the command output. The linked IoT hub is shown in the *properties.iotHubs* collection. ## Clean up resources -Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the following commands to delete the provisioning service, the IoT hub or the resource group and all of its resources. Replace the names of the resources written below with the names of your own resources. +Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the following commands to delete the provisioning service, the IoT hub, or the resource group and all of its resources. Replace the names of the resources included in the following commands with the names of your own resources. To delete the provisioning service, run the [az iot dps delete](/cli/azure/iot/dps#az-iot-dps-delete) command: |
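As a minimal sketch of that delete command, assuming the resource names used earlier in this quickstart:

```azurecli-interactive
az iot dps delete --name my-sample-dps --resource-group my-sample-resource-group
```

Deleting the IoT hub (`az iot hub delete`) or the whole resource group (`az group delete`) follows the same pattern.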
iot-dps | Quick Setup Auto Provision Rm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-rm.md | Title: Quickstart - Create an Azure IoT Hub Device Provisioning Service (DPS) us description: Azure quickstart - Learn how to create an Azure IoT Hub Device Provisioning Service (DPS) using Azure Resource Manager template (ARM template). Previously updated : 01/27/2021 Last updated : 04/06/2023 You can use an [Azure Resource Manager](../azure-resource-manager/management/ove [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -This quickstart uses [Azure portal](../azure-resource-manager/templates/deploy-portal.md), and the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the template, but you can easily use the [PowerShell](../azure-resource-manager/templates/deploy-powershell.md), .NET, Ruby, or other programming languages to perform these steps and deploy your template. +This quickstart uses [Azure portal](../azure-resource-manager/templates/deploy-portal.md) and the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the template. However, you can also use [PowerShell](../azure-resource-manager/templates/deploy-powershell.md), .NET, Ruby, or other programming languages to perform these steps and deploy your template. -If your environment meets the prerequisites, and you're already familiar with using ARM templates, selecting the **Deploy to Azure** button below will open the template for deployment in the Azure portal. +If your environment meets the prerequisites, and you're already familiar with using ARM templates, selecting the **Deploy to Azure** button opens the template for deployment in the Azure portal. [Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-device-provisioning%2fazuredeploy.json) The template used in this quickstart is from [Azure Quickstart Templates](https: :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.devices/iothub-device-provisioning/azuredeploy.json"::: -Two Azure resources are defined in the template above: +Two Azure resources are defined in the previous template: -* [**Microsoft.Devices/iothubs**](/azure/templates/microsoft.devices/iothubs): Creates a new Azure IoT Hub. -* [**Microsoft.Devices/provisioningservices**](/azure/templates/microsoft.devices/provisioningservices): Creates a new Azure IoT Hub Device Provisioning Service with the new IoT Hub already linked to it. +* [**Microsoft.Devices/IotHubs**](/azure/templates/microsoft.devices/iothubs): Creates a new Azure IoT hub. +* [**Microsoft.Devices/provisioningServices**](/azure/templates/microsoft.devices/provisioningservices): Creates a new Azure IoT Hub Device Provisioning Service with the new IoT hub already linked to it. ## Deploy the template #### Deploy with the Portal -1. Select the following image to sign in to Azure and open the template for deployment. The template creates a new Iot Hub and DPS resource. The hub will be linked in the DPS resource. +1. Select the following image to sign in to Azure and open the template for deployment. The template creates a new IoT hub and DPS resource. The new IoT hub is linked to the DPS resource.
[Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-device-provisioning%2fazuredeploy.json) Two Azure resources are defined in the template above:  - Unless it's specified below, use the default value to create the Iot Hub and DPS resource. + Unless otherwise specified for the following fields, use the default value to create the IoT hub and DPS resource. | Field | Description | | :- | :- | Two Azure resources are defined in the template above: 3. On the next screen, read the terms. If you agree to all terms, select **Create**. - The deployment will take a few moments to complete. + The deployment takes a few moments to complete. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). Sign in to your Azure account and select your subscription. az account set --subscription {your subscription name or id} ``` -3. Copy and paste the following commands into your CLI prompt. Then execute the commands by pressing **ENTER**. +3. Copy and paste the following commands into your CLI prompt. Then execute the commands by selecting the Enter key. > [!TIP] > The commands prompt for a resource group location. > You can view a list of available locations by first running the command: > > `az account list-locations -o table` Sign in to your Azure account and select your subscription. read ``` -4. The commands will prompt you for the following information. Provide each value and press **ENTER**. +4. The commands prompt you for the following information. Provide each value and select the Enter key. | Parameter | Description | | :-- | :- |- | **Project name** | The value of this parameter will be used to create a resource group to hold all resources. The string `rg` will be added to the end of the value for your resource group name. | - | **location** | This value is the region where all resources will reside. | + | **Project name** | The value of this parameter is used to create a resource group to hold all resources. The string `rg` is added to the end of the value for your resource group name. | + | **location** | This value is the region where all resources are created. | | **iotHubName** | Enter a name for the IoT Hub that must be globally unique within the *.azure-devices.net* namespace. You need the hub name in the next section when you validate the deployment. | | **provisioningServiceName** | Enter a name for the new Device Provisioning Service (DPS) resource. The name must be globally unique within the *.azure-devices-provisioning.net* namespace. You need the DPS name in the next section when you validate the deployment. | - The AzureCLI is used to deploy the template. In addition to the Azure CLI, you can also use the Azure PowerShell, Azure portal, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). + The Azure CLI is used to deploy the template. In addition to the Azure CLI, you can also use the Azure PowerShell, Azure portal, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). ## Review deployed resources Sign in to your Azure account and select your subscription.
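One way to review the deployed resources is to query the new DPS resource with the Azure CLI. A minimal sketch, assuming the `provisioningServiceName` value you entered during deployment is still set in a shell variable of the same name:

```azurecli-interactive
az iot dps show --name "$provisioningServiceName"
```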
Notice the hubs that are linked on the `iotHubs` member. - ## Clean up resources Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the Azure portal or Azure CLI to delete the resource group and all of its resources. To delete the resource group deployed using the Azure CLI: az group delete --name "${projectName}rg" ``` -You can also delete resource groups and individual resources using the Azure portal, PowerShell, or REST APIs, as well as with supported platform SDKs published for Azure Resource Manager or IoT Hub Device Provisioning Service. +You can also delete resource groups and individual resources using any of the following options: ++- Azure portal +- PowerShell +- REST APIs +- Supported platform SDKs published for Azure Resource Manager or IoT Hub Device Provisioning Service ## Next steps |
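For reference, the CLI deployment that this quickstart walks through reduces to creating a resource group and deploying the quickstart template into it. A minimal sketch, assuming the template URI behind the **Deploy to Azure** button and the parameter names from the table above; the shell variables stand in for the values gathered by the `read` prompts:

```azurecli-interactive
# Assumes projectName, location, iotHubName, and provisioningServiceName
# were set by the read prompts in the steps above
az group create --name "${projectName}rg" --location "$location"

az deployment group create \
  --resource-group "${projectName}rg" \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devices/iothub-device-provisioning/azuredeploy.json" \
  --parameters iotHubName="$iotHubName" provisioningServiceName="$provisioningServiceName"
```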
iot-dps | Quick Setup Auto Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision.md | Title: Quickstart - Set up Device Provisioning Service in portal+ description: Quickstart - Set up the Azure IoT Hub Device Provisioning Service (DPS) in the Microsoft Azure portal Previously updated : 08/06/2021 Last updated : 04/06/2023 -In this quickstart, you will learn how to set up the IoT Hub Device Provisioning Service in the Azure portal. The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentications. For more information, please refer to [IoT Hub Device Provisioning Service overview](about-iot-dps.md). +In this quickstart, you learn how to set up the IoT Hub Device Provisioning Service in the Azure portal. The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentication. For more information, see [IoT Hub Device Provisioning Service overview](about-iot-dps.md). -To provision your devices, you will: +To provision your devices, you first perform the following steps: -* Use the Azure portal to create an IoT Hub -* Use the Azure portal to create an IoT Hub Device Provisioning Service -* Link the IoT hub to the Device Provisioning Service +> [!div class="checklist"] > * Use the Azure portal to create an IoT hub > * Use the Azure portal to create an IoT Hub Device Provisioning Service instance > * Link the IoT hub to the Device Provisioning Service instance ## Prerequisites -You'll need an Azure subscription to begin with this article. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F), if you haven't already. +If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. ## Create an IoT hub [!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)] -## Create a new IoT Hub Device Provisioning Service +## Create a new IoT Hub Device Provisioning S |