Updates from: 04/26/2023 01:13:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Previously updated : 10/06/2022 Last updated : 04/25/2023
Provisioning integrates with Azure Monitor logs and Log Analytics. With Azure mo
## Enabling provisioning logs
-You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+You should already be familiar with Azure monitoring and Log Analytics. If not, jump over to learn about them, and then come back to learn about application provisioning logs. To learn more about Azure monitoring, see [Azure Monitor overview](../../azure-monitor/overview.md). To learn more about Azure Monitor logs and Log Analytics, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
Once you've configured Azure monitoring, you can enable logs for application provisioning. The option is located on the **Diagnostics settings** page.
The underlying data stream that Provisioning sends log viewers is almost identic
Azure Monitor workbooks provide a flexible canvas for data analysis. They also provide for the creation of rich visual reports within the Azure portal. To learn more, see [Azure Monitor Workbooks overview](../../azure-monitor/visualize/workbooks-overview.md).
-Application provisioning comes with a set of pre-built workbooks. You can find them on the Workbooks page. To view the data, you'll need to ensure that all the filters (timeRange, jobID, appName) are populated. You'll also need to make sure you've provisioned an app, otherwise there won't be any data in the logs.
+Application provisioning comes with a set of prebuilt workbooks. You can find them on the Workbooks page. To view the data, ensure that all the filters (timeRange, jobID, appName) are populated. Also confirm that you've provisioned an app; otherwise, there's no data in the logs.
:::image type="content" source="media/application-provisioning-log-analytics/workbooks.png" alt-text="Application provisioning workbooks" lightbox="media/application-provisioning-log-analytics/workbooks.png":::
Alert when there's a spike in disables or deletes.
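As a sketch of how such an alert query might start, the following uses the Az PowerShell module to run a Kusto query against the workspace that receives the provisioning logs. The `AADProvisioningLogs` table and `ProvisioningAction` column names are assumptions here; verify them against your workspace schema before wiring the query into an alert rule.

```powershell
# Requires the Az.OperationalInsights module: Install-Module Az.OperationalInsights
Connect-AzAccount | Out-Null

# Placeholder workspace ID: the Log Analytics workspace that receives the provisioning logs.
$workspaceId = "00000000-0000-0000-0000-000000000000"

# Count disable and delete provisioning events per hour over the last day.
# Table and column names (AADProvisioningLogs, ProvisioningAction) are assumptions;
# confirm them in your workspace before alerting on the result.
$query = @"
AADProvisioningLogs
| where TimeGenerated > ago(1d)
| where ProvisioningAction in ("Disable", "Delete")
| summarize Count = count() by bin(TimeGenerated, 1h), ProvisioningAction
| order by TimeGenerated asc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table TimeGenerated, ProvisioningAction, Count
```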
## Community contributions
-We're taking an open source and community-based approach to application provisioning queries and dashboards. If you've built a query, alert, or workbook that you think others would find useful, be sure to publish it to the [AzureMonitorCommunity GitHub repo](https://github.com/microsoft/AzureMonitorCommunity). Then shoot us an email with a link. We'll review and publish it to the service so others can benefit too. You can contact us at provisioningfeedback@microsoft.com.
+We're taking an open source and community-based approach to application provisioning queries and dashboards. Build a query, alert, or workbook that you think is useful to others, then publish it to the [AzureMonitorCommunity GitHub repo](https://github.com/microsoft/AzureMonitorCommunity). Shoot us an email with a link. We review and publish queries and dashboards to the service so others benefit too. Contact us at provisioningfeedback@microsoft.com.
## Next steps
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
The following table lists partners who are Microsoft-compatible FIDO2 security k
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band |
| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
-| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ |
+| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-key/ |
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
active-directory Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md
Previously updated : 02/23/2022 Last updated : 04/24/2023
This article describes how to generate and view a system report in Permissions M
## Generate a system report
-1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+1. From the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
The **Systems Reports** subtab displays the following options in the **Reports** table:
- **Report Name**: The name of the report.
- **Category**: The type of report: **Permission**.
- - **Authorization System**: The authorization system activity in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP).
+ - **Authorization System**: The cloud provider included in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP).
- **Format**: The format in which the report is available: comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
-1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report.
-
- Or, from the ellipses **(...)** menu, select **Download**.
-
- The following message displays: **Successfully Started To Generate On Demand Report.**
-
+1. In the **Report Name** column, find the report you want to generate.
+1. From the ellipses **(...)** menu for that report, select **Generate & Download**. A new window appears where you provide more information for the report you want to generate.
+1. For each **Authorization System**, select the box next to each **Authorization System Name** you want to include in the report.
+1. If you want to combine all Authorization Systems into one report, check the box for **Collate**.
+1. For **Report Format**, check the box for a **Detailed** report, a **Summary** report, or both, in CSV format.
   > [!NOTE]
   > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.
-
+1. For **Schedule**, select the frequency for how often you want to receive the report(s). You can select **None** if you don't want to generate reports on a scheduled basis.
+1. Click **Save**. The message **Report has been created** appears, and the report is listed on the **Custom Reports** tab.
1. To refresh the list of reports, select **Reload**.
+1. On the **Custom Reports** tab, hover over the report, and then select the down arrow to **Download** it. The message **Successfully Started to Generate On Demand Report** appears, and the report is sent to your email.
## Search for a system report
1. On the **Systems Reports** subtab, select **Search**.
-1. In the **Search** box, enter the name of the report you want.
-
- The **Systems Reports** subtab displays a list of reports that match your search criteria.
+1. In the **Search** box, enter the name of the report you want to locate. The **Systems Reports** subtab displays a list of reports that match your search criteria.
1. Select a report from the **Report Name** column.
-1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To generate a report, click on the ellipses **(...)** menu for that report, then select **Generate & Download**.
+1. For each **Authorization System**, select the box next to each **Authorization System Name** you want to include in the report.
+1. If you want to combine all Authorization Systems into one report, check the box for **Collate**.
+1. For **Report Format**, check the box for a **Detailed** report, a **Summary** report, or both, in CSV format.
+
+ > [!NOTE]
+ > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.
+1. For **Schedule**, select the frequency for how often you want to receive the report(s). You can select **None** if you don't want to generate reports on a scheduled basis.
+1. Click **Save**. The message **Report has been created** appears, and the report is listed on the **Custom Reports** tab.
1. To refresh the list of reports, select **Reload**.
+1. On the **Custom Reports** tab, hover over the report, and then select the down arrow to **Download** it. The message **Successfully Started to Generate On Demand Report** appears, and the report is sent to your email.
+
## Next steps
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
Previously updated : 06/24/2022 Last updated : 04/24/2023
The rows in a downloaded CSV template are as follows:
- The first two rows of the upload template must not be removed or modified, or the upload can't be processed.
- The required columns are listed first.
-- We don't recommend adding new columns to the template. Any additional columns you add are ignored and not processed.
+- We don't recommend adding new columns to the template. Any other columns you add are ignored and not processed.
- We recommend that you download the latest version of the CSV template as often as possible.
- Add at least two users' UPNs or object IDs to successfully upload the file.
The rows in a downloaded CSV template are as follows:
1. Sign in to [the Azure portal](https://portal.azure.com) with a User administrator account in the organization. Group owners can also bulk import members of groups they own.
1. In Azure AD, select **Groups** > **All groups**.
1. Open the group to which you're adding members and then select **Members**.
-1. On the **Members** page, select **Import members**.
+1. On the **Members** page, select **bulk operations** and then choose **Import members**.
1. On the **Bulk import group members** page, select **Download** to get the CSV file template with required group member properties.
   ![The Import Members command is on the profile page for the group](./media/groups-bulk-import-members/import-panel.png)
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
When an Azure AD organization shares resources with external users with an ident
The following diagram illustrates the authentication flow when an external user signs in with an account from a non-Azure AD identity provider, such as Google, Facebook, or a federated SAML/WS-Fed identity provider.
-[ ![Diagram showing the Authentication flow for B2B guest users from an external directory.](media/authentication-conditional-access/authentication-flow-b2b-guests.png) ](media/authentication-conditional-access/authentication-flow-b2b-guests.png#lightbox))
+[ ![Diagram showing the Authentication flow for B2B guest users from an external directory.](media/authentication-conditional-access/authentication-flow-b2b-guests.png) ](media/authentication-conditional-access/authentication-flow-b2b-guests.png#lightbox)
| Step | Description |
|--|--|
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Both methods have drawbacks. For more information, see the following table.
| Area of concern | Local credentials | Federation |
|-|-|-|
| Security | - Access continues after external user terminates<br> - UserType is Member by default, which grants too much default access | - No user-level visibility <br> - Unknown partner security posture|
-| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might user consumer email|
+| Expense | - Password and multi-factor authentication (MFA) management<br> - Onboarding process<br> - Identity cleanup<br> - Overhead of running a separate directory | Small partners can't afford the infrastructure, lack expertise, and might use consumer email|
| Complexity | Partner users manage more credentials | Complexity grows with each new partner, and increases for partners|
Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information, see [Automate user provisioning to SaaS applications with
In June 2021, we added the following 42 new applications to our App gallery with Federation support:
-[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics Connector](https://www.mx3diagnostics.com/), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://onboard.syncloud.io/), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), [Plandisc](https://create.plandisc.com/account/logon), [foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/)
+[Taksel](https://help.ubuntu.com/community/Tasksel), [IDrive360](../saas-apps/idrive360-tutorial.md), [VIDA](../saas-apps/vida-tutorial.md), [ProProfs Classroom](../saas-apps/proprofs-classroom-tutorial.md), [WAN-Sign](../saas-apps/wan-sign-tutorial.md), [Citrix Cloud SAML SSO](../saas-apps/citrix-cloud-saml-sso-tutorial.md), [Fabric](../saas-apps/fabric-tutorial.md), [DssAD](https://cloudlicensing.deepseedsolutions.com/), [RICOH Creative Collaboration RICC](https://www.ricoh-europe.com/products/software-apps/collaboration-board-software/ricc/), [Styleflow](../saas-apps/styleflow-tutorial.md), [Chaos](https://accounts.chaosgroup.com/corporate_login), [Traced Connector](https://control.traced.app/signup), [Squarespace](https://account.squarespace.com/org/azure), [MX3 Diagnostics Connector](https://www.mx3diagnostics.com/), [Ten Spot](https://tenspot.co/api/v1/sso/azure/login/), [Finvari](../saas-apps/finvari-tutorial.md), [Mobile4ERP](https://play.google.com/store/apps/details?id=com.negevsoft.mobile4erp), [WalkMe US OpenID Connect](https://www.walkme.com/), [Neustar UltraDNS](../saas-apps/neustar-ultradns-tutorial.md), [cloudtamer.io](../saas-apps/cloudtamer-io-tutorial.md), [A Cloud Guru](../saas-apps/a-cloud-guru-tutorial.md), [PetroVue](../saas-apps/petrovue-tutorial.md), [Postman](../saas-apps/postman-tutorial.md), [ReadCube Papers](../saas-apps/readcube-papers-tutorial.md), [Peklostroj](https://app.peklostroj.cz/), [SynCloud](https://www.syncloud.org/apps.html), [Polymerhq.io](https://www.polymerhq.io/), [Bonos](../saas-apps/bonos-tutorial.md), [Astra Schedule](../saas-apps/astra-schedule-tutorial.md), [Draup](../saas-apps/draup-inc-tutorial.md), [Inc](../saas-apps/draup-inc-tutorial.md), [Applied Mental Health](../saas-apps/applied-mental-health-tutorial.md), [iHASCO Training](../saas-apps/ihasco-training-tutorial.md), [Nexsure](../saas-apps/nexsure-tutorial.md), [XEOX](https://login.xeox.com/), [Plandisc](https://create.plandisc.com/account/logon), [foundU](../saas-apps/foundu-tutorial.md), [Standard for Success Accreditation](../saas-apps/standard-for-success-accreditation-tutorial.md), [Penji Teams](https://web.penjiapp.com/), [CheckPoint Infinity Portal](../saas-apps/checkpoint-infinity-portal-tutorial.md), [Teamgo](../saas-apps/teamgo-tutorial.md), [Hopsworks.ai](../saas-apps/hopsworks-ai-tutorial.md), [HoloMeeting 2](https://backend2.holomeeting.io/)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
You can add free text notes to Enterprise applications. You can add any relevant
In September 2020, we added the following 34 new applications to our App gallery with Federation support:
-[VMware Horizon - Unified Access Gateway](), [Pulse Secure PCS](../saas-apps/vmware-horizon-unified-access-gateway-tutorial.md), [Inventory360](../saas-apps/pulse-secure-pcs-tutorial.md), [Frontitude](https://services.enteksystems.de/sso/microsoft/signup), [BookWidgets](https://www.bookwidgets.com/sso/office365), [ZVD_Server](https://zaas.zenmutech.com/user/signin), [HashData for Business](https://hashdata.app/login.xhtml), [SecureLogin](https://securelogin.securelogin.nu/sso/azure/login), [CyberSolutions MAILBASEΣ/CMSS](../saas-apps/cybersolutions-mailbase-tutorial.md), [CyberSolutions CYBERMAILΣ](../saas-apps/cybersolutions-cybermail-tutorial.md), [LimbleCMMS](https://auth.limblecmms.com/), [Glint Inc](../saas-apps/glint-inc-tutorial.md), [zeroheight](../saas-apps/zeroheight-tutorial.md), [Gender Fitness](https://app.genderfitness.com/), [Coeo Portal](https://my.coeo.com/), [Grammarly](../saas-apps/grammarly-tutorial.md), [Fivetran](../saas-apps/fivetran-tutorial.md), [Kumolus](../saas-apps/kumolus-tutorial.md), [RSA Archer Suite](../saas-apps/rsa-archer-suite-tutorial.md), [TeamzSkill](../saas-apps/teamzskill-tutorial.md), [raumfürraum](../saas-apps/raumfurraum-tutorial.md), [Saviynt](../saas-apps/saviynt-tutorial.md), [BizMerlinHR](https://marketplace.bizmerlin.net/bmone/signup), [Mobile Locker](../saas-apps/mobile-locker-tutorial.md), [Zengine](../saas-apps/zengine-tutorial.md), [CloudCADI](https://app.cloudcadi.com/login), [Simfoni Analytics](https://simfonianalytics.com/accounts/microsoft/login/), [Priva Identity & Access Management](https://my.priva.com/), [Nitro Pro](https://www.gonitro.com/nps/product-details/downloads), [Eventfinity](../saas-apps/eventfinity-tutorial.md), [Fexa](../saas-apps/fexa-tutorial.md), [Secured Signing Enterprise Portal](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Secured Signing Enterprise Portal AAD Setup](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Wistec Online](https://wisteconline.com/auth/oidc), [Oracle PeopleSoft - Protected by F5 BIG-IP APM](../saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md)
+[VMware Horizon - Unified Access Gateway](), [Pulse Secure PCS](../saas-apps/vmware-horizon-unified-access-gateway-tutorial.md), [Inventory360](../saas-apps/pulse-secure-pcs-tutorial.md), [Frontitude](https://services.enteksystems.de/sso/microsoft/signup), [BookWidgets](https://www.bookwidgets.com/sso/office365), [ZVD_Server](https://zaas.zenmutech.com/user/signin), [HashData for Business](https://hashdata.app/login.xhtml), [SecureLogin](https://securelogin.securelogin.nu/sso/azure/login), [CyberSolutions MAILBASEΣ/CMSS](../saas-apps/cybersolutions-mailbase-tutorial.md), [CyberSolutions CYBERMAILΣ](../saas-apps/cybersolutions-cybermail-tutorial.md), [LimbleCMMS](https://auth.limblecmms.com/), [Glint Inc](../saas-apps/glint-inc-tutorial.md), [zeroheight](../saas-apps/zeroheight-tutorial.md), [Gender Fitness](https://app.genderfitness.com/), [Coeo Portal](https://my.coeo.com/), [Grammarly](../saas-apps/grammarly-tutorial.md), [Fivetran](../saas-apps/fivetran-tutorial.md), [Kumolus](../saas-apps/kumolus-tutorial.md), [RSA Archer Suite](../saas-apps/rsa-archer-suite-tutorial.md), [TeamzSkill](../saas-apps/teamzskill-tutorial.md), [raumfürraum](../saas-apps/raumfurraum-tutorial.md), [Saviynt](../saas-apps/saviynt-tutorial.md), [BizMerlinHR](https://marketplace.bizmerlin.net/bmone/signup), [Mobile Locker](../saas-apps/mobile-locker-tutorial.md), [Zengine](../saas-apps/zengine-tutorial.md), [CloudCADI](https://cloudcadi.com/), [Simfoni Analytics](https://simfonianalytics.com/accounts/microsoft/login/), [Priva Identity & Access Management](https://my.priva.com/), [Nitro Pro](https://www.gonitro.com/nps/product-details/downloads), [Eventfinity](../saas-apps/eventfinity-tutorial.md), [Fexa](../saas-apps/fexa-tutorial.md), [Secured Signing Enterprise Portal](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Secured Signing Enterprise Portal AAD Setup](https://www.securedsigning.com/aad/Auth/ExternalLogin/AdminPortal), [Wistec Online](https://wisteconline.com/auth/oidc), [Oracle PeopleSoft - Protected by F5 BIG-IP APM](../saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md)
You can also find the documentation of all the applications from here: https://aka.ms/AppsTutorial.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information, see:
**Service category:** Group Management **Product capability:** End User Experiences
-A new and improved My Groups experience is now available at https://www.myaccount.microsoft.com/groups. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups support sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests.
-This experience replaces the existing My Groups experience at https://www.mygroups.microsoft.com in May.
+A new and improved My Groups experience is now available at `https://www.myaccount.microsoft.com/groups`. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups supports sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests.
+This experience replaces the existing My Groups experience at `https://www.mygroups.microsoft.com` in May.
For more information, see: [Update your Groups info in the My Apps portal](https://support.microsoft.com/account-billing/update-your-groups-info-in-the-my-apps-portal-bc0ca998-6d3a-42ac-acb8-e900fb1174a4).
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Modern authentication clients (Office 2016 and Office 2013, iOS, and Android app
To plan for rollback, use the [documented current federation settings](#document-current-federation-settings) and check the [federation design and deployment documentation](/windows-server/identity/ad-fs/deployment/windows-server-2012-r2-ad-fs-deployment-guide).
-The rollback process should include converting managed domains to federated domains by using the [Convert-MSOLDomainToFederated](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-1.0&preserve-view=true) cmdlet. If necessary, configuring extra claims rules.
+The rollback process should include converting managed domains to federated domains by using the [New-MgDomainFederationConfiguration](/powershell/module/microsoft.graph.identity.directorymanagement/new-mgdomainfederationconfiguration?view=graph-powershell-1.0&preserve-view=true) cmdlet. If necessary, configure extra claims rules.
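A minimal sketch of that rollback step with Microsoft Graph PowerShell follows; every endpoint URL, certificate value, and behavior setting below is a placeholder to replace with the federation settings you documented before cutover.

```powershell
# Requires the Microsoft Graph PowerShell SDK: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# All values below are placeholders; use the AD FS settings captured before cutover.
New-MgDomainFederationConfiguration -DomainId "contoso.com" `
    -DisplayName "Contoso AD FS" `
    -IssuerUri "http://contoso.com/adfs/services/trust/" `
    -PassiveSignInUri "https://sts.contoso.com/adfs/ls/" `
    -ActiveSignInUri "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed" `
    -MetadataExchangeUri "https://sts.contoso.com/adfs/services/trust/mex" `
    -SignOutUri "https://sts.contoso.com/adfs/ls/" `
    -SigningCertificate "MIIC3jCCAcag...base64-truncated..." `
    -PreferredAuthenticationProtocol "wsFed" `
    -FederatedIdpMfaBehavior "acceptIfMfaDoneByFederatedIdp"
```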
## Migration considerations
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
Previously updated : 09/22/2021 Last updated : 04/25/2023 # Customer intent: As an IT admin, I need to know how to implement password-based single sign-on in Azure Active Directory.
The configuration page for password-based SSO is simple. It includes only the UR
## Prerequisites To configure password-based SSO in your Azure AD tenant, you need:-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- An Azure account with an active subscription. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- Global Administrator, or owner of the service principal.
- An application that supports password-based SSO.
## Configure password-based single sign-on
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md
$roleAssignment = New-MgRoleManagementDirectoryRoleAssignment -DirectoryScopeId
# [Azure AD PowerShell](#tab/aad-powershell)
+
### Create a role-assignable group
Use the [New-AzureADMSGroup](/powershell/module/azuread/new-azureadmsgroup?branch=main) command to create a role-assignable group.
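A hedged sketch of that Azure AD PowerShell flow, assuming the AzureADPreview (or a recent AzureAD) module; the group name, description, and role are illustrative only.

```powershell
# Requires the AzureADPreview (or a recent AzureAD) module: Install-Module AzureADPreview
Connect-AzureAD

# Create a role-assignable security group; the name and description are illustrative.
$group = New-AzureADMSGroup -DisplayName "Contoso_Helpdesk_Administrators" `
    -Description "Members are assigned the Helpdesk Administrator role" `
    -MailEnabled $false -SecurityEnabled $true -MailNickname "contosohelpdesk" `
    -IsAssignableToRole $true

# Look up a built-in role and assign it to the new group at directory scope.
$role = Get-AzureADMSRoleDefinition -Filter "DisplayName eq 'Helpdesk Administrator'"
New-AzureADMSRoleAssignment -RoleDefinitionId $role.Id -PrincipalId $group.Id -DirectoryScopeId "/"
```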
active-directory Azure Ad Pci Dss Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/azure-ad-pci-dss-guidance.md
+
+ Title: Azure Active Directory PCI-DSS guidance
+description: Guidance on meeting payment card industry (PCI) compliance with Azure AD
+ Last updated : 04/18/2023
+# Azure Active Directory PCI-DSS guidance
+
+The Payment Card Industry Security Standards Council (PCI SSC) is responsible for developing and promoting data security standards and resources, including the Payment Card Industry Data Security Standard (PCI-DSS), to ensure the security of payment transactions. To achieve PCI compliance, organizations using Azure Active Directory (Azure AD) can refer to guidance in this document. However, it is the responsibility of the organizations to ensure their PCI compliance. Their IT teams, SecOps teams, and Solutions Architects are responsible for creating and maintaining secure systems, products, and networks that handle, process, and store payment card information.
+
+While Azure AD helps meet some PCI-DSS control requirements, and provides modern identity and access protocols for cardholder data environment (CDE) resources, it should not be the sole mechanism for protecting cardholder data. Therefore, review this document set and all PCI-DSS requirements to establish a comprehensive security program that preserves customer trust. For a complete list of requirements, please visit the official PCI Security Standards Council website at pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf)
+
+## PCI requirements for controls
+
+The global PCI-DSS v4.0 establishes a baseline of technical and operational standards for protecting account data. It "was developed to encourage and enhance payment card account data security and facilitate the broad adoption of consistent data security measures, globally. It provides a baseline of technical and operational requirements designed to protect account data. While specifically designed to focus on environments with payment card account data, PCI-DSS can also be used to protect against threats and secure other elements in the payment ecosystem."
+
+## Azure AD configuration and PCI-DSS
+
+This document serves as a comprehensive guide for technical and business leaders who are responsible for managing identity and access management (IAM) with Azure Active Directory (Azure AD) in compliance with the Payment Card Industry Data Security Standard (PCI DSS). By following the key requirements, best practices, and approaches outlined in this document, organizations can reduce the scope, complexity, and risk of PCI noncompliance, while promoting security best practices and standards compliance. The guidance provided in this document aims to help organizations configure Azure AD in a way that meets the necessary PCI DSS requirements and promotes effective IAM practices.
+
+Technical and business leaders can use the following guidance to fulfill responsibilities for identity and access management (IAM) with Azure AD. For more information on PCI-DSS in other Microsoft workloads, see [Overview of the Microsoft cloud security benchmark (v1)](/security/benchmark/azure/overview).
+
+PCI-DSS requirements and testing procedures consist of 12 principal requirements that ensure the secure handling of payment card information. Together, these requirements are a comprehensive framework that helps organizations secure payment card transactions and protect sensitive cardholder data.
+
+Azure AD is an enterprise identity service that secures applications, systems, and resources to support PCI-DSS compliance. The following table has the PCI principal requirements and links to Azure AD recommended controls for PCI-DSS compliance.
+
+## Principal PCI-DSS requirements
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't addressed or met by Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+|PCI Data Security Standard - High Level Overview|Azure AD recommended PCI-DSS controls|
+|-|-|
+|Build and Maintain Secure Network and Systems|[1. Install and Maintain Network Security Controls](pci-requirement-1.md) </br> [2. Apply Secure Configurations to All System Components](pci-requirement-2.md)|
+|Protect Account Data|3. Protect Stored Account Data </br> 4. Protect Cardholder Data with Strong Cryptography During Transmission Over Public Networks|
+|Maintain a Vulnerability Management Program|[5. Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) </br> [6. Develop and Maintain Secure Systems and Software](pci-requirement-6.md)|
+|Implement Strong Access Control Measures|[7. Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) </br> [8. Identify and Authenticate Access to System Components](pci-requirement-8.md) </br> 9. Restrict Physical Access to System Components and Cardholder Data|
+|Regularly Monitor and Test Networks|[10. Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) </br> [11. Test Security of Systems and Networks Regularly](pci-requirement-11.md)|
+|Maintain an Information Security Policy|12. Support Information Security with Organizational Policies and Programs|
+
+## PCI-DSS applicability
+
+PCI-DSS applies to organizations that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD). These data elements, considered together, are known as account data. PCI-DSS provides security guidelines and requirements for organizations that affect the cardholder data environment (CDE). Entities that safeguard the CDE ensure the confidentiality and security of customer payment information.
+
+CHD consists of:
+
+* **Primary account number (PAN)** - a unique payment card number (credit, debit, or prepaid cards, etc.) that identifies the issuer and the cardholder account
+* **Cardholder name** - the card owner
+* **Card expiration date** - the day and month the card expires
+* **Service code** - a three- or four-digit value in the magnetic stripe that follows the expiration date of the payment card on the track data. It defines service attributes, differentiating between international and national interchange, or identifying usage restrictions.
+
+SAD consists of security-related information used to authenticate cardholders and/or authorize payment card transactions. SAD includes, but isn't limited to:
+
+* **Full track data** - magnetic stripe or chip equivalent
+* **Card verification codes/values** - also referred to as the card validation code (CVC), or value (CVV). It's the three- or four-digit value on the front or back of the payment card. It's also referred to as CAV2, CVC2, CVN2, CVV2 or CID, determined by the participating payment brands (PPB).
+* **PIN** - personal identification number
+ * **PIN blocks** - an encrypted representation of the PIN used in a debit or credit card transaction. It ensures the secure transmission of sensitive information during a transaction
+
+Protecting the CDE is essential to the security and confidentiality of customer payment information and helps:
+
+* **Preserve customer trust** - customers expect their payment information to be handled securely and kept confidential. If a company experiences a data breach that results in the theft of customer payment data, it can degrade customer trust in the company and cause reputational damage.
+* **Comply with regulations** - companies processing credit card transactions are required to comply with the PCI-DSS. Failure to comply results in fines, legal liabilities, and resultant reputational damage.
+* **Financial risk mitigation** - data breaches have significant financial effects, including costs for forensic investigations, legal fees, and compensation for affected customers.
+* **Business continuity** - data breaches disrupt business operations and might affect credit card transaction processes. This scenario might lead to lost revenue, operational disruptions, and reputational damage.
+
+## PCI audit scope
+
+PCI audit scope relates to the systems, networks, and processes in the storage, processing, or transmission of CHD and/or SAD. If Account Data is stored, processed, or transmitted in a cloud environment, PCI-DSS applies to that environment and compliance typically involves validation of the cloud environment and the usage of it. There are five fundamental elements in scope for a PCI audit:
+
+* **Cardholder data environment (CDE)** - the area where CHD, and/or SAD, is stored, processed, or transmitted. It includes an organization's components that touch CHD, such as networks and network components, databases, servers, applications, and payment terminals.
+* **People** - with access to the CDE, such as employees, contractors, and third-party service providers, are in the scope of a PCI audit.
+* **Processes** - that involve CHD, such as authorization, authentication, encryption and storage of account data in any format, are within the scope of a PCI audit.
+* **Technology** - that processes, stores, or transmits CHD, including hardware such as printers, and multi-function devices that scan, print, and fax, end-user devices such as computers, laptops, workstations, administrative workstations, tablets and mobile devices, software, and other IT systems, are in the scope of a PCI audit.
+* **System components** - that might not store, process, or transmit CHD/SAD but have unrestricted connectivity to system components that store, process, or transmit CHD/SAD, or that could affect the security of the CDE.
+
+If PCI scope is minimized, organizations can effectively reduce the effects of security incidents and lower the risk of data breaches. Segmentation can be a valuable strategy for reducing the size of the PCI CDE, resulting in reduced compliance costs and overall benefits for the organization including but not limited to:
+
+* **Cost savings** - by limiting audit scope, organizations reduce time, resources, and expenses to undergo an audit, which leads to cost savings.
+* **Reduced risk exposure** - a smaller PCI audit scope reduces potential risks associated with processing, storing, and transmitting cardholder data. If the number of systems, networks, and applications subject to an audit is limited, organizations focus on securing their critical assets and reducing their risk exposure.
+* **Streamlined compliance** - narrowing audit scope makes PCI-DSS compliance more manageable and streamlined. Results are more efficient audits, fewer compliance issues, and a reduced risk of incurring noncompliance penalties.
+* **Improved security posture** - with a smaller subset of systems and processes, organizations allocate security resources and efforts efficiently. Outcomes are a stronger security posture, as security teams concentrate on securing critical assets and identifying vulnerabilities in a targeted and effective manner.
+
+## Strategies to reduce PCI audit scope
+
+An organization's definition of its CDE determines PCI audit scope. Organizations document and communicate this definition to the PCI-DSS Qualified Security Assessor (QSA) performing the audit. The QSA assesses controls for the CDE to determine compliance.
+Adherence to PCI standards and use of effective risk mitigation helps businesses protect customer personal and financial data, which maintains trust in their operations. The following section outlines strategies to reduce risk in PCI audit scope.
+
+### Tokenization
+
+Tokenization is a data security technique. Use tokenization to replace sensitive information, such as credit card numbers, with a unique token stored and used for transactions, without exposing sensitive data. Tokens reduce the scope of a PCI audit for the following requirements:
+
+* **Requirement 3** - Protect Stored Account Data
+* **Requirement 4** - Protect Cardholder Data with strong Cryptography During Transmission Over Open Public Networks
+* **Requirement 9** - Restrict Physical Access to Cardholder Data
+* **Requirement 10** - Log and Monitor All Access to System Components and Cardholder Data.
+
+When using cloud-based processing methodologies, consider the relevant risks to sensitive data and transactions. To mitigate these risks, it's recommended you implement relevant security measures and contingency plans to protect data and prevent transaction interruptions. As a best practice, use payment tokenization as a methodology to declassify data, and potentially reduce the footprint of the CDE. With payment tokenization, sensitive data is replaced with a unique identifier that reduces the risk of data theft and limits the exposure of sensitive information in the CDE.
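As an illustration of the concept only (not a PCI-validated design), the toy sketch below swaps a PAN for an opaque token so that downstream systems never handle the card number; the function name and in-memory vault are hypothetical.

```powershell
# Illustrative only: a toy token vault, not a PCI-validated tokenization service.
# Real tokenization is delegated to a dedicated, audited provider outside this sketch.
$tokenVault = @{}

function ConvertTo-PaymentToken {
    param([Parameter(Mandatory)][string]$Pan)    # primary account number
    $token = [guid]::NewGuid().ToString("N")     # opaque identifier with no relationship to the PAN
    $tokenVault[$token] = $Pan                   # the mapping stays inside the secured vault
    return $token
}

# Downstream systems store and log only the token, which keeps them away from raw card data.
$token = ConvertTo-PaymentToken -Pan "4111111111111111"   # standard test PAN
Write-Output "Charge recorded against token $token"
```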
+
+### Secure CDE
+
+PCI-DSS requires organizations to maintain a secure CDE. With effectively configured CDE, businesses can mitigate their risk exposure and reduce the associated costs for both on-premises and cloud environments. This approach helps minimize the scope of a PCI audit, making it easier and more cost-effective to demonstrate compliance with the standard.
+
+To configure Azure AD to secure the CDE:
+
+* Use passwordless credentials for users: Windows Hello for Business, FIDO2 security keys, and Microsoft Authenticator app
+* Use strong credentials for workload identities: certificates and managed identities for Azure resources.
+ * Integrate access technologies such as VPN, remote desktop, and network access points with Azure AD for authentication, if applicable
+* Enable privileged identity management and access reviews for Azure AD roles, privileged access groups and Azure resources
+* Use Conditional Access policies to enforce PCI-requirement controls, such as credential strength and device state, based on location, group membership, applications, and risk (see the sketch after this list)
+* Use modern authentication for CDE workloads
+* Archive Azure AD logs in security information and event management (SIEM) systems
+
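A minimal Microsoft Graph PowerShell sketch of one such Conditional Access policy; the group and application IDs are placeholders, and starting in report-only mode is an assumption you can tighten once the policy is validated.

```powershell
# Requires the Microsoft Graph PowerShell SDK: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Placeholder object IDs: the group holding CDE users and the CDE application registration.
$params = @{
    displayName = "CDE - require MFA and compliant device"
    state       = "enabledForReportingButNotEnforced"    # report-only while you validate impact
    conditions  = @{
        clientAppTypes = @("all")
        users          = @{ includeGroups = @("<cde-users-group-object-id>") }
        applications   = @{ includeApplications = @("<cde-app-object-id>") }
    }
    grantControls = @{
        operator        = "AND"
        builtInControls = @("mfa", "compliantDevice")
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```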
+Where applications and resources use Azure AD for identity and access management (IAM), the Azure AD tenant(s) are in scope of PCI audit, and the guidance herein is applicable. Organizations must evaluate identity and resource isolation requirements, between non-PCI and PCI workloads, to determine their best architecture.
+
+Learn more
+
+* [Introduction to delegated administration and isolated environments](../fundamentals/secure-with-azure-ad-introduction.md)
+* [How to use the Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc)
+* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+* [What are access reviews?](../governance/access-reviews-overview.md)
+* [What is Conditional Access?](../conditional-access/overview.md)
+* [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md)
+
+### Establish a responsibility matrix
+
+PCI compliance is the responsibility of entities that process payment card transactions including but not limited to:
+
+* Merchants
+* Card service providers
+* Merchant service providers
+* Acquiring banks
+* Payment processors
+* Payment card issuers
+* Hardware vendors
+
+These entities ensure payment card transactions are processed securely and are PCI-DSS compliant. All entities involved in payment card transactions have a role to help ensure PCI compliance.
+
+Azure PCI DSS compliance status doesn't automatically translate to PCI-DSS validation for the services you build or host on Azure. You're responsible for ensuring that your services achieve compliance with PCI-DSS requirements.
+
+### Establish continuous processes to maintain compliance
+
+Continuous processes entail ongoing monitoring and improvement of compliance posture. Benefits of continuous processes to maintain PCI compliance:
+
+* Reduced risk of security incidents and noncompliance
+* Improved data security
+* Better alignment with regulatory requirements
+* Increased customer and stakeholder confidence
+
+With ongoing processes, organizations respond effectively to changes in the regulatory environment and ever-evolving security threats.
+
+* **Risk assessment** - conduct this process to identify credit-card data vulnerabilities and security risks. Identify potential threats, assess the likelihood of threats occurring, and evaluate the potential effects on the business.
+* **Security awareness training** - employees who handle credit card data receive regular security awareness training to clarify the importance of protecting cardholder data and the measures to do so.
+* **Vulnerability management** - conduct regular vulnerability scans and penetration testing to identify network or system weaknesses exploitable by attackers.
+* **Monitor and maintain access control policies** - access to credit card data is restricted to authorized individuals. Monitor access logs to identify unauthorized access attempts.
+* **Incident response** - an incident response plan helps security teams take action during security incidents involving credit card data. Identify incident cause, contain the damage, and restore normal operations in a timely manner.
+* **Compliance monitoring** - conduct monitoring and auditing to ensure ongoing compliance with PCI-DSS requirements. Review security logs, conduct regular policy reviews, and ensure system components are accurately configured and maintained.
+
+### Implement strong security for shared infrastructure
+
+Typically, web services such as Azure have a shared infrastructure wherein customer data might be stored on the same physical server or data storage device. This scenario creates the risk of unauthorized customers accessing data they don't own, and the risk of malicious actors targeting the shared infrastructure. Azure AD security features help mitigate risks associated with shared infrastructure:
+
+* User authentication to network access technologies that support modern authentication protocols: virtual private network (VPN), remote desktop, and network access points.
+* Access control policies that enforce strong authentication methods and device compliance based on signals such as user context, device, location, and risk.
+* Conditional Access provides an identity-driven control plane and brings signals together, to make decisions, and enforce organizational policies.
+* Privileged role governance - access reviews, just-in-time (JIT) activation, etc.
+
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
+
+### Data residency
+
+PCI-DSS cites no specific geographic location for credit card data storage. However, it requires that cardholder data be stored securely, which might include geographic restrictions, depending on the organization's security and regulatory requirements. Different countries and regions have data protection and privacy laws. Consult with a legal or compliance advisor to determine applicable data residency requirements.
+
+Learn more: [Azure AD and data residency](../fundamentals/azure-ad-data-residency.md)
+
+### Third-party security risks
+
+A non-PCI compliant third-party provider poses a risk to PCI compliance. Regularly assess and monitor third-party vendors and service providers to ensure they maintain required controls to protect cardholder data.
+
+Azure AD features and functions in **Data residency** help mitigate risks associated with third-party security.
+
+### Logging and monitoring
+
+Implement accurate logging and monitoring to detect, and respond to, security incidents in a timely manner. Azure AD helps manage PCI compliance with audit and activity logs, and reports that can be integrated with a SIEM system. Azure AD provides role-based access control (RBAC) and MFA to secure access to sensitive resources, plus encryption and threat protection features to protect organizations from unauthorized access and data theft.
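As a sketch of reviewing those logs programmatically with Microsoft Graph PowerShell; the filter values and selected properties are illustrative, and SIEM integration itself is configured through diagnostic settings rather than this query.

```powershell
# Requires the Microsoft Graph PowerShell SDK: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "AuditLog.Read.All", "Directory.Read.All"

# Recent high-risk sign-ins; the filter value and selected columns are illustrative.
Get-MgAuditLogSignIn -Filter "riskLevelDuringSignIn eq 'high'" -Top 25 |
    Select-Object UserPrincipalName, AppDisplayName, IpAddress, CreatedDateTime

# Directory changes (for example, role and policy updates) from the last 24 hours.
$since = (Get-Date).AddDays(-1).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
Get-MgAuditLogDirectoryAudit -Filter "activityDateTime ge $since" -Top 25 |
    Select-Object ActivityDisplayName, Result, ActivityDateTime
```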
+
+Learn more:
+
+* [What are Azure AD reports?](../reports-monitoring/overview-reports.md)
+* [Azure AD built-in roles](../roles/permissions-reference.md)
+
+### Multi-application environments: host outside the CDE
+
+PCI-DSS ensures that companies that accept, process, store, or transmit credit card information maintain a secure environment. Hosting outside the CDE introduces risks such as:
+
+* Poor access control and identity management might result in unauthorized access to sensitive data and systems
+* Insufficient logging and monitoring of security events impedes detection and response to security incidents
+* Insufficient encryption and threat protection increases the risk of data theft and unauthorized access
+* Poor, or no security awareness and training for users might result in avoidable social engineering attacks, such as phishing
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md) (You're here)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+
active-directory Azure Ad Pci Dss Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/azure-ad-pci-dss-mfa.md
+
+ Title: Azure Active Directory PCI-DSS Multi-Factor Authentication guidance
+description: Learn the authentication methods supported by Azure AD to meet PCI MFA requirements
+ Last updated : 04/18/2023
+# Azure Active Directory PCI-DSS Multi-Factor Authentication guidance
+**Information Supplement: Multi-Factor Authentication v 1.0**
+
+Use the following table of authentication methods supported by Azure Active Directory (Azure AD) to meet requirements in the PCI Security Standards Council [Information Supplement, Multi-Factor Authentication v 1.0](https://listings.pcisecuritystandards.org/pdfs/Multi-Factor-Authentication-Guidance-v1.pdf).
+
+|Method|To meet requirements|Protection|MFA element|
+|-|-|-|-|
+|[Passwordless phone sign in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)|Something you have (device with a key), something you know or are (PIN or biometric) </br> In iOS, Authenticator Secure Element (SE) stores the key in Keychain. [Apple Platform Security, Keychain data protection](https://support.apple.com/guide/security/keychain-data-protection-secb0694df1a/web) </br> In Android, Authenticator uses Trusted Execution Engine (TEE) by storing the key in Keystore. [Developers, Android Keystore system](https://developer.android.com/training/articles/keystore) </br> When users authenticate using Microsoft Authenticator, Azure AD generates a random number the user enters in the app. This action fulfills the out-of-band authentication requirement. |Customers configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture, then Azure AD validates the authentication method. |
+|[Windows Hello for Business Deployment Prerequisite Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification) |Something you have (Windows device with a key), and something you know or are (PIN or biometric). </br> Keys are stored with device Trusted Platform Module (TPM). Customers use devices with hardware TPM 2.0 or later to meet the authentication method independence and out-of-band requirements. </br> [Certified Authenticator Levels](https://fidoalliance.org/certification/authenticator-certification-levels/)|Configure device protection policies to mitigate device compromise risk. For instance, Microsoft Intune compliance policies. |Users unlock the key with the gesture for Windows device sign in.|
+|[Enable passwordless security key sign-in, Enable FIDO2 security key method](../authentication/howto-authentication-passwordless-security-key.md)|Something that you have (FIDO2 security key) and something you know or are (PIN or biometric). </br> Keys are stored with hardware cryptographic features. Customers use FIDO2 keys, at least Authentication Certification Level 2 (L2) to meet the authentication method independence and out-of-band requirement.|Procure hardware with protection against tampering and compromise.|Users unlock the key with the gesture, then Azure AD validates the credential. |
+|[Overview of Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)|Something you have (smart card) and something you know (PIN). </br> Physical smart cards or virtual smartcards stored in TPM 2.0 or later, are a Secure Element (SE). This action meets the authentication method independence and out-of-band requirement.|Procure smart cards with protection against tampering and compromise.|Users unlock the certificate private key with the gesture, or PIN, then Azure AD validates the credential. |
+
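A hedged sketch of enabling the FIDO2 security key method tenant-wide through the Microsoft Graph authentication methods policy; the request body values are assumptions to adjust (for example, target a pilot group instead of all users), and the endpoint path should be verified against current Graph documentation.

```powershell
# Requires the Microsoft Graph PowerShell SDK: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Body values are assumptions; scope includeTargets to a pilot group before applying to all users.
$body = @{
    "@odata.type"  = "#microsoft.graph.fido2AuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(
        @{ targetType = "group"; id = "all_users"; isRegistrationRequired = $false }
    )
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2" `
    -Body $body -ContentType "application/json"
```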
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md) (You're here)
active-directory Pci Requirement 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-1.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 1
+description: Learn PCI-DSS defined approach requirements for installing and maintaining network security controls
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 1
+
+**Requirement 1: Install and Maintain Network Security Controls**
+</br> **Defined approach requirements**
+
+## 1.1 Processes and mechanisms for installing and maintaining network security controls are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**1.1.1** All security policies and operational procedures that are identified in Requirement 1 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**1.1.2** Roles and responsibilities for performing activities in Requirement 1 are documented, assigned, and understood|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 1.2 Network security controls (NSCs) are configured and maintained.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**1.2.1** Configuration standards for NSC rulesets are: </br> Defined </br> Implemented </br> Maintained|Integrate access technologies such as VPN, remote desktop, and network access points with Azure AD for authentication and authorization, if the access technologies support modern authentication. Ensure NSC standards, which pertain to identity-related controls, include definition of Conditional Access policies, application assignment, access reviews, group management, credential policies, etc. [Azure AD operations reference guide](../fundamentals/active-directory-ops-guide-intro.md)|
+|**1.2.2** All changes to network connections and to configurations of NSCs are approved and managed in accordance with the change control process defined at Requirement 6.5.1|Not applicable to Azure AD.|
+|**1.2.3** An accurate network diagram(s) is maintained that shows all connections between the cardholder data environment (CDE) and other networks, including any wireless networks.|Not applicable to Azure AD.|
+|**1.2.4** An accurate data-flow diagram(s) is maintained that meets the following: </br> Shows all account data flows across systems and networks. </br> Updated as needed upon changes to the environment.|Not applicable to Azure AD.|
+|**1.2.5** All services, protocols, and ports allowed are identified, approved, and have a defined business need|Not applicable to Azure AD.|
+|**1.2.6** Security features are defined and implemented for all services, protocols, and ports in use and considered insecure, such that risk is mitigated.|Not applicable to Azure AD.|
+|**1.2.7** Configurations of NSCs are reviewed at least once every six months to confirm they're relevant and effective.|Use Azure AD access reviews to automate reviews of group memberships and applications, such as VPN appliances, that align to network security controls in your CDE (see the sketch after this table). [What are access reviews?](../governance/access-reviews-overview.md)|
+|**1.2.8** Configuration files for NSCs are: </br> Secured from unauthorized access </br> Kept consistent with active network configurations|Not applicable to Azure AD.|
+
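+Requirement 1.2.7 maps to Azure AD access reviews. The following Python sketch calls the Microsoft Graph access reviews API to create a recurring review of a group that is assumed to represent an NSC-aligned resource, for example VPN appliance administrators. The access token, group ID, review name, and exact payload fields are illustrative assumptions; adjust them for your tenant.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-AccessReview.ReadWrite.All>"}  # placeholder token
+group_id = "<nsc-aligned-group-id>"  # hypothetical group, for example VPN administrators
+
+# accessReviewScheduleDefinition: review group members every six months (PCI-DSS 1.2.7 cadence).
+definition = {
+    "displayName": "Semiannual review - CDE VPN access group",
+    "scope": {
+        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
+        "query": f"/groups/{group_id}/transitiveMembers",
+        "queryType": "MicrosoftGraph",
+    },
+    "reviewers": [{"query": f"/groups/{group_id}/owners", "queryType": "MicrosoftGraph"}],
+    "settings": {
+        "recurrence": {
+            "pattern": {"type": "absoluteMonthly", "interval": 6},
+            "range": {"type": "noEnd", "startDate": "2023-05-01"},
+        },
+        "autoApplyDecisionsEnabled": True,
+        "defaultDecision": "Deny",
+    },
+}
+
+resp = requests.post(f"{GRAPH}/identityGovernance/accessReviews/definitions",
+                     headers=headers, json=definition)
+resp.raise_for_status()
+print("Created access review definition:", resp.json()["id"])
+```
+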
+## 1.3 Network access to and from the cardholder data environment is restricted.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**1.3.1** Inbound traffic to the CDE is restricted as follows: </br> To only traffic that is necessary. </br> All other traffic is specifically denied|Use Azure AD named locations when you create Conditional Access policies, and calculate user and sign-in risk. Microsoft recommends customers populate and maintain the CDE IP addresses as network locations, then use them to define Conditional Access policy requirements (see the sketch after this table). [Using the location condition in a CA policy](../conditional-access/location-condition.md)|
+|**1.3.2** Outbound traffic from the CDE is restricted as follows: </br> To only traffic that is necessary. </br> All other traffic is specifically denied|For NSC design, include Conditional Access policies for applications to allow access to CDE IP addresses. </br> Emergency access, or remote access that establishes connectivity to the CDE, such as through virtual private network (VPN) appliances or captive portals, might need policies to prevent unintended lockout. [Using the location condition in a CA policy](../conditional-access/location-condition.md) </br> [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)|
+|**1.3.3** NSCs are installed between all wireless networks and the CDE, regardless of whether the wireless network is a CDE, such that: </br> All wireless traffic from wireless networks into the CDE is denied by default. </br> Only wireless traffic with an authorized business purpose is allowed into the CDE.|For NSC design, include Conditional Access policies for applications to allow access to CDE IP addresses. </br> Emergency access, or remote access that establishes connectivity to the CDE, such as through virtual private network (VPN) appliances or captive portals, might need policies to prevent unintended lockout. [Using the location condition in a CA policy](../conditional-access/location-condition.md) </br> [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)|
+
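+Requirements 1.3.1 through 1.3.3 rely on named locations and location-based Conditional Access policies. The following Python sketch uses the Microsoft Graph REST API to create an IP named location for the CDE and a report-only policy that blocks access to a hypothetical CDE application from outside that location. The CIDR range, application ID, and token handling are assumptions for illustration.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-Policy.ReadWrite.ConditionalAccess>"}  # placeholder token
+
+# 1. Define the CDE egress range as a trusted named location (hypothetical CIDR).
+location = {
+    "@odata.type": "#microsoft.graph.ipNamedLocation",
+    "displayName": "CDE IP ranges",
+    "isTrusted": True,
+    "ipRanges": [{"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}],
+}
+loc = requests.post(f"{GRAPH}/identity/conditionalAccess/namedLocations", headers=headers, json=location)
+loc.raise_for_status()
+cde_location_id = loc.json()["id"]
+
+# 2. Block access to the CDE application from everywhere except the CDE named location.
+policy = {
+    "displayName": "Block CDE app outside CDE locations",
+    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
+    "conditions": {
+        "users": {"includeUsers": ["All"]},
+        "applications": {"includeApplications": ["<cde-app-id>"]},  # hypothetical application ID
+        "locations": {"includeLocations": ["All"], "excludeLocations": [cde_location_id]},
+    },
+    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
+}
+requests.post(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers, json=policy).raise_for_status()
+```
+
+Review the policy in report-only mode before you enforce it, and exclude emergency access accounts as noted in the table above.
+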
+## 1.4 Network connections between trusted and untrusted networks are controlled.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**1.4.1** NSCs are implemented between trusted and untrusted networks.|Not applicable to Azure AD.|
+|**1.4.2** Inbound traffic from untrusted networks to trusted networks is restricted to: </br> Communications with system components that are authorized to provide publicly accessible services, protocols, and ports. </br> Stateful responses to communications initiated by system components in a trusted network. </br> All other traffic is denied.|Not applicable to Azure AD.|
+|**1.4.3** Anti-spoofing measures are implemented to detect and block forged source IP addresses from entering the trusted network.|Not applicable to Azure AD.|
+|**1.4.4** System components that store cardholder data are not directly accessible from untrusted networks.|In addition to controls in the networking layer, applications in the CDE using Azure AD can use Conditional Access policies. Restrict access to applications based on location. [Using the location condition in a CA policy](../conditional-access/location-condition.md)|
+|**1.4.5** The disclosure of internal IP addresses and routing information is limited to only authorized parties.|Not applicable to Azure AD.|
+
+## 1.5 Risks to the CDE from computing devices that are able to connect to both untrusted networks and the CDE are mitigated.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**1.5.1** Security controls are implemented on any computing devices, including company- and employee-owned devices, that connect to both untrusted networks (including the Internet) and the CDE as follows: </br> Specific configuration settings are defined to prevent threats being introduced into the entity's network. </br> Security controls are actively running. </br> Security controls are not alterable by users of the computing devices unless specifically documented and authorized by management on a case-by-case basis for a limited period.| Deploy Conditional Access policies that require device compliance (see the sketch after this table). [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> Integrate device compliance state with anti-malware solutions. [Enforce compliance for Microsoft Defender for Endpoint with Conditional Access in Intune](/mem/intune/protect/advanced-threat-protection) </br> [Mobile Threat Defense integration with Intune](/mem/intune/protect/mobile-threat-defense)|
+
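+As a sketch of the device-compliance control in requirement 1.5.1, the following Python example creates a report-only Conditional Access policy that requires an Intune-compliant device for a hypothetical CDE application. The application ID and token are placeholders.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-Policy.ReadWrite.ConditionalAccess>"}  # placeholder token
+
+# Require a compliant (Intune-managed) device when signing in to the CDE application.
+policy = {
+    "displayName": "Require compliant device for CDE app",
+    "state": "enabledForReportingButNotEnforced",  # validate in report-only mode first
+    "conditions": {
+        "users": {"includeUsers": ["All"]},
+        "applications": {"includeApplications": ["<cde-app-id>"]},  # hypothetical application ID
+    },
+    "grantControls": {"operator": "OR", "builtInControls": ["compliantDevice"]},
+}
+requests.post(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers, json=policy).raise_for_status()
+```
+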
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md) (You're here)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
+
active-directory Pci Requirement 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-10.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 10
+description: Learn PCI-DSS defined approach requirements about logging and monitoring all access to system components and CHD
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 10
+
+**Requirement 10: Log and Monitor All Access to System Components and Cardholder Data**
+</br>**Defined approach requirements**
+
+## 10.1 Processes and mechanisms for logging and monitoring all access to system components and cardholder data are defined and documented.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.1.1** All security policies and operational procedures that are identified in Requirement 10 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**10.1.2** Roles and responsibilities for performing activities in Requirement 10 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 10.2 Audit logs are implemented to support the detection of anomalies and suspicious activity, and the forensic analysis of events.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.2.1** Audit logs are enabled and active for all system components and cardholder data.|Archive Azure AD audit logs to obtain changes to security policies and Azure AD tenant configuration. </br> Archive Azure AD activity logs in a security information and event management (SIEM) system to learn about usage (a query sketch follows this table). [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md)|
+|**10.2.1.1** Audit logs capture all individual user access to cardholder data.|Not applicable to Azure AD.|
+|**10.2.1.2** Audit logs capture all actions taken by any individual with administrative access, including any interactive use of application or system accounts.|Not applicable to Azure AD.|
+|**10.2.1.3** Audit logs capture all access to audit logs.|In Azure AD, you can't wipe or modify logs. Privileged users can query logs from Azure AD. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md) </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.|
+|**10.2.1.4** Audit logs capture all invalid logical access attempts.|Azure AD generates activity logs when a user attempts to sign in with invalid credentials. It generates activity logs when access is denied due to Conditional Access policies. |
+|**10.2.1.5** Audit logs capture all changes to identification and authentication credentials including, but not limited to: </br> Creation of new accounts </br> Elevation of privileges </br> All changes, additions, or deletions to accounts with administrative access|Azure AD generates audit logs for the events in this requirement. |
+|**10.2.1.6** Audit logs capture the following: </br> All initialization of new audit logs, and </br> All starting, stopping, or pausing of the existing audit logs.|Not applicable to Azure AD.|
+|**10.2.1.7** Audit logs capture all creation and deletion of system-level objects.|Azure AD generates audit logs for events in this requirement.|
+|**10.2.2** Audit logs record the following details for each auditable event: </br> User identification. </br> Type of event. </br> Date and time. </br> Success and failure indication. </br> Origination of event. </br> Identity or name of affected data, system component, resource, or service (for example, name and protocol).|See [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md)|
+
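+As a sketch of how events for requirements 10.2.1 through 10.2.1.5 can be pulled programmatically for review, the following Python code reads recent directory audit events and failed sign-ins from the Microsoft Graph reporting API. The token, time window, and filter values are illustrative assumptions.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-AuditLog.Read.All>"}  # placeholder token
+
+# Directory audit events: account creation, role changes, policy changes, and so on.
+audits = requests.get(
+    f"{GRAPH}/auditLogs/directoryAudits",
+    headers=headers,
+    params={"$top": "50", "$filter": "activityDateTime ge 2023-04-17T00:00:00Z"},  # example window
+)
+audits.raise_for_status()
+for event in audits.json()["value"]:
+    print(event["activityDateTime"], event["category"], event["activityDisplayName"])
+
+# Invalid logical access attempts (10.2.1.4): sign-ins that failed with a specific error code.
+signins = requests.get(
+    f"{GRAPH}/auditLogs/signIns",
+    headers=headers,
+    params={"$top": "50", "$filter": "status/errorCode eq 50126"},  # 50126: invalid username or password
+)
+signins.raise_for_status()
+for event in signins.json()["value"]:
+    print(event["createdDateTime"], event["userPrincipalName"], event["status"]["errorCode"])
+```
+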
+## 10.3 Audit logs are protected from destruction and unauthorized modifications.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.3.1** Read access to audit logs files is limited to those with a job-related need.|Privileged users can query logs from Azure AD. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md)|
+|**10.3.2** Audit log files are protected to prevent modifications by individuals.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.|
+|**10.3.3** Audit log files, including those for external-facing technologies, are promptly backed up to a secure, central, internal log server(s) or other media that is difficult to modify.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.|
+|**10.3.4** File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data can't be changed without generating alerts.|In Azure AD, you can't wipe or modify logs. </br> When audit logs are exported to systems such as Azure Log Analytics Workspace, storage accounts, or third-party SIEM systems, monitor them for access.|
+
+## 10.4 Audit logs are reviewed to identify anomalies or suspicious activity.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.4.1** The following audit logs are reviewed at least once daily: </br> All security events. </br> Logs of all system components that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD). </br> Logs of all critical system components. </br> Logs of all servers and system components that perform security functions (for example, network security controls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), authentication servers).|Include Azure AD logs in this process.|
+|**10.4.1.1** Automated mechanisms are used to perform audit log reviews.|Include Azure AD logs in this process. Configure automated actions and alerting when Azure AD logs are integrated with Azure Monitor (see the sketch after this table). [Deploy Azure Monitor: Alerts and automated actions](/azure/azure-monitor/best-practices-alerts)|
+|**10.4.2** Logs of all other system components (those not specified in Requirement 10.4.1) are reviewed periodically.|Not applicable to Azure AD.|
+|**10.4.2.1** The frequency of periodic log reviews for all other system components (not defined in Requirement 10.4.1) is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1|Not applicable to Azure AD.|
+|**10.4.3** Exceptions and anomalies identified during the review process are addressed.|Not applicable to Azure AD.|
+
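+Requirement 10.4.1.1 calls for automated log review. Where Azure AD logs are exported to a Log Analytics workspace, a scheduled job like the following Python sketch, which uses the azure-monitor-query library, can run the daily review query; the workspace ID and the query itself are assumptions to adapt.
+
+```python
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient
+
+client = LogsQueryClient(DefaultAzureCredential())
+
+# Hypothetical daily review: failed sign-ins per user over the last day.
+query = """
+SigninLogs
+| where ResultType != "0"
+| summarize FailedSignIns = count() by UserPrincipalName
+| order by FailedSignIns desc
+"""
+
+response = client.query_workspace(
+    workspace_id="<log-analytics-workspace-id>",  # placeholder
+    query=query,
+    timespan=timedelta(days=1),
+)
+for table in response.tables:
+    for row in table.rows:
+        print(row)
+```
+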
+## 10.5 Audit log history is retained and available for analysis.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.5.1** Retain audit log history for at least 12 months, with at least the most recent three months immediately available for analysis.|Integrate with Azure Monitor and export the logs for long-term archival. [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) </br> Learn about the Azure AD log data retention policy. [Azure AD data retention](../reports-monitoring/reference-reports-data-retention.md)|
+
+## 10.6 Time-synchronization mechanisms support consistent time settings across all systems.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.6.1** System clocks and time are synchronized using time-synchronization technology.|Learn about the time synchronization mechanism in Azure services. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/)|
+|**10.6.2** Systems are configured to the correct and consistent time as follows: </br> One or more designated time servers are in use. </br> Only the designated central time server(s) receives time from external sources. </br> Time received from external sources is based on International Atomic Time or Coordinated Universal Time (UTC). </br> The designated time server(s) accept time updates only from specific industry-accepted external sources. </br> Where there's more than one designated time server, the time servers peer with one another to keep accurate time. </br> Internal systems receive time information only from designated central time server(s).|Learn about the time synchronization mechanism in Azure services. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/)|
+|**10.6.3** Time synchronization settings and data are protected as follows: </br> Access to time data is restricted to only personnel with a business need. </br> Any changes to time settings on critical systems are logged, monitored, and reviewed.|Azure AD relies on time synchronization mechanisms in Azure. </br> Azure procedures synchronize servers and network devices with NTP Stratum 1 time servers synchronized to global positioning system (GPS) satellites. Synchronization occurs every five minutes. Azure ensures service hosts sync time. [Time synchronization for financial services in Azure](https://azure.microsoft.com/blog/time-synchronization-for-financial-services-in-azure/) </br> Hybrid components in Azure AD, such as Azure AD Connect servers, interact with on-premises infrastructure. The customer owns time synchronization of on-premises servers. |
+
+## 10.7 Failures of critical security control systems are detected, reported, and responded to promptly.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**10.7.1** *Additional requirement for service providers only*: Failures of critical security control systems are detected, alerted, and addressed promptly, including but not limited to failure of the following critical security control systems: </br> Network security controls </br> IDS/IPS </br> File integrity monitoring (FIM) </br> Anti-malware solutions </br> Physical access controls </br> Logical access controls </br> Audit logging mechanism </br> Segmentation controls (if used)|Azure supports real-time event analysis in its operational environment. Internal Azure infrastructure systems generate near real-time event alerts about potential compromise.|
+|**10.7.2** Failures of critical security control systems are detected, alerted, and addressed promptly, including but not limited to failure of the following critical security control systems: </br> Network security controls </br> IDS/IPS </br> Change-detection mechanisms </br> Anti-malware solutions </br> Physical access controls </br> Logical access controls </br> Audit logging mechanisms </br> Segmentation controls (if used) </br> Audit log review mechanisms </br> Automated security testing tools (if used)|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md) |
+|**10.7.3** Failures of any critical security control systems are responded to promptly, including but not limited to: </br> Restoring security functions. </br> Identifying and documenting the duration (date and time from start to end) of the security failure. </br> Identifying and documenting the cause(s) of failure and documenting required remediation. </br> Identifying and addressing any security issues that arose during the failure. </br> Determining whether further actions are required as a result of the security failure. </br> Implementing controls to prevent the cause of failure from reoccurring. </br> Resuming monitoring of security controls.|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)|
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md) (You're here)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-11.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 11
+description: Learn PCI-DSS defined approach requirements for regular testing of security and network security
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 11
+
+**Requirement 11: Test Security of Systems and Networks Regularly**
+</br>**Defined approach requirements**
+
+## 11.1 Processes and mechanisms for regularly testing security of systems and networks are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.1.1** All security policies and operational procedures that are identified in Requirement 11 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**11.1.2** Roles and responsibilities for performing activities in Requirement 11 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 11.2 Wireless access points are identified and monitored, and unauthorized wireless access points are addressed.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.2.1** Authorized and unauthorized wireless access points are managed as follows: </br> The presence of wireless (Wi-Fi) access points is tested for. </br> All authorized and unauthorized wireless access points are detected and identified. </br> Testing, detection, and identification occurs at least once every three months. </br> If automated monitoring is used, personnel are notified via generated alerts.|If your organization integrates network access points with Azure AD for authentication, see [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)|
+|**11.2.2** An inventory of authorized wireless access points is maintained, including a documented business justification.|Not applicable to Azure AD.|
+
+## 11.3 External and internal vulnerabilities are regularly identified, prioritized, and addressed.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.3.1** Internal vulnerability scans are performed as follows: </br> At least once every three months. </br> High-risk and critical vulnerabilities (per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are resolved. </br> Rescans are performed that confirm all high-risk and critical vulnerabilities (as noted) have been resolved. </br> Scan tool is kept up to date with latest vulnerability information. </br> Scans are performed by qualified personnel and organizational independence of the tester exists.|Include servers that support Azure AD hybrid capabilities, for example, Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection (see the sketch after this table). Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)|
+|**11.3.1.1** All other applicable vulnerabilities (those not ranked as high-risk or critical per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are managed as follows: </br> Addressed based on the risk defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1. </br> Rescans are conducted as needed.|Include servers that support Azure AD hybrid capabilities, for example, Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)|
+|**11.3.1.2** Internal vulnerability scans are performed via authenticated scanning as follows: </br> Systems that are unable to accept credentials for authenticated scanning are documented. </br> Sufficient privileges are used for those systems that accept credentials for scanning. </br> If accounts used for authenticated scanning can be used for interactive login, they're managed in accordance with Requirement 8.2.2.|Include servers that support Azure AD hybrid capabilities, for example, Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)|
+|**11.3.1.3** Internal vulnerability scans are performed after any significant change as follows: </br> High-risk and critical vulnerabilities (per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are resolved. </br> Rescans are conducted as needed. </br> Scans are performed by qualified personnel and organizational independence of the tester exists (not required to be a Qualified Security Assessor (QSA) or Approved Scanning Vendor (ASV)).|Include servers that support Azure AD hybrid capabilities, for example, Azure AD Connect and Application Proxy connectors, in internal vulnerability scans. </br> Organizations using federated authentication: review and address federation system infrastructure vulnerabilities. [What is federation with Azure AD?](../hybrid/whatis-fed.md) </br> Review and mitigate risk detections reported by Azure AD Identity Protection. Integrate the signals with a SIEM solution to connect them with remediation workflows or automation. [Risk types and detection](../identity-protection/concept-identity-protection-risks.md) </br> Run the Azure AD assessment tool regularly and address findings. [AzureAD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) </br> [Security operations for infrastructure](../fundamentals/security-operations-infrastructure.md) </br> [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)|
+|**11.3.2** External vulnerability scans are performed as follows: </br> At least once every three months. </br> By a PCI SSC ASV. </br> Vulnerabilities are resolved and ASV Program Guide requirements for a passing scan are met. </br> Rescans are performed as needed to confirm that vulnerabilities are resolved per the ASV Program Guide requirements for a passing scan.|Not applicable to Azure AD.|
+|**11.3.2.1** External vulnerability scans are performed after any significant change as follows: </br> Vulnerabilities that are scored 4.0 or higher by the CVSS are resolved. </br> Rescans are conducted as needed. </br> Scans are performed by qualified personnel and organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.|
+
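+One recurring recommendation in the 11.3.1 rows is to review risk detections from Azure AD Identity Protection. The following Python sketch lists recent risk detections from Microsoft Graph so they can be fed into a vulnerability-management or remediation workflow; the token and the date filter are assumptions.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-IdentityRiskEvent.Read.All>"}  # placeholder token
+
+# List recent risk detections, for example unfamiliar sign-in properties or leaked credentials.
+resp = requests.get(
+    f"{GRAPH}/identityProtection/riskDetections",
+    headers=headers,
+    params={"$filter": "detectedDateTime ge 2023-04-11T00:00:00Z"},  # example window
+)
+resp.raise_for_status()
+for detection in resp.json()["value"]:
+    print(detection["detectedDateTime"], detection["riskEventType"],
+          detection["riskLevel"], detection["userPrincipalName"])
+```
+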
+## 11.4 External and internal penetration testing is regularly performed, and exploitable vulnerabilities and security weaknesses are corrected.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.4.1** A penetration testing methodology is defined, documented, and implemented by the entity, and includes: </br> Industry-accepted penetration testing approaches. </br> Coverage for the entire cardholder data environment (CDE) perimeter and critical systems. </br> Testing from both inside and outside the network. </br> Testing to validate any segmentation and scope-reduction controls. </br> Application-layer penetration testing to identify, at a minimum, the vulnerabilities listed in Requirement 6.2.4. </br> Network-layer penetration tests that encompass all components that support network functions and operating systems. </br> Review and consideration of threats and vulnerabilities experienced in the last 12 months. </br> Documented approach to assessing and addressing the risk posed by exploitable vulnerabilities and security weaknesses found during penetration testing. </br> Retention of penetration testing results and remediation activities results for at least 12 months.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)|
+|**11.4.2** Internal penetration testing is performed: </br> Per the entity's defined methodology. </br> At least once every 12 months. </br> After any significant infrastructure or application upgrade or change. </br> By a qualified internal resource or qualified external third-party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)|
+|**11.4.3** External penetration testing is performed: </br> Per the entity's defined methodology. </br> At least once every 12 months. </br> After any significant infrastructure or application upgrade or change. </br> By a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)|
+|**11.4.4** Exploitable vulnerabilities and security weaknesses found during penetration testing are corrected as follows: </br> In accordance with the entity's assessment of the risk posed by the security issue as defined in Requirement 6.3.1. </br> Penetration testing is repeated to verify the corrections.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)|
+|**11.4.5** If segmentation is used to isolate the CDE from other networks, penetration tests are performed on segmentation controls as follows: </br> At least once every 12 months and after any changes to segmentation controls/methods. </br> Covering all segmentation controls/methods in use. </br> According to the entity's defined penetration testing methodology. </br> Confirming that the segmentation controls/methods are operational and effective, and isolate the CDE from all out-of-scope systems. </br> Confirming effectiveness of any use of isolation to separate systems with differing security levels (see Requirement 2.2.3). </br> Performed by a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.|
+|**11.4.6** *Additional requirement for service providers only*: If segmentation is used to isolate the CDE from other networks, penetration tests are performed on segmentation controls as follows: </br> At least once every six months and after any changes to segmentation controls/methods. </br> Covering all segmentation controls/methods in use. </br> According to the entity's defined penetration testing methodology. </br> Confirming that the segmentation controls/methods are operational and effective, and isolate the CDE from all out-of-scope systems. </br> Confirming effectiveness of any use of isolation to separate systems with differing security levels (see Requirement 2.2.3). </br> Performed by a qualified internal resource or qualified external third party. </br> Organizational independence of the tester exists (not required to be a QSA or ASV).|Not applicable to Azure AD.|
+|**11.4.7** *Additional requirement for multi-tenant service providers only*: Multi-tenant service providers support their customers for external penetration testing per Requirement 11.4.3 and 11.4.4.|[Penetration Testing Rules of Engagement, Microsoft Cloud](https://www.microsoft.com/msrc/pentest-rules-of-engagement)|
+
+## 11.5 Network intrusions and unexpected file changes are detected and responded to.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.5.1** Intrusion-detection and/or intrusion-prevention techniques are used to detect and/or prevent intrusions into the network as follows: </br> All traffic is monitored at the perimeter of the CDE. </br> All traffic is monitored at critical points in the CDE. </br> Personnel are alerted to suspected compromises. </br> All intrusion-detection and prevention engines, baselines, and signatures are kept up to date.|Not applicable to Azure AD.|
+|**11.5.1.1** *Additional requirement for service providers only*: Intrusion-detection and/or intrusion-prevention techniques detect, alert on/prevent, and address covert malware communication channels.|Not applicable to Azure AD.|
+|**11.5.2** A change-detection mechanism (for example, file integrity monitoring tools) is deployed as follows: </br> To alert personnel to unauthorized modification (including changes, additions, and deletions) of critical files. </br> To perform critical file comparisons at least once weekly.|Not applicable to Azure AD.|
+
+## 11.6 Unauthorized changes on payment pages are detected and responded to.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**11.6.1** A change- and tamper-detection mechanism is deployed as follows: </br> To alert personnel to unauthorized modification (including indicators of compromise, changes, additions, and deletions) to the HTTP headers and the contents of payment pages as received by the consumer browser. </br> The mechanism is configured to evaluate the received HTTP header and payment page. </br> The mechanism functions are performed as follows: At least once every seven days </br> OR </br> Periodically at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.|
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md) (You're here)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-2.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 2
+description: Learn PCI-DSS defined approach requirements for applying secure configurations to all system components
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 2
+
+**Requirement 2: Apply Secure Configurations to All System Components**
+</br> **Defined approach requirements**
+
+## 2.1 Processes and mechanisms for applying secure configurations to all system components are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**2.1.1** All security policies and operational procedures that are identified in Requirement 2 are: </br> Documented </br> Kept up to date </br> In use</br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**2.1.2** Roles and responsibilities for performing activities in Requirement 2 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 2.2 System components are configured and managed securely.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**2.2.1** Configuration standards are developed, implemented, and maintained to: </br> Cover all system components. </br> Address all known security vulnerabilities. </br> Be consistent with industry-accepted system hardening standards or vendor hardening recommendations. </br> Be updated as new vulnerability issues are identified, as defined in Requirement 6.3.1. </br> Be applied when new systems are configured and verified as in place before or immediately after a system component is connected to a production environment.|See [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)|
+|**2.2.2** Vendor default accounts are managed as follows: </br> If the vendor default account(s) will be used, the default password is changed per Requirement 8.3.6. </br> If the vendor default account(s) will not be used, the account is removed or disabled.|Not applicable to Azure AD.|
+|**2.2.3** Primary functions requiring different security levels are managed as follows: </br> Only one primary function exists on a system component, </br> OR </br> Primary functions with differing security levels that exist on the same system component are isolated from each other,</br> OR </br> Primary functions with differing security levels on the same system component are all secured to the level required by the function with the highest security need.|Learn about determining least-privileged roles. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md)|
+|**2.2.4** Only necessary services, protocols, daemons, and functions are enabled, and all unnecessary functionality is removed or disabled.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)|
+|**2.2.5** If any insecure services, protocols, or daemons are present: </br> Business justification is documented. </br> Additional security features are documented and implemented that reduce the risk of using insecure services, protocols, or daemons.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)|
+|**2.2.6** System security parameters are configured to prevent misuse.|Review Azure AD settings and disable unused features. [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md) </br> [Azure AD security operations guide](../fundamentals/security-operations-introduction.md)|
+|**2.2.7** All nonconsole administrative access is encrypted using strong cryptography.|Azure AD interfaces, such as the management portal, Microsoft Graph, and PowerShell, are encrypted in transit using TLS. [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment?tabs=azure-monitor)|
+
+## 2.3 Wireless environments are configured and managed securely.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**2.3.1** For wireless environments connected to the CDE or transmitting account data, all wireless vendor defaults are changed at installation or are confirmed to be secure, including but not limited to: </br> Default wireless encryption keys </br> Passwords on wireless access points </br> SNMP defaults </br> Any other security-related wireless vendor defaults|If your organization integrates network access points with Azure AD for authentication, see [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md).|
+|**2.3.2** For wireless environments connected to the CDE or transmitting account data, wireless encryption keys are changed as follows: </br> Whenever personnel with knowledge of the key leave the company or the role for which the knowledge was necessary. </br> Whenever a key is suspected of or known to be compromised.|Not applicable to Azure AD.|
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md) (You're here)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-5.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 5
+description: Learn PCI-DSS defined approach requirements for protecting all systems and networks from malicious software
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 5
+
+**Requirement 5: Protect All Systems and Networks from Malicious Software**
+</br>**Defined approach requirements**
+
+## 5.1 Processes and mechanisms for protecting all systems and networks from malicious software are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**5.1.1** All security policies and operational procedures that are identified in Requirement 5 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**5.1.2** Roles and responsibilities for performing activities in Requirement 5 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 5.2 Malicious software (malware) is prevented, or detected and addressed.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**5.2.1** An anti-malware solution(s) is deployed on all system components, except for those system components identified in periodic evaluations per Requirement 5.2.3 that concludes the system components aren't at risk from malware.|Deploy Conditional Access policies that require device compliance. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> Integrate device compliance state with anti-malware solutions. [Enforce compliance for Microsoft Defender for Endpoint with Conditional Access in Intune](/mem/intune/protect/advanced-threat-protection) </br> [Mobile Threat Defense integration with Intune](/mem/intune/protect/mobile-threat-defense)|
+|**5.2.2** The deployed anti-malware solution(s): </br> Detects all known types of malware. Removes, blocks, or contains all known types of malware.|Not applicable to Azure AD.|
+|**5.2.3** Any system components that aren't at risk for malware are evaluated periodically to include the following: </br> A documented list of all system components not at risk for malware. </br> Identification and evaluation of evolving malware threats for those system components. </br> Confirmation whether such system components continue to not require anti-malware protection.|Not applicable to Azure AD.|
+|**5.2.3.1** The frequency of periodic evaluations of system components identified as not at risk for malware is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.|
+
+## 5.3 Anti-malware mechanisms and processes are active, maintained, and monitored.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**5.3.1** The anti-malware solution(s) is kept current via automatic updates.|Not applicable to Azure AD.|
+|**5.3.2** The anti-malware solution(s): </br> Performs periodic scans and active or real-time scans.</br> OR </br> Performs continuous behavioral analysis of systems or processes.|Not applicable to Azure AD.|
+|**5.3.2.1** If periodic malware scans are performed to meet Requirement 5.3.2, the frequency of scans is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.|Not applicable to Azure AD.|
+|**5.3.3** For removable electronic media, the anti-malware solution(s): </br> Performs automatic scans when the media is inserted, connected, or logically mounted, </br> OR </br> Performs continuous behavioral analysis of systems or processes when the media is inserted, connected, or logically mounted.|Not applicable to Azure AD.|
+|**5.3.4** Audit logs for the anti-malware solution(s) are enabled and retained in accordance with Requirement 10.5.1.|Not applicable to Azure AD.|
+|**5.3.5** Anti-malware mechanisms can't be disabled or altered by users, unless specifically documented, and authorized by management on a case-by-case basis for a limited time period.|Not applicable to Azure AD.|
+
+## 5.4 Anti-phishing mechanisms protect users against phishing attacks.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**5.4.1** Processes and automated mechanisms are in place to detect and protect personnel against phishing attacks.|Configure Azure AD to use phishing-resistant credentials. [Implementation considerations for phishing-resistant MFA](memo-22-09-multi-factor-authentication.md) </br> Use controls in Conditional Access to require authentication with phishing-resistant credentials (see the sketch after this table). [Conditional Access authentication strength](../authentication/concept-authentication-strengths.md) </br> Guidance herein relates to identity and access management configuration. To mitigate phishing attacks, deploy workload capabilities, such as in Microsoft 365. [Anti-phishing protection in Microsoft 365](/microsoft-365/security/office-365-security/anti-phishing-protection-about?view=o365-worldwide&preserve-view=true)|
+
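+To illustrate the phishing-resistant control in requirement 5.4.1, the following Python sketch creates a report-only Conditional Access policy that applies an authentication strength. The authentication strength ID shown is assumed to be the built-in phishing-resistant MFA strength, and the grant control may require the Microsoft Graph beta endpoint in some tenants; verify both before use.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+headers = {"Authorization": "Bearer <token-with-Policy.ReadWrite.ConditionalAccess>"}  # placeholder token
+
+# Assumed built-in "Phishing-resistant MFA" authentication strength ID; confirm in your tenant.
+PHISHING_RESISTANT_STRENGTH_ID = "00000000-0000-0000-0000-000000000004"
+
+policy = {
+    "displayName": "Require phishing-resistant MFA for CDE app",
+    "state": "enabledForReportingButNotEnforced",  # validate in report-only mode first
+    "conditions": {
+        "users": {"includeUsers": ["All"]},
+        "applications": {"includeApplications": ["<cde-app-id>"]},  # hypothetical application ID
+    },
+    "grantControls": {
+        "operator": "OR",
+        "authenticationStrength": {"id": PHISHING_RESISTANT_STRENGTH_ID},
+    },
+}
+requests.post(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers, json=policy).raise_for_status()
+```
+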
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD; therefore, there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md) (You're here)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-6.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 6
+description: Learn PCI-DSS defined approach requirements about developing and maintaining secure systems and software
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 6
+
+**Requirement 6: Develop and Maintain Secure Systems and Software**
+</br>**Defined approach requirements**
+
+## 6.1 Processes and mechanisms for developing and maintaining secure systems and software are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**6.1.1** All security policies and operational procedures that are identified in Requirement 6 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**6.1.2** Roles and responsibilities for performing activities in Requirement 6 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 6.2 Bespoke and custom software are developed securely.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**6.2.1** Bespoke and custom software are developed securely, as follows: </br> Based on industry standards and/or best practices for secure development. </br> In accordance with PCI-DSS (for example, secure authentication and logging). </br> Incorporating consideration of information security issues during each stage of the software development lifecycle.|Procure and develop applications that use modern authentication protocols, such as OAuth2 and OpenID Connect (OIDC), which integrate with Azure Active Directory (Azure AD). </br> Build software using the Microsoft identity platform. [Microsoft identity platform best practices and recommendations](../develop/identity-platform-integration-checklist.md)|
+|**6.2.2** Software development personnel working on bespoke and custom software are trained at least once every 12 months as follows: </br> On software security relevant to their job function and development languages. </br> Including secure software design and secure coding techniques. </br> Including, if security testing tools are used, how to use the tools for detecting vulnerabilities in software.|Use the following exam to provide proof of proficiency on the Microsoft identity platform: [Exam MS-600: Building Applications and Solutions with Microsoft 365 Core Services](/certifications/exams/ms-600) </br> Use the following training to prepare for the exam: [MS-600: Implement Microsoft identity](/training/paths/m365-identity-associate/)|
+|**6.2.3** Bespoke and custom software is reviewed prior to being released into production or to customers, to identify and correct potential coding vulnerabilities, as follows: </br> Code reviews ensure code is developed according to secure coding guidelines. </br> Code reviews look for both existing and emerging software vulnerabilities. </br> Appropriate corrections are implemented prior to release.|Not applicable to Azure AD.|
+|**6.2.3.1** If manual code reviews are performed for bespoke and custom software prior to release to production, code changes are: </br> Reviewed by individuals other than the originating code author, and who are knowledgeable about code-review techniques and secure coding practices. </br> Reviewed and approved by management prior to release.|Not applicable to Azure AD.|
+|**6.2.4** Software engineering techniques or other methods are defined and in use by software development personnel to prevent or mitigate common software attacks and related vulnerabilities in bespoke and custom software, including but not limited to the following: </br> Injection attacks, including SQL, LDAP, XPath, or other command, parameter, object, fault, or injection-type flaws. </br> Attacks on data and data structures, including attempts to manipulate buffers, pointers, input data, or shared data. </br> Attacks on cryptography usage, including attempts to exploit weak, insecure, or inappropriate cryptographic implementations, algorithms, cipher suites, or modes of operation. </br> Attacks on business logic, including attempts to abuse or bypass application features and functionalities through the manipulation of APIs, communication protocols and channels, client-side functionality, or other system/application functions and resources. This includes cross-site scripting (XSS) and cross-site request forgery (CSRF). </br> Attacks on access control mechanisms, including attempts to bypass or abuse identification, authentication, or authorization mechanisms, or attempts to exploit weaknesses in the implementation of such mechanisms. </br> Attacks via any "high-risk" vulnerabilities identified in the vulnerability identification process, as defined in Requirement 6.3.1.|Not applicable to Azure AD.|
+
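+For requirement 6.2.1 above, the following is a minimal sketch of registering an application that signs users in with OIDC through Azure AD. The app name, redirect URI, and `<appId>` placeholder are illustrative assumptions, not values from this guidance.
+
+```azurecli
+# Register an example application that uses the Microsoft identity platform (names are placeholders)
+az ad app create \
+    --display-name "my-cde-app" \
+    --sign-in-audience AzureADMyOrg \
+    --web-redirect-uris "https://my-cde-app.example.com/signin-oidc"
+
+# Create its service principal so the app can be assigned and consented to in the tenant
+az ad sp create --id <appId-from-previous-output>
+```
+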
+## 6.3 Security vulnerabilities are identified and addressed.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**6.3.1** Security vulnerabilities are identified and managed as follows: </br> New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national computer emergency response teams (CERTs). </br> Vulnerabilities are assigned a risk ranking based on industry best practices and consideration of potential impact. </br> Risk rankings identify, at a minimum, all vulnerabilities considered to be a high-risk or critical to the environment. </br> Vulnerabilities for bespoke and custom, and third-party software (for example operating systems and databases) are covered.|Learn about vulnerabilities. [MSRC | Security Updates, Security Update Guide](https://msrc.microsoft.com/update-guide)|
+|**6.3.2** An inventory of bespoke and custom software, and third-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management.|To build the inventory, generate reports of applications that use Azure AD for authentication. [applicationSignInDetailedSummary resource type](/graph/api/resources/applicationsignindetailedsummary?view=graph-rest-beta&viewFallbackFrom=graph-rest-1.0&preserve-view=true) </br> [Applications listed in Enterprise applications](../manage-apps/application-list.md)|
+|**6.3.3** All system components are protected from known vulnerabilities by installing applicable security patches/updates as follows: </br> Critical or high-security patches/updates (identified according to the risk ranking process at Requirement 6.3.1) are installed within one month of release. </br> All other applicable security patches/updates are installed within an appropriate time frame as determined by the entity (for example, within three months of release).|Not applicable to Azure AD.|
+
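+For requirement 6.3.2 above, one way to start an application inventory is to query Microsoft Graph with `az rest`. This sketch assumes you're signed in with Azure CLI and have permission to read applications and service principals.
+
+```azurecli
+# List application registrations (display name and appId only)
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/applications?$select=displayName,appId'
+
+# List service principals (enterprise applications) in the tenant
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/servicePrincipals?$select=displayName,appId'
+```
+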
+## 6.4 Public-facing web applications are protected against attacks.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**6.4.1** For public-facing web applications, new threats and vulnerabilities are addressed on an ongoing basis and these applications are protected against known attacks as follows: Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods as follows: </br> – At least once every 12 months and after significant changes. </br> – By an entity that specializes in application security. </br> – Including, at a minimum, all common software attacks in Requirement 6.2.4. </br> – All vulnerabilities are ranked in accordance with requirement 6.3.1. </br> – All vulnerabilities are corrected. </br> – The application is reevaluated after the corrections. </br> OR </br> Installing an automated technical solution(s) that continually detect and prevent web-based attacks as follows: </br> – Installed in front of public-facing web applications to detect and prevent web-based attacks. </br> – Actively running and up to date as applicable. </br> – Generating audit logs. </br> – Configured to either block web-based attacks or generate an alert that is immediately investigated.|Not applicable to Azure AD.|
+|**6.4.2** For public-facing web applications, an automated technical solution is deployed that continually detects and prevents web-based attacks, with at least the following: </br> Is installed in front of public-facing web applications and is configured to detect and prevent web-based attacks. </br> Actively running and up to date as applicable. </br> Generating audit logs. </br> Configured to either block web-based attacks or generate an alert that is immediately investigated.|Not applicable to Azure AD.|
+|**6.4.3** All payment page scripts that are loaded and executed in the consumer's browser are managed as follows: </br> A method is implemented to confirm that each script is authorized. </br> A method is implemented to assure the integrity of each script. </br> An inventory of all scripts is maintained with written justification as to why each is necessary.|Not applicable to Azure AD.|
+
+## 6.5 Changes to all system components are managed securely.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**6.5.1** Changes to all system components in the production environment are made according to established procedures that include: </br> Reason for, and description of, the change. </br> Documentation of security impact. </br> Documented change approval by authorized parties. </br> Testing to verify that the change doesn't adversely impact system security. </br> For bespoke and custom software changes, all updates are tested for compliance with Requirement 6.2.4 before being deployed into production. </br> Procedures to address failures and return to a secure state.|Include changes to Azure AD configuration in the change control process. |
+|**6.5.2** Upon completion of a significant change, all applicable PCI-DSS requirements are confirmed to be in place on all new or changed systems and networks, and documentation is updated as applicable.|Not applicable to Azure AD.|
+|**6.5.3** Preproduction environments are separated from production environments and the separation is enforced with access controls.|Approaches to separate preproduction and production environments, based on organizational requirements. [Resource isolation in a single tenant](../fundamentals/secure-with-azure-ad-single-tenant.md) </br> [Resource isolation with multiple tenants](../fundamentals/secure-with-azure-ad-multiple-tenants.md)|
+|**6.5.4** Roles and functions are separated between production and preproduction environments to provide accountability such that only reviewed and approved changes are deployed.|Learn about privileged roles and dedicated preproduction tenants. [Best practices for Azure AD roles](../roles/best-practices.md)|
+|**6.5.5** Live PANs aren't used in preproduction environments, except where those environments are included in the CDE and protected in accordance with all applicable PCI-DSS requirements.|Not applicable to Azure AD.|
+|**6.5.6** Test data and test accounts are removed from system components before the system goes into production.|Not applicable to Azure AD.|
+
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md) (You're here)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-7.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 7
+description: Learn PCI-DSS defined approach requirements for restricting access to system components and CHD by business need-to-know
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 7
+
+**Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know**
+</br>**Defined approach requirements**
+
+## 7.1 Processes and mechanisms for restricting access to system components and cardholder data by business need to know are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**7.1.1** All security policies and operational procedures that are identified in Requirement 7 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Integrate access to cardholder data environment (CDE) applications with Azure Active Directory (Azure AD) for authentication and authorization. </br> Document Conditional Access policies for remote access technologies. Automate with Microsoft Graph API and PowerShell. [Conditional Access: Programmatic access](../conditional-access/howto-conditional-access-apis.md) </br> Archive the Azure AD audit logs to record security policy changes and Azure AD tenant configuration. To record usage, archive Azure AD sign-in logs in a security information and event management (SIEM) system. [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md)|
+|**7.1.2** Roles and responsibilities for performing activities in Requirement 7 are documented, assigned, and understood.|Integrate access to CDE applications with Azure AD for authentication and authorization. </br> - Assign users to application roles directly or with group membership </br> - Use Microsoft Graph to list application assignments </br> - Use Azure AD audit logs to track assignment changes. </br> [List appRoleAssignments granted to a user](/graph/api/user-list-approleassignments?view=graph-rest-1.0&tabs=http&preserve-view=true) </br> [Get-MgServicePrincipalAppRoleAssignedTo](/powershell/module/microsoft.graph.applications/get-mgserviceprincipalapproleassignedto?view=graph-powershell-1.0&preserve-view=true) </br></br> **Privileged access** </br> Use Azure AD audit logs to track directory role assignments. Administrator roles relevant to this PCI requirement: </br> - Global </br> - Application </br> - Authentication </br> - Authentication Policy </br> - Hybrid Identity </br> To implement least privilege access, use Azure AD to create custom directory roles. </br> If you build portions of the CDE in Azure, document privileged role assignments such as Owner, Contributor, User Access Administrator, etc., and subscription custom roles where CDE resources are deployed. </br> Microsoft recommends you enable Just-In-Time (JIT) access to roles using Privileged Identity Management (PIM). PIM enables JIT access to Azure AD security groups for scenarios when group membership represents privileged access to CDE applications or resources. [Azure AD built-in roles](../roles/permissions-reference.md) </br> [Azure AD Identity and access management operations reference guide](../fundamentals/active-directory-ops-guide-iam.md) </br> [Create and assign a custom role in Azure Active Directory](../roles/custom-create.md) </br> [Securing privileged access for hybrid and cloud deployments in Azure AD](../roles/security-planning.md) </br> [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md) </br> [Best practices for all isolation architectures](../fundamentals/secure-with-azure-ad-best-practices.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)|
+
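+As a rough illustration of the Microsoft Graph calls referenced for requirement 7.1.2, the following sketch lists the app roles assigned to one user. The user principal name is a placeholder, and the call assumes you're signed in with sufficient Graph permissions.
+
+```azurecli
+# List app role assignments granted to a user (replace the UPN with a real user)
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/users/user@contoso.com/appRoleAssignments'
+```
+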
+## 7.2 Access to system components and data is appropriately defined and assigned.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**7.2.1** An access control model is defined and includes granting access as follows: </br> Appropriate access depending on the entity's business and access needs. </br> Access to system components and data resources that is based on users' job classification and functions. </br> The least privileges required (for example, user, administrator) to perform a job function.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. [Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)|
+|**7.2.2** Access is assigned to users, including privileged users, based on: </br> Job classification and function. </br> Least privileges necessary to perform job responsibilities.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. [Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)|
+|**7.2.3** Required privileges are approved by authorized personnel.|Entitlement management supports approval workflows to grant access to resources, and periodic access reviews. [Approve or deny access requests in entitlement management](../governance/entitlement-management-request-approve.md) </br> [Review access of an access package in entitlement management](../governance/entitlement-management-access-reviews-review-access.md) </br> PIM supports approval workflows to activate Azure AD directory roles, Azure roles, and cloud groups. [Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) </br> [Approve activation requests for group members and owners](../privileged-identity-management/groups-approval-workflow.md)|
+|**7.2.4** All user accounts and related access privileges, including third-party/vendor accounts, are reviewed as follows: </br> At least once every six months. </br> To ensure user accounts and access remain appropriate based on job function. </br> Any inappropriate access is addressed. Management acknowledges that access remains appropriate.|If you grant access to applications using direct assignment or with group membership, configure Azure AD access reviews. If you grant access to applications using entitlement management, enable access reviews at the access package level. [Create an access review of an access package in entitlement management](../governance/entitlement-management-access-reviews-create.md) </br> Use Azure AD external identities for third-party and vendor accounts. You can perform access reviews targeting external identities, for instance third-party or vendor accounts. [Manage guest access with access reviews](../governance/manage-guest-access-with-access-reviews.md)|
+|**7.2.5** All application and system accounts and related access privileges are assigned and managed as follows: </br> Based on the least privileges necessary for the operability of the system or application. </br> Access is limited to the systems, applications, or processes that specifically require their use.|Use Azure AD to assign users to roles in applications directly or through group membership. </br> Organizations with standardized taxonomy implemented as attributes can automate access grants based on user job classification and function. Use Azure AD Groups with dynamic membership, and Azure AD entitlement management access packages with dynamic assignment policies. </br> Use entitlement management to define separation of duties to delineate least privilege. </br> PIM enables JIT access to Azure AD security groups for custom scenarios where group membership represents privileged access to CDE applications or resources. [Dynamic membership rules for groups in Azure AD](../enterprise-users/groups-dynamic-membership.md) </br> [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md) </br> [Configure separation of duties for an access package in entitlement management](../governance/entitlement-management-access-package-incompatible.md) </br> [PIM for Groups](../privileged-identity-management/concept-pim-for-groups.md)|
+|**7.2.5.1** All access by application and system accounts and related access privileges are reviewed as follows: </br> Periodically (at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1). </br> The application/system access remains appropriate for the function being performed. </br> Any inappropriate access is addressed. </br> Management acknowledges that access remains appropriate.|See best practices for reviewing service account permissions. [Governing Azure AD service accounts](../fundamentals/service-accounts-governing-azure.md) </br> [Govern on-premises service accounts](../fundamentals/service-accounts-govern-on-premises.md)|
+|**7.2.6** All user access to query repositories of stored cardholder data is restricted as follows: </br> Via applications or other programmatic methods, with access and allowed actions based on user roles and least privileges. </br> Only the responsible administrator(s) can directly access or query repositories of stored card-holder data (CHD).|Modern applications enable programmatic methods that restrict access to data repositories.</br> Integrate applications with Azure AD using modern authentication protocols such as OAuth and OpenID Connect (OIDC). [OAuth 2.0 and OIDC protocols on the Microsoft identity platform](../develop/active-directory-v2-protocols.md) </br> Define application-specific roles to model privileged and nonprivileged user access. Assign users or groups to roles. [Add app roles to your application and receive them in the token](../develop/howto-add-app-roles-in-azure-ad-apps.md) </br> For APIs exposed by your application, define OAuth scopes to enable user and administrator consent. [Scopes and permissions in the Microsoft identity platform](../develop/scopes-oidc.md) </br> Model privileged and nonprivileged access to the repositories with this approach, and avoid direct repository access. If administrators and operators require access, grant it per the underlying platform. For instance, ARM IAM assignments in Azure, Access Control Lists (ACLs) in Windows, etc. </br> See architecture guidance that includes securing application platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) in Azure. [Azure Architecture Center](/azure/architecture/)|
+
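+To illustrate the dynamic-membership approach in requirements 7.2.1, 7.2.2, and 7.2.5, the following sketch creates a security group whose membership follows a job-function attribute. The group name, mail nickname, and membership rule are assumptions for the example, not prescribed values.
+
+```azurecli
+# Create a dynamic-membership security group driven by the user's department attribute
+az rest --method POST \
+    --url 'https://graph.microsoft.com/v1.0/groups' \
+    --body '{
+      "displayName": "CDE Operators (example)",
+      "mailEnabled": false,
+      "mailNickname": "cde-operators-example",
+      "securityEnabled": true,
+      "groupTypes": ["DynamicMembership"],
+      "membershipRule": "user.department -eq \"Payments\"",
+      "membershipRuleProcessingState": "On"
+    }'
+```
+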
+## 7.3 Access to system components and data is managed via an access control system(s).
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**7.3.1** An access control system(s) is in place that restricts access based on a user's need to know and covers all system components.|Integrate access to applications in the CDE with Azure AD as the access control system for authentication and authorization. Conditional Access policies, with application assignments, control access to applications. [What is Conditional Access?](../conditional-access/overview.md) </br> [Assign users and groups to an application](../manage-apps/assign-user-or-group-access-portal.md)|
+|**7.3.2** The access control system(s) is configured to enforce permissions assigned to individuals, applications, and systems based on job classification and function.|Integrate access to applications in the CDE with Azure AD as the access control system for authentication and authorization. Conditional Access policies, with application assignments, control access to applications. [What is Conditional Access?](../conditional-access/overview.md) </br> [Assign users and groups to an application](../manage-apps/assign-user-or-group-access-portal.md)|
+|**7.3.3** The access control system(s) is set to "deny all" by default.|Use Conditional Access to block access based on access request conditions such as group membership, applications, network location, credential strength, etc. [Conditional Access: Block access](../conditional-access/howto-conditional-access-policy-block-access.md) </br> Misconfigured block policy might contribute to unintentional lockouts. Design an emergency access strategy. [Manage emergency access admin accounts in Azure AD](../roles/security-emergency-access.md)|
+
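+As a quick check related to requirements 7.3.1 through 7.3.3, the following sketch lists Conditional Access policies and their state so you can confirm coverage of your default-deny posture. It assumes you're signed in with permission to read Conditional Access policies.
+
+```azurecli
+# List Conditional Access policies and whether each is enabled, disabled, or report-only
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies?$select=displayName,state'
+```
+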
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md) (You're here)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
active-directory Pci Requirement 8 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-8.md
+
+ Title: Azure Active Directory and PCI-DSS Requirement 8
+description: Learn PCI-DSS defined approach requirements to identify users and authenticate access to system components
+ Last updated : 04/18/2023
+# Azure Active Directory and PCI-DSS Requirement 8
+
+**Requirement 8: Identify Users and Authenticate Access to System Components**
+</br>**Defined approach requirements**
+
+## 8.1 Processes and mechanisms for identifying users and authenticating access to system components are defined and understood.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.1.1** All security policies and operational procedures that are identified in Requirement 8 are: </br> Documented </br> Kept up to date </br> In use </br> Known to all affected parties|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+|**8.1.2** Roles and responsibilities for performing activities in Requirement 8 are documented, assigned, and understood.|Use the guidance and links herein to produce the documentation to fulfill requirements based on your environment configuration.|
+
+## 8.2 User identification and related accounts for users and administrators are strictly managed throughout an account's lifecycle.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.2.1** All users are assigned a unique ID before access to system components or cardholder data is allowed.|For CDE applications that rely on Azure AD, the unique user ID is the user principal name (UPN) attribute. [Azure AD UserPrincipalName population](../hybrid/plan-connect-userprincipalname.md)|
+|**8.2.2** Group, shared, or generic accounts, or other shared authentication credentials are only used when necessary on an exception basis, and are managed as follows: </br> Account use is prevented unless needed for an exceptional circumstance. </br> Use is limited to the time needed for the exceptional circumstance. </br> Business justification for use is documented. </br> Use is explicitly approved by management. </br> Individual user identity is confirmed before access to an account is granted. </br> Every action taken is attributable to an individual user.|Ensure CDEs using Azure AD for application access have processes to prevent shared accounts. Create them as an exception that requires approval. </br> For CDE resources deployed in Azure, use Azure AD managed identities to represent the workload identity, instead of creating a shared service account. [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) </br> If you can't use managed identities and the resources accessed are using the OAuth protocol, use service principals to represent workload identities. Grant identities least privileged access through OAuth scopes. Administrators can restrict access and define approval workflows to create them. [What are workload identities?](../workload-identities/workload-identities-overview.md)|
+|**8.2.3** *Additional requirement for service providers only*: Service providers with remote access to customer premises use unique authentication factors for each customer premises.|Azure AD has on-premises connectors to enable hybrid capabilities. Connectors are identifiable and use uniquely generated credentials. [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md) </br> [Cloud sync deep dive](../cloud-sync/concept-how-it-works.md) </br> [Azure AD on-premises application provisioning architecture](../app-provisioning/on-premises-application-provisioning-architecture.md) </br> [Plan cloud HR application to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md) </br> [Install the Azure AD Connect Health agents](../hybrid/how-to-connect-health-agent-install.md)|
+|**8.2.4** Addition, deletion, and modification of user IDs, authentication factors, and other identifier objects are managed as follows: </br> Authorized with the appropriate approval. </br> Implemented with only the privileges specified on the documented approval.|Azure AD has automated user account provisioning from HR systems. Use this feature to create a lifecycle. [What is HR driven provisioning?](../app-provisioning/what-is-hr-driven-provisioning.md) </br> Azure AD has lifecycle workflows to enable customized logic for joiner, mover, and leaver processes. [What are Lifecycle Workflows?](../governance/what-are-lifecycle-workflows.md) </br> Azure AD has a programmatic interface to manage authentication methods with Microsoft Graph. Some authentication methods such as Windows Hello for Business and FIDO2 keys, require user intervention to register. [Get started with the Graph authentication methods API](/graph/authenticationmethods-get-started) </br> Administrators and/or automation generates the Temporary Access Pass credential using Graph API. Use this credential for passwordless onboarding. [Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)|
+|**8.2.5** Access for terminated users is immediately revoked.|To revoke access to an account, disable the on-premises account for hybrid accounts synchronized to Azure AD, disable accounts in Azure AD, and revoke tokens. [Revoke user access in Azure AD](../enterprise-users/users-revoke-access.md) </br> Use Continuous Access Evaluation (CAE) for compatible applications to have a two-way conversation with Azure AD. Apps can be notified of events, such as account termination, and can reject tokens. [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md)|
+|**8.2.6** Inactive user accounts are removed or disabled within 90 days of inactivity.|For hybrid accounts, administrators check activity in Active Directory and Azure AD every 90 days. For Azure AD, use Microsoft Graph to find the last sign-in date. [How to: Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)|
+|**8.2.7** Accounts used by third parties to access, support, or maintain system components via remote access are managed as follows: </br> Enabled only during the time period needed and disabled when not in use. </br> Use is monitored for unexpected activity.|Azure AD has external identity management capabilities. </br> Use governed guest lifecycle with entitlement management. External users are onboarded in the context of apps, resources, and access packages, which you can grant for a limited period and require periodic access reviews. Reviews can result in account removal or disablement. [Govern access for external users in entitlement management](../governance/entitlement-management-external-users.md) </br> Azure AD generates risk events at the user and session level. Learn to protect, detect, and respond to unexpected activity. [What is risk?](../identity-protection/concept-identity-protection-risks.md)|
+|**8.2.8** If a user session has been idle for more than 15 minutes, the user is required to reauthenticate to reactivate the terminal or session.|Use endpoint management policies with Intune, and Microsoft Endpoint Manager. Then, use Conditional Access to allow access from compliant devices. [Use compliance policies to set rules for devices you manage with Intune](/mem/intune/protect/device-compliance-get-started) </br> If your CDE environment relies on group policy objects (GPO), configure GPO to set an idle timeout. Configure Azure AD to allow access from hybrid Azure AD joined devices. [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)|
+
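+For requirement 8.2.6 above, the following sketch retrieves each user's last sign-in activity so you can flag accounts inactive for 90 days or more. It assumes an Azure AD Premium license and Graph permissions that include reading user and audit data.
+
+```azurecli
+# List users with their last sign-in activity (filter or export the output as needed)
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/users?$select=displayName,userPrincipalName,signInActivity'
+```
+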
+## 8.3 Strong authentication for users and administrators is established and managed.
+
+For more information about Azure AD authentication methods that meet PCI requirements, see: [Information Supplement: Multi-Factor Authentication](azure-ad-pci-dss-mfa.md).
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.3.1** All user access to system components for users and administrators is authenticated via at least one of the following authentication factors: </br> Something you know, such as a password or passphrase. </br> Something you have, such as a token device or smart card. </br> Something you are, such as a biometric element.|Use Azure AD passwordless methods to meet the PCI requirements; see [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md). </br> See holistic passwordless deployment. [Plan a passwordless authentication deployment in Azure AD](../authentication/howto-authentication-passwordless-deployment.md)|
+|**8.3.2** Strong cryptography is used to render all authentication factors unreadable during transmission and storage on all system components.|Cryptography used by Azure AD is compliant with [PCI definition of Strong Cryptography](https://www.pcisecuritystandards.org/glossary/#glossary-s). [Azure AD Data protection considerations](../fundamentals/data-protection-considerations.md)|
+|**8.3.3** User identity is verified before modifying any authentication factor.|Azure AD requires users to authenticate to update their authentication methods using self-service, such as the My Security Info portal and the self-service password reset (SSPR) portal. [Set up security info from a sign-in page](https://support.microsoft.com/en-us/topic/28180870-c256-4ebf-8bd7-5335571bf9a8) </br> [Common Conditional Access policy: Securing security info registration](../conditional-access/howto-conditional-access-policy-registration.md) </br> [Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md) </br> Administrators with privileged roles can modify authentication factors: Global Administrator, Password Administrator, User Administrator, Authentication Administrator, and Privileged Authentication Administrator. [Least privileged roles by task in Azure AD](../roles/delegate-by-task.md). Microsoft recommends you enable JIT access and governance for privileged access using [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md)|
+|**8.3.4** Invalid authentication attempts are limited by: </br> Locking out the user ID after not more than 10 attempts. </br> Setting the lockout duration to a minimum of 30 minutes or until the user's identity is confirmed.|Deploy Windows Hello for Business for Windows devices that support hardware Trusted Platform Modules (TPM) 2.0 or higher. </br> For Windows Hello for Business, lockout relates to the device. The gesture, PIN, or biometric unlocks access to the local TPM. Administrators configure the lockout behavior with GPO or Intune policies. [TPM Group Policy settings](/windows/security/information-protection/tpm/trusted-platform-module-services-group-policy-settings) </br> [Manage Windows Hello for Business on devices at the time devices enroll with Intune](/mem/intune/protect/windows-hello) </br> [TPM fundamentals](/windows/security/information-protection/tpm/tpm-fundamentals) </br> Windows Hello for Business works for on-premises authentication to Active Directory and cloud resources on Azure AD. </br> For FIDO2 security keys, brute-force protection is related to the key. The gesture, PIN, or biometric unlocks access to the local key storage. Administrators configure Azure AD to allow registration of FIDO2 security keys from manufacturers that align to PCI requirements. [Enable passwordless security key sign-in](../authentication/howto-authentication-passwordless-security-key.md) </br></br> **Microsoft Authenticator App** </br> To mitigate brute force attacks using Microsoft Authenticator app passwordless sign-in, enable number matching and more context. </br> Azure AD generates a random number in the authentication flow. The user types it in the authenticator app. The mobile app authentication prompt shows the location, the request IP address, and the request application. [How to use number matching in MFA notifications](../authentication/how-to-mfa-number-match.md) </br> [How to use additional context in Microsoft Authenticator notifications](../authentication/how-to-mfa-additional-context.md)|
+|**8.3.5** If passwords/passphrases are used as authentication factors to meet Requirement 8.3.1, they're set and reset for each user as follows: </br> Set to a unique value for first-time use and upon reset. </br> Forced to be changed immediately after the first use.|Not applicable to Azure AD.|
+|**8.3.6** If passwords/passphrases are used as authentication factors to meet Requirement 8.3.1, they meet the following minimum level of complexity: </br> A minimum length of 12 characters (or IF the system doesn't support 12 characters, a minimum length of eight characters). </br> Contain both numeric and alphabetic characters.|Not applicable to Azure AD.|
+|**8.3.7** Individuals aren't allowed to submit a new password/passphrase that is the same as any of the last four passwords/passphrases used.|Not applicable to Azure AD.|
+|**8.3.8** Authentication policies and procedures are documented and communicated to all users including: </br> Guidance on selecting strong authentication factors. </br> Guidance for how users should protect their authentication factors. </br> Instructions not to reuse previously used passwords/passphrases. </br> Instructions to change passwords/passphrases if there's any suspicion or knowledge that the password/passphrases have been compromised and how to report the incident.|Document policies and procedures, then communicate to users per this requirement. Microsoft provides customizable templates in the [Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=57600).|
+|**8.3.9** If passwords/passphrases are used as the only authentication factor for user access (that is, in any single-factor authentication implementation) then either: Passwords/passphrases are changed at least once every 90 days, </br> OR </br> The security posture of accounts is dynamically analyzed, and real-time access to resources is automatically determined accordingly.|Not applicable to Azure AD.|
+|**8.3.10** *Additional requirement for service providers only*: If passwords/passphrases are used as the only authentication factor for customer user access to cardholder data (that is, in any single-factor authentication implementation), then guidance is provided to customer users including: </br> Guidance for customers to change their user passwords/passphrases periodically. </br> Guidance as to when, and under what circumstances, passwords/passphrases are to be changed.|Not applicable to Azure AD.|
+|**8.3.10.1** Additional requirement for service providers only: If passwords/passphrases are used as the only authentication factor for customer user access (that is, in any single-factor authentication implementation) then either: </br> Passwords/passphrases are changed at least once every 90 days, </br> OR </br> The security posture of accounts is dynamically analyzed, and real-time access to resources is automatically determined accordingly.|Not applicable to Azure AD.|
+|**8.3.11** Where authentication factors such as physical or logical security tokens, smart cards, or certificates are used: </br> Factors are assigned to an individual user and not shared among multiple users. </br> Physical and/or logical controls ensure only the intended user can use that factor to gain access.|Use passwordless authentication methods such as Windows Hello for Business, FIDO2 security keys, and Microsoft Authenticator app for phone sign in. Use smart cards based on public or private keypairs associated with users to prevent reuse.|
+
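+Related to verifying and managing authentication factors in requirement 8.3, the following sketch lists the methods a user has registered. The user principal name is a placeholder, and the call assumes permission to read authentication methods.
+
+```azurecli
+# List the authentication methods registered for a user (replace the UPN with a real user)
+az rest --method GET \
+    --url 'https://graph.microsoft.com/v1.0/users/user@contoso.com/authentication/methods'
+```
+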
+## 8.4 Multi-factor authentication (MFA) is implemented to secure access into the cardholder data environment (CDE)
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.4.1** MFA is implemented for all nonconsole access into the CDE for personnel with administrative access.|Use Conditional Access to require strong authentication to access CDE resources. Define policies to target an administrative role (Global Administrator), or a security group representing administrative access to an application. </br> For administrative access, use Azure AD Privileged Identity Management (PIM) to enable just-in-time (JIT) activation of privileged roles. [What is Conditional Access?](../conditional-access/overview.md) </br> [CA templates](/azure/active-directory/conditional-access/concept-conditional-access-policy-common) </br> [Start using PIM](../privileged-identity-management/pim-getting-started.md)|
+|**8.4.2** MFA is implemented for all access into the CDE.|Block access to legacy protocols that don't support strong authentication. [Block legacy authentication with Azure AD with Conditional Access](../conditional-access/block-legacy-authentication.md)|
+|**8.4.3** MFA is implemented for all remote network access originating from outside the entity's network that could access or impact the CDE as follows: </br> All remote access by all personnel, both users and administrators, originating from outside the entity's network. </br> All remote access by third parties and vendors.|Integrate access technologies like virtual private network (VPN), remote desktop, and network access points with Azure AD for authentication and authorization. Use Conditional Access to require strong authentication to access remote access applications. [CA templates](/azure/active-directory/conditional-access/concept-conditional-access-policy-common)|
+
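+As a hedged sketch of the Conditional Access guidance for requirement 8.4.1, the following call creates a report-only policy that requires MFA for the Global Administrator role across all applications. The display name is a placeholder, the GUID is the well-known Global Administrator role template ID, and the call assumes permission to manage Conditional Access policies.
+
+```azurecli
+# Create a report-only Conditional Access policy requiring MFA for Global Administrators
+az rest --method POST \
+    --url 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies' \
+    --body '{
+      "displayName": "Require MFA for administrative access (example)",
+      "state": "enabledForReportingButNotEnforced",
+      "conditions": {
+        "users": { "includeRoles": ["62e90394-69f5-4237-9190-012177145e10"] },
+        "applications": { "includeApplications": ["All"] }
+      },
+      "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
+    }'
+```
+
+Review the policy results in report-only mode before changing its state to `enabled`.
+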
+## 8.5 Multi-factor authentication (MFA) systems are configured to prevent misuse.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.5.1** MFA systems are implemented as follows: </br> The MFA system isn't susceptible to replay attacks. </br> MFA systems can't be bypassed by any users, including administrative users unless specifically documented, and authorized by management on an exception basis, for a limited time period. </br> At least two different types of authentication factors are used. </br> Success of all authentication factors is required before access is granted.|The recommended Azure AD authentication methods use nonce or challenges. These methods resist replay attacks because Azure AD detects replayed authentication transactions. </br> Windows Hello for Business, FIDO2, and Microsoft Authenticator app for passwordless phone sign in use a nonce to identify the request and detect replay attempts. Use passwordless credentials for users in the CDE. </br> Certificate-based authentication uses challenges to detect replay attempts. </br> [NIST authenticator assurance level 2 with Azure AD](nist-authenticator-assurance-level-2.md) </br> [NIST authenticator assurance level 3 by using Azure AD](nist-authenticator-assurance-level-3.md)|
+
+## 8.6 Use of application and system accounts and associated authentication factors is strictly managed.
+
+|PCI-DSS Defined approach requirements|Azure AD guidance and recommendations|
+|-|-|
+|**8.6.1** If accounts used by systems or applications can be used for interactive login, they're managed as follows: </br> Interactive use is prevented unless needed for an exceptional circumstance. </br> Interactive use is limited to the time needed for the exceptional circumstance. </br> Business justification for interactive use is documented. </br> Interactive use is explicitly approved by management. </br> Individual user identity is confirmed before access to account is granted. </br> Every action taken is attributable to an individual user.|For CDE applications with modern authentication, and for CDE resources deployed in Azure that use modern authentication, Azure AD has two service account types for applications: Managed Identities and service principals. </br> Learn about Azure AD service account governance: planning, provisioning, lifecycle, monitoring, access reviews, etc. [Governing Azure AD service accounts](../fundamentals/service-accounts-governing-azure.md) </br> To secure Azure AD service accounts, see: [Securing managed identities in Azure AD](../fundamentals/service-accounts-managed-identities.md) </br> [Securing service principals in Azure AD](../fundamentals/service-accounts-principal.md) </br> For CDEs with resources outside Azure that require access, configure workload identity federations without managing secrets or interactive sign-in. [Workload identity federation](../develop/workload-identity-federation.md) </br> To enable approval and tracking processes to fulfill requirements, orchestrate workflows using IT Service Management (ITSM) and configuration management databases (CMDB). These tools use the Microsoft Graph API to interact with Azure AD and manage the service account. </br> For CDEs that require service accounts compatible with on-premises Active Directory, use Group Managed Service Accounts (GMSAs), standalone managed service accounts (sMSAs), computer accounts, or user accounts. [Securing on-premises service accounts](../fundamentals/service-accounts-on-premises.md)|
+|**8.6.2** Passwords/passphrases for any application and system accounts that can be used for interactive login aren't hard coded in scripts, configuration/property files, or bespoke and custom source code.|Use modern service accounts such as Azure Managed Identities and service principals that don't require passwords. </br> Azure AD Managed Identities credentials are provisioned and rotated in the cloud, which prevents using shared secrets such as passwords and passphrases. When using system-assigned managed identities, the lifecycle is tied to the underlying Azure resource lifecycle. </br> Use service principals to use certificates as credentials, which prevents use of shared secrets such as passwords and passphrases. If certificates aren't feasible, use Azure Key Vault to store service principal client secrets. [Best practices for using Azure Key Vault](/azure/key-vault/general/best-practices#using-service-principals-with-key-vault) </br> For CDEs with resources outside Azure that require access, configure workload identity federations without managing secrets or interactive sign-in. [Workload identity federation](../workload-identities/workload-identity-federation.md) </br> Deploy Conditional Access for workload identities to control authorization based on location and/or risk level. [CA for workload identities](../conditional-access/workload-identity.md) </br> In addition to the previous guidance, use code analysis tools to detect hard-coded secrets in code and configuration files. [Detect exposed secrets in code](/azure/defender-for-cloud/detect-exposed-secrets) </br> [Security rules](/dotnet/fundamentals/code-analysis/quality-rules/security-warnings)|
+|**8.6.3** Passwords/passphrases for any application and system accounts are protected against misuse as follows: </br> Passwords/passphrases are changed periodically (at the frequency defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1) and upon suspicion or confirmation of compromise. </br> Passwords/passphrases are constructed with sufficient complexity appropriate for how frequently the entity changes the passwords/passphrases.|Use modern service accounts such as Azure Managed Identities and service principals that don't require passwords. </br> For exceptions that require service principals with secrets, abstract the secret lifecycle with workflows and automations that set random passwords for service principals, rotate them regularly, and react to risk events. </br> Security operations teams can review and remediate reports generated by Azure AD such as Risky workload identities. [Securing workload identities with Identity Protection](../identity-protection/concept-workload-identity-risk.md)|
+
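+To illustrate the passwordless service account guidance in requirements 8.6.1 through 8.6.3, the following sketch creates a user-assigned managed identity instead of a password-based service account. The resource group and identity names are placeholders.
+
+```azurecli
+# Create a user-assigned managed identity; Azure manages and rotates its credentials
+az identity create \
+    --resource-group myResourceGroup \
+    --name my-cde-workload-identity
+```
+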
+## Next steps
+
+PCI-DSS requirements **3**, **4**, **9**, and **12** aren't applicable to Azure AD, therefore there are no corresponding articles. To see all requirements, go to pcisecuritystandards.org: [Official PCI Security Standards Council Site](https://docs-prv.pcisecuritystandards.org/PCI%20DSS/Standard/PCI-DSS-v4_0.pdf).
+
+To configure Azure AD to comply with PCI-DSS, see the following articles.
+
+* [Azure AD PCI-DSS guidance](azure-ad-pci-dss-guidance.md)
+* [Requirement 1: Install and Maintain Network Security Controls](pci-requirement-1.md)
+* [Requirement 2: Apply Secure Configurations to All System Components](pci-requirement-2.md)
+* [Requirement 5: Protect All Systems and Networks from Malicious Software](pci-requirement-5.md)
+* [Requirement 6: Develop and Maintain Secure Systems and Software](pci-requirement-6.md)
+* [Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know](pci-requirement-7.md)
+* [Requirement 8: Identify Users and Authenticate Access to System Components](pci-requirement-8.md) (You're here)
+* [Requirement 10: Log and Monitor All Access to System Components and Cardholder Data](pci-requirement-10.md)
+* [Requirement 11: Test Security of Systems and Networks Regularly](pci-requirement-11.md)
+* [Azure AD PCI-DSS Multi-Factor Authentication guidance](azure-ad-pci-dss-mfa.md)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically
Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
+## Web
+
+### Right-size underutilized App Service plans
+
+We've analyzed the usage patterns of your App Service plan over the past 7 days and identified low CPU usage. While certain scenarios can result in low utilization by design, you can often save money by choosing a less expensive SKU while retaining the same features.
+
+> [!NOTE]
+> - Currently, this recommendation only works for App Service plans running on Windows on a SKU that allows you to downscale to less expensive tiers without losing any features, like from P3v2 to P2v2 or from P2v2 to P1v2.
+> - CPU bursts that last only a few minutes might not be correctly detected. Please perform a careful analysis in your App Service plan metrics blade before downscaling your SKU.
+
+Learn more about [App Service plans](../app-service/overview-hosting-plans.md).
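+
+As a hedged example of acting on this recommendation, the following sketch moves a plan down one tier. The resource group, plan name, and target SKU are placeholders; confirm the metrics analysis described in the note above before downscaling.
+
+```azurecli
+# Scale an underutilized App Service plan down a tier, for example from P2v2 to P1v2
+az appservice plan update \
+    --resource-group myResourceGroup \
+    --name myAppServicePlan \
+    --sku P1V2
+```
+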
+## Azure Monitor
+
+For Azure Monitor cost optimization suggestions, please see [Optimize costs in Azure Monitor](../azure-monitor/best-practices-cost.md).
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
Previously updated : 4/12/2022 Last updated : 04/25/2023

# Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview)
-Custom certificate authorities (CAs) allow you to establish trust between your Azure Kubernetes Service (AKS) cluster and your workloads, such as private registries, proxies, and firewalls. A Kubernetes secret is used to store the certificate authority's information, then it's passed to all nodes in the cluster.
+AKS generates and uses the following certificates, Certificate Authorities (CAs), and Service Accounts (SAs):
-This feature is applied per nodepool, so new and existing node pools must be configured to enable this feature.
+* The AKS API server creates a CA called the Cluster CA.
+* The API server has a Cluster CA, which signs certificates for one-way communication from the API server to kubelets.
+* Each kubelet also creates a Certificate Signing Request (CSR), which is signed by the Cluster CA, for communication from the kubelet to the API server.
+* The API aggregator uses the Cluster CA to issue certificates for communication with other APIs. The API aggregator can also have its own CA for issuing those certificates, but it currently uses the Cluster CA.
+* Each node uses an SA token, which is signed by the Cluster CA.
+* The `kubectl` client has a certificate for communicating with the AKS cluster.
+
+You can also create custom certificate authorities, which allow you to establish trust between your Azure Kubernetes Service (AKS) clusters and workloads, such as private registries, proxies, and firewalls. A Kubernetes secret stores the certificate authority's information, and then it's passed to all nodes in the cluster. This feature is applied per node pool, so you need to enable it on new and existing node pools.
+
+This article shows you how to create custom CAs and apply them to your AKS clusters.
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free).
* [Azure CLI installed][azure-cli-install] (version 2.43.0 or greater).
* A base64 encoded certificate string or a text file with certificate.

## Limitations
-This feature isn't currently supported for Windows node pools.
+* This feature currently isn't supported for Windows node pools.
-## Install the aks-preview Azure CLI extension
+## Install the `aks-preview` Azure CLI extension
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-To install the aks-preview extension, run the following command:
+1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli
+ az extension update --name aks-preview
+ ```
-## Register the 'CustomCATrustPreview' feature flag
+## Register the `CustomCATrustPreview` feature flag
-Register the `CustomCATrustPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+1. Register the `CustomCATrustPreview` feature flag using the [`az feature register`][az-feature-register] command.
-```azurecli
-az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
-```
+ ```azurecli
+ az feature register --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+ It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
-```
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+ ```azurecli
+ az feature show --namespace "Microsoft.ContainerService" --name "CustomCATrustPreview"
+ ```
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-## Two ways for custom CA installation on AKS node pools
+ ```azurecli
+ az provider register --namespace Microsoft.ContainerService
+ ```
-Two ways of installing custom CAs on your AKS cluster are available. They're intended for different use cases, which are outlined below.
+## Custom CA installation on AKS node pools
-### Install CAs during node pool boot up
-If your environment requires your custom CAs to be added to node trust store for correct provisioning,
-text file containing up to 10 blank line separated certificates needs to be passed during
-[az aks create][az-aks-create] or [az aks update][az-aks-update] operations.
+### Install CAs on AKS node pools
-Example command:
-```azurecli
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --enable-custom-ca-trust \
- --custom-ca-trust-certificates pathToFileWithCAs
-```
+* If your environment requires your custom CAs to be added to node trust store for correct provisioning, you need to pass a text file containing up to 10 blank line separated certificates during [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] operations. Example text file:
+ ```txt
+ -----BEGIN CERTIFICATE-----
+ cert1
+ -----END CERTIFICATE-----
+
+ -----BEGIN CERTIFICATE-----
+ cert2
+ -----END CERTIFICATE-----
+ ```
-CAs will be added to node's trust store during node boot up process, allowing the node to, for example access a private registry.
+#### Install CAs during node pool creation
-#### CA rotation for availability during node pool boot up
-To update CAs passed to cluster during boot up [az aks update][az-aks-update] operation has to be used.
-```azurecli
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --custom-ca-trust-certificates pathToFileWithCAs
-```
+* Install CAs during node pool creation using the [`az aks create`][az-aks-create] command and specifying your text file for the `--custom-ca-trust-certificates` parameter.
+
+ ```azurecli
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-custom-ca-trust \
+ --custom-ca-trust-certificates pathToFileWithCAs
+ ```
-> [!NOTE]
-> Running this operation will trigger a model update, to ensure that new nodes added during for example scale up operation have the newest CAs required for correct provisioning.
-> This means that AKS will create additional nodes, drain currently existing ones, delete them and then replace them with nodes that have the new set of CAs installed.
+#### CA rotation for availability during node pool boot up
+* Update CAs passed to your cluster during boot up using the [`az aks update`][az-aks-update] command and specifying your text file for the `--custom-ca-trust-certificates` parameter.
-### Install CAs once node pool is up and running
-If your environment can be successfully provisioned without your custom CAs, you can provide the CAs using a secret deployed in the kube-system namespace.
-This approach allows for certificate rotation without the need for node recreation.
+ ```azurecli
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --custom-ca-trust-certificates pathToFileWithCAs
+ ```
-Create a [Kubernetes secret][kubernetes-secrets] YAML manifest with your base64 encoded certificate string in the `data` field. Data from this secret is used to update CAs on all nodes.
+ > [!NOTE]
+ > This operation triggers a model update, ensuring new nodes have the newest CAs required for correct provisioning. AKS creates additional nodes, drains existing ones, deletes them, and replaces them with nodes that have the new set of CAs installed.
-You must ensure that:
-* The secret is named `custom-ca-trust-secret`.
-* The secret is created in the `kube-system` namespace.
+### Install CAs after node pool creation
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: custom-ca-trust-secret
- namespace: kube-system
-type: Opaque
-data:
- ca1.crt: |
- {base64EncodedCertStringHere}
- ca2.crt: |
- {anotherBase64EncodedCertStringHere}
-```
+If your environment can be successfully provisioned without your custom CAs, you can provide the CAs by deploying a secret in the `kube-system` namespace. This approach allows for certificate rotation without the need for node recreation.
-To update or remove a CA, edit and apply the secret's YAML manifest. The cluster will poll for changes and update the nodes accordingly. This process may take a couple of minutes before changes are applied.
+* Create a [Kubernetes secret][kubernetes-secrets] YAML manifest with your base64 encoded certificate string in the `data` field.
-Sometimes containerd restart on the node might be required for the CAs to be picked up properly. If it appears like CAs aren't added correctly to your node's trust store, you can trigger such restart using the following command from node's shell:
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: custom-ca-trust-secret
+ namespace: kube-system
+ type: Opaque
+ data:
+ ca1.crt: |
+ {base64EncodedCertStringHere}
+ ca2.crt: |
+ {anotherBase64EncodedCertStringHere}
+ ```
-```systemctl restart containerd```
+ Data from this secret is used to update CAs on all nodes. Make sure the secret is named `custom-ca-trust-secret` and is created in the `kube-system` namespace. To update or remove a CA, edit and apply the secret's YAML manifest. The cluster polls for changes and updates the nodes accordingly. It may take a couple of minutes before changes are applied.
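A minimal sketch of rolling out a CA update through the secret, assuming the manifest is saved locally as `custom-ca-trust-secret.yaml` (an illustrative file name):

```bash
# Apply the edited manifest; the cluster picks up the change on its next poll.
kubectl apply -f custom-ca-trust-secret.yaml

# Optionally confirm what the secret now contains.
kubectl get secret custom-ca-trust-secret --namespace kube-system -o yaml
```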
-> [!NOTE]
-> Installing CAs using the secret in the kube-system namespace will allow for CA rotation without need for node recreation.
+ > [!NOTE]
+ >
+ > A containerd restart on the node might be required for the CAs to be picked up properly. If the CAs don't appear to be added correctly to your node's trust store, you can trigger a restart using the following command from the node's shell:
+ >
+ > ```systemctl restart containerd```
## Configure a new AKS cluster to use a custom CA
-To configure a new AKS cluster to use a custom CA, run the [az aks create][az-aks-create] command with the `--enable-custom-ca-trust` parameter.
+* Configure a new AKS cluster to use a custom CA using the [`az aks create`][az-aks-create] command with the `--enable-custom-ca-trust` parameter.
+
+ ```azurecli
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-custom-ca-trust
+ ```
-```azurecli
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --enable-custom-ca-trust
-```
+## Configure a new AKS cluster to use a custom CA with CAs installed before node boots up
-To configure a new AKS cluster to use custom CA with CAs installed before node boots up, run the [az aks create][az-aks-create] command with the `--enable-custom-ca-trust` and `--custom-ca-trust-certificates` parameters.
+* Configure a new AKS cluster to use custom CA with CAs installed before the node boots up using the [`az aks create`][az-aks-create] command with the `--enable-custom-ca-trust` and `--custom-ca-trust-certificates` parameters.
-```azurecli
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --enable-custom-ca-trust \
- --custom-ca-trust-certificates pathToFileWithCAs
-```
+ ```azurecli
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --enable-custom-ca-trust \
+ --custom-ca-trust-certificates pathToFileWithCAs
+ ```
## Configure an existing AKS cluster to have custom CAs installed before node boots up
-To configure an existing AKS cluster to have your custom CAs added to node's trust store before it boots up, run [az aks update][az-aks-update] command with the `--custom-ca-trust-certificates` parameter.
+* Configure an existing AKS cluster to have your custom CAs added to node's trust store before it boots up using the [`az aks update`][az-aks-update] command with the `--custom-ca-trust-certificates` parameter.
-```azurecli
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --custom-ca-trust-certificates pathToFileWithCAs
-```
+ ```azurecli
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --custom-ca-trust-certificates pathToFileWithCAs
+ ```
## Configure a new node pool to use a custom CA
-To configure a new node pool to use a custom CA, run the [az aks nodepool add][az-aks-nodepool-add] command with the `--enable-custom-ca-trust` parameter.
-
-```azurecli
-az aks nodepool add \
- --cluster-name myAKSCluster \
- --resource-group myResourceGroup \
- --name myNodepool \
- --enable-custom-ca-trust \
- --os-type Linux
-```
-
-If there are currently no other node pools with the feature enabled, cluster will have to reconcile its settings for
-the changes to take effect. Before that happens, daemonset and pods, which install CAs won't appear on the cluster.
-This operation will happen automatically as a part of AKS's reconcile loop.
-You can trigger reconcile operation immediately by running the [az aks update][az-aks-update] command:
+* Configure a new node pool to use a custom CA using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-custom-ca-trust` parameter.
-```azurecli
-az aks update \
- --resource-group myResourceGroup \
- --name cluster-name
-```
+ ```azurecli
+ az aks nodepool add \
+ --cluster-name myAKSCluster \
+ --resource-group myResourceGroup \
+ --name myNodepool \
+ --enable-custom-ca-trust \
+ --os-type Linux
+ ```
-Once completed, the daemonset and pods will appear in the cluster.
+ If no other node pools have the feature enabled, the cluster must reconcile its settings for the changes to take effect. This operation happens automatically as part of AKS's reconcile loop. Until then, the daemon set and pods that install the CAs don't appear on the cluster. You can trigger an immediate reconcile operation using the [`az aks update`][az-aks-update] command. The daemon set and pods appear after the update completes.
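For example, a minimal sketch of triggering the reconcile immediately, reusing the cluster and resource group names used throughout this article:

```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster
```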
## Configure an existing node pool to use a custom CA
-To configure an existing node pool to use a custom CA, run the [az aks nodepool update][az-aks-nodepool-update] command with the `--enable-custom-trust-ca` parameter.
+* Configure an existing node pool to use a custom CA using the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--enable-custom-ca-trust` parameter.
-```azurecli
-az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name myNodepool \
- --enable-custom-ca-trust
-```
+ ```azurecli
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name myNodepool \
+ --enable-custom-ca-trust
+ ```
-If there are currently no other node pools with the feature enabled, cluster will have to reconcile its settings for
-the changes to take effect. Before that happens, daemon set and pods, which install CAs won't appear on the cluster.
-This operation will happen automatically as a part of AKS's reconcile loop.
-You can trigger reconcile operation by running the following command:
-
-```azurecli
-az aks update -g myResourceGroup --name cluster-name
-```
-
-Once complete, the daemonset and pods will appear in the cluster.
+ If no other node pools have the feature enabled, the cluster must reconcile its settings for the changes to take effect. This operation happens automatically as part of AKS's reconcile loop. Until then, the daemon set and pods that install the CAs don't appear on the cluster. You can trigger an immediate reconcile operation using the [`az aks update`][az-aks-update] command. The daemon set and pods appear after the update completes.
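The same reconcile trigger applies here. A minimal sketch, reusing the article's example names:

```azurecli
az aks update --resource-group myResourceGroup --name myAKSCluster
```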
## Troubleshooting

### Feature is enabled and secret with CAs is added, but operations are failing with X.509 Certificate Signed by Unknown Authority error
+
#### Incorrectly formatted certs passed in the secret
+
AKS requires certs passed in the user-created secret to be properly formatted and base64 encoded. Make sure the CAs you passed are properly base64 encoded and that files with CAs don't have CRLF line breaks.
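For reference, one way to produce a single-line base64 value for the secret's `data` field, assuming GNU coreutils `base64` and an illustrative certificate file named `ca1.crt`:

```bash
# -w 0 disables line wrapping so the encoded value contains no CRLF breaks.
base64 -w 0 ca1.crt
```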
-Certificates passed to ```--custom-ca-trust-certificates``` option shouldn't be base64 encoded.
-#### Containerd hasn't picked up new certs
-From node's shell, run ```systemctl restart containerd```, once containerd is restarted, new certs will be properly picked up by the container runtime.
+Certificates passed to ```--custom-ca-trust-certificates``` shouldn't be base64 encoded.
+
+#### containerd hasn't picked up new certs
+
+From the node's shell, run ```systemctl restart containerd```. Once containerd restarts, the new certs are picked up by the container runtime.
## Next steps For more information on AKS security best practices, see [Best practices for cluster security and upgrades in Azure Kubernetes Service (AKS)][aks-best-practices-security-upgrades].
-<!-- LINKS EXTERNAL -->
-[kubernetes-secrets]:https://kubernetes.io/docs/concepts/configuration/secret/
- <!-- LINKS INTERNAL --> [aks-best-practices-security-upgrades]: operator-best-practices-cluster-security.md [azure-cli-install]: /cli/azure/install-azure-cli
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Title: Customize the node configuration for Azure Kubernetes Service (AKS) node
description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools. Previously updated : 12/03/2020 Last updated : 04/24/2023 # Customize node configuration for Azure Kubernetes Service (AKS) node pools
-Customizing your node configuration allows you to configure or tune your operating system (OS) settings or the kubelet parameters to match the needs of the workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility).
+Customizing your node configuration allows you to adjust operating system (OS) settings or kubelet parameters to match the needs of your workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, you can [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility).
## Create an AKS cluster with a customized node configuration [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-### Prerequisites for Windows kubelet custom configuration (Preview)
+### Prerequisites for Windows kubelet custom configuration (preview)
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+Before you begin, make sure you have an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You also need to register the feature flag using the following steps:
-First, install the aks-preview extension by running the following command:
+1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli
+ az extension update --name aks-preview
+ ```
-Then register the `WindowsCustomKubeletConfigPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example:
+3. Register the `WindowsCustomKubeletConfigPreview` feature flag using the [`az feature register`][az-feature-register] command.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
-```
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command:
+ It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
-```
+4. Verify the registration status using the [`az feature show`][az-feature-show] command.
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command:
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+ ```
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-### Create config files for kubelet configuration, OS configuration, or both
+### Create config files
-Create a `linuxkubeletconfig.json` file with the following contents (for Linux node pools):
+#### Kubelet configuration
+
+### [Linux node pools](#tab/linux-node-pools)
+
+Create a `linuxkubeletconfig.json` file with the following contents:
```json {
Create a `linuxkubeletconfig.json` file with the following contents (for Linux n
"failSwapOn": false } ```+
+### [Windows node pools](#tab/windows-node-pools)
+ > [!NOTE]
-> Windows kubelet custom configuration only supports the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`. The json file contents above should be modified to remove any unsupported parameters.
+> Windows kubelet custom configuration only supports the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`.
-Create a `windowskubeletconfig.json` file with the following contents (for Windows node pools):
+Create a `windowskubeletconfig.json` file with the following contents:
```json {
Create a `windowskubeletconfig.json` file with the following contents (for Windo
} ```
-Create a `linuxosconfig.json` file with the following contents (for Linux node pools only):
++
+#### OS configuration
+
+### [Linux node pools](#tab/linux-node-pools)
+
+Create a `linuxosconfig.json` file with the following contents:
```json {
Create a `linuxosconfig.json` file with the following contents (for Linux node p
} ```
+### [Windows node pools](#tab/windows-node-pools)
+
+Not currently supported.
+++ ### Create a new cluster using custom configuration files
-When creating a new cluster, you can use the customized configuration files created in the previous step to specify the kubelet configuration, OS configuration, or both. Since the first node pool created with az aks create is a linux node pool in all cases, you should use the `linuxkubeletconfig.json` and `linuxosconfig.json` files.
+When creating a new cluster, you can use the customized configuration files created in the previous steps to specify the kubelet configuration, OS configuration, or both.
> [!NOTE]
-> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomLinuxOsConfig isn't supported for OS type: Windows.
+> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. `CustomLinuxOsConfig` isn't supported for OS type: Windows.
+
+Create a new cluster with custom configuration files using the [`az aks create`][az-aks-create] command and specifying your configuration files. The following example command creates a new cluster with the custom `./linuxkubeletconfig.json` and `./linuxosconfig.json` files:
```azurecli az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json --linux-os-config ./linuxosconfig.json ```+ ### Add a node pool using custom configuration files
-When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. CustomKubeletConfig is supported for Linux and Windows node pools.
+When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. `CustomKubeletConfig` is supported for Linux and Windows node pools.
> [!NOTE] > When you add a Linux node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. When you add a Windows node pool to an existing cluster, you can only specify the kubelet configuration. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value.
-For Linux node pools
+### [Linux node pools](#tab/linux-node-pools)
```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json ```
-For Windows node pools (Preview)
+
+### [Windows node pools](#tab/windows-node-pools)
```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --os-type Windows --kubelet-config ./windowskubeletconfig.json ``` ++ ### Other configurations
-These settings can be used to modify other operating system settings.
+You can modify other operating system behavior with the following settings:
#### Message of the Day Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
-##### Cluster creation
- ```azurecli az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-gr
### Confirm settings have been applied
-After you have applied custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem.
+After you apply custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem.
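For example, a quick spot check from a shell on the node, using two of the settings documented in the tables that follow (a sketch; the values returned depend on your configuration):

```bash
# Read back kernel settings applied through linuxosconfig.json.
sysctl fs.file-max       # from the "File handle limits" table
sysctl vm.max_map_count  # from the "Virtual memory" table
```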
## Custom node configuration supported parameters
-## Kubelet custom configuration
+### Kubelet custom configuration
Kubelet custom configuration is supported for Linux and Windows node pools. Supported parameters differ and are documented below.
-### Linux Kubelet custom configuration
-
-The supported Kubelet parameters and accepted values for Linux node pools are listed below.
+#### Linux Kubelet custom configuration
| Parameter | Allowed values/interval | Default | Description | | | -- | - | -- | | `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |
-| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
+| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
-| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. | | `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
-| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
-| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
| `podMaxPids` | -1 to kernel PID limit | -1 (∞)| The maximum amount of process IDs that can be running in a Pod |
-### Windows Kubelet custom configuration (Preview)
-
-The supported Kubelet parameters and accepted values for Windows node pools are listed below.
+#### Windows Kubelet custom configuration (preview)
| Parameter | Allowed values/interval | Default | Description | | | -- | - | -- |
-| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
-| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
-## Linux OS custom configuration
-
-The supported OS settings and accepted values are listed below.
+## Linux custom OS configuration settings
### File handle limits
-When you're serving a lot of traffic, it's common that the traffic you're serving is coming from a large number of local files. You can tweak the below kernel settings and built-in limits to allow you to handle more, at the cost of some system memory.
+When serving a lot of traffic, the traffic commonly comes from a large number of local files. You can adjust the kernel settings and built-in limits below to handle more, at the cost of some system memory.
| Setting | Allowed values/interval | Default | Description | | - | -- | - | -- | | `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate, by increasing this value you can increase the maximum number of open files permitted. |
-| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
+| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
| `fs.aio-max-nr` | 65536 - 6553500 | 65536 | The aio-nr shows the current system-wide number of asynchronous io requests. aio-max-nr allows you to change the maximum value aio-nr can grow to. | | `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. | ### Socket and network tuning
-For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can use the subset of TCP and network options below that you can tweak per node pool.
+For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can tweak the subset of TCP and network options below per node pool.
| Setting | Allowed values/interval | Default | Description | | - | -- | - | -- |
For agent nodes, which are expected to handle very large numbers of concurrent s
| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. | | `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. | | `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
-| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
+| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. | | `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. |
-| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
+| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. | | `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
-| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
-| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
+| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
+| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
### Worker limits
-Like file descriptor limits, the number of workers or threads that a process can create are limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
+Like file descriptor limits, the number of workers or threads that a process can create is limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
| Setting | Allowed values/interval | Default | Description | | - | -- | - | -- |
-| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
+| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
### Virtual memory
The settings below can be used to tune the operation of the virtual memory (VM)
| Setting | Allowed values/interval | Default | Description | | - | -- | - | -- |
-| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
+| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
-| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
-| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. |
-| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processorΓÇÖs memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
-| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
+| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
+| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. |
+| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processorΓÇÖs memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
+| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
> [!IMPORTANT] > For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
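For example, a hedged sketch of how the kernel-style names in the tables above map to camelCase keys in a `linuxosconfig.json` file (the key names are derived from that convention and the value ranges above; treat them as illustrative):

```json
{
  "transparentHugePageEnabled": "madvise",
  "sysctls": {
    "vmMaxMapCount": 131072,
    "fsFileMax": 1000000
  }
}
```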
The settings below can be used to tune the operation of the virtual memory (VM)
- See the list of [Frequently asked questions about AKS](faq.md) to find answers to some common AKS questions. <!-- LINKS - internal -->
-[aks-faq]: faq.md
-[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
-[aks-multiple-node-pools]: use-multiple-node-pools.md
-[aks-scale-apps]: tutorial-kubernetes-scale.md
-[aks-support-policies]: support-policies.md
-[aks-upgrade]: upgrade-cluster.md
[node-access]: node-access.md
-[aks-view-master-logs]: ../azure-monitor/containers/container-insights-log-query.md#enable-resource-logs
-[autoscaler-profile-properties]: #using-the-autoscaler-profile
-[azure-cli-install]: /cli/azure/install-azure-cli
-[az-aks-show]: /cli/azure/aks#az-aks-show
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-create]: /cli/azure/aks#az-aks-create
-[az-aks-update]: /cli/azure/aks#az-aks-update
-[az-aks-scale]: /cli/azure/aks#az-aks-scale
[az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show
-[az-feature-list]: /cli/azure/feature#az-feature-list
[az-provider-register]: /cli/azure/provider#az-provider-register
-[upgrade-cluster]: upgrade-cluster.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
-[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade
--
-<!-- LINKS - external -->
-[az-aks-update-preview]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview
-[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool
-[autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node
-[autoscaler-parameters]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
-[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
There are two options to provide access to Azure Monitor for containers:
### Required FQDN / application rules
-| FQDN | Port | Use |
+| FQDN | Port               | Use |
|--|--|-| | **`<region>.dp.kubernetesconfiguration.azure.com`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service.| | **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on AKS cluster.|
+|**`arcmktplaceprod.azurecr.io`**|**`HTTPS:443`**|This address is required to pull container images for installing marketplace extensions on AKS cluster.|
+|**`*.ingestion.msftcloudes.com, *.microsoftmetrics.com`**|**`HTTPS:443`**|This address is used to send agents metrics data to Azure.|
|**`marketplaceapi.microsoft.com`**|**`HTTPS:443`**|This address is used to send custom meter-based usage to the commerce metering API.|
#### Azure US Government required FQDN / application rules
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
Title: Start and stop a node pool on Azure Kubernetes Service (AKS) description: Learn how to start or stop a node pool on Azure Kubernetes Service (AKS). Previously updated : 10/25/2021 Last updated : 04/25/2023 # Start and stop an Azure Kubernetes Service (AKS) node pool
-Your AKS workloads may not need to run continuously, for example a development cluster that has node pools running specific workloads. To optimize your costs, you can completely turn off (stop) your node pools in your AKS cluster, allowing you to save on compute costs.
+You might not need to continuously run your AKS workloads. For example, you might have a development cluster that has node pools running specific workloads. To optimize your compute costs, you can completely stop your node pools in your AKS cluster.
+
+## Features and limitations
+
+* You can't stop system pools.
+* Spot node pools are supported.
+* Stopped node pools can be upgraded.
+* The cluster and node pool must be running.
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
## Stop an AKS node pool
-> [!IMPORTANT]
-> When using node pool start/stop, the following is expected behavior:
->
-> * You can't stop system pools.
-> * Spot node pools are supported.
-> * Stopped node pools can be upgraded.
-> * The cluster and node pool must be running.
-
-Use `az aks nodepool stop` to stop a running AKS node pool. The following example stops the *testnodepool* node pool:
-
-```azurecli-interactive
-az aks nodepool stop --nodepool-name testnodepool --resource-group myResourceGroup --cluster-name myAKSCluster
-```
-
-You can verify when your node pool is stopped by using the [az aks show][az-aks-show] command and confirming the `powerState` shows as `Stopped` as on the below output:
-
-```json
-{
-[...]
- "osType": "Linux",
- "podSubnetId": null,
- "powerState": {
- "code": "Stopped"
- },
- "provisioningState": "Succeeded",
- "proximityPlacementGroupId": null,
-[...]
-}
-```
-
-> [!NOTE]
-> If the `provisioningState` shows `Stopping`, your node pool hasn't fully stopped yet.
+1. Stop a running AKS node pool using the [`az aks nodepool stop`][az-aks-nodepool-stop] command.
+
+ ```azurecli-interactive
+ az aks nodepool stop --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool
+ ```
+
+2. Verify your node pool stopped using the [`az aks nodepool show`][az-aks-nodepool-show] command.
+
+ ```azurecli-interactive
+ az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool
+ ```
+
+ The following condensed example output shows the `powerState` as `Stopped`:
+
+ ```output
+ {
+ [...]
+ "osType": "Linux",
+ "podSubnetId": null,
+ "powerState": {
+ "code": "Stopped"
+ },
+ "provisioningState": "Succeeded",
+ "proximityPlacementGroupId": null,
+ [...]
+ }
+ ```
+
+ > [!NOTE]
+ > If the `provisioningState` shows `Stopping`, your node pool is still in the process of stopping.
++ ## Start a stopped AKS node pool
-Use `az aks nodepool start` to start a stopped AKS node pool. The following example starts the stopped node pool named *testnodepool*:
+1. Restart a stopped node pool using the [`az aks nodepool start`][az-aks-nodepool-start] command.
-```azurecli-interactive
-az aks nodepool start --nodepool-name testnodepool --resource-group myResourceGroup --cluster-name myAKSCluster
-```
+ ```azurecli-interactive
+ az aks nodepool start --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool
+ ```
-You can verify your node pool has started using [az aks show][az-aks-show] and confirming the `powerState` shows `Running`. For example:
+2. Verify your node pool started using the [`az aks nodepool show`][az-aks-nodepool-show] command.
-```json
-{
-[...]
- "osType": "Linux",
- "podSubnetId": null,
- "powerState": {
- "code": "Running"
- },
- "provisioningState": "Succeeded",
- "proximityPlacementGroupId": null,
-[...]
-}
-```
+ ```azurecli-interactive
+ az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --nodepool-name testnodepool
+ ```
-> [!NOTE]
-> If the `provisioningState` shows `Starting`, your node pool hasn't fully started yet.
+ The following condensed example output shows the `powerState` as `Running`:
+
+ ```output
+ {
+ [...]
+ "osType": "Linux",
+ "podSubnetId": null,
+ "powerState": {
+ "code": "Running"
+ },
+ "provisioningState": "Succeeded",
+ "proximityPlacementGroupId": null,
+ [...]
+ }
+ ```
+
+ > [!NOTE]
+ > If the `provisioningState` shows `Starting`, your node pool is still in the process of starting.
## Next steps -- To learn how to scale `User` pools to 0, see [Scale `User` pools to 0](scale-cluster.md#scale-user-node-pools-to-0).-- To learn how to stop your cluster, see [Cluster start/stop](start-stop-cluster.md).-- To learn how to save costs using Spot instances, see [Add a spot node pool to AKS](spot-node-pool.md).-- To learn more about the AKS support policies, see [AKS support policies](support-policies.md).-
-<!-- LINKS - external -->
+* To learn how to scale `User` pools to 0, see [scale `User` pools to 0](scale-cluster.md#scale-user-node-pools-to-0).
+* To learn how to stop your cluster, see [cluster start/stop](start-stop-cluster.md).
+* To learn how to save costs using Spot instances, see [add a spot node pool to AKS](spot-node-pool.md).
+* To learn more about the AKS support policies, see [AKS support policies](support-policies.md).
<!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[kubernetes-walkthrough-powershell]: kubernetes-walkthrough-powershell.md
-[stop-azakscluster]: /powershell/module/az.aks/stop-azakscluster
-[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
-[start-azakscluster]: /powershell/module/az.aks/start-azakscluster
+[az-aks-nodepool-stop]: /cli/azure/aks/nodepool#az_aks_nodepool_stop
+[az-aks-nodepool-start]: /cli/azure/aks/nodepool#az_aks_nodepool_start
+[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Title: Use pod security policies in Azure Kubernetes Service (AKS)
-description: Learn how to control pod admissions by using PodSecurityPolicy in Azure Kubernetes Service (AKS)
+description: Learn how to control pod admissions using PodSecurityPolicy in Azure Kubernetes Service (AKS)
Previously updated : 03/25/2021 Last updated : 04/25/2023
-# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS)
+# Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview)
-> [!Important]
-> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023 and remove it in version 1.25. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
-
-After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
+> [!IMPORTANT]
+>
+> The pod security policy feature will be deprecated starting with Kubernetes version *1.21* and will be removed in version *1.25*.
+>
+> The AKS API will mark the pod security policy as `Deprecated` on 06-01-2023 and remove it in version *1.25*. We recommend you migrate to pod security admission controller before the deprecation deadline to stay within Azure support.
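Before migrating, you can take a quick inventory of what would need to move to Pod Security Admission or Azure Policy. A minimal sketch, assuming `kubectl` is configured against your cluster:

```bash
# List existing PodSecurityPolicy resources; these are what you'd need to replace
# before the feature is removed in Kubernetes 1.25.
kubectl get psp
```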
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
+* You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [install Azure CLI][install-azure-cli].
-## Install the aks-preview Azure CLI extension
+## Install the `aks-preview` Azure CLI extension
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-To install the aks-preview extension, run the following command:
+1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli
+ az extension update --name aks-preview
+ ```
-## Register the 'PodSecurityPolicyPreview' feature flag
+## Register the `PodSecurityPolicyPreview` feature flag
-Register the `PodSecurityPolicyPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+1. Register the `PodSecurityPolicyPreview` feature flag using the [`az feature register`][az-feature-register] command.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
-```
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+ It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
-```
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "PodSecurityPolicyPreview"
+ ```
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-## Overview of pod security policies
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-In a Kubernetes cluster, an admission controller is used to intercept requests to the API server when a resource is to be created. The admission controller can then *validate* the resource request against a set of rules, or *mutate* the resource to change deployment parameters.
+## Overview of pod security policies
-*PodSecurityPolicy* is an admission controller that validates a pod specification meets your defined requirements. These requirements may limit the use of privileged containers, access to certain types of storage, or the user or group the container can run as. When you try to deploy a resource where the pod specifications don't meet the requirements outlined in the pod security policy, the request is denied. This ability to control what pods can be scheduled in the AKS cluster prevents some possible security vulnerabilities or privilege escalations.
+Kubernetes clusters use admission controllers to intercept requests to the API server when a resource is going to be created. The admission controller can then *validate* the resource request against a set of rules, or *mutate* the resource to change deployment parameters.
-When you enable pod security policy in an AKS cluster, some default policies are applied. These default policies provide an out-of-the-box experience to define what pods can be scheduled. However, cluster users may run into problems deploying pods until you define your own policies. The recommended approach is to:
+`PodSecurityPolicy` is an admission controller that validates that a pod specification meets your defined requirements. These requirements may limit the use of privileged containers, access to certain types of storage, or the user or group the container can run as. When you try to deploy a resource where the pod specifications don't meet the requirements outlined in the pod security policy, the request is denied. This ability to control what pods can be scheduled in the AKS cluster prevents some possible security vulnerabilities or privilege escalations.
-* Create an AKS cluster
-* Define your own pod security policies
-* Enable the pod security policy feature
+When you enable pod security policy in an AKS cluster, some default policies are applied. These policies provide an out-of-the-box experience to define what pods can be scheduled. However, you might run into problems deploying your pods until you define your own policies. The recommended approach is to:
-To show how the default policies limit pod deployments, in this article we first enable the pod security policies feature, then create a custom policy.
+1. Create an AKS cluster.
+2. Define your own pod security policies.
+3. Enable the pod security policy feature.
### Behavior changes between pod security policy and Azure Policy
-Below is a summary of behavior changes between pod security policy and Azure Policy.
- |Scenario| Pod security policy | Azure Policy | |||| |Installation|Enable pod security policy feature |Enable Azure Policy Add-on
Below is a summary of behavior changes between pod security policy and Azure Pol
| Default policies | When pod security policy is enabled in AKS, default Privileged and Unrestricted policies are applied. | No default policies are applied by enabling the Azure Policy Add-on. You must explicitly enable policies in Azure Policy. | Who can create and assign policies | Cluster admin creates a pod security policy resource | Users must have a minimum role of 'owner' or 'Resource Policy Contributor' permissions on the AKS cluster resource group. - Through API, users can assign policies at the AKS cluster resource scope. The user should have minimum of 'owner' or 'Resource Policy Contributor' permissions on AKS cluster resource. - In the Azure portal, policies can be assigned at the Management group/subscription/resource group level. | Authorizing policies| Users and Service Accounts require explicit permissions to use pod security policies. | No additional assignment is required to authorize policies. Once policies are assigned in Azure, all cluster users can use these policies.
-| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) see the same policies. There is no special casing based on users. Policy application can be excluded at the namespace level.
-| Policy scope | Pod security policies are not namespaced | Constraint templates used by Azure Policy are not namespaced.
-| Deny/Audit/Mutation action | Pod security policies support only deny actions. Mutation can be done with default values on create requests. Validation can be done during update requests.| Azure Policy supports both audit & deny actions. Mutation is not supported yet, but planned.
-| Pod security policy compliance | There is no visibility on compliance of pods that existed before enabling pod security policy. Non-compliant pods created after enabling pod security policies are denied. | Non-compliant pods that existed before applying Azure policies would show up in policy violations. Non-compliant pods created after enabling Azure policies are denied if policies are set with a deny effect.
+| Policy applicability | The admin user bypasses the enforcement of pod security policies. | All users (admin & non-admin) see the same policies. There's no special casing based on users. Policy application can be excluded at the namespace level.
+| Policy scope | Pod security policies aren't namespaced | Constraint templates used by Azure Policy aren't namespaced.
+| Deny/Audit/Mutation action | Pod security policies support only deny actions. Mutation can be done with default values on create requests. Validation can be done during update requests.| Azure Policy supports both audit & deny actions. Mutation isn't yet supported.
+| Pod security policy compliance | There's no visibility into compliance of pods that existed before enabling pod security policy. Non-compliant pods created after enabling pod security policies are denied. | Non-compliant pods that existed before applying Azure policies would show up in policy violations. Non-compliant pods created after enabling Azure policies are denied if policies are set with a deny effect.
| How to view policies on the cluster | `kubectl get psp` | `kubectl get constrainttemplate` - All policies are returned.
-| Pod security policy standard - Privileged | A privileged pod security policy resource is created by default when enabling the feature. | Privileged mode implies no restriction, as a result it is equivalent to not having any Azure Policy assignment.
+| Pod security policy standard - Privileged | A privileged pod security policy resource is created by default when enabling the feature. | Privileged mode implies no restriction, as a result it's equivalent to not having any Azure Policy assignment.
| [Pod security policy standard - Baseline/default](https://kubernetes.io/docs/concepts/security/pod-security-standards/#baseline-default) | User installs a pod security policy baseline resource. | Azure Policy provides a [built-in baseline initiative](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2Fa8640138-9b0a-4a28-b8cb-1666c838647d) which maps to the baseline pod security policy. | [Pod security policy standard - Restricted](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted) | User installs a pod security policy restricted resource. | Azure Policy provides a [built-in restricted initiative](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicySetDefinitions%2F42b8ef37-b724-4e24-bbc8-7a7708edfe00) which maps to the restricted pod security policy. ## Enable pod security policy on an AKS cluster
-You can enable or disable pod security policy using the [az aks update][az-aks-update] command. The following example enables pod security policy on the cluster name *myAKSCluster* in the resource group named *myResourceGroup*.
- > [!NOTE]
-> For real-world use, don't enable the pod security policy until you have defined your own custom policies. In this article, you enable pod security policy as the first step to see how the default policies limit pod deployments.
+> For real-world use, don't enable the pod security policy until you define your own custom policies. In this article, we enable pod security policy as the first step to see how the default policies limit pod deployments.
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-pod-security-policy
-```
+* Enable the pod security policy using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-pod-security-policy
+ ```
## Default AKS policies When you enable pod security policy, AKS creates one default policy named *privileged*. Don't edit or remove the default policy. Instead, create your own policies that define the settings you want to control. Let's first look at what these default policies are and how they impact pod deployments.
-To view the policies available, use the [kubectl get psp][kubectl-get] command, as shown in the following example
+1. View the available policies using the [`kubectl get psp`][kubectl-get] command.
+
+ ```console
+ kubectl get psp
+ ```
+
+ Your output will look similar to the following example output:
-```console
-$ kubectl get psp
+ ```output
+ NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+ privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false * configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
+ ```
-NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
-privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false * configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
-```
+ The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by `ClusterRoles` and `ClusterRoleBindings`.
-The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by ClusterRoles and ClusterRoleBindings. Use the [kubectl get rolebindings][kubectl-get] command and search for the *default:privileged:* binding in the *kube-system* namespace:
+2. Search for the *default:privileged:* binding in the *kube-system* namespace using the [`kubectl get rolebindings`][kubectl-get] command.
-```console
-kubectl get rolebindings default:privileged -n kube-system -o yaml
-```
+ ```console
+ kubectl get rolebindings default:privileged -n kube-system -o yaml
+ ```
-As shown in the following condensed output, the *psp:privileged* ClusterRole is assigned to any *system:authenticated* users. This ability provides a basic level of privilege without your own policies being defined.
+ The following condensed example output shows the *psp:privileged* `ClusterRole` is assigned to any *system:authenticated* users. This ability provides a basic level of privilege without your own policies being defined.
-```
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
- [...]
- name: default:privileged
- [...]
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: psp:privileged
-subjects:
-- apiGroup: rbac.authorization.k8s.io
- kind: Group
- name: system:masters
-```
+ ```output
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: RoleBinding
+ metadata:
+ [...]
+ name: default:privileged
+ [...]
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:privileged
+ subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:masters
+ ```
-It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies. In the next few sections, let's schedule some pods to see these default policies in action.
+It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies. In the next few sections, we schedule some pods to see the default policies in action.
## Create a test user in an AKS cluster
-By default, when you use the [az aks get-credentials][az-aks-get-credentials] command, the *admin* credentials for the AKS cluster are added to your `kubectl` config. The admin user bypasses the enforcement of pod security policies. If you use Azure Active Directory integration for your AKS clusters, you could sign in with the credentials of a non-admin user to see the enforcement of policies in action. In this article, let's create a test user account in the AKS cluster that you can use.
+When you use the [`az aks get-credentials`][az-aks-get-credentials] command, the *admin* credentials for the AKS cluster are added to your `kubectl` config by default. The admin user bypasses the enforcement of pod security policies. If you use Azure Active Directory integration for your AKS clusters, you can sign in with the credentials of a non-admin user to see the enforcement of policies in action.
+
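If you haven't already merged credentials for the cluster into your `kubectl` config, a minimal sketch of that step (reusing the placeholder resource group and cluster names from the earlier examples) looks like this:

```azurecli-interactive
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```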
+1. Create a sample namespace named *psp-aks* for test resources using the [`kubectl create namespace`][kubectl-create] command.
+
+ ```console
+ kubectl create namespace psp-aks
+ ```
-Create a sample namespace named *psp-aks* for test resources using the [kubectl create namespace][kubectl-create] command. Then, create a service account named *nonadmin-user* using the [kubectl create serviceaccount][kubectl-create] command:
+2. Create a service account named *nonadmin-user* using the [`kubectl create serviceaccount`][kubectl-create] command.
-```console
-kubectl create namespace psp-aks
-kubectl create serviceaccount --namespace psp-aks nonadmin-user
-```
+ ```console
+ kubectl create serviceaccount --namespace psp-aks nonadmin-user
+ ```
-Next, create a RoleBinding for the *nonadmin-user* to perform basic actions in the namespace using the [kubectl create rolebinding][kubectl-create] command:
+3. Create a RoleBinding for the *nonadmin-user* to perform basic actions in the namespace using the [`kubectl create rolebinding`][kubectl-create] command.
-```console
-kubectl create rolebinding \
- --namespace psp-aks \
- psp-aks-editor \
- --clusterrole=edit \
- --serviceaccount=psp-aks:nonadmin-user
-```
+ ```console
+ kubectl create rolebinding \
+ --namespace psp-aks \
+ psp-aks-editor \
+ --clusterrole=edit \
+ --serviceaccount=psp-aks:nonadmin-user
+ ```
### Create alias commands for admin and non-admin user
-To highlight the difference between the regular admin user when using `kubectl` and the non-admin user created in the previous steps, create two command-line aliases:
+When using `kubectl`, you can highlight the differences between the regular admin user and the non-admin user by creating two command-line aliases:
-* The **kubectl-admin** alias is for the regular admin user, and is scoped to the *psp-aks* namespace.
-* The **kubectl-nonadminuser** alias is for the *nonadmin-user* created in the previous step, and is scoped to the *psp-aks* namespace.
+1. The **kubectl-admin** alias for the regular admin user, which is scoped to the *psp-aks* namespace.
+2. The **kubectl-nonadminuser** alias for the *nonadmin-user* created in the previous step, which is scoped to the *psp-aks* namespace.
-Create these two aliases as shown in the following commands:
+* Create the two aliases using the following commands.
-```console
-alias kubectl-admin='kubectl --namespace psp-aks'
-alias kubectl-nonadminuser='kubectl --as=system:serviceaccount:psp-aks:nonadmin-user --namespace psp-aks'
-```
+ ```console
+ alias kubectl-admin='kubectl --namespace psp-aks'
+ alias kubectl-nonadminuser='kubectl --as=system:serviceaccount:psp-aks:nonadmin-user --namespace psp-aks'
+ ```
## Test the creation of a privileged pod
-Let's first test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. In the previous section that showed the default AKS pod security policies, the *privilege* policy should deny this request.
+Let's test what happens when you schedule a pod with the security context of `privileged: true`. This security context escalates the pod's privileges. The default *privileged* AKS pod security policy should deny this request.
-Create a file named `nginx-privileged.yaml` and paste the following YAML manifest:
+1. Create a file named `nginx-privileged.yaml` and paste in the contents of the following YAML manifest.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-privileged
-spec:
- containers:
- - name: nginx-privileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
- securityContext:
- privileged: true
-```
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-privileged
+ spec:
+ containers:
+ - name: nginx-privileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ securityContext:
+ privileged: true
+ ```
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-```console
-kubectl-nonadminuser apply -f nginx-privileged.yaml
-```
+ ```console
+ kubectl-nonadminuser apply -f nginx-privileged.yaml
+ ```
-The pod fails to be scheduled, as shown in the following example output:
+ The following example output shows the pod failed to be scheduled:
-```console
-$ kubectl-nonadminuser apply -f nginx-privileged.yaml
+ ```output
+ Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
+ ```
-Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
-```
-
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
## Test creation of an unprivileged pod
-In the previous example, the pod specification requested privileged escalation. This request is denied by the default *privilege* pod security policy, so the pod fails to be scheduled. Let's try now running that same NGINX pod without the privilege escalation request.
-
-Create a file named `nginx-unprivileged.yaml` and paste the following YAML manifest:
+In the previous example, the pod specification requested privilege escalation. This request is denied by the default *privileged* pod security policy, so the pod fails to be scheduled. Let's try running the same NGINX pod without the privilege escalation request.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-unprivileged
-spec:
- containers:
- - name: nginx-unprivileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
-```
+1. Create a file named `nginx-unprivileged.yaml` and paste in the contents of the following YAML manifest.
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-unprivileged
+ spec:
+ containers:
+ - name: nginx-unprivileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ ```
-```console
-kubectl-nonadminuser apply -f nginx-unprivileged.yaml
-```
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-The pod fails to be scheduled, as shown in the following example output:
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ ```
-```console
-$ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ The following example output shows the pod failed to be scheduled:
-Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
-```
+ ```output
+ Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
+ ```
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
## Test creation of a pod with a specific user context
-In the previous example, the container image automatically tried to use root to bind NGINX to port 80. This request was denied by the default *privilege* pod security policy, so the pod fails to start. Let's try now running that same NGINX pod with a specific user context, such as `runAsUser: 2000`.
-
-Create a file named `nginx-unprivileged-nonroot.yaml` and paste the following YAML manifest:
+In the previous example, the container image automatically tried to use root to bind NGINX to port 80. This request was denied by the default *privileged* pod security policy, so the pod failed to start. Let's try running the same NGINX pod with a specific user context, such as `runAsUser: 2000`.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx-unprivileged-nonroot
-spec:
- containers:
- - name: nginx-unprivileged
- image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
- securityContext:
- runAsUser: 2000
-```
+1. Create a file named `nginx-unprivileged-nonroot.yaml` and paste in the following YAML manifest.
-Create the pod using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx-unprivileged-nonroot
+ spec:
+ containers:
+ - name: nginx-unprivileged
+ image: mcr.microsoft.com/oss/nginx/nginx:1.14.2-alpine
+ securityContext:
+ runAsUser: 2000
+ ```
-```console
-kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
-```
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
-The pod fails to be scheduled, as shown in the following example output:
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
+ ```
-```console
-$ kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
+ The following example output shows the pod failed to be scheduled:
-Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
-```
+ ```output
+ Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
+ ```
-The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
+ Since the pod doesn't reach the scheduling stage, there are no resources to delete before you move on.
## Create a custom pod security policy Now that you've seen the behavior of the default pod security policies, let's provide a way for the *nonadmin-user* to successfully schedule pods.
-Let's create a policy to reject pods that request privileged access. Other options, such as *runAsUser* or allowed *volumes*, aren't explicitly restricted. This type of policy denies a request for privileged access, but otherwise lets the cluster run the requested pods.
-
-Create a file named `psp-deny-privileged.yaml` and paste the following YAML manifest:
-
-```yaml
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
- name: psp-deny-privileged
-spec:
- privileged: false
- seLinux:
- rule: RunAsAny
- supplementalGroups:
- rule: RunAsAny
- runAsUser:
- rule: RunAsAny
- fsGroup:
- rule: RunAsAny
- volumes:
- - '*'
-```
-
-Create the policy using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f psp-deny-privileged.yaml
-```
-
-To view the policies available, use the [kubectl get psp][kubectl-get] command, as shown in the following example. Compare the *psp-deny-privileged* policy with the default *privilege* policy that was enforced in the previous examples to create a pod. Only the use of *PRIV* escalation is denied by your policy. There are no restrictions on the user or group for the *psp-deny-privileged* policy.
-
-```console
-$ kubectl get psp
-
-NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
-privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false *
-psp-deny-privileged false RunAsAny RunAsAny RunAsAny RunAsAny false *
-```
+We'll create a policy to reject pods that request privileged access. Other options, such as *runAsUser* or allowed *volumes*, aren't explicitly restricted. This type of policy denies a request for privileged access, but allows the cluster to run the requested pods.
+
+1. Create a file named `psp-deny-privileged.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: psp-deny-privileged
+ spec:
+ privileged: false
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ volumes:
+ - '*'
+ ```
+
+2. Create the policy using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged.yaml
+ ```
+
+3. View the available policies using the [`kubectl get psp`][kubectl-get] command.
+
+ ```console
+ kubectl get psp
+ ```
+
+   In the following example output, compare the *psp-deny-privileged* policy with the default *privileged* policy that was enforced in the previous examples to create a pod. Only the use of *PRIV* escalation is denied by your policy. There are no restrictions on the user or group for the *psp-deny-privileged* policy.
+
+ ```output
+ NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
+ privileged true * RunAsAny RunAsAny RunAsAny RunAsAny false *
+ psp-deny-privileged false RunAsAny RunAsAny RunAsAny RunAsAny false *
+ ```
## Allow user account to use the custom pod security policy
-In the previous step, you created a pod security policy to reject pods that request privileged access. To allow the policy to be used, you create a *Role* or a *ClusterRole*. Then, you associate one of these roles using a *RoleBinding* or *ClusterRoleBinding*.
-
-For this example, create a ClusterRole that allows you to *use* the *psp-deny-privileged* policy created in the previous step. Create a file named `psp-deny-privileged-clusterrole.yaml` and paste the following YAML manifest:
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: psp-deny-privileged-clusterrole
-rules:
-- apiGroups:
- - extensions
- resources:
- - podsecuritypolicies
- resourceNames:
- - psp-deny-privileged
- verbs:
- - use
-```
-
-Create the ClusterRole using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f psp-deny-privileged-clusterrole.yaml
-```
-
-Now create a ClusterRoleBinding to use the ClusterRole created in the previous step. Create a file named `psp-deny-privileged-clusterrolebinding.yaml` and paste the following YAML manifest:
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
- name: psp-deny-privileged-clusterrolebinding
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: psp-deny-privileged-clusterrole
-subjects:
-- apiGroup: rbac.authorization.k8s.io
- kind: Group
- name: system:serviceaccounts
-```
-
-Create a ClusterRoleBinding using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f psp-deny-privileged-clusterrolebinding.yaml
-```
+In the previous step, you created a pod security policy to reject pods that request privileged access. To allow the policy to be used, you create a *Role* or a *ClusterRole*. Then, you associate one of these roles using a *RoleBinding* or *ClusterRoleBinding*. For this example, we'll create a ClusterRole that allows you to *use* the *psp-deny-privileged* policy created in the previous step.
+
+1. Create a file named `psp-deny-privileged-clusterrole.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ kind: ClusterRole
+ apiVersion: rbac.authorization.k8s.io/v1
+ metadata:
+ name: psp-deny-privileged-clusterrole
+ rules:
+ - apiGroups:
+ - extensions
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - psp-deny-privileged
+ verbs:
+ - use
+ ```
+
+2. Create the ClusterRole using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged-clusterrole.yaml
+ ```
+
+3. Create a file named `psp-deny-privileged-clusterrolebinding.yaml` and paste in the following YAML manifest.
+
+ ```yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: psp-deny-privileged-clusterrolebinding
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp-deny-privileged-clusterrole
+ subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:serviceaccounts
+ ```
+
+4. Create the ClusterRoleBinding using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl apply -f psp-deny-privileged-clusterrolebinding.yaml
+ ```
> [!NOTE]
-> In the first step of this article, the pod security policy feature was enabled on the AKS cluster. The recommended practice was to only enable the pod security policy feature after you've defined your own policies. This is the stage where you would enable the pod security policy feature. One or more custom policies have been defined, and user accounts have been associated with those policies. Now you can safely enable the pod security policy feature and minimize problems caused by the default policies.
+> In the first step of this article, the pod security policy feature was enabled on the AKS cluster. The recommended practice was to only enable the pod security policy feature after you've defined your own policies. This is the stage where you would enable the pod security policy feature. One or more custom policies have been defined, and user accounts have been associated with those policies. You can now safely enable the pod security policy feature and minimize problems caused by the default policies.
## Test the creation of an unprivileged pod again
-With your custom pod security policy applied and a binding for the user account to use the policy, let's try to create an unprivileged pod again. Use the same `nginx-privileged.yaml` manifest to create the pod using the [kubectl apply][kubectl-apply] command:
+With your custom pod security policy applied and a binding for the user account to use the policy, let's try to create an unprivileged pod again.
-```console
-kubectl-nonadminuser apply -f nginx-unprivileged.yaml
-```
+This example shows how you can create custom pod security policies to define access to the AKS cluster for different users or groups. The default AKS policies provide tight controls on what pods can run, so create your own custom policies to correctly define the restrictions you need.
-The pod is successfully scheduled. When you check the status of the pod using the [kubectl get pods][kubectl-get] command, the pod is *Running*:
+1. Use the `nginx-unprivileged.yaml` manifest to create the pod using the [`kubectl apply`][kubectl-apply] command.
-```
-$ kubectl-nonadminuser get pods
+ ```console
+ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
+ ```
-NAME READY STATUS RESTARTS AGE
-nginx-unprivileged 1/1 Running 0 7m14s
-```
+2. Check the status of the pod using the [`kubectl get pods`][kubectl-get] command.
-This example shows how you can create custom pod security policies to define access to the AKS cluster for different users or groups. The default AKS policies provide tight controls on what pods can run, so create your own custom policies to then correctly define the restrictions you need.
+   ```console
+ kubectl-nonadminuser get pods
+ ```
-Delete the NGINX unprivileged pod using the [kubectl delete][kubectl-delete] command and specify the name of your YAML manifest:
+ The following example output shows the pod was successfully scheduled and is *Running*:
-```console
-kubectl-nonadminuser delete -f nginx-unprivileged.yaml
-```
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ nginx-unprivileged 1/1 Running 0 7m14s
+ ```
+
+3. Delete the NGINX unprivileged pod using the [`kubectl delete`][kubectl-delete] command and specify the name of your YAML manifest.
+
+ ```console
+ kubectl-nonadminuser delete -f nginx-unprivileged.yaml
+ ```
## Clean up resources
-To disable pod security policy, use the [az aks update][az-aks-update] command again. The following example disables pod security policy on the cluster name *myAKSCluster* in the resource group named *myResourceGroup*:
+1. Disable pod security policy using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-pod-security-policy
+ ```
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --disable-pod-security-policy
-```
+2. Delete the ClusterRole using the [`kubectl delete`][kubectl-delete] command.
-Next, delete the ClusterRole and ClusterRoleBinding:
+ ```console
+ kubectl delete -f psp-deny-privileged-clusterrole.yaml
+ ```
-```console
-kubectl delete -f psp-deny-privileged-clusterrolebinding.yaml
-kubectl delete -f psp-deny-privileged-clusterrole.yaml
-```
+3. Delete the ClusterRoleBinding using the [`kubectl delete`][kubectl-delete] command.
-Delete the security policy using [kubectl delete][kubectl-delete] command and specify the name of your YAML manifest:
+ ```console
+ kubectl delete -f psp-deny-privileged-clusterrolebinding.yaml
+ ```
-```console
-kubectl delete -f psp-deny-privileged.yaml
-```
+4. Delete the security policy using the [`kubectl delete`][kubectl-delete] command and specify the name of your YAML manifest.
-Finally, delete the *psp-aks* namespace:
+ ```console
+ kubectl delete -f psp-deny-privileged.yaml
+ ```
-```console
-kubectl delete namespace psp-aks
-```
+5. Delete the *psp-aks* namespace using the [`kubectl delete`][kubectl-delete] command.
+
+ ```console
+ kubectl delete namespace psp-aks
+ ```
## Next steps
-This article showed you how to create a pod security policy to prevent the use of privileged access. There are lots of features that a policy can enforce, such as type of volume or the RunAs user. For more information on the available options, see the [Kubernetes pod security policy reference docs][kubernetes-policy-reference].
+This article showed you how to create a pod security policy to prevent the use of privileged access. Policies can enforce many settings, such as the type of volume or the RunAs user. For more information on the available options, see the [Kubernetes pod security policy reference docs][kubernetes-policy-reference].
For more information about limiting pod network traffic, see [Secure traffic between pods using network policies in AKS][network-policies].
For more information about limiting pod network traffic, see [Secure traffic bet
[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
[kubernetes-policy-reference]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-reference <!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
For more information about limiting pod network traffic, see [Secure traffic bet
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-update]: /cli/azure/aks#az_aks_update [az-extension-add]: /cli/azure/extension#az_extension_add
-[aks-support-policies]: support-policies.md
-[aks-faq]: faq.md
-[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
-[policy-samples]: ./policy-reference.md#microsoftcontainerservice
-[azure-policy-add-on]: ../governance/policy/concepts/policy-for-kubernetes.md
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md
You can set a revision as current using the Azure portal. If you use PowerShell,
## Revision descriptions
-When you create a revision, you can set a description for your own tracking purposes. Descriptions aren't played to your API users.
+When you create a revision, you can set a description for your own tracking purposes. Descriptions aren't displayed to your API users.
When you set a revision as current, you can also optionally specify a public change log note. The change log is included in the developer portal for your API users to view. You can modify your change log note using the `Update-AzApiManagementApiRelease` PowerShell cmdlet.
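As a rough PowerShell sketch of updating a change log note (the resource names, API ID, and release ID below are placeholders, and the parameter names assume the usual Az.ApiManagement conventions rather than being copied from this article):

```powershell
# Placeholder resource names; replace with your own service, API, and release identifiers.
$context = New-AzApiManagementContext -ResourceGroupName "myResourceGroup" -ServiceName "myApiManagement"
Update-AzApiManagementApiRelease -Context $context -ApiId "echo-api" -ReleaseId "release-2" -Note "Updated change log note for this release."
```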
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Follow these guidelines when using API Management to reach a backend API that im
* **Avoid other policies that buffer responses** - Certain policies such as [`validate-content`](validate-content-policy.md) can also buffer response content and shouldn't be used with APIs that implement SSE.
+* **Avoid logging request/response body for Azure Monitor and Application Insights** - You can configure API request logging for Azure Monitor or Application Insights using diagnostic settings. The diagnostic settings allow you to log the request/response body at various stages of the request execution. For APIs that implement SSE, logging the body can cause unexpected buffering, which can lead to problems. Diagnostic settings for Azure Monitor and Application Insights configured at the global/All APIs scope apply to all APIs in the service. You can override the settings for individual APIs as needed. For APIs that implement SSE, ensure you've disabled request/response body logging for Azure Monitor and Application Insights.
+ * **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md). * **Test API under load** - Follow general practices to test your API under load to detect performance or configuration issues before going into production.
Follow these guidelines when using API Management to reach a backend API that im
## Next steps * Learn more about [configuring policies](./api-management-howto-policies.md) in API Management.
-* Learn about API Management [capacity](api-management-capacity.md).
+* Learn about API Management [capacity](api-management-capacity.md).
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Fo
The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP. > [!NOTE]
-> In addition to following the recommendations in this article, you can enable Defender for APIs (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md)
+> In addition to following the recommendations in this article, you can enable [Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction) (preview), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md)
## Broken object level authorization
Learn more about:
* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline) * [Security controls by Azure policy](security-controls-policy.md) * [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
-* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
+* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
The `set-body` policy can be configured to use the [Liquid](https://shopify.gith
> [!IMPORTANT] > In order to correctly bind to an XML body using the Liquid template, use a `set-header` policy to set Content-Type to either application/xml, text/xml (or any type ending with +xml); for a JSON body, it must be application/json, text/json (or any type ending with +json).
+> [!IMPORTANT]
+> Liquid templates use the request/response body in the current execution pipeline as their input. For this reason, Liquid templates don't work when used inside a `return-response` policy. A `return-response` policy cancels the current execution pipeline and removes the request/response body. As a result, any Liquid template used inside the `return-response` policy receives an empty string as its input and won't produce the expected output.
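For example, a minimal sketch of a Liquid transform placed in the `outbound` section (assuming a backend JSON response that contains `firstName` and `lastName` properties) is shown below; moving the same `set-body` inside a `return-response` policy would give the template an empty input instead.

```xml
<outbound>
    <base />
    <!-- Set Content-Type so the Liquid template binds to the JSON body, per the note above. -->
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body template="liquid">
    {
        "fullName": "{{body.firstName}} {{body.lastName}}"
    }
    </set-body>
</outbound>
```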
+ ### Supported Liquid filters The following Liquid filters are supported in the `set-body` policy. For filter examples, see the [Liquid documentation](https://shopify.github.io/liquid/).
The following example uses the `AsFormUrlEncodedContent()` expression to access
* [API Management transformation policies](api-management-transformation-policies.md)
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
This example applies the following validation and authorization rules to a Graph
## Related policies
-* [API Management policies for GraphQL APIs](graphql-policies.md)
+* [Validation policies](api-management-policies.md#validation-policies)
application-gateway Configuration Request Routing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-request-routing-rules.md
Previously updated : 09/09/2020 Last updated : 04/25/2023
When you create a rule, you choose between [*basic* and *path-based*](./applicat
For the v1 and v2 SKU, pattern matching of incoming requests is processed in the order that the paths are listed in the URL path map of the path-based rule. If a request matches the pattern in two or more paths in the path map, the path that's listed first is matched, and the request is forwarded to the back end that's associated with that path. For example, if a path map lists `/images/*` before `/images/archive/*`, a request for `/images/archive/photo.jpg` matches `/images/*` and is forwarded to that path's backend.
+If you have multiple listeners, it's even more important that rules are processed in the correct order so that client traffic is received by the correct listener. For more information about rules evaluation order, see [Request Routing rules evaluation order](multiple-site-overview.md#request-routing-rules-evaluation-order).
+ ## Associated listener Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the backend pool to route the request to.
application-gateway Disabled Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md
description: The article explains the details of a disabled listener and ways to
Previously updated : 02/22/2022 Last updated : 04/25/2023
The SSL/TLS certificates for Azure Application Gateway's listeners can be referenced from a customer's Key Vault resource. Your application gateway must always have access to the linked key vault resource and its certificate object to ensure smooth operation of the TLS termination feature and the overall health of the gateway resource.
-It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. The action is triggered only for configuration errors. Transient connectivity problems do not have any impact on the listeners.
+It's important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate its certificate object, it automatically puts that listener in a disabled state. **The action is triggered only for configuration errors**. Customer misconfigurations, such as deleting or disabling a certificate, or blocking the application gateway's access through the key vault's firewall or permissions, cause the key vault-based HTTPS listener to be disabled. Transient connectivity problems don't have any impact on the listeners.
-A disabled listener doesnΓÇÖt affect the traffic for other operational listeners on your Application Gateway. For example, the HTTP listeners or HTTPS listeners for which PFX certificate file is directly uploaded on Application Gateway resource will never go in a disabled state.
+A disabled listener doesn't affect the traffic for other operational listeners on your Application Gateway. For example, the HTTP listeners or HTTPS listeners for which the PFX certificate file is directly uploaded on the Application Gateway resource are never disabled.
[![An illustration showing affected listeners.](../application-gateway/media/disabled-listeners/affected-listener.png)](../application-gateway/media/disabled-listeners/affected-listener.png#lightbox)
Understanding the behavior of the Application Gateway's periodic check and its
[ ![Screenshot of how the client error will look.](../application-gateway/media/disabled-listeners/client-error.png) ](../application-gateway/media/disabled-listeners/client-error.png#lightbox)
-2. You can verify if the error is a result of a disabled listener on your gateway by checking your [Application GatewayΓÇÖs Resource Health page](../application-gateway/resource-health-overview.md). You will see an event as shown below.
+2. You can verify if the client error results from a disabled listener on your gateway by checking your [Application Gateway's Resource Health page](../application-gateway/resource-health-overview.md), as shown in the screenshot.
![A screenshot of user-driven resource health.](../application-gateway/media/disabled-listeners/resource-health-event.png)
You can narrow down to the exact cause and find steps to resolve the problem by
1. Sign-in to your Azure portal 1. Select Advisor 1. Select Operational Excellence category from the left menu.
-1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selcted from the drop-down options above.
+1. Find the recommendation titled **Resolve Azure Key Vault issue for your Application Gateway** (shown only if your gateway is experiencing this issue). Ensure the correct subscription is selected.
1. Select it to view the error details and the associated key vault resource along with the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue. > [!NOTE]
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
For cases when mutual authentication is configured, several scenarios can lead t
- OCSP Client Revocation check is enabled, but OCSP responder isn't provided in the certificate. For more information about troubleshooting mutual authentication, see [Error code troubleshooting](mutual-authentication-troubleshooting.md#solution-2).
+#### 401 – Unauthorized
-#### 403 – Forbidden
+An HTTP 401 unauthorized response can be returned when the backend pool is configured with [NTLM](/windows/win32/secauthn/microsoft-ntlm?redirectedfrom=MSDN) authentication.
+There are several ways to resolve this:
+- Allow anonymous access on the backend pool.
+- Configure the probe to send the request to another "fake" site that doesn't require NTLM. This approach isn't recommended, because it doesn't tell you whether the actual site behind the application gateway is active.
+- Configure the application gateway to allow 401 responses as valid for the probes (see the sketch after this list): [Probe matching conditions](/azure/application-gateway/application-gateway-probe-overview).
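As a rough Azure CLI sketch of that last option (the resource group, gateway, and host names are placeholders, and you should confirm the parameters against your CLI version), a custom probe that also treats 401 as a healthy response might look like:

```azurecli-interactive
az network application-gateway probe create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name ntlm-backend-probe \
    --protocol Https \
    --host backend.contoso.com \
    --path / \
    --match-status-codes 200-401
```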
+#### 403 – Forbidden
HTTP 403 Forbidden is presented when customers are using WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client is presented with a 403 Forbidden response.
Azure application Gateway V2 SKU sent HTTP 504 errors if the backend response ti
## Next steps If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).++++
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
Title: Hosting multiple sites on Azure Application Gateway
-description: This article provides an overview of the Azure Application Gateway multi-site support. Examples are provided of rule priority and the order of evaluation for rules applied to incoming requests. Conditions and limitations for using wildcard rules are described.
+description: This article provides an overview of the Azure Application Gateway multi-site support. Examples are provided of rule priority and the order of evaluation for rules applied to incoming requests. Application Gateway rule priority evaluation order is described in detail. Conditions and limitations for using wildcard rules are provided.
Previously updated : 04/07/2023 Last updated : 04/25/2023
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
For the TLS connection to work, you need to ensure that the TLS/SSL certificate
- That the current date and time is within the "Valid from" and "Valid to" date range on the certificate. - That the certificate's "Common Name" (CN) matches the host header in the request. For example, if the client is making a request to `https://www.contoso.com/`, then the CN must be `www.contoso.com`.
+If you have errors with the backend certificate common name (CN), see [Backend certificate invalid common name (CN)](application-gateway-backend-health-troubleshooting.md#backend-certificate-invalid-common-name-cn).
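As a quick, generic way to confirm which common name and validity dates the backend presents (an OpenSSL sketch, not an Application Gateway command; replace the host name with your backend's), you can inspect the certificate directly:

```console
openssl s_client -connect www.contoso.com:443 -servername www.contoso.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```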
+ ### Certificates supported for TLS termination Application gateway supports the following types of certificates:
applied-ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/changelog-release-history.md
+
+ Title: Form Recognizer changelog and release history
+
+description: A version-based description of Form Recognizer feature and capability releases, changes, enhancements, and updates.
+++++ Last updated : 04/24/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD051 -->
+
+# Changelog and release history
+
+This reference article provides a version-based description of Form Recognizer feature and capability releases, changes, updates, and enhancements.
+
+#### Form Recognizer SDK April 2023 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.1.0-beta.1 (2023-04-13)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#410-beta1-2023-04-13)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **Version 4.1.0-beta.1 (2023-04-12)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#410-beta1-2023-04-12)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples#readme)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 4.1.0-beta.1 (2023-04-11)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#410-beta1-2023-04-11)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/a162daee4be05eadff0be1caa7fb2071960bbf44/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta)
+
+### [**Python**](#tab/python)
+
+* **Version 3.3.0b1 (2023-04-13)**
+* **Targets 2023-02-28-preview by default**
+* **No breaking changes**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#330b1-2023-04-13)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.3.0b1/sdk/formrecognizer/azure-ai-formrecognizer/samples)
+++
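For example, to try the April 2023 preview from Python, you can pin the beta version listed above when installing the package (a minimal sketch):

```console
pip install azure-ai-formrecognizer==3.3.0b1
```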
+#### Form Recognizer SDK September 2022 GA release
+
+This release includes the following updates:
+
+> [!IMPORTANT]
+> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier.
+
+### [**C#**](#tab/csharp)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
+
+### [**Java**](#tab/java)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
+
+### [**JavaScript**](#tab/javascript)
+
+* **Version 4.0.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
+
+[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
+
+### [**Python**](#tab/python)
+
+> [!NOTE]
+> Python 3.7 or later is required to use this package.
+
+* **Version 3.2.0 GA (2022-09-08)**
+* **Supports REST API v3.0 and v2.0 clients**
+
+[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
+
+[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
+
+[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
+++
+#### Form Recognizer SDK beta August 2022 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.5 (2022-08-09)**
+**Supports REST API 2022-06-30-preview clients**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
+
+[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.6 (2022-08-10)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
+
+ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.6 (2022-08-09)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
+
+ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
+
+### [**Python**](#tab/python)
+
+> [!IMPORTANT]
+> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
+
+**Version 3.2.0b6 (2022-08-09)**
+**Supports REST API 2022-06-30-preview and earlier clients**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+++
+### Form Recognizer SDK beta June 2022 preview release
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.4 (2022-06-08)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
+
+[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.5 (2022-06-07)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.4 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
+
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [**Python**](#tab/python)
+
+**Version 3.2.0b5 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
++
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
See how data, including customer information, vendor details, and line items, is
| &bullet; French (fr) | France (fr) |
| &bullet; Italian (it) | Italy (it)|
| &bullet; Portuguese (pt) | Portugal (pt), Brazil (br)|
-| &bullet; Dutch (de) | Netherlands (de)|
+| &bullet; Dutch (nl) | Netherlands (nl)|
## Field extraction
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Title: Form Recognizer SDKs
+ Title: Form Recognizer SDKs
-description: The Form Recognizer software development kits (SDKs) expose Form Recognizer models, features and capabilities, using C#, Java, JavaScript, or Python programming language.
+description: Form Recognizer software development kits (SDKs) expose Form Recognizer models, features, and capabilities using the C#, Java, JavaScript, or Python programming languages.
Previously updated : 01/06/2023 Last updated : 04/25/2023 recommendations: false
recommendations: false
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# Form Recognizer SDKs
+# Form Recognizer SDK (GA)
[!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)]

> [!IMPORTANT]
-> The **2023-02-28-preview** version is currently only available through the [**Form Recognizer 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
+> For more information on the latest public preview version (**2023-02-28-preview**), *see* [Form Recognizer SDK (preview)](sdk-preview.md).
Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
Azure Cognitive Services Form Recognizer is a cloud service that uses machine le
Form Recognizer SDK supports the following languages and platforms:
-| Language → SDK version | Package| Azure Form Recognizer SDK |Supported API version| Platform support |
-|:-:|:-|:-| :-|--|
-|[.NET/C# → 4.0.0 (latest GA release)](/dotnet/api/overview/azure/form-recognizer?view=azure-dotnet&preserve-view=true) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.0.0 (latest GA release)](/java/api/overview/azure/form-recognizer?view=azure-java-stable&preserve-view=true) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 4.0.0 (latest GA release)](/javascript/api/overview/azure/form-recognizer?view=azure-node-latest&preserve-view=true)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html) | [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.2.0 (latest GA release)](/python/api/overview/azure/form-recognizer?view=azure-python&preserve-view=true) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.0.6 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.0.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.2.0 (latest GA release)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [**v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
## Supported Clients

| Language| SDK version | API version | Supported clients|
| : | :--|:- | :--|
-|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>4.0.0 (latest GA release)</li></ul>| <ul><li> v3.0 (default)</li></ul>| <ul><li> **DocumentAnalysisClient**</li><li>**DocumentModelAdministrationClient**</li></ul> |
-|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>4.0.0 (latest GA release)</li></ul>| <ul><li> v2.1</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
-|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>3.1.x</li></ul> | <ul><li> v2.1 (default)</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
-|<ul><li> C# /.NET </li><li>Java</li><li>JavaScript</li></ul>| <ul><li>3.0.x</li></ul>| <ul><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
-|<ul><li> Python</li></ul>| <ul><li>3.2.0 (latest GA release)</li></ul> | <ul><li> v3.0 (default)</li></ul> | <ul><li> **DocumentAnalysisClient**</li><li>**DocumentModelAdministrationClient**</li></ul>|
-|<ul><li> Python</li></ul>| <ul><li>3.2.0 (latest GA release)</li></ul> | <ul><li> v2.1</li><li>v2.0</li></ul> | <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
-|<ul><li> Python </li></ul>| <ul><li>3.1.x</li></ul> | <ul><li> v2.1 (default)</li><li>v2.0</li></ul> |<ul><li>**FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
-|<ul><li> Python</li></ul>| <ul><li>3.0.0</li></ul> | <ul><li>v2.0</li></ul>| <ul><li> **FormRecognizerClient**</li><li>**FormTrainingClient**</li></ul> |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (latest GA release)| v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**<br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python**| 3.2.x (latest GA release) | v3.0 / 2022-08-31 (default)| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python** | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
## Use Form Recognizer SDK in your applications
The Form Recognizer SDK enables the use and management of the Form Recognizer se
### [C#/.NET](#tab/csharp)

```dotnetcli
-dotnet add package Azure.AI.FormRecognizer --version 4.0.0-beta.5
+dotnet add package Azure.AI.FormRecognizer --version 4.0.0
```

```powershell
-Install-Package Azure.AI.FormRecognizer -Version 4.0.0-beta.5
+Install-Package Azure.AI.FormRecognizer -Version 4.0.0
```

### [Java](#tab/java)

```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-ai-formrecognizer</artifactId>
- <version>4.0.0-beta.5</version>
- </dependency>
+<dependency>
+<groupId>com.azure</groupId>
+<artifactId>azure-ai-formrecognizer</artifactId>
+<version>4.0.6</version>
+</dependency>
```

```kotlin
-implementation("com.azure:azure-ai-formrecognizer:4.0.0-beta.5")
+implementation("com.azure:azure-ai-formrecognizer:4.0.6")
```

### [JavaScript](#tab/javascript)

```javascript
-npm i @azure/ai-form-recognizer@4.0.0-beta.6
+npm i @azure/ai-form-recognizer
```

### [Python](#tab/python)

```python
-pip install azure-ai-formrecognizer==3.2.0b6
+pip install azure-ai-formrecognizer
```
For more information, *see* [Authenticate the client](https://github.com/Azure/a
### 4. Build your application
-You'll create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
-
-## Changelog and release history
-
-#### Form Recognizer SDK September 2022 GA release
-
-This release includes the following updates:
-
-> [!IMPORTANT]
-> The `DocumentAnalysisClient` and `DocumentModelAdministrationClient` now target API version v3.0 GA, released 2022-08-31. These clients are no longer supported by API versions 2020-06-30-preview or earlier.
-
-### [**C#**](#tab/csharp)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/MigrationGuide.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
-
-### [**Java**](#tab/java)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-jav)
-
-### [**JavaScript**](#tab/javascript)
-
-* **Version 4.0.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0/sdk/formrecognizer/ai-form-recognizer/README.md)
-
-[**Samples**](https://github.com/witemple-msft/azure-sdk-for-js/blob/7e3196f7e529212a6bc329f5f06b0831bf4cc174/sdk/formrecognizer/ai-form-recognizer/samples/v4/javascript/README.md)
-
-### [Python](#tab/python)
-
-> [!NOTE]
-> Python 3.7 or later is required to use this package.
-
-* **Version 3.2.0 GA (2022-09-08)**
-* **Supports REST API v3.0 and v2.0 clients**
-
-[**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
-[**Migration guide**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/MIGRATION_GUIDE.md)
-
-[**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/README.md)
-
-[**Samples**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
---
-#### Form Recognizer SDK beta August 2022 preview release
-
-This release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.5 (2022-08-09)**
-**Supports REST API 2022-06-30-preview clients**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
-
-[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.6 (2022-08-10)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
-
- [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
-
- [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
-
-### [Python](#tab/python)
-
-> [!IMPORTANT]
-> Python 3.6 is no longer supported in this release. Use Python 3.7 or later.
-
-**Version 3.2.0b6 (2022-08-09)**
-**Supports REST API 2022-06-30-preview and earlier clients**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
-
- [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
---
-### Form Recognizer SDK beta June 2022 preview release
-
-This release includes the following updates:
-
-### [**C#**](#tab/csharp)
-
-**Version 4.0.0-beta.4 (2022-06-08)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
-
-[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
-
-[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
-
-### [**Java**](#tab/java)
-
-**Version 4.0.0-beta.5 (2022-06-07)**
-
-[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-
- [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
-
- [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
-
-### [**JavaScript**](#tab/javascript)
-
-**Version 4.0.0-beta.4 (2022-06-07)**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
-
- [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
-
- [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
-
-### [Python](#tab/python)
-
-**Version 3.2.0b5 (2022-06-07**
-
- [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
-
- [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
-
- [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
--
+Create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
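+For example, here's a minimal .NET sketch, assuming the GA `Azure.AI.FormRecognizer` 4.0.0 package, the prebuilt invoice model, and placeholder endpoint, key, and document URL values:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+// Placeholder values: replace with your Form Recognizer endpoint, key, and document URL.
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+Uri invoiceUri = new Uri("<your-document-url>");
+
+var client = new DocumentAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+// Analyze the document with the prebuilt invoice model and wait for the result.
+AnalyzeDocumentOperation operation =
+    await client.AnalyzeDocumentFromUriAsync(WaitUntil.Completed, "prebuilt-invoice", invoiceUri);
+AnalyzeResult result = operation.Value;
+
+// Print each extracted field with its confidence score.
+foreach (AnalyzedDocument document in result.Documents)
+{
+    foreach (KeyValuePair<string, DocumentField> field in document.Fields)
+    {
+        Console.WriteLine($"{field.Key}: '{field.Value.Content}' (confidence {field.Value.Confidence})");
+    }
+}
+```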
## Help options
The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf
## Next steps >[!div class="nextstepaction"]
-> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
+> [**Explore Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
> [!div class="nextstepaction"]
-> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+> [**Try a Form Recognizer quickstart**](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true)
applied-ai-services Sdk Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-preview.md
+
+ Title: Form Recognizer SDKs (preview)
+
+description: The preview Form Recognizer software development kits (SDKs) expose Form Recognizer models, features, and capabilities that are in active development, using the C#, Java, JavaScript, or Python programming languages.
+++++ Last updated : 04/25/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Form Recognizer SDK (public preview)
+
+**This article applies to:** ![Form Recognizer checkmark](media/yes-icon.png) **Form Recognizer version 2023-02-28-preview**.
+
+> [!IMPORTANT]
+>
+> * Form Recognizer public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+> * The public preview version of Form Recognizer client libraries default to service version [**Form Recognizer 2023-02-28-preview REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument).
+
+Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Form Recognizer SDK supports the following languages and platforms:
+
+| Language → Azure Form Recognizer SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0-beta.1/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0-beta.1)|[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0-beta.1/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0-beta.1) |[**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.1.0-beta.1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.1.0-beta.1/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.1.0-beta.1)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.3.0b1 (preview)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0b1/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0b1/)| [**2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)</br> [**2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br> [**v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[**v2.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+| Language| SDK version | API version (default) | Supported clients|
+| : | :--|:- | :--|
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.1.0-beta.1 (preview)| 2023-02-28-preview|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 4.0.0 (GA)| v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C#**</br> **Java**</br> **JavaScript**</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python**| 3.3.0bx (preview) | 2023-02-28-preview | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.2.x (GA) | v3.0 / 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python**| 3.1.x | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python** | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+
+## Use Form Recognizer SDK in your applications
+
+The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Form Recognizer SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.1.0-beta.1
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.1.0-beta.1
+```
+
+### [Java](#tab/java)
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-formrecognizer</artifactId>
+ <version>4.1.0-beta.1</version>
+ </dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.1.0-beta.1")
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+npm i @azure/ai-form-recognizer@4.1.0-beta.1
+```
+
+### [Python](#tab/python)
+
+```python
+pip install azure-ai-formrecognizer==3.3.0b1
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+Here's where to find your Form Recognizer API key in the Azure portal:
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+  const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+}
+
+main();
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
+++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](../../cognitive-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+Authorization is easiest using the `DefaultAzureCredential`. It provides a default token credential, based upon the running environment, capable of handling most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+ ```javascript
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+ ```python
+ pip install azure-identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](../../cognitive-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+++
+### 4. Build your application
+
+Create a client object to interact with the Form Recognizer SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) in a language of your choice.
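+For example, here's a minimal .NET sketch, assuming the preview `Azure.AI.FormRecognizer` 4.1.0-beta.1 package, the prebuilt layout model, and placeholder endpoint, key, and file path values:
+
+```csharp
+using System;
+using System.IO;
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+// Placeholder values: replace with your Form Recognizer endpoint, key, and a local file path.
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+
+var client = new DocumentAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+// Analyze a local document with the prebuilt layout model and wait for the result.
+using FileStream stream = File.OpenRead("<path-to-your-document>");
+AnalyzeDocumentOperation operation =
+    await client.AnalyzeDocumentAsync(WaitUntil.Completed, "prebuilt-layout", stream);
+AnalyzeResult result = operation.Value;
+
+// Print a summary of the extracted pages and tables.
+foreach (DocumentPage page in result.Pages)
+{
+    Console.WriteLine($"Page {page.PageNumber}: {page.Lines.Count} lines, {page.Words.Count} words");
+}
+
+foreach (DocumentTable table in result.Tables)
+{
+    Console.WriteLine($"Table: {table.RowCount} rows x {table.ColumnCount} columns");
+}
+```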
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure Form Recognizer and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [**Explore Form Recognizer REST API 2023-02-28-preview**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument)
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Title: Azure App Configuration resiliency and disaster recovery
-description: Lean how to implement resiliency and disaster recovery with Azure App Configuration.
--
+description: Learn how to implement resiliency and disaster recovery with Azure App Configuration.
++ Previously updated : 07/09/2020 Last updated : 04/20/2023

# Resiliency and disaster recovery
-> [!IMPORTANT]
-> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability.
+Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region, and failover between regions isn't available by default. However, Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. Utilizing geo-replication is the recommended solution for high availability.
-Currently, Azure App Configuration is a regional service. Each configuration store is created in a particular Azure region. A region-wide outage affects all stores in that region. App Configuration doesn't offer automatic failover to another region. This article provides general guidance on how you can use multiple configuration stores across Azure regions to increase the geo-resiliency of your application.
+This article provides general guidance on how you can use multiple replicas across Azure regions to increase the geo-resiliency of your application.
## High-availability architecture
-To realize cross-region redundancy, you need to create multiple App Configuration stores in different regions. With this setup, your application has at least one additional configuration store to fall back on if the primary store becomes inaccessible. The following diagram illustrates the topology between your application and its primary and secondary configuration stores:
+The original App Configuration store is also considered a replica, so to realize cross-region redundancy, you need to create at least one new replica in a different region. You can also create multiple App Configuration replicas in different regions based on your requirements and then use these replicas in your application in the order of your preference. With this setup, your application has at least one additional replica to fall back on if the primary replica becomes inaccessible.
-![Geo-redundant stores](./media/geo-redundant-app-configuration-stores.png)
+The following diagram illustrates the topology between your application and two replicas:
-Your application loads its configuration from both the primary and secondary stores in parallel. Doing this increases the chance of successfully getting the configuration data. You're responsible for keeping the data in both stores in sync. The following sections explain how you can build geo-resiliency into your application.
-## Failover between configuration stores
+Your application loads its configuration from the most preferred replica. If that replica isn't available, the configuration is loaded from the next replica in your order of preference. This approach increases the chance of successfully getting the configuration data. The data in all replicas is kept in sync.
-Technically, your application isn't executing a failover. It's attempting to retrieve the same set of configuration data from two App Configuration stores simultaneously. Arrange your code so that it loads from the secondary store first and then the primary store. This approach ensures that the configuration data in the primary store takes precedence whenever it's available. The following code snippet shows how you can implement this arrangement in .NET Core:
+## Failover between replicas
-#### [.NET Core 2.x](#tab/core2x)
+If you want to leverage automatic failover between replicas, follow [these instructions](./howto-geo-replication.md#use-replicas) to set up failover using App Configuration provider libraries. This is the recommended approach for building resiliency in your application.
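+Here's a rough sketch of that setup for a .NET application, assuming the `Microsoft.Extensions.Configuration.AzureAppConfiguration` provider's `Connect` overload that accepts a list of replica endpoints, and placeholder endpoint values:
+
+```csharp
+using System;
+using Azure.Identity;
+using Microsoft.Extensions.Configuration;
+
+var builder = new ConfigurationBuilder();
+
+builder.AddAzureAppConfiguration(options =>
+{
+    // Placeholder replica endpoints, listed in order of preference.
+    var endpoints = new[]
+    {
+        new Uri("https://<your-store>.azconfig.io"),
+        new Uri("https://<your-replica>.azconfig.io")
+    };
+
+    // The provider tries the endpoints in order and fails over automatically.
+    options.Connect(endpoints, new DefaultAzureCredential());
+});
+
+IConfiguration configuration = builder.Build();
+```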
-```csharp
-public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .ConfigureAppConfiguration((hostingContext, config) =>
- {
- var settings = config.Build();
- config.AddAzureAppConfiguration(settings["ConnectionString_SecondaryStore"], optional: true)
- .AddAzureAppConfiguration(settings["ConnectionString_PrimaryStore"], optional: true);
- })
- .UseStartup<Startup>();
-
-```
-
-#### [.NET Core 3.x](#tab/core3x)
-
-```csharp
-public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- webBuilder.ConfigureAppConfiguration((hostingContext, config) =>
- {
- var settings = config.Build();
- config.AddAzureAppConfiguration(settings["ConnectionString_SecondaryStore"], optional: true)
- .AddAzureAppConfiguration(settings["ConnectionString_PrimaryStore"], optional: true);
- })
- .UseStartup<Startup>());
-```
--
-Notice the `optional` parameter passed into the `AddAzureAppConfiguration` function. When set to `true`, this parameter prevents the application from failing to continue if the function can't load configuration data.
-
-## Synchronization between configuration stores
-
-It's important that your geo-redundant configuration stores all have the same set of data. There are two ways to achieve this:
-
-### Backup manually using the Export function
-
-You can use the **Export** function in App Configuration to copy data from the primary store to the secondary on demand. This function is available through both the Azure portal and the CLI.
-
-From the Azure portal, you can push a change to another configuration store by following these steps.
-
-1. Go to the **Import/Export** tab, and select **Export** > **App Configuration** > **Target** > **Select a resource**.
-
-1. In the new blade that opens, specify the subscription, resource group, and resource name of your secondary store, then select **Apply**.
-
-1. The UI is updated so that you can choose what configuration data you want to export to your secondary store. You can leave the default time value as is and set both **From label** and **Label** to the same value. Select **Apply**. Repeat this for all the labels in your primary store.
-
-1. Repeat the previous steps whenever your configuration changes.
-
-The export process can also be achieved using the Azure CLI. The following command shows how to export all configurations from the primary store to the secondary:
-
-```azurecli
- az appconfig kv export --destination appconfig --name {PrimaryStore} --dest-name {SecondaryStore} --label * --preserve-labels -y
-```
-
-### Backup automatically using Azure Functions
-
-The backup process can be automated by using Azure Functions. It leverages the integration with Azure Event Grid in App Configuration. Once set up, App Configuration will publish events to Event Grid for any changes made to key-values in a configuration store. Thus, an Azure Functions app can listen to these events and backup data accordingly. For details, see the tutorial on [how to backup App Configuration stores automatically](./howto-backup-config-store.md).
+If the App Configuration provider libraries don't meet your requirements, you can still implement your own failover strategy. When geo-replication is enabled and one replica isn't accessible, you can have your application fail over to another replica to access your configuration.
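+For example, here's a rough sketch of a manual failover, assuming the `Azure.Data.AppConfiguration` client library, placeholder replica endpoints, and a hypothetical key name:
+
+```csharp
+using System;
+using Azure;
+using Azure.Data.AppConfiguration;
+using Azure.Identity;
+
+// Placeholder replica endpoints, listed in order of preference.
+var endpoints = new[]
+{
+    new Uri("https://<your-store>.azconfig.io"),
+    new Uri("https://<your-replica>.azconfig.io")
+};
+
+var credential = new DefaultAzureCredential();
+ConfigurationSetting setting = null;
+
+foreach (Uri endpoint in endpoints)
+{
+    try
+    {
+        var client = new ConfigurationClient(endpoint, credential);
+
+        // "TestApp:Settings:Message" is a hypothetical key name.
+        setting = client.GetConfigurationSetting("TestApp:Settings:Message");
+        break; // Stop at the first replica that responds.
+    }
+    catch (RequestFailedException)
+    {
+        // This replica isn't reachable; try the next one.
+    }
+}
+
+Console.WriteLine(setting?.Value ?? "No replica was reachable.");
+```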
## Next steps
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md
- Title: Automatically back up key-values from Azure App Configuration stores
-description: Learn how to set up an automatic backup of key-values between App Configuration stores.
----- Previously updated : 08/24/2022---
-#Customer intent: I want to back up all key-values to a secondary App Configuration store and keep them up to date with any changes in the primary store.
--
-# Back up App Configuration stores automatically
-
-> [!IMPORTANT]
-> Azure App Configuration supports [geo-replication](./concept-geo-replication.md). You can enable replicas of your data across multiple locations for enhanced resiliency to regional outages. You can also leverage App Configuration provider libraries in your applications for [automatic failover](./howto-geo-replication.md#use-replicas). Utilizing geo-replication is the recommended solution for high availability.
-
-In this article, you'll learn how to set up an automatic backup of key-values from a primary Azure App Configuration store to a secondary store. The automatic backup uses the integration of Azure Event Grid with App Configuration.
-
-After you set up the automatic backup, App Configuration will publish events to Azure Event Grid for any changes made to key-values in a configuration store. Event Grid supports various Azure services from which users can subscribe to the events emitted whenever key-values are created, updated, or deleted.
-
-## Overview
-
-In this article, you'll use Azure Queue storage to receive events from Event Grid and use a timer-trigger of Azure Functions to process events in the queue in batches.
-
-When a function is triggered, based on the events, it will fetch the latest values of the keys that have changed from the primary App Configuration store and update the secondary store accordingly. This setup helps combine multiple changes that occur in a short period in one backup operation, which avoids excessive requests made to your App Configuration stores.
-
-![Diagram that shows the architecture of the App Configuration store backup.](./media/config-store-backup-architecture.png)
-
-## Resource provisioning
-
-The motivation behind backing up App Configuration stores is to use multiple configuration stores across different Azure regions to increase the geo-resiliency of your application. To achieve this, your primary and secondary stores should be in different Azure regions. All other resources created in this tutorial can be provisioned in any region of your choice. This is because if primary region is down, there will be nothing new to back up until the primary region is accessible again.
-
-In this tutorial, you'll create a secondary store in the `centralus` region and all other resources in the `westus` region.
--
-## Prerequisites
--- [Visual Studio 2019](https://visualstudio.microsoft.com/vs) with the Azure development workload.--- [.NET Core SDK](https://dotnet.microsoft.com/download).---- This tutorial requires version 2.3.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Create a resource group
-
-The resource group is a logical collection into which Azure resources are deployed and managed.
-
-Create a resource group by using the [az group create](/cli/azure/group) command.
-
-The following example creates a resource group named `<resource_group_name>` in the `westus` location. Replace `<resource_group_name>` with a unique name for your resource group.
-
-```azurecli-interactive
-resourceGroupName="<resource_group_name>"
-az group create --name $resourceGroupName --location westus
-```
-
-## Create App Configuration stores
-
-Create your primary and secondary App Configuration stores in different regions.
-Replace `<primary_appconfig_name>` and `<secondary_appconfig_name>` with unique names for your configuration stores. Each store name must be unique because it's used as a DNS name.
-
-```azurecli-interactive
-primaryAppConfigName="<primary_appconfig_name>"
-secondaryAppConfigName="<secondary_appconfig_name>"
-az appconfig create \
- --name $primaryAppConfigName \
- --location westus \
- --resource-group $resourceGroupName\
- --sku standard
-
-az appconfig create \
- --name $secondaryAppConfigName \
- --location centralus \
- --resource-group $resourceGroupName\
- --sku standard
-```
-
-## Create a queue
-
-Create a storage account and a queue for receiving the events published by Event Grid.
-
-```azurecli-interactive
-storageName="<unique_storage_name>"
-queueName="<queue_name>"
-az storage account create -n $storageName -g $resourceGroupName -l westus --sku Standard_LRS
-az storage queue create --name $queueName --account-name $storageName --auth-mode login
-```
--
-## Subscribe to your App Configuration store events
-
-You subscribe to these two events from the primary App Configuration store:
--- `Microsoft.AppConfiguration.KeyValueModified`-- `Microsoft.AppConfiguration.KeyValueDeleted`-
-The following command creates an Event Grid subscription for the two events sent to your queue. The endpoint type is set to `storagequeue`, and the endpoint is set to the queue ID. Replace `<event_subscription_name>` with the name of your choice for the event subscription.
-
-```azurecli-interactive
-storageId=$(az storage account show --name $storageName --resource-group $resourceGroupName --query id --output tsv)
-queueId="$storageId/queueservices/default/queues/$queueName"
-appconfigId=$(az appconfig show --name $primaryAppConfigName --resource-group $resourceGroupName --query id --output tsv)
-eventSubscriptionName="<event_subscription_name>"
-az eventgrid event-subscription create \
- --source-resource-id $appconfigId \
- --name $eventSubscriptionName \
- --endpoint-type storagequeue \
- --endpoint $queueId \
- --included-event-types Microsoft.AppConfiguration.KeyValueModified Microsoft.AppConfiguration.KeyValueDeleted
-```
-
-## Create functions for handling events from Queue storage
-
-### Set up with ready-to-use functions
-
-In this article, you'll work with C# functions that have the following properties:
--- Runtime stack .NET Core 3.1-- Azure Functions runtime version 3.x-- Function triggered by timer every 10 minutes-
-To make it easier for you to start backing up your data, we've [tested and published a function](https://github.com/Azure/AppConfiguration/tree/master/examples/ConfigurationStoreBackup) that you can use without making any changes to the code. Download the project files and [publish them to your own function app from Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
-
-> [!IMPORTANT]
-> Don't make any changes to the environment variables in the code you've downloaded. You'll create the required app settings in the next section.
->
-
-### Build your own function
-
-If the sample code provided earlier doesn't meet your requirements, you can also create your own function. Your function must be able to perform the following tasks in order to complete the backup:
--- Periodically read contents of your queue to see if it contains any notifications from Event Grid. Refer to the [Storage Queue SDK](../storage/queues/storage-quickstart-queues-dotnet.md) for implementation details.-- If your queue contains [event notifications from Event Grid](./concept-app-configuration-event.md#event-schema), extract all the unique `<key, label>` information from event messages. The combination of key and label is the unique identifier for key-value changes in the primary store.-- Read all settings from the primary store. Update only those settings in the secondary store that have a corresponding event in the queue. Delete all settings from the secondary store that were present in the queue but not in the primary store. You can use the [App Configuration SDK](https://github.com/Azure/AppConfiguration#sdks) to access your configuration stores programmatically.-- Delete messages from the queue if there were no exceptions during processing.-- Implement error handling according to your needs. Refer to the preceding code sample to see some common exceptions that you might want to handle.-
-To learn more about creating a function, see: [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) and [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md).
-
-> [!IMPORTANT]
-> Use your best judgement to choose the timer schedule based on how often you make changes to your primary configuration store. Running the function too often might end up throttling requests for your store.
->
-
-## Create function app settings
-
-If you're using a function that we've provided, you need the following app settings in your function app:
-
-- `PrimaryStoreEndpoint`: Endpoint for the primary App Configuration store. An example is `https://{primary_appconfig_name}.azconfig.io`.
-- `SecondaryStoreEndpoint`: Endpoint for the secondary App Configuration store. An example is `https://{secondary_appconfig_name}.azconfig.io`.
-- `StorageQueueUri`: Queue URI. An example is `https://{unique_storage_name}.queue.core.windows.net/{queue_name}`.
-
-The following command creates the required app settings in your function app. Replace `<function_app_name>` with the name of your function app.
-
-```azurecli-interactive
-functionAppName="<function_app_name>"
-primaryStoreEndpoint="https://$primaryAppConfigName.azconfig.io"
-secondaryStoreEndpoint="https://$secondaryAppConfigName.azconfig.io"
-storageQueueUri="https://$storageName.queue.core.windows.net/$queueName"
-az functionapp config appsettings set --name $functionAppName --resource-group $resourceGroupName --settings StorageQueueUri=$storageQueueUri PrimaryStoreEndpoint=$primaryStoreEndpoint SecondaryStoreEndpoint=$secondaryStoreEndpoint
-```
-
-## Grant access to the managed identity of the function app
-
-Use the following command or the [Azure portal](../app-service/overview-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned managed identity for your function app.
-
-```azurecli-interactive
-az functionapp identity assign --name $functionAppName --resource-group $resourceGroupName
-```
-
-> [!NOTE]
-> To perform the required resource creation and role management, your account needs `Owner` permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, learn [how to assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
-Use the following commands or the [Azure portal](./howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration) to grant the managed identity of your function app access to your App Configuration stores. Use these roles:
-
-- Assign the `App Configuration Data Reader` role in the primary App Configuration store.
-- Assign the `App Configuration Data Owner` role in the secondary App Configuration store.
-
-```azurecli-interactive
-functionPrincipalId=$(az functionapp identity show --name $functionAppName --resource-group $resourceGroupName --query principalId --output tsv)
-primaryAppConfigId=$(az appconfig show -n $primaryAppConfigName --query id --output tsv)
-secondaryAppConfigId=$(az appconfig show -n $secondaryAppConfigName --query id --output tsv)
-
-az role assignment create \
- --role "App Configuration Data Reader" \
- --assignee $functionPrincipalId \
- --scope $primaryAppConfigId
-
-az role assignment create \
- --role "App Configuration Data Owner" \
- --assignee $functionPrincipalId \
- --scope $secondaryAppConfigId
-```
-
-Use the following command or the [Azure portal](../storage/blobs/assign-azure-role-data-access.md#assign-an-azure-role) to grant the managed identity of your function app access to your queue. Assign the `Storage Queue Data Contributor` role in the queue.
-
-```azurecli-interactive
-az role assignment create \
- --role "Storage Queue Data Contributor" \
- --assignee $functionPrincipalId \
- --scope $queueId
-```
-
-## Trigger an App Configuration event
-
-To test that everything works, you can create, update, or delete a key-value from the primary store. You should automatically see this change in the secondary store a few seconds after the timer triggers Azure Functions.
-
-```azurecli-interactive
-az appconfig kv set --name $primaryAppConfigName --key Foo --value Bar --yes
-```
-
-You've triggered the event. In a few moments, Event Grid will send the event notification to your queue. *After the next scheduled run of your function*, view configuration settings in your secondary store to see if it contains the updated key-value from the primary store.
-
-> [!NOTE]
-> You can [trigger your function manually](../azure-functions/functions-manually-run-non-http.md) during testing and troubleshooting without waiting for the scheduled timer trigger.
-
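As a sketch of what a manual run looks like, you can send a POST request to the function's admin endpoint with the app's master key. The function app name, function name, and key below are placeholders; adjust them to match your deployment.

```
curl --request POST --header "Content-Type: application/json" --header "x-functions-key: <master_key>" --data '{ "input": "" }' "https://<function_app_name>.azurewebsites.net/admin/functions/<function_name>"
```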
-After you make sure that the backup function ran successfully, you can see that the key is now present in your secondary store.
-
-```azurecli-interactive
-az appconfig kv show --name $secondaryAppConfigName --key Foo
-```
-
-```json
-{
- "contentType": null,
- "etag": "eVgJugUUuopXnbCxg0dB63PDEJY",
- "key": "Foo",
- "label": null,
- "lastModified": "2020-04-27T23:25:08+00:00",
- "locked": false,
- "tags": {},
- "value": "Bar"
-}
-```
-
-## Troubleshooting
-
-If you don't see the new setting in your secondary store:
-
-- Make sure the backup function was triggered *after* you created the setting in your primary store.
-- It's possible that Event Grid couldn't send the event notification to the queue in time. Check if your queue still contains the event notification from your primary store. If it does, trigger the backup function again.
-- Check [Azure Functions logs](../azure-functions/functions-create-scheduled-function.md#test-the-function) for any errors or warnings.
-- Use the [Azure portal](../azure-functions/functions-how-to-use-azure-function-app-settings.md#get-started-in-the-azure-portal) to ensure that the Azure function app contains correct values for the application settings that the Azure function is trying to read.
-- You can also set up monitoring and alerting for Azure Functions by using [Azure Application Insights](../azure-functions/functions-monitoring.md?tabs=cmd).
-
-## Clean up resources
-
-If you plan to continue working with this App Configuration and event subscription, you might want to leave these resources in place. If you don't plan to continue, use the [az group delete](/cli/azure/group#az-group-delete) command, which deletes the resource group and the resources in it.
-
-```azurecli-interactive
-az group delete --name $resourceGroupName
-```
-
-## Next steps
-
-Now that you know how to set up automatic backup of your key-values, learn more about how you can increase the geo-resiliency of your application:
-
-> [!div class="nextstepaction"]
-> [Resiliency and disaster recovery](concept-disaster-recovery.md)
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
You may encounter the following error messages when importing or exporting App C
## Next steps

> [!div class="nextstepaction"]
-> [Back up App Configuration stores automatically](./howto-backup-config-store.md)
+> [Integrate with a CI/CD pipeline](./integrate-ci-cd-pipeline.md)
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemNam
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- App Configuration store - create one for free in the [Azure portal](https://portal.azure.com).
+- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store)
- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881)
- Azure App Configuration task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task#:~:text=Navigate%20to%20the%20Tasks%20tab,the%20Azure%20App%20Configuration%20instance.).
-- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents.
+- [Azure Pipelines agent version 2.206.1](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.206.1) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents.
## Create a service connection
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
The [Azure App Configuration Push](https://marketplace.visualstudio.com/items?it
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- App Configuration resource - create one for free in the [Azure portal](https://portal.azure.com).
+- App Configuration store - [create one for free](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store)
- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881)
- Azure App Configuration Push task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task-push).
-- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents.
+- [Azure Pipelines agent version 2.206.1](https://github.com/microsoft/azure-pipelines-agent/releases/tag/v2.206.1) or later and [Node version 16](https://nodejs.org/en/blog/release/v16.16.0/) or later for running the task on self-hosted agents.
## Create a service connection
azure-arc Azcmagent Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-check.md
+
+ Title: azcmagent check CLI reference
+description: Syntax for the azcmagent check command line tool
+ Last updated : 04/20/2023++
+# azcmagent check
+
+Run a series of network connectivity checks to see if the agent can successfully communicate with required network endpoints. The command outputs a table showing connectivity test results for each required endpoint, including whether the agent used a private endpoint and/or proxy server.
+
+## Usage
+
+```
+azcmagent check [flags]
+```
+
+## Examples
+
+Check connectivity with the agent's currently configured cloud and region.
+
+```
+azcmagent check
+```
+
+Check connectivity with the East US region using public endpoints.
+
+```
+azcmagent check --location "eastus"
+```
+
+Check connectivity with the Central India region using private endpoints.
+
+```
+azcmagent check --location "centralindia" --enable-pls-check
+```
+
+## Flags
+
+`--cloud`
+
+Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is "AzureCloud".
+
+Supported values:
+
+* AzureCloud (public regions)
+* AzureUSGovernment (Azure US Government regions)
+* AzureChinaCloud (Azure China regions)
+
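For example, to check connectivity against an Azure US Government region (the region name shown here is illustrative):

```
azcmagent check --cloud "AzureUSGovernment" --location "usgovvirginia"
```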
+`-l`, `--location`
+
+The Azure region to check connectivity with. If the machine is already connected to Azure Arc, the current region is selected as the default.
+
+Sample value: westeurope
+
+`-p`, `--enable-pls-check`
+
+Checks if supported Azure Arc endpoints resolve to private IP addresses. This flag should be used when you intend to connect the server to Azure using an Azure Arc private link scope.
+
azure-arc Azcmagent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-config.md
+
+ Title: azcmagent config CLI reference
+description: Syntax for the azcmagent config command line tool
+ Last updated : 04/20/2023++
+# azcmagent config
+
+Configure settings for the Azure connected machine agent. Configurations are stored locally and are unique to each machine. Available configuration properties vary by agent version. Use [azcmagent config info](#azcmagent-config-info) to see all available configuration properties and supported values for the currently installed agent.
+
+## Commands
+
+| Command | Purpose |
+| - | - |
+| [azcmagent config clear](#azcmagent-config-clear) | Clear a configuration property's value |
+| [azcmagent config get](#azcmagent-config-get) | Gets a configuration property's value |
+| [azcmagent config info](#azcmagent-config-info) | Describes all available configuration properties and supported values |
+| [azcmagent config list](#azcmagent-config-list) | Lists all configuration properties and values |
+| [azcmagent config set](#azcmagent-config-set) | Set a value for a configuration property |
+
+## azcmagent config clear
+
+Clear a configuration property's value and reset it to its default state.
+
+### Usage
+
+```
+azcmagent config clear [property] [flags]
+```
+
+### Examples
+
+Clear the proxy server URL property.
+
+```
+azcmagent config clear proxy.url
+```
+
+### Flags
++
+## azcmagent config get
+
+Get a configuration property's value.
+
+### Usage
+
+```
+azcmagent config get [property] [flags]
+```
+
+### Examples
+
+Get the agent mode.
+
+```
+azcmagent config get config.mode
+```
+
+### Flags
++
+## azcmagent config info
+
+Describes available configuration properties and supported values. When run without a specific property, the command describes all available properties and their supported values.
+
+### Usage
+
+```
+azcmagent config info [property] [flags]
+```
+
+### Examples
+
+Describe all available configuration properties and supported values.
+
+```
+azcmagent config info
+```
+
+Learn more about the extensions allowlist property and its supported values.
+
+```
+azcmagent config info extensions.allowlist
+```
+
+### Flags
++
+## azcmagent config list
+
+Lists all configuration properties and their current values.
+
+### Usage
+
+```
+azcmagent config list [flags]
+```
+
+### Examples
+
+List the current agent configuration.
+
+```
+azcmagent config list
+```
+
+### Flags
++
+## azcmagent config set
+
+Set a value for a configuration property.
+
+### Usage
+
+```
+azcmagent config set [property] [value] [flags]
+```
+
+### Examples
+
+Configure the agent to use a proxy server.
+
+```
+azcmagent config set proxy.url "http://proxy.contoso.corp:8080"
+```
+
+Append an extension to the extension allowlist.
+
+```
+azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" --add
+```
+
+### Flags
+
+`-a`, `--add`
+
+Append the value to the list of existing values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used with the `--remove` flag.
+
+`-r`, `--remove`
+
+Remove the specified value from the list, retaining all other values. If not specified, the default behavior is to replace the list of existing values. This flag is only supported for configuration properties that support more than one value. Can't be used with the `--add` flag.
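As an illustrative sketch, removing the extension added in the earlier example from the allowlist would look like this:

```
azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent" --remove
```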
+
azure-arc Azcmagent Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-connect.md
+
+ Title: azcmagent connect CLI reference
+description: Syntax for the azcmagent connect command line tool
+ Last updated : 04/20/2023++
+# azcmagent connect
+
+Connects the server to Azure Arc by creating a metadata representation of the server in Azure and associating the Azure connected machine agent with it. The command requires information about the tenant, subscription, and resource group where you want to represent the server in Azure and valid credentials with permissions to create Azure Arc-enabled server resources in that location.
+
+## Usage
+
+```
+azcmagent connect [authentication] --subscription-id [subscription] --resource-group [resourcegroup] --location [region] [flags]
+```
+
+## Examples
+
+Connect a server using the default login method (interactive browser or device code).
+
+```
+azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "eastus"
+```
+
+Connect a server using a service principal.
+
+```
+azcmagent connect --subscription "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "australiaeast" --service-principal-id "ID" --service-principal-secret "SECRET" --tenant-id "TENANT"
+```
+
+Connect a server using a private endpoint and device code login method.
+
+```
+azcmagent connect --subscription "Production" --resource-group "HybridServers" --location "koreacentral" --use-device-code --private-link-scope "/subscriptions/.../Microsoft.HybridCompute/privateLinkScopes/ScopeName"
+```
+
+## Authentication options
+
+There are four ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags.
+
+### Interactive browser login (Windows-only)
+
+This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines.
+
+No flag is required to use the interactive browser login.
+
+### Device code login
+
+This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the connect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow.
+
+To authenticate with a device code, use the `--use-device-code` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`.
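A sketch of a device code connection that also specifies the tenant (the tenant ID and resource names are placeholders):

```
azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus" --use-device-code --tenant-id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```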
+
+### Service principal
+
+Service principals allow you to authenticate non-interactively and are often used for at-scale deployments where the same script is run across multiple servers. It's recommended that you provide service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs. The service principal should also be dedicated for Arc onboarding and have as few permissions as possible, to limit the impact of a stolen credential.
+
+To authenticate with a service principal, provide the service principal's application ID, secret, and tenant ID: `--service-principal-id [appid] --service-principal-secret [secret] --tenant-id [tenantid]`
+
+### Access token
+
+Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions onboarding several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Azure Active Directory client.
+
+To authenticate with an access token, use the `--access-token [token]` flag. If the account you're logging in with and the subscription where you're registering the server aren't in the same tenant, you must also provide the tenant ID for the subscription with `--tenant-id [tenant]`.
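A minimal sketch of token-based onboarding, assuming the Azure CLI is available to obtain an Azure Resource Manager token and that the account and subscription share a tenant (all resource names are placeholders):

```
# Obtain an access token, for example with the Azure CLI on a machine where you're signed in
token=$(az account get-access-token --query accessToken --output tsv)

# Onboard the server with the token (placeholder subscription, resource group, and region)
azcmagent connect --access-token "$token" --subscription-id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" --resource-group "HybridServers" --location "eastus"
```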
+
+## Flags
+
+`--access-token`
+
+Specifies the Azure Active Directory access token used to create the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options).
+
+`--automanage-profile`
+
+Resource ID of an Azure Automanage best practices profile that will be applied to the server once it's connected to Azure.
+
+Sample value: /providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction
+
+`--cloud`
+
+Specifies the Azure cloud instance. Must be used with the `--location` flag. If the machine is already connected to Azure Arc, the default value is the cloud to which the agent is already connected. Otherwise, the default value is "AzureCloud".
+
+Supported values:
+
+* AzureCloud (public regions)
+* AzureUSGovernment (Azure US Government regions)
+* AzureChinaCloud (Azure China regions)
+
+`--correlation-id`
+
+Identifies the mechanism being used to connect the server to Azure Arc. For example, scripts generated in the Azure portal include a GUID that helps Microsoft track usage of that experience. This flag is optional and only used for telemetry purposes to improve your experience.
+
+`--ignore-network-check`
+
+Instructs the agent to continue onboarding even if the network check for required endpoints fails. You should only use this option if you're sure that the network check results are incorrect. In most cases, a failed network check indicates that the Arc agent won't function correctly on the server.
+
+`-l`, `--location`
+
+The Azure region where the Azure Arc-enabled server resource is created. If the machine is already connected to Azure Arc, the current region is selected as the default.
+
+Sample value: westeurope
+
+`--private-link-scope`
+
+Specifies the resource ID of the Azure Arc private link scope to associate with the server. This flag is required if you're using private endpoints to connect the server to Azure.
+
+`-g`, `--resource-group`
+
+Name of the Azure resource group where you want to create the Azure Arc-enabled server resource.
+
+Sample value: HybridServers
+
+`-n`, `--resource-name`
+
+Name for the Azure Arc-enabled server resource. By default, the resource name is:
+
+* The AWS instance ID, if the server is on AWS
+* The hostname for all other machines
+
+You can override the default name with a name of your own choosing to avoid naming conflicts. Once chosen, the name of the Azure resource can't be changed without disconnecting and re-connecting the agent.
+
+If you want to force AWS servers to use the hostname instead of the instance ID, pass in `$(hostname)` to have the shell evaluate the current hostname and pass that in as the new resource name.
+
+Sample value: FileServer01
+
+`-i`, `--service-principal-id`
+
+Specifies the application ID of the service principal used to create the Azure Arc-enabled server resource in Azure. Must be used with the `--service-principal-secret` and `--tenant-id` flags. For more information, see [authentication options](#authentication-options).
+
+`-p`, `--service-principal-secret`
+
+Specifies the service principal secret. Must be used with the `--service-principal-id` and `--tenant-id` flags. To avoid exposing the secret in console logs, it's recommended to pass in the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options).
+
+`-s`, `--subscription-id`
+
+The subscription name or ID where you want to create the Azure Arc-enabled server resource.
+
+Sample values: Production, aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
+
+`--tags`
+
+Comma-delimited list of tags to apply to the Azure Arc-enabled server resource. Each tag should be specified in the format: TagName=TagValue. If the tag name or value contains a space, use single quotes around the name or value.
+
+Sample value: Datacenter=NY3,Application=SharePoint,Owner='Shared Infrastructure Services'
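A sketch of passing tags as part of onboarding (all values are placeholders):

```
azcmagent connect --subscription-id "Production" --resource-group "HybridServers" --location "eastus" --tags "Datacenter=NY3,Application=SharePoint,Owner='Shared Infrastructure Services'"
```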
+
+`-t`, `--tenant-id`
+
+The tenant ID for the subscription where you want to create the Azure Arc-enabled server resource. This flag is required when authenticating with a service principal. For all other authentication methods, the home tenant of the account used to authenticate with Azure is used for the resource as well. If the tenants for the account and subscription are different (guest accounts, Lighthouse), you must specify the tenant ID to clarify the tenant where the subscription is located.
+
+`--use-device-code`
+
+Generate an Azure Active Directory device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options).
+
azure-arc Azcmagent Disconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-disconnect.md
+
+ Title: azcmagent disconnect CLI reference
+description: Syntax for the azcmagent disconnect command line tool
+ Last updated : 04/20/2023++
+# azcmagent disconnect
+
+Deletes the Azure Arc-enabled server resource in the cloud and resets the configuration of the local agent. For detailed information on removing extensions and disconnecting and uninstalling the agent, see [uninstall the agent](manage-agent.md#uninstall-the-agent).
+
+## Usage
+
+```
+azcmagent disconnect [authentication] [flags]
+```
+
+## Examples
+
+Disconnect a server using the default login method (interactive browser or device code).
+
+```
+azcmagent disconnect
+```
+
+Disconnect a server using a service principal.
+
+```
+azcmagent disconnect --service-principal-id "ID" --service-principal-secret "SECRET"
+```
+
+Disconnect a server if the corresponding resource in Azure has already been deleted.
+
+```
+azcmagent disconnect --force-local-only
+```
+
+## Authentication options
+
+There are four ways to provide authentication credentials to the Azure connected machine agent. Choose one authentication option and replace the `[authentication]` section in the usage syntax with the recommended flags.
+
+> [!NOTE]
+> The account used to disconnect a server must be from the same tenant as the subscription where the server is registered.
+
+### Interactive browser login (Windows-only)
+
+This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines.
+
+No flag is required to use the interactive browser login.
+
+### Device code login
+
+This option generates a code that you can use to log in on a web browser on another device. This is the default option on Windows Server core editions and all Linux distributions. When you execute the disconnect command, you have 5 minutes to open the specified login URL on an internet-connected device and complete the login flow.
+
+To authenticate with a device code, use the `--use-device-code` flag.
+
+### Service principal
+
+Service principals allow you to authenticate non-interactively and are often used for at-scale operations where the same script is run across multiple servers. It's recommended that you provide service principal information via a configuration file (see `--config`) to avoid exposing the secret in any console logs.
+
+To authenticate with a service principal, provide the service principal's application ID and secret: `--service-principal-id [appid] --service-principal-secret [secret]`
+
+### Access token
+
+Access tokens can also be used for non-interactive authentication, but are short-lived and typically used by automation solutions operating on several servers over a short period of time. You can get an access token with [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken) or any other Azure Active Directory client.
+
+To authenticate with an access token, use the `--access-token [token]` flag.
+
+## Flags
+
+`--access-token`
+
+Specifies the Azure Active Directory access token used to delete the Azure Arc-enabled server resource in Azure. For more information, see [authentication options](#authentication-options).
+
+`-f`, `--force-local-only`
+
+Disconnects the server without deleting the resource in Azure. Primarily used if the Azure resource has already been deleted and the local agent configuration needs to be cleaned up.
+
+`-i`, `--service-principal-id`
+
+Specifies the application ID of the service principal used to delete the Azure Arc-enabled server resource in Azure. Must be used with the `--service-principal-secret` flag. For more information, see [authentication options](#authentication-options).
+
+`-p`, `--service-principal-secret`
+
+Specifies the service principal secret. Must be used with the `--service-principal-id` flag. To avoid exposing the secret in console logs, it's recommended to pass in the service principal secret in a configuration file. For more information, see [authentication options](#authentication-options).
+
+`--use-device-code`
+
+Generate an Azure Active Directory device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options).
+
azure-arc Azcmagent Genkey https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-genkey.md
+
+ Title: azcmagent genkey CLI reference
+description: Syntax for the azcmagent genkey command line tool
+ Last updated : 04/20/2023++
+# azcmagent genkey
+
+Generates a private-public key pair that can be used to onboard a machine asynchronously. This command is used when connecting a server to an Azure Arc-enabled virtual machine offering (for example, [Azure Arc-enabled VMware vSphere VMs](../vmware-vsphere/overview.md)). You should normally use [azcmagent connect](azcmagent-connect.md) to configure the agent.
+
+## Usage
+
+```
+azcmagent genkey [flags]
+```
+
+## Examples
+
+Generate a key pair and print the public key to the console.
+
+```
+azcmagent genkey
+```
+
+## Flags
+
azure-arc Azcmagent Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-help.md
+
+ Title: azcmagent help CLI reference
+description: Syntax for the azcmagent help command line tool
+ Last updated : 04/20/2023++
+# azcmagent help
+
+Prints usage information and a list of all available commands for the Azure Connected Machine agent CLI. For help with a particular command, use `azcmagent COMMANDNAME --help`.
+
+## Usage
+
+```
+azcmagent help [flags]
+```
+
+## Examples
+
+Show all available commands for the command line interface.
+
+```
+azcmagent help
+```
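Get help for a specific command by appending `--help`, for example:

```
azcmagent connect --help
```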
+
+## Flags
+
azure-arc Azcmagent License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-license.md
+
+ Title: azcmagent license CLI reference
+description: Syntax for the azcmagent license command line tool
+ Last updated : 04/20/2023++
+# azcmagent license
+
+Show the license agreement for the Azure Connected Machine agent.
+
+## Usage
+
+```
+azcmagent license [flags]
+```
+
+## Examples
+
+Show the license agreement.
+
+```
+azcmagent license
+```
+
+## Flags
+
azure-arc Azcmagent Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-logs.md
+
+ Title: azcmagent logs CLI reference
+description: Syntax for the azcmagent logs command line tool
+ Last updated : 04/20/2023++
+# azcmagent logs
+
+Collects log files for the Azure connected machine agent and extensions into a ZIP archive.
+
+## Usage
+
+```
+azcmagent logs [flags]
+```
+
+## Examples
+
+Collect the most recent log files and store them in a ZIP archive in the current directory.
+
+```
+azcmagent logs
+```
+
+Collect all log files and store them in a specific location.
+
+```
+azcmagent logs --full --output "/tmp/azcmagent-logs.zip"
+```
+
+## Flags
+
+`-f`, `--full`
+
+Collect all log files on the system instead of just the most recent. Useful when troubleshooting older problems.
+
+`-o`, `--output`
+
+Specifies the path and name for the ZIP file. If this flag isn't specified, the ZIP is saved to the console's current directory with the name "azcmagent-_TIMESTAMP_-_COMPUTERNAME_.zip"
+
+Sample value: custom-logname.zip
+
azure-arc Azcmagent Show https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-show.md
+
+ Title: azcmagent show CLI reference
+description: Syntax for the azcmagent show command line tool
+ Last updated : 04/20/2023++
+# azcmagent show
+
+Displays the current state of the Azure Connected Machine agent, including whether or not it's connected to Azure, the Azure resource information, and the status of dependent services.
+
+## Usage
+
+```
+azcmagent show [flags]
+```
+
+## Examples
+
+Check the status of the agent.
+
+```
+azcmagent show
+```
+
+Check the status of the agent and save it in a JSON file in the current directory.
+
+```
+azcmagent show -j > "agent-status.json"
+```
+
+## Flags
+
+`--os`
+
+Outputs additional information about the operating system.
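For example, to include operating system details in the output:

```
azcmagent show --os
```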
+
azure-arc Azcmagent Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-version.md
+
+ Title: azcmagent version CLI reference
+description: Syntax for the azcmagent version command line tool
+ Last updated : 04/20/2023++
+# azcmagent version
+
+Shows the version of the currently installed agent.
+
+## Usage
+
+```
+azcmagent version [flags]
+```
+
+## Examples
+
+Show the agent version.
+
+```
+azcmagent version
+```
+
+## Flags
+
azure-arc Azcmagent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent.md
+
+ Title: azcmagent CLI reference
+description: Reference documentation for the Azure Connected Machine agent command line tool
+ Last updated : 04/20/2023++
+# azcmagent CLI reference
+
+The Azure Connected Machine agent command line tool, azcmagent, helps you configure, manage, and troubleshoot a server's connection with Azure Arc. The azcmagent CLI is installed with the Azure Connected Machine agent and controls actions specific to the server where it's running. Once the server is connected to Azure Arc, you can use the [Azure CLI](/cli/azure/connectedmachine) or [Azure PowerShell](/powershell/module/az.connectedmachine/) module to enable extensions, manage tags, and perform other operations on the server resource.
+
+Unless otherwise specified, the command syntax and flags represent available options in the most recent release of the Azure Connected Machine agent. For more information, see [What's new with the Azure Arc-enabled servers agent](agent-release-notes.md).
+
+## Commands
+
+| Command | Purpose |
+| - | - |
+| [azcmagent check](azcmagent-check.md) | Run network connectivity checks for Azure Arc endpoints |
+| [azcmagent config](azcmagent-config.md) | Manage agent settings |
+| [azcmagent connect](azcmagent-connect.md) | Connect the server to Azure Arc |
+| [azcmagent disconnect](azcmagent-disconnect.md) | Disconnect the server from Azure Arc |
+| [azcmagent genkey](azcmagent-genkey.md) | Generate a public-private key pair for asynchronous onboarding |
+| [azcmagent help](azcmagent-help.md) | Get help for commands |
+| [azcmagent license](azcmagent-license.md) | Display the end-user license agreement |
+| [azcmagent logs](azcmagent-logs.md) | Collect logs to troubleshoot agent issues |
+| [azcmagent show](azcmagent-show.md) | Display the agent status |
+| [azcmagent version](azcmagent-version.md) | Display the agent version |
+
+## Frequently asked questions
+
+### How can I install the azcmagent CLI?
+
+The azcmagent CLI is bundled with the Azure Connected Machine agent. Review your [deployment options](deployment-options.md) for Azure Arc to learn how to install and configure the agent.
+
+### Where is the CLI installed?
+
+On Windows operating systems, the CLI is installed at `%PROGRAMFILES%\AzureConnectedMachineAgent\azcmagent.exe`. This path is automatically added to the system PATH variable during the installation process. You may need to close and reopen your console to refresh the PATH variable and be able to run `azcmagent` without specifying the full path.
+
+On Linux operating systems, the CLI is installed at `/opt/azcmagent/bin/azcmagent`
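Until the PATH variable is refreshed, or if you prefer an explicit path, you can invoke the CLI by its full path. For example, on Windows:

```
"%PROGRAMFILES%\AzureConnectedMachineAgent\azcmagent.exe" version
```

And on Linux:

```
/opt/azcmagent/bin/azcmagent version
```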
+
+### What's the difference between the azcmagent CLI and the Azure CLI for Azure Arc-enabled servers?
+
+The azcmagent CLI is used to configure the local agent. It's responsible for connecting the agent to Azure, disconnecting it, and configuring local settings like proxy URLs and security features.
+
+The Azure CLI and other management experiences are used to interact with the Azure Arc resource in Azure once the agent is connected. These tools help you manage extensions, move the resource to another subscription or resource group, and change certain settings of the Arc server remotely.
azure-functions Develop Python Worker Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/develop-python-worker-extensions.md
Title: Develop Python worker extensions for Azure Functions description: Learn how to create and publish worker extensions that let you inject middleware behavior into Python functions running in Azure. Previously updated : 6/1/2021 Last updated : 04/13/2023 # Develop Python worker extensions for Azure Functions
-Azure Functions lets you integrate custom behaviors as part of Python function execution. This feature enables you to create business logic that customers can easily use in their own function apps. To learn more, see the [Python developer reference](functions-reference-python.md#python-worker-extensions).
+Azure Functions lets you integrate custom behaviors as part of Python function execution. This feature enables you to create business logic that customers can easily use in their own function apps. To learn more, see the [Python developer reference](functions-reference-python.md#python-worker-extensions). Worker extensions are supported in both the v1 and v2 Python programming models.
In this tutorial, you'll learn how to:

> [!div class="checklist"]
In this tutorial, you'll learn how to:
Before you start, you must meet these requirements:
-* [Python 3.6.x or above](https://www.python.org/downloads/release/python-374/). To check the full list of supported Python versions in Azure Functions, see the [Python developer guide](functions-reference-python.md#python-version).
+* [Python 3.7 or above](https://www.python.org/downloads). To check the full list of supported Python versions in Azure Functions, see the [Python developer guide](functions-reference-python.md#python-version).
-* The [Azure Functions Core Tools](functions-run-local.md#v2), version 3.0.3568 or later.
+* The [Azure Functions Core Tools](functions-run-local.md#v2), version 4.0.5095 or later, which supports using the extension with the [v2 Python programming model](./functions-reference-python.md). Check your version with `func --version`.
* [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
The folder for your extension project should be like the following structure:
| **.venv/** | (Optional) Contains a Python virtual environment used for local development. |
| **python_worker_extension/** | Contains the source code of the Python worker extension. This folder contains the main Python module to be published into PyPI. |
| **setup.py** | Contains the metadata of the Python worker extension package. |
-| **readme.md** | (Optional) Contains the instruction and usage of your extension. This content is displayed as the description in the home page in your PyPI project. |
+| **readme.md** | Contains the instruction and usage of your extension. This content is displayed as the description in the home page in your PyPI project. |
### Configure project metadata
The `pre_invocation_app_level` method is called by the Python worker before the
Similarly, the `post_invocation_app_level` is called after function execution. This example calculates the elapsed time based on the start time and current time. It also overwrites the return value of the HTTP response.
+### Create a readme.md
+
+Create a readme.md file in the root of your extension project. This file contains the instructions and usage of your extension. The readme.md content is displayed as the description in the home page in your PyPI project.
+
+```markdown
+# Python Worker Extension Timer
+
+In this file, tell your customers when they need to call `Extension.configure()`.
+
+The readme should also document the extension capabilities, possible configuration,
+and usage of your extension.
+```
+
## Consume your extension locally

Now that you've created an extension, you can use it in an app project to verify it works as intended.
Now that you've created an extension, you can use it in an app project to verify
pip install -e <PYTHON_WORKER_EXTENSION_ROOT> ```
- In this example, replace `<PYTHON_WORKER_EXTENSION_ROOT>` with the file location of your extension project.
+ In this example, replace `<PYTHON_WORKER_EXTENSION_ROOT>` with the root file location of your extension project.
+ When a customer uses your extension, they'll instead add your extension package location to the requirements.txt file, as in the following examples: # [PyPI](#tab/pypi)
Now that you've created an extension, you can use it in an app project to verify
When running in Azure, you instead add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to the [app settings in the function app](functions-how-to-use-azure-function-app-settings.md#settings).
-1. Add following two lines before the `main` function in \_\_init.py\_\_:
+1. Add the following two lines before the `main` function in the *\_\_init\_\_.py* file for the v1 programming model, or in the *function_app.py* file for the v2 programming model:
```python from python_worker_extension_timer import TimerExtension
Now that you've created an extension, you can use it in an app project to verify
1. In the browser, send a GET request to `https://localhost:7071/api/HttpTrigger`. You should see a response like the following, with the **TimeElapsed** data for the request appended.
- <pre>
+ ```
This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response. (TimeElapsed: 0.0009996891021728516 sec)
- </pre>
+ ```
## Publish your extension
To publish your extension to PyPI:
twine upload dist/* ```
- You may need to provide your PyPI account credentials during upload.
+ You may need to provide your PyPI account credentials during upload. You can also test your package upload with `twine upload -r testpypi dist/*`. For more information, see the [Twine documentation](https://twine.readthedocs.io/en/stable/).
After these steps, customers can use your extension by including your package name in their requirements.txt.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
curl --request POST -H "Content-Type:application/json" --data "{'input':'sample
```
+The administrator endpoint also provides a list of all (HTTP triggered and non-HTTP triggered) functions on `http://localhost:{port}/admin/functions/`.
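For example, with the Core Tools default port, an illustrative local call looks like this (no access key is typically required against the local host):

```
curl http://localhost:7071/admin/functions/
```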
+ When you call an administrator endpoint on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). ## <a name="publish"></a>Publish to Azure
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[People Services Inc. DBA CATCH Intelligence](https://catchintelligence.com)|
|[Perizer Corp.](https://perizer.com)|
|[Perrygo Consulting Group, LLC](https://perrygo.com)|
-|[Perspecta](https://perspecta.com/)|
|[Phacil (By Light)](https://www.bylight.com/phacil/)|
|[Pharicode LLC](https://pharicode.com)|
|Philistin & Heller Group, Inc.|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[TestPros Inc.](https://www.testpros.com)|
|[The Cram Group LLC](https://aeccloud.com/)|
|[The Informatics Application Group Inc.](https://tiag.net)|
-|[The Porter Group, LLC](https://www.thepottergroupllc.com/)|
|[Thundercat Technology](https://www.thundercattech.com/)|
|[TIC Business Consultants, Ltd.](https://www.ticbiz.com/)|
|[Tier1, Inc.](https://www.tier1inc.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Novetta](https://www.novetta.com)|
|[PAX 8](https://www.pax8.com)|
|[Permuta Technologies, Inc.](http://www.permuta.com/)|
-|[Perspecta](https://perspecta.com)|
|[Planet Technologies, Inc.](https://go-planet.com)|
|[Progeny Systems](https://www.progeny.net/)|
|[Project Hosts](https://www.projecthosts.com)|
azure-maps How To Use Indoor Module Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md
The Azure Maps iOS SDK allows you to render indoor maps created in Azure Maps Creator services.
-> [!NOTE]
-> The iOS SDK will support *dynamic styling* in a future release, coming soon!
- ## Prerequisites
-1. Be sure to complete the steps in the [Quickstart: Create an iOS app](quick-ios-app.md). Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`.
-1. [Create a Creator resource](how-to-manage-creator.md)
-1. Get a `tilesetId` by completing the [tutorial for creating Indoor maps](tutorial-creator-indoor-maps.md). You'll use this identifier to render indoor maps with the Azure Maps iOS SDK.
+1. Complete the steps in the [Quickstart: Create an iOS app]. Code blocks in this article can be inserted into the `viewDidLoad` function of `ViewController`.
+1. A [Creator resource]
+1. Get a `tilesetId` by completing the [Tutorial: Use Creator to create indoor maps]. The tileset ID is used to render indoor maps with the Azure Maps iOS SDK.
## Instantiate the indoor manager
func indoorManager(
## Example
-The screenshot below shows the above code displaying an indoor map.
+The following screenshot shows the above code displaying an indoor map.
![A screenshot that displays an indoor map created using the above sample code.](./media/ios-sdk/indoor-maps/indoor.png)

## Additional information

-- [Creator for indoor maps](creator-indoor-maps.md)
-- [Drawing package requirements](drawing-requirements.md)
+- [Creator for indoor maps]
+- [Drawing package requirements]
+
+[Quickstart: Create an iOS app]: quick-ios-app.md
+[Creator resource]: how-to-manage-creator.md
+[Tutorial: Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
+[Creator for indoor maps]: creator-indoor-maps.md
+[Drawing package requirements]: drawing-requirements.md
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application. Previously updated : 03/22/2023 Last updated : 04/24/2023 # Application Insights overview
To understand the number of Application Insights resources required to cover you
## How do I use Application Insights?
-Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
+Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
Leave product feedback for the engineering team in the [Feedback Community](http
- [Autoinstrumentation overview](codeless-overview.md) - [Overview dashboard](overview-dashboard.md) - [Availability overview](availability-overview.md)-- [Application Map](app-map.md)
+- [Application Map](app-map.md)
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
description: Monitor ASP.NET Core web applications for availability, performance
ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/24/2023 # Application Insights for ASP.NET Core applications
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Title: Configure monitoring for ASP.NET with Azure Application Insights | Microsoft Docs description: Configure performance, availability, and user behavior analytics tools for your ASP.NET website hosted on-premises or in Azure. Previously updated : 03/22/2023 Last updated : 04/24/2023 ms.devlang: csharp
To add Application Insights to your ASP.NET website, you need to:
## Add Application Insights automatically
-This section will guide you through automatically adding Application Insights to a template-based ASP.NET web app. From within your ASP.NET web app project in Visual Studio:
+This section guides you through automatically adding Application Insights to a template-based ASP.NET web app. From within your ASP.NET web app project in Visual Studio:
1. Select **Project** > **Add Application Insights Telemetry** > **Application Insights Sdk (local)** > **Next** > **Finish** > **Close**. 2. Open the *ApplicationInsights.config* file.
This section will guide you through automatically adding Application Insights to
``` 4. Select **Project** > **Manage NuGet Packages** > **Updates**. Then update each `Microsoft.ApplicationInsights` NuGet package to the latest stable release.
-5. Run your application by selecting **IIS Express**. A basic ASP.NET app opens. As you browse through the pages on the site, telemetry will be sent to Application Insights.
+5. Run your application by selecting **IIS Express**. A basic ASP.NET app opens. As you browse through the pages on the site, telemetry is sent to Application Insights.
## Add Application Insights manually
-This section will guide you through manually adding Application Insights to a template-based ASP.NET web app. This section assumes that you're using a web app based on the standard MVC web app template for the ASP.NET Framework.
+This section guides you through manually adding Application Insights to a template-based ASP.NET web app. This section assumes that you're using a web app based on the standard MVC web app template for the ASP.NET Framework.
1. Add the following NuGet packages and their dependencies to your project:
This section will guide you through manually adding Application Insights to a te
2. In some cases, the *ApplicationInsights.config* file is created for you automatically. If the file is already present, skip to step 4.
- If it's not created automatically, you'll need to create it yourself. In the root directory of an ASP.NET application, create a new file called *ApplicationInsights.config*.
+ If it's not created automatically, you need to create it yourself. In the root directory of an ASP.NET application, create a new file called *ApplicationInsights.config*.
3. Copy the following XML configuration into your newly created file:
You have now successfully configured server-side application monitoring. If you
The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript snippet](javascript.md#snippet-based-setup) before the closing `</head>` tag of the page's HTML.
-Although it's possible to manually add the snippet to the header of each HTML page, we recommend that you instead add the snippet to a primary page. That action will inject the snippet into all pages of a site.
+Although it's possible to manually add the snippet to the header of each HTML page, we recommend that you instead add the snippet to a primary page. That action injects the snippet into all pages of a site.
For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [snippet-based setup instructions](javascript.md#snippet-based-setup) from the article about client-side JavaScript SDK configuration.
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability tests description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Previously updated : 03/22/2023 Last updated : 04/24/2023
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Previously updated : 11/15/2022 Last updated : 04/24/2023 ms.devlang: java
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
You can also run custom queries to divide Application Insights data to generate
### Use React Hooks
-[React Hooks](https://reactjs.org/docs/hooks-reference.html) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach.
+[React Hooks](https://react.dev/reference/react) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach.
#### Use React Context
-The React Hooks for Application Insights are designed to use [React Context](https://reactjs.org/docs/context.html) as a containing aspect for it. To use Context, initialize Application Insights, and then import the Context object:
+The React Hooks for Application Insights are designed to use [React Context](https://react.dev/learn/passing-data-deeply-with-context) as a containing aspect for it. To use Context, initialize Application Insights, and then import the Context object:
```javascript import React from "react";
When the Hook is used, a data payload can be provided to it to add more data to
### React error boundaries
-[Error boundaries](https://reactjs.org/docs/error-boundaries.html) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs.
+[Error boundaries](https://react.dev/reference/react/Component#catching-rendering-errors-with-an-error-boundary) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs.
```javascript import React from "react";
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights - Azure Monitor | Microsoft Docs description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems. Previously updated : 02/09/2023 Last updated : 04/24/2023
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs description: This article provides instructions on how to wire up OpenCensus Python with Azure Monitor. Previously updated : 03/22/2023 Last updated : 04/24/2023 ms.devlang: python
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
Title: Application Insights SDK support guidance
description: Support guidance for Application Insights legacy and preview SDKs Previously updated : 11/15/2022 Last updated : 04/24/2023
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
description: Application Insights SDK tutorial to monitor ASP.NET Core web appli
ms.devlang: csharp Previously updated : 03/22/2023 Last updated : 04/24/2023
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 01/24/2023 Last updated : 04/24/2023
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
na Previously updated : 11/08/2022 Last updated : 04/25/2023
Azure Monitor collects metrics from the following sources. After these metrics a
For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+## REST API
+Azure Monitor provides REST APIs that allow you to get data in and out of Azure Monitor Metrics.
+- **Custom metrics API** - [Custom metrics](./metrics-custom-overview.md) allow you to load your own metrics into the Azure Monitor Metrics database. Those metrics can then be used by the same analysis tools that process Azure Monitor platform metrics. A rough sketch of sending a custom metric follows this list.
+- **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md).
+- **Azure Monitor Metrics Data plane REST API** - [Azure Monitor Metrics data plane API](/rest/api/monitor/metrics-data-plane/) is a high-volume API designed for customers with large volume metrics queries. It's similar to the existing standard Azure Monitor Metrics REST API, but provides the capability to retrieve metric data for up to 50 resource IDs in the same subscription and region in a single batch API call. This improves query throughput and reduces the risk of throttling.
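For illustration, the following sketch posts one custom metric value with the custom metrics API. The regional ingestion endpoint, token audience, and payload shape shown here are assumptions based on the custom metrics documentation linked above, not values stated in this article, and `REGION` and `RESOURCE_ID` are placeholders for your own values.

```python
# Rough sketch: send a single custom metric value to the Azure Monitor custom
# metrics ingestion endpoint. The endpoint, token audience, and payload shape
# are assumptions based on the custom metrics documentation; REGION and
# RESOURCE_ID are placeholders.
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential

REGION = "eastus"  # region of the resource the metric is emitted against (placeholder)
RESOURCE_ID = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"

# Acquire a token for the custom metrics ingestion audience (assumed audience).
token = DefaultAzureCredential().get_token("https://monitoring.azure.com/.default").token

payload = {
    "time": datetime.now(timezone.utc).isoformat(),
    "data": {
        "baseData": {
            "metric": "QueueDepth",          # example custom metric name
            "namespace": "QueueProcessing",  # example custom metric namespace
            "dimNames": ["Queue"],
            "series": [
                {"dimValues": ["ImagesToProcess"], "min": 3, "max": 20, "sum": 28, "count": 3}
            ],
        }
    },
}

response = requests.post(
    f"https://{REGION}.monitoring.azure.com{RESOURCE_ID}/metrics",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(f"Custom metric accepted: {response.status_code}")
```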
++ ## Metrics Explorer Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](./rest-api-walkthrough.md).
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-active-directory.md
This step is only required if you didn't enable Azure Key Vault Provider for Sec
| Value | Description | |:|:| | `<CLUSTER-NAME>` | Name of your AKS cluster |
- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221103.1`<br>This is the remote write container image version. |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. |
| `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<APP-REGISTRATION -CLIENT-ID> ` | Client ID of your application | | `<TENANT-ID> ` | Tenant ID of the Azure Active Directory application |
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md
This step isn't required if you're using an AKS identity since it will already h
| Value | Description | |:|:| | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |
- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221103.1`<br>This is the remote write container image version. |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. |
| `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity | | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
Daily caps are typically used by organizations that are particularly cost consci
When data collection stops, you effectively have no monitoring of features and resources relying on that workspace. Instead of relying on the daily cap alone, you can [create an alert rule](#alert-when-daily-cap-is-reached) to notify you when data collection reaches some level before the daily cap. Notification allows you to address any increases before data collection shuts down, or even to temporarily disable collection for less critical resources. ## Application Insights
-You shouldn't create a daily cap for workspace-based Application Insights resources but instead create a daily cap for their workspace. You do need to create a separate daily cap for any classic Application Insights resources since their data doesn't reside in a Log Analytics workspace.
+You should configure the daily cap setting for both Application Insights and Log Analytics to limit the amount of telemetry data ingested by your service. For workspace-based Application Insights resources, the effective daily cap is the minimum of the two settings. For classic Application Insights resources, only the Application Insights daily cap applies since their data doesn't reside in a Log Analytics workspace.
> [!TIP] > If you're concerned about the amount of billable data collected by Application Insights, you should configure [sampling](../app/sampling.md) to tune its data volume to the level you want. Use the daily cap as a safety method in case your application unexpectedly begins to send much higher volumes of telemetry. The maximum cap for an Application Insights classic resource is 1,000 GB/day unless you request a higher maximum for a high-traffic application. When you create a resource in the Azure portal, the daily cap is set to 100 GB/day. When you create a resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production.
-We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day.
+> [!NOTE]
+> If you use connection strings to send data to Application Insights through [regional ingestion endpoints](../app/ip-addresses.md#outgoing-ports), the Application Insights and Log Analytics daily cap settings apply per region. If you use only an instrumentation key (ikey) to send data to Application Insights through the [global ingestion endpoint](../app/ip-addresses.md#outgoing-ports), the Application Insights daily cap setting might not apply across regions, but the Log Analytics daily cap setting still applies.
+We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog had instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day.
## Determine your daily cap To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
reviewer: cweining Previously updated : 01/24/2023 Last updated : 04/24/2023
azure-netapp-files Application Volume Group Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-considerations.md
na Previously updated : 01/11/2022 Last updated : 04/25/2023 # Requirements and considerations for application volume group for SAP HANA
This article describes the requirements and considerations you need to be aware
Application volume group for SAP HANA creates multiple IP addresses, up to six for larger-sized estates. Ensure that the delegated subnet has sufficient free IP addresses. It's recommended that you use a delegated subnet with a minimum of 59 IP addresses and a subnet size of /26. See [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
+>[!IMPORTANT]
+>Using application volume group for SAP HANA with applications other than SAP HANA isn't supported. Contact your Azure NetApp Files specialist for guidance on using Azure NetApp Files multi-volume layouts with other database applications.
+ ## Best practices about proximity placement groups To deploy SAP HANA volumes using the application volume group, you need to use your HANA database VMs as an anchor for a proximity placement group (PPG). It's recommended that you create an availability set per database and use the **[SAP HANA VM pinning request form](https://aka.ms/HANAPINNING)** to pin the availability set to a dedicated compute cluster. After pinning, you need to add a PPG to the availability set and then deploy all hosts of an SAP HANA database using that availability set. Doing so ensures that all virtual machines are at the same location. If the virtual machines are started, the PPG has its anchor.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 03/16/2023 Last updated : 04/25/2023 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Attach Azure NetApp Files to Azure VMware Solution VMs - Guest OS Mounts](../azure-vmware/netapp-files-with-azure-vmware-solution.md) * [Disaster Recovery with Azure NetApp Files, JetStream DR and Azure VMware Solution](../azure-vmware/deploy-disaster-recovery-using-jetstream.md#disaster-recovery-with-azure-netapp-files-jetstream-dr-and-azure-vmware-solution) * [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) - Jetstream
+* [Enable App Volume Replication for Horizon VDI on Azure VMware Solution using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-migration-and/enable-app-volume-replication-for-horizon-vdi-on-azure-vmware/ba-p/3798178)
## Virtual Desktop Infrastructure solutions
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md)
- [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).
+ [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
The public preview of v5.1 brings the following new capabilities to AzAcSnap: * Oracle Database support
Azure NetApp Files is updated regularly. This article provides a summary about t
* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
- AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
+ AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
* [Support for capacity pool billing tags](manage-billing-tags.md)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
- Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).
+ Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
AzAcSnap leverages the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits:
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
## Next steps -- For a list of the Bicep date types, see [Data types](./data-types.md).
+- For a list of the Bicep data types, see [Data types](./data-types.md).
azure-resource-manager Deploy Marketplace App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-marketplace-app-quickstart.md
+
+ Title: Deploy an Azure Marketplace managed application
+description: Describes how to deploy an Azure Marketplace managed application using Azure portal.
+++ Last updated : 04/25/2023++
+# Quickstart: Deploy an Azure Marketplace managed application
+
+In this quickstart, you deploy an Azure Marketplace managed application and verify the resource deployments in Azure. A Marketplace managed application publisher charges a fee to maintain the application, and during the deployment, the publisher is given permissions to your application's managed resource group. As a customer, you have limited access to the deployed resources, but can delete the managed application from your Azure subscription.
+
+To avoid unnecessary costs for the managed application's Azure resources, go to [clean up resources](#clean-up-resources) when you're finished.
+
+## Prerequisites
+
+An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
+
+## Find a managed application
+
+To get a managed application from the Azure portal, use the following steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for _Marketplace_ and select it from the available options. Or if you've recently used **Marketplace**, select it from the list.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-marketplace.png" alt-text="Screenshot of the Azure portal home page to search for Marketplace or select it from the list of Azure services.":::
+
+1. On the **Marketplace** page, search for _Microsoft community training_.
+1. Select **Microsoft Community Training (Preview)**.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-marketplace-app.png" alt-text="Screenshot of the Azure Marketplace that shows the managed application to select for deployment.":::
+
+1. Select the **Basic** plan and then select **Create**.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/select-plan.png" alt-text="Screenshot that shows the Basic plan is selected and the create button is highlighted.":::
+
+## Deploy the managed application
+
+1. On the **Basics** tab, enter the required information.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-basics.png" alt-text="Screenshot that shows the form's Basics tab to deploy the managed application.":::
+
+ - **Subscription**: Select your Azure subscription.
+ - **Resource group**: Create a new resource group. For this example use _demo-marketplace-app_.
+ - **Region**: Select a region, like _West US_.
+ - **Application Name**: Enter a name, like _demotrainingapp_.
+ - **Managed Resource Group**: Use the default name for this example. The format is `mrg-microsoft-community-training-<dateTime>`. But you can change the name if you want.
+
+1. Select **Next: Setup your portal**.
+1. On the **Setup your portal** tab, enter the required information.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-setup.png" alt-text="Screenshot that shows the form's Setup your portal tab to deploy the managed application.":::
+
+ - **Website name**: Enter a name that meets the criteria specified on the form, like _demotrainingsite_. Your website name should be globally unique across Azure.
+ - **Organization name**: Enter your organization's name.
+ - **Contact email addresses**: Enter at least one valid email address.
+
+1. Select **Next: Setup your login type**.
+1. On the **Setup your login type** tab, enter the required information.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/portal-setup-login.png" alt-text="Screenshot that shows the form's Setup your login type tab to deploy the managed application.":::
+
+ - **Login type**: For this example, select **Mobile**.
+ - **Org admin's mobile number**: Enter a valid mobile phone number including the country/region code, in the format _+1 1234567890_. The phone number is used to sign in to the training site.
+
+1. Select **Next: Review + create**.
+1. After **Validation passed** is displayed, verify the information is correct.
+1. Read **Co-Admin Access Permission** and check the box to agree to the terms.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/create-app.png" alt-text="Screenshot that shows the validation passed, the co-admin permission box is selected, and create button is highlighted.":::
+
+1. Select **Create**.
+
+The deployment begins, and because many resources are created, it takes about 20 minutes to finish. You can verify the Azure deployments before the website becomes available.
+
+## Verify the managed application deployment
+
+After the managed application deployment is finished, you can verify the resources.
+
+1. Go to resource group **demo-marketplace-app** and select the managed application.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/app-resource-group.png" alt-text="Screenshot of the resource group where the managed application is installed that highlights the application name.":::
+
+1. Select the **Overview** tab to display the managed application and link to the managed resource group.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/managed-app.png" alt-text="Screenshot of the managed application that highlights the link to the managed resource group.":::
+
+1. The managed resource group shows the resources that were deployed and the deployments that created the resources.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/mrg-apps.png" alt-text="Screenshot of the managed resource group that highlights the deployments and list of deployed resources.":::
+
+1. To review the publisher's permissions in the managed resource group, select **Access Control (IAM)** > **Role assignments**.
+
+ You can also verify the **Deny assignments**.
+
+For this example, the website's availability isn't necessary. The article's purpose is to show how to deploy an Azure Marketplace managed application and verify the resources. To avoid unnecessary costs, go to [clean up resources](#clean-up-resources) when you're finished.
+
+### Launch the website (optional)
+
+After the deployment is complete, you can go to the App Service resource in the managed resource group and launch your website.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/app-service.png" alt-text="Screenshot of the App Service with the website link highlighted.":::
+
+The site might respond with a page that says the deployment is still processing.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/deployment-message.png" alt-text="Screenshot that shows the website deployment is in progress.":::
+
+When your website is available, a default sign-in page is displayed. You can sign in with the mobile phone number that you used during the deployment, and you'll receive a text message confirmation. When you're finished, be sure to sign out of your training website.
+
+## Clean up resources
+
+When you're finished with the managed application, you can delete the resource groups, which removes all the Azure resources you created. For example, in this quickstart you created the resource group _demo-marketplace-app_ and a managed resource group with the prefix _mrg-microsoft-community-training_.
+
+When you delete the **demo-marketplace-app** resource group, the managed application, managed resource group, and all the Azure resources are deleted.
+
+1. Go to the **demo-marketplace-app** resource group and select **Delete resource group**.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/delete-resource-group.png" alt-text="Screenshot of the highlighted delete resource group button.":::
+
+1. To confirm the deletion, enter the resource group name and select **Delete**.
+
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/confirm-delete-resource-group.png" alt-text="Screenshot that shows the delete resource group confirmation.":::
++
+## Next steps
+
+- To learn how to create and publish the definition files for a managed application, go to [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
+- To learn how to deploy a managed application, go to [Quickstart: Deploy a service catalog managed application](deploy-service-catalog-quickstart.md).
+- To use your own storage to create and publish the definition files for a managed application, go to [Quickstart: Bring your own storage to create and publish an Azure Managed Application definition](publish-service-catalog-bring-your-own-storage.md).
azure-resource-manager Deploy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-python.md
+
+ Title: Deploy resources with Python and template
+description: Use Azure Resource Manager and Python to deploy resources to Azure. The resources are defined in an Azure Resource Manager template.
+ Last updated : 04/24/2023+++
+# Deploy resources with ARM templates and Python
+
+This article explains how to use Python with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md).
++
+## Prerequisites
+
+* A template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo.
+
+* Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/).
+
+* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}`.
+ * azure-identity
+ * azure-mgmt-resource
+
+ If you have older versions of these packages already installed in your virtual environment, you may need to update them with `pip install --upgrade {package-name}`.
+
+* The examples in this article use CLI-based authentication (`AzureCliCredential`). Depending on your environment, you may need to run `az login` first to authenticate.
++
+## Deployment scope
+
+You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different methods.
+
+* To deploy to a **resource group**, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update):
+
+* To deploy to a **subscription**, use [ResourceManagementClient.deployments.begin_create_or_update_at_subscription_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-subscription-scope) (a minimal sketch follows this list):
+
+ For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).
+
+* To deploy to a **management group**, use [ResourceManagementClient.deployments.begin_create_or_update_at_management_group_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-management-group-scope).
+
+ For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
+
+* To deploy to a **tenant**, use [ResourceManagementClient.deployments.begin_create_or_update_at_tenant_scope](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update-at-tenant-scope).
+
+ For more information about tenant level deployments, see [Create resources at the tenant level](deploy-to-tenant.md).
+
+For every scope, the user deploying the template must have the required permissions to create resources.
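For example, here's a minimal sketch of a subscription-level deployment. It assumes a local template named `subscription-deploy.json` that targets the subscription scope and uses CLI-based authentication like the other examples in this article; the template name and location are placeholders. Unlike a resource group deployment, the deployment itself needs a location for its metadata.

```python
# Minimal sketch: deploy an ARM template at subscription scope.
# Assumptions: a local template named subscription-deploy.json that defines
# subscription-scope resources, and CLI-based authentication (az login).
import os
import json
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import DeploymentMode

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

resource_client = ResourceManagementClient(credential, subscription_id)

with open("subscription-deploy.json", "r") as template_file:
    template_body = json.load(template_file)

# Subscription-level deployments store their metadata in a location you choose.
deployment_result = resource_client.deployments.begin_create_or_update_at_subscription_scope(
    "exampleSubscriptionDeployment",
    {
        "location": "westus",
        "properties": {
            "template": template_body,
            "mode": DeploymentMode.incremental
        }
    }
).result()

print(f"Provisioned deployment: {deployment_result.id}")
```

The management group and tenant scope methods follow a similar pattern with their own required scope identifiers.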
+
+## Deployment name
+
+When deploying an ARM template, you can give the deployment a name. This name can help you retrieve the deployment from the deployment history. If you don't provide a name for the deployment, the name of the template file is used. For example, if you deploy a template named `azuredeploy.json` and don't specify a deployment name, the deployment is named `azuredeploy`.
+
+Every time you run a deployment, an entry is added to the resource group's deployment history with the deployment name. If you run another deployment and give it the same name, the earlier entry is replaced with the current deployment. If you want to maintain unique entries in the deployment history, give each deployment a unique name.
+
+To create a unique name, you can assign a random number.
+
+```python
+import random
+
+suffix = random.randint(1, 1000)
+deployment_name = f"ExampleDeployment{suffix}"
+```
+
+Or, add a date value.
+
+```python
+from datetime import datetime
+
+today = datetime.now().strftime("%m-%d-%Y")
+deployment_name = f"ExampleDeployment{today}"
+```
+
+If you run concurrent deployments to the same resource group with the same deployment name, only the last deployment is completed. Any deployments with the same name that haven't finished are replaced by the last deployment. For example, if you run a deployment named `newStorage` that deploys a storage account named `storage1`, and at the same time run another deployment named `newStorage` that deploys a storage account named `storage2`, you deploy only one storage account. The resulting storage account is named `storage2`.
+
+However, if you run a deployment named `newStorage` that deploys a storage account named `storage1`, and immediately after it completes you run another deployment named `newStorage` that deploys a storage account named `storage2`, then you have two storage accounts. One is named `storage1`, and the other is named `storage2`. But, you only have one entry in the deployment history.
+
+When you specify a unique name for each deployment, you can run them concurrently without conflict. If you run a deployment named `newStorage1` that deploys a storage account named `storage1`, and at the same time run another deployment named `newStorage2` that deploys a storage account named `storage2`, then you have two storage accounts and two entries in the deployment history.
+
+To avoid conflicts with concurrent deployments and to ensure unique entries in the deployment history, give each deployment a unique name.
+
+## Deploy local template
+
+You can deploy a template from your local machine or one that is stored externally. This section describes deploying a local template.
+
+If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parentheses. It can be up to 90 characters. The name can't end in a period.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.create_or_update(
+ "exampleGroup",
+ {
+ "location": "Central US"
+ }
+)
+
+print(f"Provisioned resource group with ID: {rg_result.id}")
+```
+
+To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example requires a local template named `storage.json`.
+
+```python
+import os
+import json
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+with open("storage.json", "r") as template_file:
+ template_body = json.load(template_file)
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ "exampleGroup",
+ "exampleDeployment",
+ {
+ "properties": {
+ "template": template_body,
+ "parameters": {
+ "storagePrefix": {
+ "value": "demostore"
+ },
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+The deployment can take several minutes to complete.
+
+## Deploy remote template
+
+Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization.
+
+If you're deploying to a resource group that doesn't exist, create the resource group. The name of the resource group can only include alphanumeric characters, periods, underscores, hyphens, and parentheses. It can be up to 90 characters. The name can't end in a period.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+rg_result = resource_client.resource_groups.create_or_update(
+ "exampleGroup",
+ {
+ "location": "Central US"
+ }
+)
+
+print(f"Provisioned resource group with ID: {rg_result.id}")
+```
+
+To deploy an ARM template, use [ResourceManagementClient.deployments.begin_create_or_update](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.deploymentsoperations#azure-mgmt-resource-resources-v2022-09-01-operations-deploymentsoperations-begin-create-or-update). The following example deploys a [remote template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-account-create). That template creates a storage account.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "exampleGroup"
+location = "westus"
+template_uri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ resource_group_name,
+ "exampleDeployment",
+ {
+ "properties": {
+ "templateLink": {
+ "uri": template_uri
+ },
+ "parameters": {
+ "location": {
+ "value": location
+ }
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+The preceding example requires a publicly accessible URI for the template, which works for most scenarios because your template shouldn't include sensitive data. If you need to specify sensitive data (like an admin password), pass that value as a secure parameter. If you keep your templates in a storage account that doesn't allow anonymous access, you need to provide a SAS token.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+sas_token = os.environ["SAS_TOKEN"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+resource_group_name = "exampleGroup"
+location = "westus"
+template_uri = f"https://stage20230425.blob.core.windows.net/templates/storage.json?{sas_token}"
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ resource_group_name,
+ "exampleDeployment",
+ {
+ "properties": {
+ "templateLink": {
+ "uri": template_uri
+ },
+ "parameters": {
+ "location": {
+ "value": location
+ }
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+For more information, see [Use relative path for linked templates](./linked-templates.md#linked-template).
+
+## Deploy template spec
+
+Instead of deploying a local or remote template, you can create a [template spec](template-specs.md). The template spec is a resource in your Azure subscription that contains an ARM template. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec.
+
+The following examples show how to create and deploy a template spec.
+
+First, create the template spec by providing the ARM template.
+
+```python
+import os
+import json
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource.templatespecs import TemplateSpecsClient
+from azure.mgmt.resource.templatespecs.models import TemplateSpecVersion, TemplateSpec
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+template_specs_client = TemplateSpecsClient(credential, subscription_id)
+
+template_spec = TemplateSpec(
+ location="westus2",
+ description="Storage Spec"
+)
+
+template_specs_client.template_specs.create_or_update(
+ "templateSpecsRG",
+ "storageSpec",
+ template_spec
+)
+
+with open("storage.json", "r") as template_file:
+ template_body = json.load(template_file)
+
+version = TemplateSpecVersion(
+ location="westus2",
+ description="Storage Spec",
+ main_template=template_body
+)
+
+template_spec_result = template_specs_client.template_spec_versions.create_or_update(
+ "templateSpecsRG",
+ "storageSpec",
+ "1.0.0",
+ version
+)
+
+print(f"Provisioned template spec with ID: {template_spec_result.id}")
+```
+
+Then, get the ID of the template spec and deploy it.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import DeploymentMode
+from azure.mgmt.resource.templatespecs import TemplateSpecsClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+template_specs_client = TemplateSpecsClient(credential, subscription_id)
+
+template_spec = template_specs_client.template_spec_versions.get(
+ "templateSpecsRg",
+ "storageSpec",
+ "1.0.0"
+)
+
+rg_deployment_result = resource_client.deployments.begin_create_or_update(
+ "exampleGroup",
+ "exampleDeployment",
+ {
+ "properties": {
+ "template_link": {
+ "id": template_spec.id
+ },
+ "mode": DeploymentMode.incremental
+ }
+ }
+)
+```
+
+For more information, see [Azure Resource Manager template specs](template-specs.md).
+
+## Preview changes
+
+Before deploying your template, you can preview the changes the template will make to your environment. Use the [what-if operation](./deploy-what-if.md) to verify that the template makes the changes that you expect. What-if also validates the template for errors.
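As a rough sketch under the same assumptions as the earlier examples (a local `storage.json` template and CLI-based authentication), you can run the what-if operation from Python and inspect the proposed changes before deploying. The request shape below assumes what-if accepts the same properties as `begin_create_or_update`.

```python
# Rough sketch: preview changes with the what-if operation before deploying.
# Assumptions: the local storage.json template from the earlier examples and
# CLI-based authentication (az login).
import os
import json
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import DeploymentMode

credential = AzureCliCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

resource_client = ResourceManagementClient(credential, subscription_id)

with open("storage.json", "r") as template_file:
    template_body = json.load(template_file)

what_if_result = resource_client.deployments.begin_what_if(
    "exampleGroup",
    "exampleDeployment",
    {
        "properties": {
            "template": template_body,
            "parameters": {
                "storagePrefix": {"value": "demostore"}
            },
            "mode": DeploymentMode.incremental
        }
    }
).result()

# Each entry describes a resource and how the deployment would change it.
for change in what_if_result.changes:
    print(f"{change.change_type}: {change.resource_id}")
```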
+
+## Next steps
+
+- To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md).
+- To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md).
+- To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](./syntax.md).
+- For information about deploying a template that requires a SAS token, see [Deploy private ARM template with SAS token](secure-template-with-sas-token.md).
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 04/17/2023 Last updated : 04/25/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
## April 2023
+### Resource Health support
+
+Azure Video Indexer is now integrated with Azure Resource Health, enabling you to see the health and availability of each of your Video Indexer resources and, if needed, helping you diagnose and solve problems. You can also set alerts to be notified when your resources are affected. For more information, see [Azure Resource Health overview](../service-health/resource-health-overview.md).
+ ### The animation character recognition model has been retired The **animation character recognition** model was retired on March 1, 2023. For any related issues, [open a support ticket via the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 4/20/2023 Last updated : 4/24/2023 # What's new in Azure VMware Solution
Introducing Run Commands for VMware HCX on Azure VMware Solution. You can use th
All new Azure VMware Solution private clouds are being deployed with VMware NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023.
-**HCX Enterprise Edition - Default**
+**VMware HCX Enterprise Edition - Default**
VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html), like Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
-**Log analytics - monitor Azure VMware Solution**
+**Azure Log Analytics - monitor Azure VMware Solution**
The data in Azure Log Analytics offers insights into issues when you search it by using the Kusto Query Language.
All new Azure VMware Solution private clouds are now deployed with NSX-T Data Ce
You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
-For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
+For more information on this NSX-T Data Center version, see [VMware NSX-T Data Center 3.1.1 Release Notes](https://docs.vmware.com/en/VMware-NSX/3.1/rn/VMware-NSX-T-Data-Center-311-Release-Notes.html).
## May 2021
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 04/15/2023 Last updated : 04/25/2023
When you restore from recovery point in Archive tier in primary region, the reco
The recovery points for Virtual Machines meet the eligibility criteria, so there are archivable recovery points. However, the churn in the Virtual Machine may be low, so there are no recommendations. In this scenario, you can still move the archivable recovery points to the Archive tier, but it may increase the overall backup storage costs.
-### I have stopped protection and retained data for my workload. Can I move the recovery points to archive tier?
-
-No. Once protection is stopped for a particular workload, the corresponding recovery points can't be moved to the archive tier. To move recovery points to archive tier, you need to resume the protection on the data source.
- ### How do I ensure that all recovery points are moved to Archive tier, if moved via Azure portal? To ensure that all recovery points are moved to Archive tier,
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup
description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 03/27/2023 Last updated : 04/21/2023
This section provides the set of Azure CLI commands to perform create, update, o
To install the Backup Extension, run the following command: ```azurecli-interactive
- az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid
+ az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid>
``` ### View Backup Extension installation status
To install the Backup Extension, run the following command:
To view the progress of Backup Extension installation, use the following command: ```azurecli-interactive
- az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
+ az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg>
``` ### Update resources in Backup Extension
To view the progress of Backup Extension installation, use the following command
To update blob container, CPU, and memory in the Backup Extension, use the following command: ```azurecli-interactive
- az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi]
+ az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg> --release-train stable --configuration-settings [blobContainer=<containername> storageAccount=<storageaccountname> storageAccountResourceGroup=<storageaccountrg> storageAccountSubscriptionId=<subscriptionid>] [cpuLimit=1] [memoryLimit=1Gi]
[]: denotes the 3 different sub-groups of updates possible (discard the brackets while using the command)
To update blob container, CPU, and memory in the Backup Extension, use the follo
To stop the Backup Extension install operation, use the following command: ```azurecli-interactive
- az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
+ az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name <aksclustername> --resource-group <aksclusterrg>
``` ### Grant permission on storage account
To stop the Backup Extension install operation, use the following command:
To provide *Storage Account Contributor Permission* to the Extension Identity on storage account, run the following command: ```azurecli-interactive
- az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name <aksclustername> --resource-group <aksclusterrg> --cluster-type managedClusters --query identity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/<subscriptionid>/resourceGroups/<storageaccountrg>/providers/Microsoft.Storage/storageAccounts/<storageaccountname>
```
To enable Trusted Access between Backup vault and AKS cluster, use the following
```azurecli-interactive az aks trustedaccess rolebinding create \
- -g $myResourceGroup \
- --cluster-name $myAKSCluster
- ΓÇôn <randomRoleBindingName> \
- -s <vaultID> \
+ --resource-group <backupvaultrg> \
+ --cluster-name <aksclustername> \
+ --name <randomRoleBindingName> \
+ --source-resource-id /subscriptions/<subscriptionid>/resourcegroups/<backupvaultrg>/providers/Microsoft.DataProtection/BackupVaults/<backupvaultname> \
--roles Microsoft.DataProtection/backupVaults/backup-operator- ``` Learn more about [other commands related to Trusted Access](../aks/trusted-access-feature.md#trusted-access-feature-overview).
Learn more about [other commands related to Trusted Access](../aks/trusted-acces
- [Back up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup.md) - [Restore Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-restore.md)-- [Supported scenarios for backing up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup-support-matrix.md)
+- [Supported scenarios for backing up Azure Kubernetes Service cluster (preview)](azure-kubernetes-service-cluster-backup-support-matrix.md)
backup Backup Azure Diagnostic Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostic-events.md
Title: Use diagnostics settings for Recovery Services vaults description: 'This article describes how to use the old and new diagnostics events for Azure Backup.'- Previously updated : 03/31/2023+ Last updated : 04/18/2023 + # Use diagnostics settings for Recovery Services vaults
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 03/02/2023 Last updated : 04/25/2023 + # What's new in Microsoft Azure Backup Server (MABS)?
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
This article shows you how to configure Azure Bastion to use Kerberos authentica
## Considerations
-* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only.
+* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only, not with the native client.
* VMs migrated from on-premises to Azure are not currently supported for Kerberos.  * Cross-realm authentication is not currently supported for Kerberos.  * Changes to DNS server are not currently supported for Kerberos. After making any changes to DNS server, you will need to delete and re-create the Bastion resource.
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations
-* Shareable Links isn't currently supported for peered VNets that aren't in the same subscription.
* Shareable Links isn't currently supported for peered VNets across tenants.
-* Shareable Links isn't currently supported for peered VNets that aren't in the same region.
* Shareable Links isn't supported for national clouds during preview. * The Standard SKU is required for this feature.
cdn Cdn Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md
Title: Manage Azure CDN with PowerShell | Microsoft Docs description: Use this tutorial to learn how to use PowerShell to manage aspects of your Azure Content Delivery Network endpoint profiles and endpoints. - Previously updated : 02/27/2023 Last updated : 04/24/2023 - # Manage Azure CDN with PowerShell
PS C:\> Get-Command -Module Az.Cdn
CommandType Name Version Source -- - -
-Cmdlet Confirm-AzCdnEndpointProbeURL 1.4.0 Az.Cdn
-Cmdlet Disable-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet Disable-AzCdnCustomDomainHttps 1.4.0 Az.Cdn
-Cmdlet Enable-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet Enable-AzCdnCustomDomainHttps 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnEdgeNode 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnEndpointNameAvailability 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnEndpointResourceUsage 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnOrigin 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnProfile 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnProfileResourceUsage 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnProfileSsoUrl 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnProfileSupportedOptimizationType 1.4.0 Az.Cdn
-Cmdlet Get-AzCdnSubscriptionResourceUsage 1.4.0 Az.Cdn
-Cmdlet New-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet New-AzCdnDeliveryPolicy 1.4.0 Az.Cdn
-Cmdlet New-AzCdnDeliveryRule 1.4.0 Az.Cdn
-Cmdlet New-AzCdnDeliveryRuleAction 1.4.0 Az.Cdn
-Cmdlet New-AzCdnDeliveryRuleCondition 1.4.0 Az.Cdn
-Cmdlet New-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet New-AzCdnProfile 1.4.0 Az.Cdn
-Cmdlet Publish-AzCdnEndpointContent 1.4.0 Az.Cdn
-Cmdlet Remove-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet Remove-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet Remove-AzCdnProfile 1.4.0 Az.Cdn
-Cmdlet Set-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet Set-AzCdnOrigin 1.4.0 Az.Cdn
-Cmdlet Set-AzCdnProfile 1.4.0 Az.Cdn
-Cmdlet Start-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet Stop-AzCdnEndpoint 1.4.0 Az.Cdn
-Cmdlet Test-AzCdnCustomDomain 1.4.0 Az.Cdn
-Cmdlet Unpublish-AzCdnEndpointContent 1.4.0 Az.Cdn
+Cmdlet Confirm-AzCdnEndpointProbeURL 2.1.0 Az.Cdn
+Cmdlet Disable-AzCdnCustomDomain 2.1.0 Az.Cdn
+Cmdlet Disable-AzCdnCustomDomainHttps 2.1.0 Az.Cdn
+Cmdlet Enable-AzCdnCustomDomain 2.1.0 Az.Cdn
+Cmdlet Enable-AzCdnCustomDomainHttps 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnCustomDomain 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnEdgeNode 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnEndpoint 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnEndpointResourceUsage 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnOrigin 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnProfile 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnProfileResourceUsage 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnProfileSupportedOptimizationType 2.1.0 Az.Cdn
+Cmdlet Get-AzCdnSubscriptionResourceUsage 2.1.0 Az.Cdn
+Cmdlet New-AzCdnCustomDomain 2.1.0 Az.Cdn
+Cmdlet New-AzCdnDeliveryPolicy 2.1.0 Az.Cdn
+Cmdlet New-AzCdnDeliveryRule 2.1.0 Az.Cdn
+Cmdlet New-AzCdnDeliveryRuleAction 2.1.0 Az.Cdn
+Cmdlet New-AzCdnDeliveryRuleCondition 2.1.0 Az.Cdn
+Cmdlet New-AzCdnEndpoint 2.1.0 Az.Cdn
+Cmdlet New-AzCdnProfile 2.1.0 Az.Cdn
+Cmdlet Remove-AzCdnCustomDomain 2.1.0 Az.Cdn
+Cmdlet Remove-AzCdnEndpoint 2.1.0 Az.Cdn
+Cmdlet Remove-AzCdnProfile 2.1.0 Az.Cdn
+Cmdlet Set-AzCdnProfile 2.1.0 Az.Cdn
+Cmdlet Start-AzCdnEndpoint 2.1.0 Az.Cdn
+Cmdlet Stop-AzCdnEndpoint 2.1.0 Az.Cdn
``` ## Getting help
DESCRIPTION
RELATED LINKS
+ https://docs.microsoft.com/powershell/module/az.cdn/get-azcdnprofile
REMARKS To see the examples, type: "get-help Get-AzCdnProfile -examples". For more information, type: "get-help Get-AzCdnProfile -detailed". For technical information, type: "get-help Get-AzCdnProfile -full".-
+ For online help, type: "get-help Get-AzCdnProfile -online"
``` ## Listing existing Azure CDN profiles
This output can be piped to cmdlets for enumeration.
```powershell # Output the name of all profiles on this subscription. Get-AzCdnProfile | ForEach-Object { Write-Host $_.Name }-
-# Return only **Azure CDN from Verizon** profiles.
-Get-AzCdnProfile | Where-Object { $_.Sku.Name -eq "Standard_Verizon" }
``` You can also return a single profile by specifying the profile name and resource group.
Get-AzCdnProfile -ProfileName CdnDemo -ResourceGroupName CdnDemoRG
> [!TIP] > It is possible to have multiple CDN profiles with the same name, so long as they are in different resource groups. Omitting the `ResourceGroupName` parameter returns all profiles with a matching name. >
->
## Listing existing CDN endpoints `Get-AzCdnEndpoint` can retrieve an individual endpoint or all the endpoints on a profile.
Get-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointNam
# Get all of the endpoints on a given profile. Get-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG-
-# Return all of the endpoints on all of the profiles.
-Get-AzCdnProfile | Get-AzCdnEndpoint
-
-# Return all of the endpoints in this subscription that are currently running.
-Get-AzCdnProfile | Get-AzCdnEndpoint | Where-Object { $_.ResourceState -eq "Running" }
``` ## Creating CDN profiles and endpoints
Get-AzCdnProfile | Get-AzCdnEndpoint | Where-Object { $_.ResourceState -eq "Runn
```powershell # Create a new profile
-New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Akamai -Location "Central US"
+New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Microsoft -Location "Central US"
# Create a new endpoint
-New-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Location "Central US" -EndpointName cdnposhdoc -OriginName "Contoso" -OriginHostName "www.contoso.com"
-
-# Create a new profile and endpoint (same as above) in one line
-New-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Sku Standard_Akamai -Location "Central US" | New-AzCdnEndpoint -EndpointName cdnposhdoc -OriginName "Contoso" -OriginHostName "www.contoso.com"
+$origin = @{
+ Name = "Contoso"
+ HostName = "www.contoso.com"
+};
-```
-
-## Checking endpoint name availability
-`Get-AzCdnEndpointNameAvailability` returns an object indicating if an endpoint name is available.
-
-```powershell
-# Retrieve availability
-$availability = Get-AzCdnEndpointNameAvailability -EndpointName "cdnposhdoc"
-
-# If available, write a message to the console.
-If($availability.NameAvailable) { Write-Host "Yes, that endpoint name is available." }
-Else { Write-Host "No, that endpoint name is not available." }
+New-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Location "Central US" -EndpointName cdnposhdoc -Origin $origin
``` ## Adding a custom domain
Else { Write-Host "No, that endpoint name is not available." }
> ```powershell
-# Get an existing endpoint
-$endpoint = Get-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc
-
-# Check the mapping
-$result = Test-AzCdnCustomDomain -CdnEndpoint $endpoint -CustomDomainHostName "cdn.contoso.com"
- # Create the custom domain on the endpoint
-If($result.CustomDomainValidated){ New-AzCdnCustomDomain -CustomDomainName Contoso -HostName "cdn.contoso.com" -CdnEndpoint $endpoint }
+New-AzCdnCustomDomain -ResourceGroupName CdnDemoRG -ProfileName CdnPoshDemo -Name contoso -HostName "cdn.contoso.com" -EndpointName cdnposhdoc
``` ## Modifying an endpoint
-`Set-AzCdnEndpoint` modifies an existing endpoint.
+`Update-AzCdnEndpoint` modifies an existing endpoint.
```powershell
-# Get an existing endpoint
-$endpoint = Get-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc
-
-# Set up content compression
-$endpoint.IsCompressionEnabled = $true
-$endpoint.ContentTypesToCompress = "text/javascript","text/css","application/json"
-
-# Save the changed endpoint and apply the changes
-Set-AzCdnEndpoint -CdnEndpoint $endpoint
+# Update endpoint with compression settings
+Update-AzCdnEndpoint -Name cdnposhdoc -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -IsCompressionEnabled -ContentTypesToCompress "text/javascript","text/css","application/json"
```
-## Purging/Pre-loading CDN assets
-`Unpublish-AzCdnEndpointContent` purges cached assets, while `Publish-AzCdnEndpointContent` pre-loads assets on supported endpoints.
+## Purging
+`Clear-AzCdnEndpointContent` purges cached assets.
```powershell # Purge some assets.
-Unpublish-AzCdnEndpointContent -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo -PurgeContent "/images/kitten.png","/video/rickroll.mp4"
+Clear-AzCdnEndpointContent -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc -ContentFilePath @("/images/kitten.png","/video/rickroll.mp4")
+```
+
+## Pre-load some assets
-# Pre-load some assets.
-Publish-AzCdnEndpointContent -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo -LoadContent "/images/kitten.png","/video/rickroll.mp4"
+> [!NOTE]
+> Pre-loading is only available on Azure CDN from Verizon profiles.
-# Purge everything in /images/ on all endpoints.
-Get-AzCdnProfile | Get-AzCdnEndpoint | Unpublish-AzCdnEndpointContent -PurgeContent "/images/*"
+`Import-AzCdnEndpointContent` pre-loads assets into the CDN cache.
+
+```powershell
+Import-AzCdnEndpointContent -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc -ContentFilePath @("/images/kitten.png","/video/rickroll.mp4")
``` ## Starting/Stopping CDN endpoints `Start-AzCdnEndpoint` and `Stop-AzCdnEndpoint` can be used to start and stop individual endpoints or groups of endpoints. ```powershell
-# Stop the cdndocdemo endpoint
-Stop-AzCdnEndpoint -ProfileName CdnDemo -ResourceGroupName CdnDemoRG -EndpointName cdndocdemo
-
-# Stop all endpoints
-Get-AzCdnProfile | Get-AzCdnEndpoint | Stop-AzCdnEndpoint
+# Stop the CdnPoshDemo endpoint
+Stop-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Name cdnposhdoc
-# Start all endpoints
-Get-AzCdnProfile | Get-AzCdnEndpoint | Start-AzCdnEndpoint
+# Start the CdnPoshDemo endpoint
+Start-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -Name cdnposhdoc
``` ## Creating Standard Rules engine policy and applying to an existing CDN endpoint
-`New-AzCdnDeliveryRule`, `New=AzCdnDeliveryRuleCondition`, and `New-AzCdnDeliveryRuleAction` can be used to configure the Azure CDN Standard Rules engine on Azure CDN from Microsoft profiles.
-```powershell
-# Create a new http to https redirect rule
-$Condition=New-AzCdnDeliveryRuleCondition -MatchVariable RequestProtocol -Operator Equal -MatchValue HTTP
-$Action=New-AzCdnDeliveryRuleAction -RedirectType Found -DestinationProtocol HTTPS
-$HttpToHttpsRedirectRule=New-AzCdnDeliveryRule -Name "HttpToHttpsRedirectRule" -Order 2 -Condition $Condition -Action $Action
+The following list of cmdlets can be used to create a Standard Rules engine policy and apply it to an existing CDN endpoint.
+
+Conditions:
+
+* [New-AzCdnDeliveryRuleCookiesConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulecookiesconditionobject)
+* [New-AzCdnDeliveryRuleHttpVersionConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulehttpversionconditionobject)
+* [New-AzCdnDeliveryRuleIsDeviceConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleisdeviceconditionobject)
+* [New-AzCdnDeliveryRulePostArgsConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulepostargsconditionobject)
+* [New-AzCdnDeliveryRuleQueryStringConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulequerystringconditionobject)
+* [New-AzCdnDeliveryRuleRemoteAddressConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleremoteaddressconditionobject)
+* [New-AzCdnDeliveryRuleRequestBodyConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestbodyconditionobject)
+* [New-AzCdnDeliveryRuleRequestHeaderConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestheaderconditionobject)
+* [New-AzCdnDeliveryRuleRequestMethodConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestmethodconditionobject)
+* [New-AzCdnDeliveryRuleRequestSchemeConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestschemeconditionobject)
+* [New-AzCdnDeliveryRuleRequestUriConditionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequesturiconditionobject)
+* [New-AzCdnDeliveryRuleUrlFileExtensionConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlfileextensionconditionobject)
+* [New-AzCdnDeliveryRuleUrlFileNameConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlfilenameconditionobject)
+* [New-AzCdnDeliveryRuleUrlPathConditionObject](/powershell/module/az.cdn/new-azcdndeliveryruleurlpathconditionobject)
+
+Actions:
+
+* [New-AzCdnDeliveryRuleRequestHeaderActionObject](/powershell/module/az.cdn/new-azcdndeliveryrulerequestheaderactionobject)
+* [New-AzCdnDeliveryRuleResponseHeaderActionObject](/powershell/module/az.cdn/new-azcdndeliveryruleresponseheaderactionobject)
+* [New-AzCdnUrlRedirectActionObject](/powershell/module/az.cdn/new-azcdnurlredirectactionobject)
+* [New-AzCdnUrlRewriteActionObject](/powershell/module/az.cdn/new-azcdnurlrewriteactionobject)
+* [New-AzCdnUrlSigningActionObject](/powershell/module/az.cdn/new-azcdnurlsigningactionobject)
+```powershell
# Create a path based Response header modification rule.
-$Cond1=New-AzCdnDeliveryRuleCondition -MatchVariable UrlPath -Operator BeginsWith -MatchValue "/images/"
-$Action1=New-AzCdnDeliveryRuleAction -HeaderActionType ModifyResponseHeader -Action Overwrite -HeaderName "Access-Control-Allow-Origin" -Value "*"
-$PathBasedCacheOverrideRule=New-AzCdnDeliveryRule -Name "PathBasedCacheOverride" -Order 1 -Condition $Cond1 -Action $action1
+$cond1 = New-AzCdnDeliveryRuleUrlPathConditionObject -Name UrlPath -ParameterOperator BeginsWith -ParameterMatchValue "/images/"
+$action1 = New-AzCdnDeliveryRuleResponseHeaderActionObject -Name ModifyResponseHeader -ParameterHeaderAction Overwrite -ParameterHeaderName "Access-Control-Allow-Origin" -ParameterValue "*"
+$rule1 = New-AzCdnDeliveryRuleObject -Name "PathBasedCacheOverride" -Order 1 -Condition $cond1 -Action $action1
-# Create a delivery policy with above deliveryRules.
-$Policy = New-AzCdnDeliveryPolicy -Description "DeliveryPolicy" -Rule $HttpToHttpsRedirectRule,$UrlRewriteRule
+# Create a new HTTP to HTTPS redirect rule
+$cond2 = New-AzCdnDeliveryRuleRequestSchemeConditionObject -Name RequestScheme -ParameterMatchValue HTTP
+$action2 = New-AzCdnUrlRedirectActionObject -Name UrlRedirect -ParameterRedirectType Found -ParameterDestinationProtocol Https
+$rule2 = New-AzCdnDeliveryRuleObject -Name "HttpToHttpsRedirectRule" -Order 2 -Condition $cond2 -Action $action2
-# Update existing endpoint with created delivery policy
-$ep = Get-AzCdnEndpoint -EndpointName cdndocdemo -ProfileName CdnDemo -ResourceGroupName CdnDemoRG
-$ep.DeliveryPolicy = $Policy
-Set-AzCdnEndpoint -CdnEndpoint $ep
+# Update existing endpoint with new rules
+Update-AzCdnEndpoint -Name cdnposhdoc -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -DeliveryPolicyRule $rule1,$rule2
``` ## Deleting CDN resources
Set-AzCdnEndpoint -CdnEndpoint $ep
# Remove a single endpoint Remove-AzCdnEndpoint -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG -EndpointName cdnposhdoc
-# Remove all the endpoints on a profile and skip confirmation (-Force)
-Get-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG | Get-AzCdnEndpoint | Remove-AzCdnEndpoint -Force
- # Remove a single profile Remove-AzCdnProfile -ProfileName CdnPoshDemo -ResourceGroupName CdnDemoRG ``` ## Next Steps
-Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).
-To learn about CDN features, see [CDN Overview](cdn-overview.md).
+* Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).
+
+* To learn about CDN features, see [CDN Overview](cdn-overview.md).
communication-services End Of Call Survey Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/end-of-call-survey-concept.md
+
+ Title: Azure Communication Services End of Call Survey overview
+
+description: Learn about the End of Call Survey.
+++++ Last updated : 4/03/2023+++++++
+# End of Call Survey overview
++++
+> [!NOTE]
+> End of Call Survey is currently supported only for our JavaScript / Web SDK.
++
+The End of Call Survey allows Azure Communication Services to improve the overall Calling SDK.
+
+<!-- provides you with a tool to understand how your end users perceive the overall quality and reliability of your JavaScript / Web SDK calling solution. -->
+<!--
+## Purpose of the End of Call Survey
+It's difficult to determine a customer's perceived calling experience and how well your calling solution is performing without gathering subjective feedback from customers.
+
+You can use the End of Call Survey to collect and analyze customers' **subjective** opinions on their calling experience as opposed to relying only on **objective** measurements such as audio and video bitrate, jitter, and latency, which may not indicate if a customer had a poor calling experience.
+
+After publishing survey data, you can view the survey results through Azure for analysis and improvements. Azure Communication Services uses these survey results to monitor and improve quality and reliability. -->
++
+## Survey structure
+
+The survey is designed to answer two questions from a user's point of view.
+
+- **Question 1:** How did the users perceive their overall call quality experience?
+
+- **Question 2:** Did the user perceive any Audio, Video, or Screen Share issues in the call?
+
+The API allows applications to gather data points that describe user perceived ratings of their Overall Call, Audio, Video, and Screen Share experiences. Microsoft analyzes survey API results according to the following goals.
+
+### End of Call Survey API goals
++
+| API Rating Categories | Question Goal |
+| -- | -- |
+| Overall Call | Responses indicate how a call participant perceived their overall call quality. |
+| Audio | Responses indicate if the user perceived any Audio issues. |
+| Video | Responses indicate if the user perceived any Video issues. |
+| Screenshare | Responses indicate if the user perceived any Screen Share issues. |
+++
+## Survey capabilities
+++
+### Default survey API configuration
+
+| API Rating Categories | Cutoff Value* | Input Range | Comments |
+| -- | -- | -- | -- |
+| Overall Call | 2 | 1 - 5 | Surveys a calling participant's overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. |
+| Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. |
+| Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. |
+| Screenshare | 2 | 1 - 5 | A response of 1 indicates an imperfect screen share experience and 5 indicates no screen share issues were experienced. |
+++
+> [!NOTE]
+>A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
+
+### More survey tags
+| Rating Categories | Optional Tags |
+| -- | -- |
+| Overall Call | `CallCannotJoin` `CallCannotInvite` `HadToRejoin` `CallEndedUnexpectedly` `OtherIssues` |
+| Audio | `NoLocalAudio` `NoRemoteAudio` `Echo` `AudioNoise` `LowVolume` `AudioStoppedUnexpectedly` `DistortedSpeech` `AudioInterruption` `OtherIssues` |
+| Video | `NoVideoReceived` `NoVideoSent` `LowQuality` `Freezes` `StoppedUnexpectedly` `DarkVideoReceived` `AudioVideoOutOfSync` `OtherIssues` |
+| Screenshare | `NoContentLocal` `NoContentRemote` `CannotPresent` `LowQuality` `Freezes` `StoppedUnexpectedly` `LargeDelay` `OtherIssues` |
+++
+### End of Call Survey customization
++
+You can choose to collect each of the four API values or only the ones
+you find most important. For example, you can choose to only ask
+customers about their overall call experience instead of asking them
+about their audio, video, and screen share experience. You can also
+customize input ranges to suit your needs. The default input range is 1
+to 5 for Overall Call, Audio, Video, and
+Screenshare. However, each API value can be customized from a minimum of
+0 to maximum of 100.
+
+### Customization options
++
+| API Rating Categories | Cutoff Value* | Input Range |
+| -- | -- | -- |
+| Overall Call | 0 - 100 | 0 - 100 |
+| Audio | 0 - 100 | 0 - 100 |
+| Video | 0 - 100 | 0 - 100 |
+| Screenshare | 0 - 100 | 0 - 100 |
+
+ > [!NOTE]
+ > A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
+
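+For illustration, a survey submission that rates only the overall call on a customized 0-100 scale might look like the following sketch. It assumes a connected `call` object and the `Features` import from the Calling Web SDK, and the specific bound, threshold, and score values are example assumptions rather than recommended settings:
+
+```javascript
+// Minimal sketch: rate only the overall call on a custom 0-100 scale.
+// The lowerBound, upperBound, and lowScoreThreshold values are illustrative assumptions.
+call.feature(Features.CallSurvey).submitSurvey({
+    overallRating: {
+        score: 80,
+        scale: {
+            lowerBound: 0,
+            upperBound: 100,
+            lowScoreThreshold: 40 // customized cutoff value
+        },
+        issues: ['HadToRejoin'] // optional tags from the table above
+    }
+}).then(() => console.log('survey submitted successfully'));
+```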
+<!-- ## Store and view survey data:
+
+> [!IMPORTANT]
+> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send survey data to one of these options your survey data will not be stored and will be lost. To enable these logs for your Communications Services, see: **[Enable logging in Diagnostic Settings](../analytics/enable-logging.md)**
+
+You can only view your survey data if you have enabled a Diagnostic Setting to capture your survey data. -->
+
+## Next Steps
+
+<!-- - Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../azure-monitor/logs/log-analytics-tutorial.md)
+
+- Create your own queries in Log Analytics, see: [Get Started Queries](../../../azure-monitor/logs/get-started-queries.md) -->
+Learn how to use the End of Call Survey, see our tutorial: [Use the End of Call Survey to collect user feedback](../../tutorials/end-of-call-survey-tutorial.md)
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
# Simulcast
-Simulcast is provided as a preview for developers and may change based on feedback that we receive. To use this feature, use 1.9.1-beta.1+ release of Azure Communication Services Calling Web SDK. Currently, we support simulcast send from desktop chrome and desktop edge. Simulcast send from mobile devices will be available shortly in the future.
+
+Simulcast is supported starting from the 1.9.1-beta.1+ release of the Azure Communication Services Calling Web SDK. Currently, simulcast on the sender side is supported on the following desktop browsers: Chrome and Edge. Simulcast on the receiver side is supported on all platforms that Azure Communication Services Calling supports.
+Support for sender-side simulcast from mobile browsers will be added in the future.
Simulcast is a technique by which an endpoint encodes the same video feed at different qualities and sends these video feeds to a selective forwarding unit (SFU), which decides which receiver gets which quality. Without simulcast support, the video experience degrades in calls with three or more participants: if a video receiver with poor network conditions joins the conference, it affects the quality of video received from the sender for all other participants, because the video sender optimizes its feed against the lowest common denominator. With simulcast, that impact is minimized, because the video sender produces a specialized low-fidelity video encoding for the subset of receivers that are on poor networks (or are otherwise constrained).
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
+
+ Title: Azure Communication Services End of Call Survey
+
+description: Learn how to use the End of Call Survey to collect user feedback.
+++++ Last updated : 4/03/2023+++++++
+# Use the End of Call Survey to collect user feedback
+++++
+> [!NOTE]
+> End of Call Survey is currently supported only for our JavaScript / Web SDK.
+
+This tutorial shows you how to use the Azure Communication Services End of Call Survey for JavaScript / Web SDK.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended.
+
+- An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to single Communication Services resources.
+- An active Log Analytics Workspace, also known as Azure Monitor Logs. [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md).
++
+<!-- - An active Log Analytics Workspace, also known as Azure Monitor Logs, to ensure you don't lose your survey results. [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md). -->
+
+> [!IMPORTANT]
+> End of Call Survey is available starting on the version [1.13.0-beta.4](https://www.npmjs.com/package/@azure/communication-calling/v/1.13.0-beta.4) of the Calling SDK. Make sure to use that version or later when trying the instructions.
+
+## Sample of API usage
++
+The End of Call Survey feature should be used after the call ends. Users can rate any kind of VoIP call, 1:1, group, meeting, outgoing and incoming. Once a user's call ends, your application can show a UI to the end user allowing them to choose a rating score, and if needed, pick issues they've encountered during the call from our predefined list.
+
+The following code snippets show an example of a one-to-one call. After the call ends, your application can show a survey UI, and once the user chooses a rating, your application should call the feature API to submit the survey with the user's choices.
+
+We encourage you to use the default rating scale. However, you can submit a survey with custom rating scale. You can check out the [sample application](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/blob/main/Project/src/MakeCall/CallSurvey.js) for the sample API usage.
++
+### Rate call only - no custom scale
+
+```javascript
+call.feature(Features.CallSurvey).submitSurvey({
+ overallRating: { score: 5 }, // issues are optional
+}).then(() => console.log('survey submitted successfully'));
+```
+
+OverallRating is a required category for all surveys.
++
+### Rate call only - with custom scale and issues
+
+```javascript
+call.feature(Features.CallSurvey).submitSurvey({
+ overallRating: {
+ score: 1, // my score
+ scale: { // my custom scale
+ lowerBound: 0,
+ upperBound: 1,
+ lowScoreThreshold: 0
+ },
+ issues: ['HadToRejoin'] // my issues, check the table below for all available issues
+ }
+}).then(() => console.log('survey submitted successfully'));
+```
+
+### Rate overall, audio, and video with a sample issue
+
+``` javascript
+call.feature(Features.CallSurvey).submitSurvey({
+ overallRating: { score: 3 },
+ audioRating: { score: 4 },
+ videoRating: { score: 3, issues: ['Freezes'] }
+}).then(() => console.log('survey submitted successfully'))
+```
+
+### Handle errors the SDK can send
+ ``` javascript
+call.feature(Features.CallSurvey).submitSurvey({
+ overallRating: { score: 3 }
+}).catch((e) => console.log('error when submitting survey: ' + e))
+```
+++
+<!-- ## Find different types of errors
+
+### Failures while submitting survey:
+
+API will return the error messages when data validation failed or unable to submit the survey.
+- At least one survey rating is required.
+- In default scale X should be 1 to 5. - where X is either of
+- overallRating.score
+- audioRating.score
+- videoRating.score
+- ScreenshareRating.score
+- ${propertyName}: ${rating.score} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ;
+- ${propertyName}: ${rating.scale?.lowScoreThreshold} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ;
+- ${propertyName} lowerBound: ${rating.scale?.lowerBound} and upperBound: ${rating.scale?.upperBound} should be between 0 and 100. ;
+- event discarded [ACS failed to submit survey, due to network or other error] -->
+
+## All possible values
+
+### Default survey API configuration
+
+| API Rating Categories | Cutoff Value* | Input Range | Comments |
+| -- | -- | -- | -- |
+| Overall Call | 2 | 1 - 5 | Surveys a calling participant's overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. |
+| Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. |
+| Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. |
+| Screenshare | 2 | 1 - 5 | A response of 1 indicates an imperfect screen share experience and 5 indicates no screen share issues were experienced. |
+++
+> [!NOTE]
+>A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
++
+### More survey tags
+| Rating Categories | Optional Tags |
+| -- | -- |
+| Overall Call | `CallCannotJoin` `CallCannotInvite` `HadToRejoin` `CallEndedUnexpectedly` `OtherIssues` |
+| Audio | `NoLocalAudio` `NoRemoteAudio` `Echo` `AudioNoise` `LowVolume` `AudioStoppedUnexpectedly` `DistortedSpeech` `AudioInterruption` `OtherIssues` |
+| Video | `NoVideoReceived` `NoVideoSent` `LowQuality` `Freezes` `StoppedUnexpectedly` `DarkVideoReceived` `AudioVideoOutOfSync` `OtherIssues` |
+| Screenshare | `NoContentLocal` `NoContentRemote` `CannotPresent` `LowQuality` `Freezes` `StoppedUnexpectedly` `LargeDelay` `OtherIssues` |
++
+### Customization options
+
+You can choose to collect each of the four API values or only the ones
+you find most important. For example, you can choose to only ask
+customers about their overall call experience instead of asking them
+about their audio, video, and screen share experience. You can also
+customize input ranges to suit your needs. The default input range is 1
+to 5 for Overall Call, Audio, Video, and
+Screenshare. However, each API value can be customized from a minimum of
+0 to maximum of 100.
+
+### Customization examples
++
+| API Rating Categories | Cutoff Value* | Input Range |
+| -- | -- | -- |
+| Overall Call | 0 - 100 | 0 - 100 |
+| Audio | 0 - 100 | 0 - 100 |
+| Video | 0 - 100 | 0 - 100 |
+| Screenshare | 0 - 100 | 0 - 100 |
+
+ > [!NOTE]
+ > A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
+
+<!--
+## Collect survey data
+
+> [!IMPORTANT]
+> You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send survey data to one of these options your survey data will not be stored and will be lost. To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md)
+
+
+### View survey data with a Log Analytics workspace
+
+You need to enable a Log Analytics Workspace to both store the log data of your surveys and access survey results. To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../concepts/analytics/enable-logging.md). Follow the steps to add a diagnostic setting. Select the "ACSCallSurvey" data source when choosing category details. Also, choose "Send to Log Analytics workspace" as your destination detail.
+
+- You can also integrate your Log Analytics workspace with Power BI, see: [Integrate Log Analytics with Power BI](../../../articles/azure-monitor/logs/log-powerbi.md)
+ -->
+
+## Best practices
+Here are our recommended survey flows and suggested question prompts for consideration. Your development team can use our recommendations or use customized question prompts and flows for your visual interface.
+
+**Question 1:** How did the users perceive their overall call quality experience?
+We recommend you start the survey by only asking about the participants' overall quality. If you separate the first and second questions, it helps to only collect responses to Audio, Video, and Screen Share issues if a survey participant indicates they experienced call quality issues.
++
+- Suggested prompt: "How was the call quality?"
+- API Question Values: Overall Call
+
+**Question 2:** Did the user perceive any Audio, Video, or Screen Sharing issues in the call?
+If a survey participant responded to Question 1 with a score at or below the cutoff value for the overall call, then present the second question.
+
+- Suggested prompt: "What could have been better?"
+- API Question Values: Audio, Video, and Screenshare
+
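+The following sketch shows one way to implement this two-question flow. The `promptOverallScore` and `promptCategoryRatings` helpers are hypothetical UI functions that your application would provide; only the `submitSurvey` call comes from the Calling SDK, and the snippet assumes it runs inside an async function after the call ends:
+
+```javascript
+// Question 1: "How was the call quality?" (hypothetical UI helper returning 1-5).
+const overallScore = await promptOverallScore();
+const survey = { overallRating: { score: overallScore } };
+
+// Question 2: only shown when the score is at or below the default cutoff value of 2.
+if (overallScore <= 2) {
+    // "What could have been better?" - hypothetical UI helper returning objects
+    // such as { score: 2, issues: ['Echo'] } for each category.
+    const { audio, video } = await promptCategoryRatings();
+    survey.audioRating = audio;
+    survey.videoRating = video;
+}
+
+await call.feature(Features.CallSurvey).submitSurvey(survey);
+console.log('survey submitted successfully');
+```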
+**Surveying Guidelines**
+- Avoid survey burnout; don't survey all call participants.
+- The order of your questions matters. We recommend you randomize the sequence of optional tags in Question 2 in case respondents focus most of their feedback on the first prompt they visually see.
+<!-- - Consider using surveys for separate Azure Communication Services Resources in controlled experiments to identify release impacts. -->
++
+## Next steps
+
+- Learn more about the End of Call Survey, see: [End of Call Survey overview](../concepts/voice-video-calling/end-of-call-survey-concept.md)
+
+<!-- - Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../articles/azure-monitor/logs/log-analytics-tutorial.md)
+
+- Create your own queries in Log Analytics, see: [Get Started Queries](../../../articles/azure-monitor/logs/get-started-queries.md) -->
+
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
In certain situations, it might be useful to have all your client traffic proxie
Many times, establishing a network connection between two peers isn't straightforward. A direct connection might not work for many reasons: firewalls with strict rules, peers sitting behind a private network, or computers running in a NAT environment. To solve these network connection issues, you can use a TURN server. The term stands for Traversal Using Relays around NAT, and it's a protocol for relaying network traffic. STUN and TURN servers are the relay servers here. Learn more about how ACS [mitigates](../concepts/network-traversal.md) network challenges by utilizing STUN and TURN. ### Provide your TURN server details to the SDK
-To provide the details of your TURN servers, you need to pass details of what TURN server to use as part of `CallClientOptions` while initializing the `CallClient`. For more information how to setup a call, see [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web)) for the Quickstart on how to setup Voice and Video.
+To provide the details of your TURN servers, you need to pass the details of the TURN server to use as part of `CallClientOptions` while initializing the `CallClient`. For more information about how to set up a call, see the [Azure Communication Services Web SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web) quickstart on how to set up voice and video.
```js import { CallClient } from '@azure/communication-calling';
const callClient = new CallClient({
``` > [!IMPORTANT]
-> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers i.e. only 'relay' candidates will be considered. To learn more about different types of Ice candidates can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type).
+> Note that if you have provided your TURN server details while initializing the `CallClient`, all the media traffic will <i>exclusively</i> flow through these TURN servers. Any other ICE candidates that are normally generated when creating a call won't be considered while trying to establish connectivity between peers; that is, only 'relay' candidates are considered. To learn more about the different types of ICE candidates, see [RTCIceCandidate.type](https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidate/type).
> [!NOTE]
-> If the '?transport' query parameter is not present as part of the TURN url or is not one of these values - 'udp', 'tcp', 'tls', the default will behaviour will be UDP.
+> If the '?transport' query parameter isn't present as part of the TURN URL, or isn't one of these values - 'udp', 'tcp', 'tls' - the default behavior is UDP.
> [!NOTE] > If any of the URLs provided are invalid or don't have one of these schemas - 'turn:', 'turns:', 'stun:', the `CallClient` initialization will fail and will throw errors accordingly. The error messages thrown should help you troubleshoot if you run into issues.
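As a rough illustration of the configuration described in this article, the following sketch passes a list of TURN servers when constructing the `CallClient`. Because the code block earlier in this article is truncated in this excerpt, the `networkConfiguration`, `turn`, and `iceServers` option names and the credential fields shown here are assumptions for illustration rather than a confirmed API surface; refer to the linked quickstart and SDK reference for the exact option names:

```js
import { CallClient } from '@azure/communication-calling';

// Assumed option shape for supplying custom TURN servers (illustrative only).
const callClient = new CallClient({
    networkConfiguration: {
        turn: {
            iceServers: [
                {
                    // '?transport=' may be 'udp', 'tcp', or 'tls'; UDP is the default when omitted.
                    urls: ['turn:turn.contoso.com:3478?transport=udp'],
                    username: '<turn-username>',      // placeholder credential
                    credential: '<turn-credential>'   // placeholder credential
                }
            ]
        }
    }
});
```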
confidential-computing Multi Party Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/multi-party-data.md
++
+ Title: Multi-party Data Analytics
+
+description: Data cleanroom and multi-party data confidential computing solutions
+++++++++++++ Last updated : 04/20/2023+++++
+# Cleanroom and Multi-party Data Analytics
+
+Azure confidential computing (ACC) provides a foundation for solutions that enable multiple parties to collaborate on data. There are various approaches to solutions, and a growing ecosystem of partners to help enable Azure customers, researchers, data scientists and data providers to collaborate on data while preserving privacy. This overview covers some of the approaches and existing solutions that can be used, all running on ACC.
+
+## What are the data and model protections?
+
+Data cleanroom solutions typically offer a means for one or more data providers to combine data for processing. There's typically agreed-upon code, queries, or models created by one of the providers or another participant, such as a researcher or solution provider. In many cases, the data is considered sensitive and shouldn't be shared directly with other participants, whether another data provider, a researcher, or a solution vendor. To help ensure security and privacy for both the data and the models used within data cleanrooms, confidential computing can be used to cryptographically verify that participants don't have access to the data or models, including during processing. By using ACC, solutions can protect the data and model IP from the cloud operator, the solution provider, and the other data collaboration participants.
+
+## What are examples of industry use cases?
+
+With ACC, customers and partners build privacy-preserving multi-party data analytics solutions, sometimes referred to as "confidential cleanrooms": both net-new solutions that are uniquely confidential, and existing cleanroom solutions made confidential with ACC.
+
+1. **Royal Bank of Canada** - [Virtual clean room](https://aka.ms/RBCstory) solution combining merchant data with bank data in order to provide personalized offers, using Azure confidential computing VMs and Azure SQL AE in secure enclaves.
+2. **Scotiabank** - Proved the use of AI on cross-bank money flows to identify money laundering and flag instances of human trafficking, using Azure confidential computing and a solution partner, Opaque.
+3. **Novartis Biome** - Used a partner solution from [BeeKeeperAI](https://aka.ms/ACC-BeeKeeperAI) running on ACC to find candidates for clinical trials for rare diseases.
+4. **Leading payment providers** connecting data across banks for fraud and anomaly detection.
+5. **Data analytic services** and clean room solutions using ACC to increase data protection and meet EU customer compliance needs and privacy regulation.
++
+## Why confidential computing?
+
+Data cleanrooms aren't a brand-new concept, however with advances in confidential computing, there are more opportunities to take advantage of cloud scale with broader datasets, securing IP of AI models, and ability to better meet data privacy regulations. In previous cases, certain data might be inaccessible for reasons such as
+
- Competitive disadvantages or regulations that prevent sharing data across companies in an industry.
- Anonymization reducing the quality of insights on data, or being too costly and time-consuming.
- Data being bound to certain locations and kept from being processed in the cloud due to security concerns.
- Costly or lengthy legal processes to cover liability if data is exposed or abused.
+
+These realities could lead to incomplete or ineffective datasets that result in weaker insights, or more time needed in training and using AI models.
+
+## What are considerations when building a cleanroom solution?
+
+_Batch analytics vs. real-time data pipelines:_ The size of the datasets and speed of insights should be considered when designing or using a cleanroom solution. When data is available "offline", it can be loaded into a verified and secured compute environment for data analytic processing on large portions of data, if not the entire dataset. This batch analytics allow for large datasets to be evaluated with models and algorithms that aren't expected to provide an immediate result. For example, batch analytics work well when doing ML inferencing across millions of health records to find best candidates for a clinical trial. Other solutions require real-time insights on data, such as when algorithms and models aim to identify fraud on near real-time transactions between multiple entities.
+
+_Zero-trust participation:_ A major differentiator in confidential cleanrooms is that no involved party needs to be trusted: not the data providers, code and model developers, solution providers, or infrastructure operator admins. Solutions can be provided where both the data and the model IP are protected from all parties. When onboarding or building a solution, participants should consider both what they want to protect and from whom to protect each of the code, models, and data.
+
+_Federated learning:_ Federated learning involves creating or using a solution where models process data in the data owner's tenant, and insights are aggregated in a central tenant. In some cases, the models can even be run on data outside of Azure, with model aggregation still occurring in Azure. Federated learning often iterates on the data many times as the parameters of the model improve after insights are aggregated. The iteration costs and the quality of the model should be factored into the solution and expected outcomes.
+
+_Data residency and sources:_ Customers have data stored in multiple clouds and on-premises. Collaboration can include data and models from different sources. Cleanroom solutions can facilitate data and models coming to Azure from these other locations. When data can't move to Azure from an on-premises data store, some cleanroom solutions can run on site where the data resides. Management and policies can be powered by a common solution provider, where available.
+
+_Code integrity and confidential ledgers:_ With distributed ledger technology (DLT) running on Azure confidential computing, solutions can be built that run on a network across organizations. The code logic and analytic rules can be added only when there's consensus across the various participants. All updates to the code are recorded for auditing via tamper-proof logging enabled with Azure confidential computing.
+
+## What are options to get started?
+
+### ACC platform offerings that help enable confidential cleanrooms
+Roll up your sleeves and build a data clean room solution directly on these confidential computing service offerings.
+
+[Confidential containers](./confidential-containers.md) on Azure Container Instances (ACI) and Intel SGX VMs with application enclaves provide a container solution for building confidential cleanroom solutions.
+
+[Confidential Virtual Machines (VMs)](./confidential-vm-overview.md) provide a VM platform for confidential cleanroom solutions.
+
+[Azure SQL AE in secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) provides a platform service for encrypting data and queries in SQL that can be used in multi-party data analytics and confidential cleanrooms.
+
+[Confidential Consortium Framework](https://ccf.microsoft.com/) is an open-source framework for building highly available stateful services that use centralized compute for ease of use and performance, while providing decentralized trust. It enables multiple parties to execute auditable compute over confidential data without trusting each other or a privileged operator.
+
+### ACC partner solutions that enable confidential cleanrooms
+Use a partner that has built a multi-party data analytics solution on top of the Azure confidential computing platform.
+
+- [**Anjuna**](https://www.anjuna.io/use-case-solutions) provides a confidential computing platform to enable various use cases, including secure clean rooms, for organizations to share data for joint analysis, such as calculating credit risk scores or developing machine learning models, without exposing sensitive information.
+- [**BeeKeeperAI**](https://www.beekeeperai.com/) enables healthcare AI through a secure collaboration platform for algorithm owners and data stewards. BeeKeeperAI™ uses privacy-preserving analytics on multi-institutional sources of protected data in a confidential computing environment. The solution supports end-to-end encryption, secure computing enclaves, and Intel's latest SGX-enabled processors to protect the data and the algorithm IP.
+- [**Decentriq**](https://www.decentriq.com/) provides Software as a Service (SaaS) data clean rooms to enable companies to collaborate with other organizations on their most sensitive datasets and create value for their clients. The technologies help prevent anyone from seeing the sensitive data, including Decentriq.
+- [**Fortanix**](https://www.fortanix.com/platform/confidential-ai) provides a confidential computing platform that can enable confidential AI, including multiple organizations collaborating together for multi-party analytics.
+- [**Mithril Security**](https://www.mithrilsecurity.io/) provides tooling to help SaaS vendors serve AI models inside secure enclaves, and providing an on-premises level of security and control to data owners. Data owners can use their SaaS AI solutions while remaining compliant and in control of their data.
+- [**Opaque**](https://opaque.co/) provides a confidential computing platform for collaborative analytics and AI, giving the ability to perform collaborative scalable analytics while protecting data end-to-end and enabling organizations to comply with legal and regulatory mandates.
+- [**SafeLiShare**](https://safelishare.com/solution/encrypted-data-clean-room/) provides policy-driven encrypted data clean rooms where access to data is auditable, trackable, and visible, while keeping data protected during multi-party data sharing.
confidential-computing Tdx Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/tdx-confidential-vm-overview.md
+
+ Title: DCesv5 and ECesv5 series confidential VMs
+description: Learn about Azure DCesv5 and ECesv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements.
+++++ Last updated : 4/25/2023++
+# DCesv5 and ECesv5 series confidential VMs
+
+Starting with the 4th Gen Intel® Xeon® Scalable processors, Azure has begun supporting VMs backed by an all-new hardware-based Trusted Execution Environment called [Intel® Trust Domain Extensions (TDX)](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html#inpage-nav-2). Organizations can use these VMs to seamlessly bring confidential workloads to the cloud without any code changes to their applications.
+
+Intel TDX helps harden the virtualized environment to deny the hypervisor and other host management code access to VM memory and state, including the cloud operator. Intel TDX helps assure workload integrity and confidentiality by mitigating a wide range of software and hardware attacks, including intrusion or inspection by software running in other VMs.
+
+> [!IMPORTANT]
+> DCesv5 and ECesv5 are now available in preview. Customers can sign up [today](https://aka.ms/TDX-signup).
+
+## Benefits
+
+Some of the benefits of Confidential VMs with Intel TDX include:
+
+- Support for general-purpose and memory-optimized virtual machines.
+- Improved performance for compute, memory, IO and network-intensive workloads.
+- Ability to retrieve raw hardware evidence and submit for judgment to attestation provider, including open-sourcing our client application.
+- Support for [Microsoft Azure Attestation](https://learn.microsoft.com/azure/attestation) (coming soon) backed by high availability zonal capabilities and disaster recovery capabilities.
+- Support for operator-independent remote attestation with [Intel Project Amber](http://projectamber.intel.com/).
+- Support for Ubuntu 22.04, SUSE Linux Enterprise Server 15 SP5 and SUSE Linux Enterprise Server for SAP 15 SP5.
+
+## See also
+
+- [Read our product announcement](https://aka.ms/tdx-blog)
+- [Try Ubuntu confidential VMs with Intel TDX today: limited preview now available on Azure](https://canonical.com/blog/ubuntu-confidential-vms-intel-tdx-microsoft-azure-confidential-computing)
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
description: Learn how to set up Azure Private Link to access an Azure Cosmos DB
Previously updated : 03/03/2023 Last updated : 04/24/2023
When you have an approved Private Link for an Azure Cosmos DB account, in the Az
## <a id="private-zone-name-mapping"></a>API types and private zone names
-The following table shows the mapping between different Azure Cosmos DB account API types, supported subresources, and the corresponding private zone names. You can also access the Gremlin and API for Table accounts through the API for NoSQL, so there are two entries for these APIs. There's also an extra entry for the API for NoSQL for accounts using the [dedicated gateway](./dedicated-gateway.md).
+For a more detailed explanation of private zones and DNS configuration for private endpoints, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md). The following table shows the mapping between different Azure Cosmos DB account API types, supported subresources, and the corresponding private zone names. You can also access the Gremlin and API for Table accounts through the API for NoSQL, so there are two entries for these APIs. There's also an extra entry for the API for NoSQL for accounts using the [dedicated gateway](./dedicated-gateway.md).
|Azure Cosmos DB account API type |Supported subresources or group IDs |Private zone name | ||||
cosmos-db Feature Support 36 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-36.md
Azure Cosmos DB for MongoDB supports the following database commands:
| `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |
-| `Hashed Index` | Yes |
+| `Hashed Index` | No |
### Index properties
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure
| `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |
-| `Hashed Index` | Yes |
+| `Hashed Index` | No |
### Index properties
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
We recommend enabling Server Side Retry and avoiding wildcard indexes to ensure
| `Text Index` | No | | `2dsphere` | Yes | | `2d Index` | No |
-| `Hashed Index` | Yes |
+| `Hashed Index` | No |
### Index properties
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
| `Multikey Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `Text Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | | `Geospatial Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
-| `Hashed Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
+| `Hashed Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
### Index properties
cosmos-db Tune Connection Configurations Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-java-sdk-v4.md
As a first step, use the following recommended configuration settings below. The
| maxConnectionsPerEndpoint | "130" | "130" | This represents the upper bound size of the *connection pool* for an endpoint/backend node (representing a replica). SDK creates connections to endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will create maximum 130 connections to an endpoint/backend node. (NOTE: SDK doesn't create these 130 connections upfront). | | maxRequestsPerConnection | "30" | "30" | This represents the upper bound size of the maximum number of requests that can be queued on a *single connection* for a specific endpoint/backend node (representing a replica). SDK queues requests to a single connection to an endpoint/backend node on-demand and based on incoming concurrent requests. By default, if required, SDK will queue maximum 30 requests to a single connection for a specific endpoint/backend node. (NOTE: SDK doesn't queue these 30 requests to a single connection upfront). | | connectTimeout | "PT5S" | "~PT1S" | This represents the connection establishment timeout duration for a *single connection* to be established with an endpoint/backend node. By default SDK will wait for maximum 5 seconds for connection establishment before throwing an error. TCP connection establishment uses [multi-step handshake](https://en.wikipedia.org/wiki/Transmission_Control_Protocol#Protocol_operation) which increases latency of the connection establishment time, hence, customers are recommended to set this value according to their network bandwidth and environment settings. NOTE: This recommendation of ~PT1S is only for applications deployed in colocated regions of their Cosmos DB accounts. |
-| networkRequestTimeout | "PT5S" | "PT5S" | This represents the network timeout duration for a *single request*. SDK will wait maximum for this duration to consume a service response after the request has been written to the network connection. SDK only allows values between 5 seconds (min) and 10 seconds (max). Setting a value too high can result in fewer retries and reduce chances of success by retries. |
+| networkRequestTimeout | "PT5S" | "PT5S" | This represents the network timeout duration for a *single request*. SDK will wait maximum for this duration to consume a service response after the request has been written to the network connection. SDK only allows values between 1 second (min) and 10 seconds (max). Setting a value too high can result in fewer retries and reduce chances of success by retries. |
### Gateway Connection mode
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 04/18/2023 Last updated : 04/25/2023
This article explains the common tasks that an Enterprise Agreement (EA) adminis
> As of February 20, 2023, indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal. > > This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
+>
+> As of April 24, 2023, EA customers can't manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage them from the [Azure Government portal](https://portal.azure.us).
## Manage your enrollment
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
The following sections describe the limitations and capabilities of each role.
- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.-- ⁶ The Enterprise Administrator (read only) role doesn't allow reservation purchases. However, if the EA Admin (read only) is also a subscription owner or subscription reservation purchaser, they can purchase a reservation.
+- ⁶ A subscription owner or reservation purchaser may purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation purchase enabled flag. Enterprise administrators may purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) may view all purchased reservations and savings plans. Neither EA administrator role is governed by the reservation purchase enabled flag. While the Enterprise Admin (read-only) role holder is not permitted to make purchases, as it is not governed by reservation purchase enabled, if a user with that role also holds either a subscription owner or reservation purchaser permission, that user may purchase reservations and savings plans even if the reservation purchase enabled flag is set to false
## Add a new enterprise administrator
cost-management-billing Tutorial Azure Hybrid Benefits Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/tutorial-azure-hybrid-benefits-sql.md
Title: Tutorial - Optimize centrally managed Azure Hybrid Benefit for SQL Server
description: This tutorial guides you through proactively assigning SQL Server licenses in Azure to manage and optimize Azure Hybrid Benefit. Previously updated : 04/20/2022 Last updated : 04/25/2022
The preceding section discusses ongoing monitoring. We also recommend that you e
- Monitor usage and adjust on the fly, as needed. - Repeat the process every year or at whatever frequency best suits your needs.
+### License assignment review date
+
+After you assign licenses and set a review date, Microsoft later sends you email notifications to let you know that the license assignment will expire.
+
+Email notifications are sent:
+
+- 90 days before expiration
+- 30 days before expiration
+- 7 days before expiration
+
+No notification is sent on the review date. The license assignment becomes inactive and no longer applies 90 days after expiration.
+ ## Example walkthrough In the following example, assume that you're the billing administrator for the Contoso Insurance company. You manage Contoso's Azure Hybrid Benefit for SQL Server.
data-factory Airflow Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-pricing.md
Managed Airflow supports either small (D2v4) or large (D4v4) node sizing. Small
## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Changing password for Airflow environments](password-change-airflow.md)
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
Azure Data Factory offers serverless pipelines for data process orchestration, data movement with 100+ managed connectors, and visual transformations with the mapping data flow. Azure Data Factory's Managed Airflow service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org) environments, enabling you to run data pipelines at scale with ease.
-[Apache Airflow](https://airflow.apache.org) is an open-source platform used to programmatically create, schedule, and monitor complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. Airflow enables you to execute these DAGs on a schedule or in response to an event, monitor the progress of workflows, and provide visibility into the state of each task. It is widely used in data engineering and data science to orchestrate data pipelines, and is known for its flexibility, extensibility, and ease of use.
+[Apache Airflow](https://airflow.apache.org) is an open-source platform used to programmatically create, schedule, and monitor complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines. Airflow enables you to execute these DAGs on a schedule or in response to an event, monitor the progress of workflows, and provide visibility into the state of each task. It's widely used in data engineering and data science to orchestrate data pipelines, and is known for its flexibility, extensibility, and ease of use.
:::image type="content" source="media/concept-managed-airflow/data-integration.png" alt-text="Screenshot shows data integration.":::
You can install any provider package by editing the airflow environment from the
## Limitations
-* Managed Airflow in other regions will be available by GA (Tentative GA is Q2 2023 ).
+* Managed Airflow in other regions will be available by GA.
* Data Sources connecting through airflow should be publicly accessible.
-* Blob Storage behind VNet are not supported during the public preview (Tentative GA is Q2 2023
+* Blob Storage behind VNet is not supported during the public preview.
* DAGs that are inside a Blob Storage in a VNet or behind a firewall aren't currently supported.
-* Azure Key Vault is not supported in LinkedServices to import dags.(Tentative GA is Q2 2023)
+* Azure Key Vault isn't supported in LinkedServices to import dags.
* Airflow officially supports Blob Storage and ADLS with some limitations. ## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md) - [How to change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
Delta will only read 2 partitions where **part_col == 5 and 8** from the target
In the **Settings** tab, you'll find three more options to optimize the delta sink transformation.
-* When **Merge schema** option is enabled, any columns that are present in the previous stream, but not in the Delta table, are automatically added on to the end of the schema.
+* When the **Merge schema** option is enabled, it allows schema evolution; that is, any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods.
* When **Auto compact** is enabled, after an individual write, the transformation checks whether files can be compacted further, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1 GB) to further compact files for partitions that have the largest number of small files. Auto compaction helps coalesce a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table and writes a new file containing the data of several previous files in a compact, compressed form. (A PySpark sketch approximating these two options follows this list.)
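Outside the mapping data flow UI, similar behavior can be expressed directly with Delta Lake write options. The following PySpark sketch is only an approximation of the two options above; the paths are hypothetical, and the auto compaction table property name may vary by Delta Lake version.

```python
from pyspark.sql import SparkSession

# Assumes a Spark session with the Delta Lake package available.
spark = SparkSession.builder.getOrCreate()

incoming = spark.read.json("/data/incoming")  # hypothetical incoming data

# Rough equivalent of Merge schema: columns present in the incoming data but
# not in the target Delta table are appended to the table's schema.
(incoming.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save("/delta/target_table"))  # hypothetical target path

# Rough equivalent of Auto compact: ask Delta to compact small files after
# writes via a table property (property name may differ across versions).
spark.sql(
    "ALTER TABLE delta.`/delta/target_table` "
    "SET TBLPROPERTIES ('delta.autoOptimize.autoCompact' = 'true')"
)
```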
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
If you're using Airflow version 1.x, delete DAGs that are deployed on any Airflo
## Next steps
-* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
-* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md)
-* [Managed Airflow pricing](airflow-pricing.md)
-* [How to change the password for Managed Airflow environments](password-change-airflow.md)
+- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
+- [Managed Airflow pricing](airflow-pricing.md)
+- [How to change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Password Change Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/password-change-airflow.md
We recommend using **Azure AD** authentication in Managed Airflow environments.
## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)-- [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Tutorial Refresh Power Bi Dataset With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-refresh-power-bi-dataset-with-airflow.md
- Title: Refresh a Power BI dataset with Managed Airflow
-description: This tutorial provides step-by-step instructions for refreshing a Power BI dataset with Managed Airflow.
---- Previously updated : 01/24/2023---
-# Refresh a Power BI dataset with Managed Airflow
--
-> [!NOTE]
-> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
-
-This tutorial shows you how to refresh a Power BI dataset with Managed Airflow in Azure Data Factory.
-
-## Prerequisites
-
-* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-* **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.*
-* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example – contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service Principal's Client ID and Client Secret (API Key).
-
-## Steps
-
-1. Create a new Python file **pbi-dataset-refresh.py** with the below contents:
- ```python
- from airflow import DAG
- from airflow.operators.python_operator import PythonOperator
- from datetime import datetime, timedelta
- from powerbi.datasets import Datasets
-
- # Default arguments for the DAG
- default_args = {
- 'owner': 'me',
- 'start_date': datetime(2022, 1, 1),
- 'depends_on_past': False,
- 'retries': 1,
- 'retry_delay': timedelta(minutes=5),
- }
-
- # Create the DAG
- dag = DAG(
- 'refresh_power_bi_dataset',
- default_args=default_args,
- schedule_interval=timedelta(hours=1),
- )
-
- # Define a function to refresh the dataset
- def refresh_dataset(**kwargs):
- # Create a Power BI client
- datasets = Datasets(client_id='your_client_id',
- client_secret='your_client_secret',
- tenant_id='your_tenant_id')
-
- # Refresh the dataset
- dataset_name = 'your_dataset_name'
- datasets.refresh(dataset_name)
- print(f'Successfully refreshed dataset: {dataset_name}')
-
- # Create a PythonOperator to run the dataset refresh
- refresh_dataset_operator = PythonOperator(
- task_id='refresh_dataset',
- python_callable=refresh_dataset,
- provide_context=True,
- dag=dag,
- )
-
- refresh_dataset_operator
- ```
-
- You will have to fill in your **client_id**, **client_secret**, **tenant_id**, and **dataset_name** with your own values.
-
- Also, you will need to install the **powerbi** python package to use the above code using Managed Airflow requirements. Edit a Managed Airflow environment and add the **powerbi** python package under **Airflow requirements**.
-
-1. Upload the **pbi-dataset-refresh.py** file to the blob storage within a folder named **DAG**.
-1. [Import the **DAG** folder into your Airflow environment](). If you do not have one, [create a new one]().
- :::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected.":::
-
-## Next Steps
-
-* [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
-* [Managed Airflow pricing](airflow-pricing.md)
-* [Changing password for Managed Airflow environments](password-change-airflow.md)
data-factory Tutorial Run Existing Pipeline With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md
Data Factory pipelines provide 100+ data source connectors that provide scalable
* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. * **Azure storage account**. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) for steps to create one. *Ensure the storage account allows access only from selected networks.*
-* **Azure Data Factory pipeline**. You can follow any of the tutorials and create a new data factory pipeline in case you do not already have one, or create one with one click in [Get started and try out your first data factory pipeline](quickstart-get-started.md).
-* **Setup a Service Principal**. You will need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (example – contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You will need to get the Service Principal's Client ID and Client Secret (API Key).
+* **Azure Data Factory pipeline**. You can follow any of the tutorials and create a new data factory pipeline in case you don't already have one, or create one with one select in [Get started and try out your first data factory pipeline](quickstart-get-started.md).
+* **Setup a Service Principal**. You'll need to [create a new service principal](../active-directory/develop/howto-create-service-principal-portal.md) or use an existing one and grant it permission to run the pipeline (for example, the contributor role in the data factory where the existing pipelines exist), even if the Managed Airflow environment and the pipelines exist in the same data factory. You'll need to get the Service Principal's Client ID and Client Secret (API Key).
## Steps
Data Factory pipelines provide 100+ data source connectors that provide scalable
# run_pipeline2 >> pipeline_run_sensor ```
- You will have to create the connection using the Airflow UI (Admin -> Connections -> '+' -> Choose 'Connection type' as 'Azure Data Factory', then fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**.
+ You'll have to create the connection using the Airflow UI (Admin -> Connections -> '+' -> Choose 'Connection type' as 'Azure Data Factory'), and then fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**.
1. Upload the **adf.py** file to your blob storage within a folder called **DAGS**.
-1. [Import the **DAGS** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). If you do not have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment)
+1. [Import the **DAGS** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). If you don't have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment).
:::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected."::: ## Next steps
-* [Refresh a Power BI dataset with Managed Airflow](tutorial-refresh-power-bi-dataset-with-airflow.md)
-* [Managed Airflow pricing](airflow-pricing.md)
-* [Changing password for Managed Airflow environments](password-change-airflow.md)
+- [Managed Airflow pricing](airflow-pricing.md)
+- [Changing password for Managed Airflow environments](password-change-airflow.md)
deployment-environments Concept Common Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-common-components.md
Previously updated : 01/26/2022 Last updated : 04/25/2023 # Components common to Azure Deployment Environments and Microsoft Dev Box
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 # Key concepts for Azure Deployment Environments Preview
deployment-environments Concept Environments Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-scenarios.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 # Scenarios for using Azure Deployment Environments Preview
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Deployment Environments User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-deployment-environments-user.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Devcenter Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-devcenter-environment-types.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-admin.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Configure Project Environment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
Previously updated : 03/14/2023 Last updated : 04/25/2023 # Create and access an environment by using the Azure CLI
deployment-environments How To Install Devcenter Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md
Previously updated : 03/19/2023 Last updated : 04/25/2023 Customer intent: As a dev infra admin, I want to install the Deployment Environments CLI extension so that I can create Deployment Environments resources from the command line.
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
Previously updated : 02/28/2023 Last updated : 04/25/2023 # Manage your deployment environment
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 # What is Azure Deployment Environments Preview?
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Previously updated : 10/26/2022 Last updated : 04/25/2023 # Quickstart: Create and access Azure Deployment Environments by using the developer portal
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 02/08/2023 Last updated : 04/25/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Previously updated : 02/08/2023 Last updated : 04/25/2023 # Quickstart: Create and configure a project
dev-box Concept Common Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-common-components.md
Previously updated : 01/26/2022 Last updated : 04/25/2023 # Components common to Microsoft Dev Box and Azure Deployment Environments
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 #Customer intent: As a developer, I want to understand Dev Box concepts and terminology so that I can set up a Dev Box environment.
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
Previously updated : 10/17/2022 Last updated : 04/25/2023
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 #Customer intent: As a dev infrastructure manager, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region.
dev-box How To Configure Stop Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md
Previously updated : 12/19/2022 Last updated : 04/25/2023
dev-box How To Create Dev Boxes Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md
Previously updated : 09/18/2022 Last updated : 04/25/2023
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
Previously updated : 11/17/2022 Last updated : 04/25/2023
dev-box How To Dev Box User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
dev-box How To Get Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-get-help.md
Previously updated : 04/20/2023 Last updated : 04/25/2023
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
Previously updated : 03/19/2023 Last updated : 04/25/2023 Customer intent: As a dev infra admin, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
Previously updated : 10/10/2022 Last updated : 04/25/2023 #Customer intent: As a dev infrastructure manager, I want to be able to manage dev box definitions so that I can provide appropriate dev boxes to my users.
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 #Customer intent: As a dev infrastructure manager, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users.
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
Previously updated : 10/26/2022 Last updated : 04/25/2023
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 #Customer intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box Preview implementation.
dev-box How To Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md
Previously updated : 10/12/2022 Last updated : 04/25/2023
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Previously updated : 03/16/2023 Last updated : 04/25/2023 adobe-target: true
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Previously updated : 01/24/2023 Last updated : 04/25/2023 #Customer intent: As an enterprise admin, I want to understand how to create and configure dev box components so that I can provide dev box projects for my users.
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Previously updated : 10/12/2022 Last updated : 04/25/2023 #Customer intent: As a dev box user, I want to understand how to create and access a dev box so that I can start work.
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Previously updated : 03/29/2023 Last updated : 04/25/2023
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md
description: Learn how to set auto shutdown schedules and policies for Azure Dev
Previously updated : 12/18/2021 Last updated : 04/24/2023 # Configure auto shutdown for labs and VMs in DevTest Labs
After you update auto shutdown settings, you can see the activity logged in the
## Auto shutdown notifications
-When you enable notifications in auto shutdown configuration, lab users receive a notification 30 minutes before auto shutdown if any of their VMs will be affected. The notification gives users a chance to save their work before the shutdown. If the auto shutdown settings specify an email address, the notification sends to that email address. If the settings specify a webhook, the notification sends to the webhook URL.
+When you enable notifications in auto shutdown configuration, lab users receive a notification 30 minutes before auto shutdown affects any of their VMs. The notification gives users a chance to save their work before the shutdown. If the auto shutdown settings specify an email address, the notification sends to that email address. If the settings specify a webhook, the notification sends to the webhook URL.
The notification can also provide links that allow the following actions for each VM if someone needs to keep working:
To get started, create a logic app in Azure with the following steps:
1. At the top of the **Logic apps** page, select **Add**. 1. On the **Create Logic App** page:
+
+ |Name |Value |
+ |--|--|
+ |Subscription |Select your Azure Subscription. |
+ |Resource group |Select a resource group or create a new one. |
+ |Logic app name |Enter a descriptive name for your logic app. |
+ |Publish | Workflow |
+ |Region |Select a region near you or near other services your logic app accesses. |
+ |Plan type |Consumption. A consumption plan allows you to use the logic app designer to create your app. |
+ |Windows Plan |Accept the default App Service Plan (ASP). |
+ |Pricing plan |Accept the default Workflow Standard WS1 (210 total ACU, 3.5 GB memory, 1 vCPU) |
+ |Zone redundancy |Accept the default: Disabled. |
- - Select your Azure **Subscription**.
- - Select a **Resource Group** or create a new one.
- - Enter a **Logic App name**.
- - Select a **Region** for the logic app.
- - Select a **Plan type** for the logic app.
- - Select a **Windows Plan** for the logic app.
- - Select a **Pricing plan** for the logic app.
- - Enabled **Zone redundancy** if necessary.
:::image type="content" source="media/devtest-lab-auto-shutdown/new-logic-app-page.png" alt-text="Screenshot showing the Create Logic App page.":::
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
$labVmRgName = (Get-AzResource -Id $labVmComputeId).ResourceGroupName
$labVmName = (Get-AzResource -Id $labVmId).Name # Get lab VM public IP address
-$labVMIpAddress = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName
- -Name $labVmName).IpAddress
+$labVMIpAddress = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).IpAddress
# Get lab VM FQDN
-$labVMFqdn = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName
- -Name $labVmName).DnsSettings.Fqdn
+$labVMFqdn = (Get-AzPublicIpAddress -ResourceGroupName $labVmRgName -Name $labVmName).DnsSettings.Fqdn
# Set a variable labVmRgName to store the lab VM resource group name Write-Host "##vso[task.setvariable variable=labVmRgName;]$labVmRgName"
The next step creates a golden image VM to use for future deployments. This step
- **Virtual Machine Name**: the variable you specified for your virtual machine name: *$vmName*. - **Template**: Browse to and select the template file you checked in to your project repository. - **Parameters File**: If you checked a parameters file into your repository, browse to and select it.
- - **Parameter Overrides**: Enter `-newVMName '$(vmName)' -userName '$(userName)' -password (ConvertTo-SecureString -String '$(password)' -AsPlainText -Force)`.
- - Drop down **Output Variables**, and under **Reference name**, enter the variable for the created lab VM ID. If you use the default *labVmId*, you can refer to the variable in subsequent tasks as **$(labVmId)**.
+ - **Parameter Overrides**: Enter `-newVMName '$(vmName)' -userName '$(userName)' -password '$(password)'`.
+ - Drop down **Output Variables**, and under **Reference name**, enter the variable for the created lab VM ID. For simplicity, enter *vm* for **Reference name**. **labVmId** becomes an attribute of this variable, referenced later as *$(vm.labVmId)*. If you use any other name, remember to use it accordingly in the subsequent tasks.
- You can create a name other than the default, but remember to use the correct name in subsequent tasks. You can write the Lab VM ID in the following form: `/subscriptions/{subscription Id}/resourceGroups/{resource group Name}/providers/Microsoft.DevTestLab/labs/{lab name}/virtualMachines/{vmName}`.
+ Lab VM ID will be in the following form: `/subscriptions/{subscription Id}/resourceGroups/{resource group Name}/providers/Microsoft.DevTestLab/labs/{lab name}/virtualMachines/{vmName}`.
### Collect the details of the DevTest Labs VM
Next, the pipeline runs the script you created to collect the details of the Dev
- **Azure Subscription**: Select your service connection or subscription. - **Script Type**: Select **Script File Path**. - **Script Path**: Browse to and select the PowerShell script that you checked in to your source code repository. You can use built-in properties to simplify the path, for example: `$(System.DefaultWorkingDirectory)/Scripts/GetLabVMParams.ps1`.
- - **Script Arguments**: Enter the name of the **labVmId** variable you populated in the previous task, for example *-labVmId '$(labVmId)'*.
+ - **Script Arguments**: Enter the value as **-labVmId $(vm.labVmId)**.
The script collects the required values and stores them in environment variables within the release pipeline, so you can refer to them in later steps.
The next task creates an image of the newly deployed VM in your lab. You can use
- **Lab**: Select your lab. - **Custom Image Name**: Enter a name for the custom image. - **Description**: Enter an optional description to make it easy to select the correct image.
- - **Source Lab VM**: The source **labVmId**. If you changed the default name of the **labVmId** variable, enter it here. The default value is **$(labVmId)**.
+ - **Source Lab VM**: The source **labVmId**. Enter the value as **$(vm.labVmId)**.
- **Output Variables**: You can edit the name of the default Custom Image ID variable if necessary. ### Deploy your app to the DevTest Labs VM (optional)
The final task is to delete the VM that you deployed in your lab. You'd ordinari
1. Configure the task as follows: - **Azure RM Subscription**: Select your service connection or subscription. - **Lab**: Select your lab.
- - **Virtual Machine**: Select the VM you want to delete.
+ - **Virtual Machine**: Enter the value as **$(vm.labVmId)**.
- **Output Variables**: Under **Reference name**, if you changed the default name of the **labVmId** variable, enter it here. The default value is **$(labVmId)**. ### Save the release pipeline
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up
## April 2023
+### Enabled Monitoring of OSDU Service Logs
+
+Now you can configure the diagnostic settings of your Azure Data Manager for Energy Preview instance to export OSDU service logs to Azure Monitor. You can access, query, and analyze the logs in a Log Analytics workspace, and archive them in a storage account for later use.
+ ### Monitoring and investigating actions with Audit logs Knowing who is taking what action on which item is critical in helping organizations meet regulatory compliance and record management requirements. Azure Data Manager for Energy captures audit logs for data plane APIs of OSDU services and audit events listed [here](https://community.opengroup.org/osdu/documentation/-/wikis/Releases/R3.0/GCP/GCP-Operation/Logging/Audit-Logging-Status). Learn more about [audit logging in Azure Data Manager for Energy](how-to-manage-audit-logs.md).
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
If there are any existing resources that aren't compliant with this new assignme
under **Non-compliant resources**. For more information, see
-[How compliance works](./how-to/get-compliance-data.md#how-compliance-works).
+[How compliance works](./concepts/compliance-states.md).
## Clean up resources
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
The first step in understanding compliance in Azure is to identify the status of your resources. This quickstart steps you through the process of using an Azure Resource Manager template (ARM
-template) to create a policy assignment to identify virtual machines that aren't using managed
-disks. At the end of this process, you'll successfully identify virtual machines that aren't using
-managed disks. They're _non-compliant_ with the policy assignment.
+template) to create a policy assignment that identifies virtual machines that aren't using managed
+disks and flags them as _non-compliant_ with the policy assignment.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
The resource defined in the template is:
| Resource group | Select **Create new**, specify a name, and then select **OK**. In the screenshot, the resource group name is _mypolicyquickstart\<Date in MMDD\>rg_. | | Location | Select a region. For example, **Central US**. | | Policy Assignment Name | Specify a policy assignment name. You can use the policy definition display name if you want. For example, _Audit VMs that do not use managed disks_. |
- | Rg Name | Specify a resource group name where you want to assign the policy to. In this quickstart, use the default value **[resourceGroup().name]**. **[resourceGroup()](../../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)** is a template function that retrieves the resource group. |
+ | Resource Group Name | Specify the resource group name where you want to assign the policy. In this quickstart, use the default value **[resourceGroup().name]**. **[resourceGroup()](../../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)** is a template function that retrieves the resource group. |
| Policy Definition ID | Specify **/providers/Microsoft.Authorization/policyDefinitions/0a914e76-4921-4c19-b460-a2d36003525a**. | | I agree to the terms and conditions stated above | (Select) |
If there are any existing resources that aren't compliant with this new assignme
under **Non-compliant resources**. For more information, see
-[How compliance works](./how-to/get-compliance-data.md#how-compliance-works).
+[How compliance works](./concepts/compliance-states.md).
## Clean up resources
governance Compliance States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md
+
+ Title: Azure Policy compliance states
+description: This article describes the concept of compliance states in Azure Policy.
Last updated : 04/05/2023+++
+# Azure Policy compliance states
+
+## How compliance works
+
+When initiative or policy definitions are assigned, Azure Policy determines which resources are [applicable](./policy-applicability.md), and then evaluates those that haven't been [excluded](./assignment-structure.md#excluded-scopes) or [exempted](./exemption-structure.md). Evaluation yields **compliance states** based on conditions in the policy rule and each resource's adherence to those requirements.
+
+## Available compliance states
+
+### Non-compliant
+
+Policy assignments with `audit`, `auditIfNotExists`, or `modify` effects are considered non-compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **TRUE**.
+
+Policy assignments with `append`, `deny`, and `deployIfNotExists` effects are considered non-compliant for _existing_ resources when the conditions of the policy rule evaluate to **TRUE**. _New_ and _updated_ resources are automatically remediated or denied at request time to enforce compliance. When a previously existing non-compliant resource is updated, the compliance state remains non-compliant until the resource deployment and Policy evaluation complete.
+
+> [!NOTE]
+> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the
+> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers
+> evaluation of the existence condition for the related resources.
+
+Policy assignments with `manual` effects are considered non-compliant under two circumstances:
+1. The policy definition has a default compliance state of non-compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
+1. The resource has been attested as non-compliant.
+
+To determine
+the reason a resource is non-compliant or to find the change responsible, see
+[Determine non-compliance](../how-to/determine-non-compliance.md). To [remediate](./remediation-structure.md) non-compliant resources for `deployIfNotExists` and `modify` policies, see [Remediate non-compliant resources with Azure Policy](../how-to/remediate-resources.md).
+
+### Compliant
+
+Policy assignments with `append`, `audit`, `auditIfNotExists`, `deny`, `deployIfNotExists`, or `modify` effects are considered compliant for _new_, _updated_, or _existing_ resources when the conditions of the policy rule evaluate to **FALSE**.
+
+Policy assignments with `manual` effects are considered compliant under two circumstances:
+1. The policy definition has a default compliance state of compliant and there is no active [attestation](./attestation-structure.md) for the applicable resource stating otherwise.
+1. The resource has been attested as compliant.
+
+### Error
+
+The error compliance state is given to policy assignments that generate a system error, such as a template or evaluation error.
+
+### Conflicting
+
+A policy assignment is considered conflicting when two or more policy assignments exist in the same scope with contradicting or conflicting rules. For example, two definitions that append the same tag with different values.
+
+### Exempt
+
+An applicable resource has a compliance state of exempt for a policy assignment when it is in the scope of an [exemption](./exemption-structure.md).
+
+> [!NOTE]
+> _Exempt_ is different than _excluded_. For more details, see [scope](./scope.md).
+
+### Unknown (preview)
+
+Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect.
+
+### Not registered
+
+This compliance state is visible in the portal when the Azure Policy Resource Provider hasn't been registered, or when the signed-in account doesn't have permission to read compliance data.
+
+> [!NOTE]
+> If compliance state is being reported as **Not registered**, verify that the
+> **Microsoft.PolicyInsights** Resource Provider is registered and that the user has the appropriate Azure role-based access control (Azure RBAC) permissions as described in
+> [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy).
+> To register Microsoft.PolicyInsights, [follow these steps](../../../azure-resource-manager/management/resource-providers-and-types.md).
+
+### Not started
+
+This compliance state indicates that the evaluation cycle hasn't started for the policy or resource.
+
+## Example
+
+Now that you have an understanding of what compliance states exist and what each one means, let's look at an example using compliant and non-compliant states.
+
+Suppose you have a resource group, ContosoRG, with some storage accounts
+(highlighted in red) that are exposed to public networks.
+
+ Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red.
+
+In this example, you need to be wary of security risks. Assume you assign a policy definition that audits for storage accounts that are exposed to public networks, and that no exemptions are created for this assignment. The policy checks for applicable resources (which include all storage accounts in the ContosoRG resource group), then evaluates those resources that aren't excluded from evaluation. It audits the three storage accounts exposed to public networks, changing their compliance states to **Non-compliant**. The remainder are marked **Compliant**.
+
+ Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them.
+
+## Compliance rollup
+
+Compliance state is determined per-resource, per-policy assignment. However, we often need a big-picture view of the state of the environment, which is where aggregate compliance comes into play.
+
+There are several ways to view aggregated compliance results in the portal:
+
+| Aggregate compliance view | Factors determining compliance state |
+| | |
+| Scope | All policies within the selected scope |
+| Initiative | All policies within the initiative |
+| Initiative group or control | All policies within the group or control |
+| Policy | All applicable resources |
+| Resource | All applicable policies |
+
+### Comparing different compliance states
+
+So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? Azure Policy ranks each compliance state so that one "wins" over another in this situation. The rank order is:
+1. Non-compliant
+1. Compliant
+1. Error
+1. Conflicting
+1. Exempted
+1. Unknown (preview)
+
+> [!NOTE]
+> [Not started](#not-started) and [not registered](#not-registered) aren't considered in compliance rollup calculations.
+
+With this ranking, if there are both non-compliant and compliant states, then the rolled up aggregate would be non-compliant, and so on. Let's look at an example:
+
+Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant with the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource only shows as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, a resource that is non-compliant to at least one applicable policy in the initiative has an overall compliance state of non-compliant, regardless of the remaining applicable policies.
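As a purely illustrative sketch (not an Azure Policy API), the ranking and the initiative example above can be expressed like this:

```python
# Rollup ranking taken from the list above; the state earlier in the list
# (smaller number here) wins the aggregate.
ROLLUP_RANK = {
    "Non-compliant": 1,
    "Compliant": 2,
    "Error": 3,
    "Conflicting": 4,
    "Exempted": 5,
    "Unknown": 6,
}

def aggregate_state(states):
    """Return the winning compliance state for a set of per-policy states."""
    ranked = [s for s in states if s in ROLLUP_RANK]
    return min(ranked, key=ROLLUP_RANK.get) if ranked else "Not started"

# A resource exempt from one policy but compliant with the remaining nine
# rolls up as Compliant, matching the initiative example above.
print(aggregate_state(["Exempted"] + ["Compliant"] * 9))       # Compliant

# Any single non-compliant policy makes the overall state Non-compliant.
print(aggregate_state(["Non-compliant"] + ["Compliant"] * 9))  # Non-compliant
```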
+
+### Compliance percentage
+
+The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total resources_. _Total resources_ include **Compliant**, **Non-compliant**,
+**Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct
+resources that are **Compliant**, **Exempt**, and **Unknown** divided by the sum of all distinct resources.
+
+```text
+overall compliance % = (compliant + exempt + unknown) / (compliant + non-compliant + exempt + conflicting)
+```
+
+In the image shown, there are 20 distinct resources that are applicable and only one is **Non-compliant**.
+The overall resource compliance is 95% (19 out of 20).
++
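As a quick check of the formula, the following sketch reproduces the 19-of-20 example:

```python
def overall_compliance(compliant, non_compliant, exempt, conflicting, unknown=0):
    """Overall compliance percentage using the formula above."""
    total = compliant + non_compliant + exempt + conflicting
    return 100 * (compliant + exempt + unknown) / total

# 20 distinct applicable resources, one of which is non-compliant.
print(overall_compliance(compliant=19, non_compliant=1, exempt=0, conflicting=0))  # 95.0
```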
+## Next steps
+
+- Learn how to [get compliance data](../how-to/get-compliance-data.md)
+- Learn how to [determine causes of non-compliance](../how-to/determine-non-compliance.md)
+- Get compliance data through [ARG query samples](../samples/resource-graph-samples.md)
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md
The following table is a comparison of the scope options:
|**Resource Manager object** | - | - | &#10004; | |**Requires modifying policy assignment object** | &#10004; | &#10004; | - |
+So how do you choose whether to use an exclusion or an exemption? Typically, exclusions are recommended to permanently bypass evaluation for a broad scope, like a test environment that doesn't require the same level of governance. Exemptions are recommended for time-bound or more specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance.
+ ## Next steps - Learn about the [policy definition structure](./definition-structure.md).
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
wrong location, enforce common and consistent tag usage, or audit existing resou
appropriate configurations and settings. In all cases, data is generated by Azure Policy to enable you to understand the compliance state of your environment.
+Before reviewing compliance data, it's important to [understand compliance states](../concepts/compliance-states.md) in Azure Policy.
+ There are several ways to access the compliance information generated by your policy and initiative assignments: - Using the [Azure portal](#portal) - Through [command line](#command-line) scripting
+- By viewing [Azure Monitor logs](#azure-monitor-logs)
+- Through [Azure Resource Graph](#azure-resource-graph) queries
Before looking at the methods to report on compliance, let's look at when compliance information is updated and the frequency and events that trigger an evaluation cycle.
-> [!WARNING]
-> If compliance state is being reported as **Not registered**, verify that the
-> **Microsoft.PolicyInsights** Resource Provider is registered and that the user has the appropriate
-> Azure role-based access control (Azure RBAC) permissions as described in
-> [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy).
- ## Evaluation triggers The results of a completed evaluation cycle are available in the `Microsoft.PolicyInsights` Resource
with the status:
#### On-demand evaluation scan - Visual Studio Code
-The Azure Policy extension for Visual Studio code is capable of running an evaluation scan for a
+The Azure Policy extension for Visual Studio Code is capable of running an evaluation scan for a
specific resource. This scan is a synchronous process, unlike the Azure PowerShell and REST methods. For details and steps, see [On-demand evaluation with the VS Code extension](./extension-for-vscode.md#on-demand-evaluation-scan).
-## How compliance works
-
-When initiative or policy definitions are assigned and evaluated, resulting compliance states are determined based on conditions in the policy rule and resources' adherence to those requirements.
-
-Azure Policy supports the following compliance states:
-- Non-compliant-- Compliant-- Conflict-- Exempted-- Unknown (preview)-
-### Compliant and non-compliant states
-
-In an assignment, a resource is **non-compliant** if it's applicable to the policy assignment and doesn't adhere to conditions in the policy rule. The following table shows how different policy effects work with the condition evaluation for the resulting compliance state:
-
-| Resource State | Effect | Policy Evaluation | Compliance State |
-| | | | |
-| New or Updated | Audit, Modify, AuditIfNotExist | True | Non-Compliant |
-| New or Updated | Audit, Modify, AuditIfNotExist | False | Compliant |
-| Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | True | Non-Compliant |
-| Exists | Deny, Audit, Append, Modify, DeployIfNotExist, AuditIfNotExist | False | Compliant |
-
-> [!NOTE]
-> The DeployIfNotExist and AuditIfNotExist effects require the IF statement to be TRUE and the
-> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers
-> evaluation of the existence condition for the related resources.
-
-#### Example
-
-For example, assume that you have a resource group - ContsoRG, with some storage accounts
-(highlighted in red) that are exposed to public networks.
-
- Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red.
-
-In this example, you need to be wary of security risks. Now that you've created a policy assignment,
-it's evaluated for all included and non-exempt storage accounts in the ContosoRG resource group. It
-audits the three non-compliant storage accounts, changing their states to
-**Non-compliant.**
-
- Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three now have green checkmarks beneath them, while storage accounts two, four, and five now have red warning signs beneath them.
-
-#### Understand non-compliance
-
-When a resource is determined to be **non-compliant**, there are many possible reasons. To determine
-the reason a resource is **non-compliant** or to find the change responsible, see
-[Determine non-compliance](./determine-non-compliance.md).
-
-### Other compliance states
-
-Besides **Compliant** and **Non-compliant**, policies and resources have four other states:
--- **Exempt**: The resource is in scope of an assignment, but has a
- [defined exemption](../concepts/exemption-structure.md).
-- **Conflicting**: Two or more policy definitions exist with conflicting rules. For example, two
- definitions append the same tag with different values.
-- **Not started**: The evaluation cycle hasn't started for the policy or resource.-- **Not registered**: The Azure Policy Resource Provider hasn't been registered or the account
- logged in doesn't have permission to read compliance data.
-
-Azure Policy relies on several factors to determine whether a resource is considered [applicable](../concepts/policy-applicability.md), then to determine its compliance state.
-
-The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total
-resources_. _Total resources_ include **Compliant**, **Non-compliant**,
-**Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct
-resources that are **Compliant**, **Exempt**, and **Unknown** divided by the sum of all distinct resources. In the
-image below, there are 20 distinct resources that are applicable and only one is **Non-compliant**.
-The overall resource compliance is 95% (19 out of 20).
--
-> [!NOTE]
-> Regulatory Compliance in Azure Policy is a Preview feature. Compliance properties from SDK and
-> pages in portal are different for enabled initiatives. For more information, see
-> [Regulatory Compliance](../concepts/regulatory-compliance.md)
-
-### Compliance rollup
-
-There are several ways to view aggregated compliance results:
-
-| Aggregate scope | Factors determining resulting compliance state |
-| | |
-| Initiative | All policies within |
-| Initiative group or control | All policies within |
-| Policy | All applicable resources |
-| Resource | All applicable policies |
-
-So how is the aggregate compliance state determined if multiple resources or policies have different compliance states themselves? This is done by ranking each compliance state so that one "wins" over another in this situation. The rank order is:
-1. Non-compliant
-1. Compliant
-1. Conflict
-1. Exempted
-1. Unknown (preview)
-
-This means that if there are both non-compliant and compliant states, the rolled up aggregate would be non-compliant, and so on. Let's look at an example.
-
-Assume an initiative contains 10 policies, and a resource is exempt from one policy but compliant to the remaining nine. Because a compliant state has a higher rank than an exempted state, the resource would register as compliant in the rolled-up summary of the initiative. So, a resource will only show as exempt for the entire initiative if it's exempt from, or has unknown compliance to, every other single applicable policy in that initiative. On the other extreme, if the resource is non-compliant to at least one applicable policy in the initiative, it will have an overall compliance state of non-compliant, regardless of the remaining applicable policies.
- ## Portal The Azure portal showcases a graphical experience of visualizing and understanding the state of
logs, alerts can be configured to watch for non-compliance.
:::image type="content" source="../media/getting-compliance-data/compliance-loganalytics.png" alt-text="Screenshot of Azure Monitor logs showing Azure Policy actions in the AzureActivity table." border="false":::
+## Azure Resource Graph
+
+Compliance records are stored in Azure Resource Graph (ARG). Data can be exported from ARG queries to form customized dashboards based on the scopes and policies of interest. Review our [sample queries](../samples/resource-graph-samples.md) for exporting compliance data through ARG.
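As a hedged sketch of one way to pull this data programmatically, the following assumes the azure-identity and azure-mgmt-resourcegraph Python packages and a placeholder subscription ID; the projected columns are typical of the PolicyResources table but may need adjusting for your scenario.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# Non-compliant policy states; adjust the projected columns to your needs.
query = """
PolicyResources
| where type =~ 'microsoft.policyinsights/policystates'
| where properties.complianceState =~ 'NonCompliant'
| project resourceId = tostring(properties.resourceId),
          assignment = tostring(properties.policyAssignmentName)
"""

# '<subscription-id>' is a placeholder for your own subscription.
request = QueryRequest(subscriptions=["<subscription-id>"], query=query)
result = client.resources(request)
print(result.total_records)
print(result.data)
```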
+ ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).
hdinsight Apache Domain Joined Configure Using Azure Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
description: Learn how to set up and configure an HDInsight cluster integrated w
Previously updated : 04/01/2022 Last updated : 04/25/2023 # Configure HDInsight clusters for Azure Active Directory integration with Enterprise Security Package
hdinsight Apache Hadoop Use Hive Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-dotnet-sdk.md
description: Learn how to submit Apache Hadoop jobs to Azure HDInsight Apache Ha
Previously updated : 12/24/2019 Last updated : 04/24/2023 # Run Apache Hive queries using HDInsight .NET SDK
hdinsight Apache Hadoop Use Sqoop Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-curl.md
Title: Use Curl to export data with Apache Sqoop in Azure HDInsight
description: Learn how to remotely submit Apache Sqoop jobs to Azure HDInsight using Curl. Previously updated : 01/06/2020 Last updated : 04/25/2023 # Run Apache Sqoop jobs in HDInsight with Curl
hdinsight Using Json In Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/using-json-in-hive.md
description: Learn how to use JSON documents and analyze them by using Apache Hi
Previously updated : 04/01/2022 Last updated : 04/24/2023 # Process and analyze JSON documents by using Apache Hive in Azure HDInsight
hdinsight Hdinsight Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-capacity-planning.md
description: Identify key questions for capacity and performance planning of an
Previously updated : 09/08/2022 Last updated : 04/25/2023 # Capacity planning for HDInsight clusters
You're charged for a cluster's lifetime. If there are only specific times that y
Sometimes errors can occur because of the parallel execution of multiple map and reduce components on a multi-node cluster. To help isolate the issue, try distributed testing. Run multiple jobs concurrently on a single worker node cluster. Then expand this approach to run multiple jobs concurrently on clusters containing more than one node. To create a single-node HDInsight cluster in Azure, use the *`Custom(size, settings, apps)`* option and use a value of 1 for *Number of Worker nodes* in the **Cluster size** section when provisioning a new cluster in the portal.
+## View quota management for HDInsight
+
+View a granular breakdown and categorization of the quota at the VM family level. You can see the current quota and how much quota remains for a region at the VM family level.
+
+> [!NOTE]
+> This feature is currently available on HDInsight 4.x and 5.x in the East US EUAP region. Other regions will follow.
+
+1. View current quota:
+
+ See the current quota and how much quota is remaining for a region at a VM family level.
+
+ 1. From the Azure portal, in the top search bar, search for and select **Quotas**.
+ 1. From the **Quotas** page, select **Azure HDInsight**.
+
+ :::image type="content" source="./media/hdinsight-capacity-planning/hdinsight-search-quota.png" alt-text="Screenshot showing how to search quotas." lightbox="./media/hdinsight-capacity-planning/hdinsight-search-quota.png":::
+
+    1. From the dropdown box, select your **Subscription** and **Region**.
+
+ :::image type="content" source="./media/hdinsight-capacity-planning/select-cluster-and-region.png" alt-text="Screenshot showing how to select cluster and region for quota allocation." lightbox="./media/hdinsight-capacity-planning/select-cluster-and-region.png":::
+
+ :::image type="content" source="./media/hdinsight-capacity-planning/view-and-manage-quota.png" alt-text="Screenshot showing how to view and manage quota." lightbox="./media/hdinsight-capacity-planning/view-and-manage-quota.png":::
+
+1. View quota details:
+
+    1. Select the row for which you want to view the quota details.
+
+ :::image type="content" source="./media/hdinsight-capacity-planning/quota-details.png" alt-text="Screenshot showing the quota details." lightbox="./media/hdinsight-capacity-planning/quota-details.png":::
+
+
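+Outside the portal, you can get a similar region-level view of compute usage against quota by VM family with the Azure CLI. Note that this is a general compute usage command, not HDInsight-specific, so it reflects all VM usage in the subscription for that region; the region below is a placeholder.
+
+```azurecli
+# Shows current usage and limits per VM family for the region
+az vm list-usage --location eastus --output table
+```
+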
## Quotas For more information on managing subscription quotas, see [Requesting quota increases](quota-increase-request.md).
hdinsight Hdinsight Hadoop Manage Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari.md
description: Learn how to use Apache Ambari UI to monitor and manage HDInsight c
Previously updated : 04/01/2022 Last updated : 04/25/2023 # Manage HDInsight clusters by using the Apache Ambari Web UI
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upload-data.md
description: Learn how to upload and access data for Apache Hadoop jobs in HDIns
Previously updated : 04/27/2020 Last updated : 04/25/2023 # Upload data for Apache Hadoop jobs in HDInsight
hdinsight Hdinsight Connect Hive Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hdinsight-connect-hive-zeppelin.md
description: In this quickstart, you learn how to use Apache Zeppelin to run Apa
Previously updated : 12/28/2012 Last updated : 04/25/2023 #Customer intent: As a Hive user, I want learn Zeppelin so that I can run queries.
hdinsight Interactive Query Troubleshoot View Time Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-view-time-out.md
Title: Apache Hive View times out from query result - Azure HDInsight
description: Apache Hive View times out when fetching a query result in Azure HDInsight Previously updated : 02/11/2022 Last updated : 04/24/2023 # Scenario: Apache Hive View times out when fetching a query result in Azure HDInsight
This article describes troubleshooting steps and possible resolutions for issues
## Issue
-When running certain queries from the Apache Hive view, the following error may be encountered:
+When you run certain queries from the Apache Hive view, you might encounter the following error:
``` ERROR [ambari-client-thread-1] [HIVE 2.0.0 AUTO_HIVE20_INSTANCE] NonPersistentCursor:131 - Result fetch timed out
java.util.concurrent.TimeoutException: deadline passed
## Cause
-The Hive View default timeout value may not be suitable for the query you are running. The specified time period is too short for the Hive View to fetch the query result.
+The Hive View default timeout value may not be suitable for the query you're running. The specified time period is too short for the Hive View to fetch the query result.
## Resolution
The Hive View default timeout value may not be suitable for the query you are ru
``` Confirm the Hive View instance name `AUTO_HIVE20_INSTANCE` by going to YOUR_USERNAME > Manage Ambari > Views. Get the instance name from the Name column. If it doesn't match, then replace this value. **Do not use the URL Name column**.
-2. Restart the active Ambari server by running the following. If you get an error message saying it's not the active Ambari server, just ssh into the next headnode and repeat this step. Note down the PID of the current Ambari server process.
+2. Restart the active Ambari server by running the following. If you get an error message saying it's not the active Ambari server, ssh into the next headnode and repeat this step. Note down the PID of the current Ambari server process.
``` sudo ambari-server status sudo systemctl restart ambari-server ```
-3. Confirm Ambari server actually restarted. If you followed the steps, you will notice the PID has changed.
+3. Confirm that the Ambari server restarted. If you followed the steps, the PID of the Ambari server process has changed.
``` sudo ambari-server status ``` ## Notes
-If you get a 502 error, then that is coming from the HDI gateway. You can confirm by opening web inspector, go to network tab, then re-submit query. You'll see a request fail, returning a 502 status code, and the time will show 2 mins elapsed.
+If you get a 502 error, it's coming from the HDInsight gateway. To confirm, open the web inspector, go to the network tab, and then resubmit the query. You see a request fail with a 502 status code after about two minutes.
-The query is not suited for Hive View. It is recommended that you either try the following instead:
+The query isn't well suited for Hive View. Try one of the following approaches instead (a Beeline connection sketch follows this list):
- Use beeline-- Re-write the query to be more optimal
+- Rewrite the query to be more optimal
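+
+For reference, here's a minimal Beeline connection from an SSH session on the cluster head node, assuming the default HDInsight Hive configuration (HTTP transport on port 10001):
+
+```
+beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http'
+```
+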
## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
> [!Note] > Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/).
+## **April 2023**
+
+**Fixed transient issues associated with loading custom search parameters**
+This bug fix addresses the issue where the FHIR service wouldn't load the latest SearchParameter status in the event of a failure.
+For more details, see [#3222](https://github.com/microsoft/fhir-server/pull/3222).
+ ## **November 2022** **Fixed the Error generated when resource is updated using if-match header and PATCH**
healthcare-apis Dicom Cast Access Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-access-request.md
- Title: DICOM access request reference guide - Azure Health Data Services
-description: This reference guide provides information about to create an Azure support ticket to request DICOMcast access.
---- Previously updated : 06/03/2022---
-# DICOMcast access request
-
-This article describes how to request DICOMcast access.
-
-## Create Azure support ticket
-
-To enable DICOMcast for your Azure subscription, please request access for DICOMcast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
-
-> [!IMPORTANT]
-> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
-
-### Basics tab
-
-1. In the **Summary** field, enter "Access request for DICOMcast".
-
- [ ![Screenshot of basic tab in new support request.](media/new-support-request-basic-tab.png) ](media/new-support-request-basic-tab.png#lightbox)
-
-1. Select the **Issue type** drop-down list, and then select **Technical**.
-1. Select the **Subscription** drop-down list, and then select your Azure subscription.
-1. Select the **Service type** drop-down list, and then select **Azure Health Data Services**.
-1. Select the **Resource** drop-down list, and then select your resource.
-1. Select the **Problem** drop-down list, and then select **DICOM service**.
-1. Select the **Problem subtype** drop-down list, and then select **About the DICOM service**.
-1. Select **Next Solutions**.
-1. From the **Solutions** tab, select **Next Details**.
-
-### Details tab
-
-1. Under the **Problem details** section, select today's date to submit your support request. You may keep the default time as 12:00AM.
-
- [ ![Screenshot of details tab in new support request.](media/new-support-request-details-tab.png) ](media/new-support-request-details-tab.png#lightbox)
-
-1. In the **Description** box, ensure to include the Resource IDs of your FHIR service and DICOM service.
-
- > [!NOTE]
- > To obtain your DICOM service and FHIR service resource IDs, select your DICOM service instance in the Azure portal, and select the **Properties** blade that's listed under **Settings**.
-
-1. File upload isn't required, so you may omit this option.
-1. Under the **Support method** section, select the **Severity** and the **Preferred contact method** options.
-1. Select **Next: Review + Create >>**.
-1. In the **Review + create** tab, select **Create** to submit your Azure support ticket.
--
-## Next steps
-
-This article described the steps for creating an Azure support ticket to request DICOMcast access. For more information about using the DICOM service, see
-
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
-For more information about DICOMcast, see
-
->[!div class="nextstepaction"]
->[DICOMcast overview](dicom-cast-overview.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
# DICOMcast overview
+> [!NOTE]
+> On **July 31, 2023**, DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration).
+ DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning. ## Architecture
DICOM has different date time VR types. Some tags (like Study and Series) have t
## Summary
-In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available on demand. To enable DICOMcast for your Azure subscription, please request access for DICOMcast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). For more information about requesting access to DICOMcast, see [DICOMcast request access](dicom-cast-access-request.md).
+In this article, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).
> [!IMPORTANT] > Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
The DICOM service is a managed service within [Azure Health Data Services](../he
- **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data. - **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding list of tags that are already specified within [DICOM Conformance Statement](dicom-services-conformance-statement.md). - **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace.-- **DICOMcast**: Via DICOMcast, DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. This feature is available on demand. To enable DICOMcast for your Azure subscription, please request access for DICOMcast via opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket.
+- **DICOMcast**: Via DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource, allowing a single source of truth for both clinical data and imaging metadata. This feature is available as an open-source component that can be self-hosted in Azure. Learn more about [deploying DICOMcast](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).
- **Region availability**: DICOM service has wide-range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection and continuously expanding. - **Scalability**: DICOM service is designed out-of-the-box to support different workload levels at a hospital, region, country and global scale without sacrificing any performance spec by using autoscaling features. - **Role-based access**: You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
You can find more details on DICOMweb standard APIs and change feed in the [DICO
#### DICOMcast
-DICOMcast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project, and it's under private preview as a managed service. To enable DICOMcast as a managed service for your Azure subscription, request access by creating an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/) by following the guidance in the article [DICOMcast access request](dicom-cast-access-request.md).
+DICOMcast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project.
## Next steps
healthcare-apis Deploy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md
+
+ Title: Deploy the MedTech service using an Azure Resource Manager template - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template.
+++++ Last updated : 04/14/2023+++
+# Quickstart: Deploy the MedTech service using an Azure Resource Manager template
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a [JavaScript Object Notation (JSON)](https://www.json.org/) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
+
+In this quickstart, you'll learn how to:
+
+- Open an ARM template in the Azure portal.
+- Configure the ARM template for your deployment.
+- Deploy the ARM template.
+
+> [!TIP]
+> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md)
+
+## Prerequisites
+
+To begin your deployment and complete the quickstart, you must have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
+
+- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button.
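+
+If you're not sure whether those resource providers are already registered, the following Azure CLI sketch checks and registers them. It assumes you're signed in and that your subscription context is already set:
+
+```azurecli
+# Check the current registration state
+az provider show --namespace Microsoft.HealthcareApis --query registrationState --output tsv
+az provider show --namespace Microsoft.EventHub --query registrationState --output tsv
+
+# Register if either shows "NotRegistered"
+az provider register --namespace Microsoft.HealthcareApis
+az provider register --namespace Microsoft.EventHub
+```
+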
+
+## Review the ARM template - Optional
+
+The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+
+## Use the Deploy to Azure button
+
+To begin deployment in the Azure portal, select the **Deploy to Azure** button:
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json).
+
+## Configure the deployment
+
+1. In the Azure portal, on the Basics tab of the Azure Quickstart Template, select or enter the following information for your deployment:
+
+ - **Subscription** - The Azure subscription to use for the deployment.
+
+ - **Resource group** - An existing resource group, or you can create a new resource group.
+
+ - **Region** - The Azure region of the resource group that's used for the deployment. Region autofills by using the resource group region.
+
+ - **Basename** - A value that's appended to the name of the Azure resources and services that are deployed.
+
+   - **Location** - Use the drop-down list to select a supported Azure region for Azure Health Data Services (the value can be the same as, or different from, the region of your resource group).
+
+ - **Device Mapping** - Don't change the default values for this quickstart.
+
+ - **Destination Mapping** - Don't change the default values for this quickstart.
+
+ :::image type="content" source="media\deploy-new-arm\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-new-arm\iot-deploy-quickstart-options.png":::
+
+2. To validate your configuration, select **Review + create**.
+
+ :::image type="content" source="media\deploy-new-arm\iot-review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal.":::
+
+3. In **Review + create**, check the template validation status. If validation is successful, the template displays **Validation Passed**. If validation fails, fix the detail that's indicated in the error message, and then select **Review + create** again.
+
+ :::image type="content" source="media\deploy-new-arm\iot-validation-completed.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message.":::
+
+4. After a successful validation, to begin the deployment, select **Create**.
+
+ :::image type="content" source="media\deploy-new-arm\iot-create-button.png" alt-text="Screenshot that shows the highlighted Create button.":::
+
+5. In a few minutes, the Azure portal displays the message that your deployment is completed.
+
+ :::image type="content" source="media\deploy-new-arm\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete.":::
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Review deployed resources and access permissions
+
+When deployment is completed, the following resources and access roles are created in the ARM template deployment:
+
+- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.
+
+ - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
+
+- A Health Data Services workspace.
+
+- A Health Data Services FHIR service.
+
+- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+
+    - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+
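+If you want to verify the Event Hubs pieces from the command line, the following Azure CLI sketch retrieves the connection string for the *devicedatasender* authorization rule on the *devicedata* event hub. The resource group and Event Hubs namespace names are placeholders you'd replace with the values from your deployment:
+
+```azurecli
+az eventhubs eventhub authorization-rule keys list \
+    --resource-group <ResourceGroupName> \
+    --namespace-name <EventHubsNamespaceName> \
+    --eventhub-name devicedata \
+    --name devicedatasender \
+    --query primaryConnectionString \
+    --output tsv
+```
+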
+> [!IMPORTANT]
+> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A patient resource and a device resource are created for each device that sends data to your FHIR service.
+>
+> To learn more about the MedTech service resolution types Create and Lookup, see [Destination properties](deploy-new-config.md#destination-properties).
+
+## Post-deployment mappings
+
+After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings.
+
+ - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md).
+
+ - To learn about the FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+
+## Next steps
+
+In this quickstart, you learned how to deploy an instance of the MedTech service in the Azure portal using an ARM template with a **Deploy to Azure** button.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md
+
+ Title: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI.
+++++ Last updated : 04/14/2023+++
+# Quickstart: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure.
+
+In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.
+
+> [!TIP]
+> To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep)
+
+## Prerequisites
+
+To begin your deployment and complete the quickstart, you must have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
+
+- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
+ - For Azure PowerShell, you'll also need to install [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart.
+
+When you have these prerequisites, you're ready to deploy the Bicep file.
+
+## Review the Bicep file - Optional
+
+The Bicep file used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *main.bicep* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+
+## Save the Bicep file locally
+
+Save the Bicep file locally as *main.bicep*. You'll need to have the working directory of your Azure PowerShell or the Azure CLI console pointing to the location where this file is saved.
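+
+Optionally, you can confirm that the saved file compiles before deploying it. The following sketch assumes the Azure CLI's built-in Bicep support; `az bicep build` transpiles *main.bicep* into an ARM JSON template and surfaces syntax errors early:
+
+```azurecli
+# Install or update the Bicep tooling bundled with the Azure CLI, then compile the file
+az bicep install
+az bicep build --file main.bicep
+```
+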
+
+## Deploy the MedTech service with the Bicep file and Azure PowerShell
+
+Complete the following five steps to deploy the MedTech service using Azure PowerShell:
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
+
+ ```azurepowershell
+ Set-AzContext <AzureSubscriptionId>
+ ```
+
+ For example: `Set-AzContext abcdef01-2345-6789-0abc-def012345678`
+
+3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available.
+
+ You can also review the **location** section of the locally saved *main.bicep* file.
+
+ If you need a list of the Azure regions location names, you can use this code to display a list:
+
+ ```azurepowershell
+ Get-AzLocation | Format-Table -Property DisplayName,Location
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurepowershell
+ New-AzResourceGroup -name <ResourceGroupName> -location <AzureRegion>
+ ```
+
+ For example: `New-AzResourceGroup -name BicepTestDeployment -location southcentralus`
+
+ > [!IMPORTANT]
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. The minimum basename requirement is three characters with a maximum of 16 characters.
+
+5. Use the following code to deploy the MedTech service using the Bicep file:
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroupName> -TemplateFile main.bicep -basename <BaseName> -location <AzureRegion>
+ ```
+
+ For example: `New-AzResourceGroupDeployment -ResourceGroupName BicepTestDeployment -TemplateFile main.bicep -basename abc123 -location southcentralus`
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Deploy the MedTech service with the Bicep file and the Azure CLI
+
+Complete the following five steps to deploy the MedTech service using the Azure CLI:
+
+1. Sign in to Azure.
+
+ ```azurecli
+ az login
+ ```
+
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
+
+ ```azurecli
+    az account set --subscription <AzureSubscriptionId>
+ ```
+
+   For example: `az account set --subscription abcdef01-2345-6789-0abc-def012345678`
+
+3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available.
+
+ You can also review the **location** section of the locally saved *main.bicep* file.
+
+ If you need a list of the Azure regions location names, you can use this code to display a list:
+
+ ```azurecli
+ az account list-locations -o table
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurecli
+ az group create --resource-group <ResourceGroupName> --location <AzureRegion>
+ ```
+
+ For example: `az group create --resource-group BicepTestDeployment --location southcentralus`
+
+ > [!IMPORTANT]
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources.
+
+5. Use the following code to deploy the MedTech service using the Bicep file:
+
+ ```azurecli
+ az deployment group create --resource-group BicepTestDeployment --template-file main.bicep --parameters basename=<BaseName> location=<AzureRegion>
+ ```
+
+ For example: `az deployment group create --resource-group BicepTestDeployment --template-file main.bicep --parameters basename=abc location=southcentralus`
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Review deployed resources and access permissions
+
+When deployment is completed, the following resources and access roles are created in the Bicep file deployment:
+
+- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.
+
+ - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
+
+- A Health Data Services workspace.
+
+- A Health Data Services FHIR service.
+
+- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+
+    - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+
+> [!IMPORTANT]
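+As an optional check, you can list everything the Bicep deployment created in the resource group with the Azure CLI; the resource group name is a placeholder:
+
+```azurecli
+az resource list --resource-group <ResourceGroupName> --output table
+```
+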
+> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service.
+>
+> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties).
+
+## Post-deployment mappings
+
+After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings.
+
+- To learn about the device mapping, see [Overview of the device mapping](overview-of-device-mapping.md).
+
+- To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+
+## Clean up Azure PowerShell deployed resources
+
+When your resource group and deployed Bicep file resources are no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name <ResourceGroupName>
+```
+
+For example: `Remove-AzResourceGroup -Name BicepTestDeployment`
+
+## Clean up the Azure CLI deployed resources
+
+When your resource group and deployed Bicep file resources are no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+```azurecli
+az group delete --name <ResourceGroupName>
+```
+
+For example: `az group delete --name BicepTestDeployment`
+
+> [!TIP]
+> For a step-by-step tutorial that guides you through the process of creating a Bicep file, see [Build your first Bicep template](/training/modules/build-first-bicep-template/).
+
+## Next steps
+
+In this quickstart, you learned about how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Choose Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md
+
+ Title: Choose a deployment method for the MedTech service - Azure Health Data Services
+description: In this article, learn about the different methods for deploying the MedTech service.
+++++ Last updated : 04/25/2023+++
+# Quickstart: Choose a deployment method for the MedTech service
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+The MedTech service provides multiple methods for deployment into Azure. Each deployment method has different advantages that allow you to customize your deployment to suit your needs and use cases.
+
+In this quickstart, learn about these deployment methods:
+
+* Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button.
+* ARM template using the **Deploy to Azure** button.
+* ARM template using Azure PowerShell or the Azure CLI.
+* Bicep file using Azure PowerShell or the Azure CLI.
+* Manually in the Azure portal.
+
+## Deployment overview
+
+The following diagram outlines the basic steps of the MedTech service deployment. These steps may help you analyze the deployment options and determine which deployment method is best for you.
++
+## ARM template including an Azure IoT Hub using the Deploy to Azure button
+
+Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment and most configuration steps, and it uses the Azure portal. The deployed MedTech service and Azure IoT Hub are fully functional, including conforming and valid device and FHIR destination mappings. Use the Azure IoT Hub to create devices and send device messages to the MedTech service.
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
+
+To learn more about deploying the MedTech service including an Azure IoT Hub using an ARM template and the **Deploy to Azure** button, see [Receive device messages through Azure IoT Hub](device-messages-through-iot-hub.md).
+
+## ARM template using the Deploy to Azure button
+
+Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment and most configuration steps, and it uses the Azure portal. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional.
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json).
+
+To learn more about deploying the MedTech service using an ARM template and the **Deploy to Azure** button, see [Deploy the MedTech service using an Azure Resource Manager template](deploy-arm-template.md).
+
+## ARM template using Azure PowerShell or the Azure CLI
+
+Using an ARM template with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional.
+
+To learn more about deploying the MedTech service using an ARM template and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI](deploy-json-powershell-cli.md).
+
+## Bicep file using Azure PowerShell or the Azure CLI
+
+Using a Bicep file with Azure PowerShell or the Azure CLI is a more advanced deployment method. This deployment method can be useful for adding automation and repeatability so that you can scale and customize your deployments. The deployed MedTech service requires conforming and valid device and FHIR destination mappings to be fully functional.
+
+To learn more about deploying the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI](deploy-bicep-powershell-cli.md).
+
+## Manually in the Azure portal
+
+Deploying manually in the Azure portal lets you see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
+
+To learn more about deploying the MedTech service manually using the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-manual-prerequisites.md).
+
+> [!IMPORTANT]
+> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+>
+> Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+>
+> Examples:
+>
+> * Two MedTech services accessing the same device message event hub.
+>
+> * A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Next steps
+
+In this quickstart, you learned about the different types of deployment methods for the MedTech service.
+
+To learn about the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [What is the MedTech service?](overview.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Json Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md
+
+ Title: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI
+++++ Last updated : 04/14/2023+++
+# Quickstart: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
+
+In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template).
+
+> [!TIP]
+> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md)
+
+## Prerequisites
+
+To begin your deployment and complete the quickstart, you must have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
+
+- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
+
+When you have these prerequisites, you're ready to deploy the ARM template.
+
+## Review the ARM template - Optional
+
+The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
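+
+If you'd like Resource Manager to check the template and your parameter values before a full deployment, the following Azure CLI sketch runs a validation pass. It assumes the resource group already exists (creating one is covered in the steps below), and the basename and region are placeholders:
+
+```azurecli
+az deployment group validate \
+    --resource-group <ResourceGroupName> \
+    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json \
+    --parameters basename=<BaseName> location=<AzureRegion>
+```
+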
+
+## Deploy the MedTech service with the Azure Resource Manager template and Azure PowerShell
+
+Complete the following five steps to deploy the MedTech service using Azure PowerShell:
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
+
+ ```azurepowershell
+ Set-AzContext <AzureSubscriptionId>
+ ```
+
+ For example: `Set-AzContext abcdef01-2345-6789-0abc-def012345678`
+
+3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available.
+
+ You can also review the **location** section of the *azuredeploy.json* file.
+
+ If you need a list of the Azure regions location names, you can use this code to display a list:
+
+ ```azurepowershell
+ Get-AzLocation | Format-Table -Property DisplayName,Location
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurepowershell
+ New-AzResourceGroup -name <ResourceGroupName> -location <AzureRegion>
+ ```
+
+ For example: `New-AzResourceGroup -name ArmTestDeployment -location southcentralus`
+
+ > [!IMPORTANT]
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources. The minimum basename requirement is three characters with a maximum of 16 characters.
+
+5. Use the following code to deploy the MedTech service using the ARM template:
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment -ResourceGroupName <ResourceGroupName> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename <BaseName> -location <AzureRegion>
+ ```
+
+ For example: `New-AzResourceGroupDeployment -ResourceGroupName ArmTestDeployment -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename abc123 -location southcentralus`
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Deploy the MedTech service with the Azure Resource Manager template and the Azure CLI
+
+Complete the following five steps to deploy the MedTech service using the Azure CLI:
+
+1. Sign in to Azure.
+
+ ```azurecli
+ az login
+ ```
+
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
+
+ ```azurecli
+    az account set --subscription <AzureSubscriptionId>
+ ```
+
+   For example: `az account set --subscription abcdef01-2345-6789-0abc-def012345678`
+
+3. Confirm the location you want to deploy in. See the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=health-data-services) site for the current Azure regions where Azure Health Data Services is available.
+
+ You can also review the **location** section of the *azuredeploy.json* file.
+
+ If you need a list of the Azure regions location names, you can use this code to display a list:
+
+ ```azurecli
+ az account list-locations -o table
+ ```
+
+4. If you don't already have a resource group created for this quickstart, you can use this code to create one:
+
+ ```azurecli
+ az group create --resource-group <ResourceGroupName> --location <AzureRegion>
+ ```
+
+ For example: `az group create --resource-group ArmTestDeployment --location southcentralus`
+
+ > [!IMPORTANT]
+ > For a successful deployment of the MedTech service, you'll need to use numbers and lowercase letters for the basename of your resources.
+
+5. Use the following code to deploy the MedTech service using the ARM template:
+
+ ```azurecli
+ az deployment group create --resource-group <ResourceGroupName> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=<BaseName> location=<AzureRegion>
+ ```
+
+ For example: `az deployment group create --resource-group ArmTestDeployment --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=abc123 location=southcentralus`
+
+ > [!IMPORTANT]
+ > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
+## Review deployed resources and access permissions
+
+When deployment is completed, the following resources and access roles are created in the ARM template deployment:
+
+- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.
+
+ - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
+
+- A Health Data Services workspace.
+
+- A Health Data Services FHIR service.
+
+- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+
+    - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+
+> [!IMPORTANT]
+> In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service.
+>
+> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties).
+
+## Post-deployment mappings
+
+After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings.
+
+ - To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md).
+
+ - To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+
+## Clean up Azure PowerShell resources
+
+When your resource group and the resources deployed by the ARM template are no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name <ResourceGroupName>
+```
+
+For example: `Remove-AzResourceGroup -Name ArmTestDeployment`
+
+## Clean up the Azure CLI resources
+
+When your resource group and the resources deployed by the ARM template are no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+```azurecli
+az group delete --name <ResourceGroupName>
+```
+
+For example: `az group delete --name ArmTestDeployment`
+
+> [!TIP]
+> For a step-by-step tutorial that guides you through the process of creating an ARM template, see [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md).
+
+## Next steps
+
+In this quickstart, you learned how to use Azure PowerShell or Azure CLI to deploy an instance of the MedTech service using an ARM template.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Manual Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-config.md
+
+ Title: Configure the MedTech service for deployment using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to configure the MedTech service for manual deployment using the Azure portal.
++++ Last updated : 04/14/2023+++
+# Quickstart: Part 2: Configure the MedTech service for manual deployment using the Azure portal
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+Before you can manually deploy the MedTech service, you must complete the following configuration tasks:
+
+## Set up the MedTech service configuration
+
+Start with these three steps to begin configuring the MedTech service so that it's ready to accept your configuration input on each tab:
+
+1. Start by going to the Health Data Services workspace you created in the manual deployment [Prerequisites](deploy-new-manual.md#part-1-prerequisites) section. Select the **Create MedTech service** box.
+
+2. On the page that opens, select the **Add MedTech service** button.
+
+3. The **Create MedTech service** page opens. This page has five tabs you need to fill out:
+
+- Basics
+- Device mapping
+- Destination mapping
+- Tags (optional)
+- Review + create
+
+## Configure the Basics tab
+
+Follow these six steps to fill in the Basics tab configuration:
+
+1. Enter the **MedTech service name**.
+
+ The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we'll name the MedTech service `mt-azuredocsdemo`.
+
+2. Enter the **Event Hubs Namespace**.
+
+ The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you previously deployed. For this example, we'll use `eh-azuredocsdemo` with our MedTech service device messages.
+
+ > [!TIP]
+ > For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace).
+ >
+ > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document.
+
+3. Enter the **Event Hubs name**.
+
+ The Event Hubs name is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` with our MedTech service device messages.
+
+ > [!TIP]
+ > For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub).
+
+4. Enter the **Consumer group**.
+
+ The Consumer group name is located by going to the **Overview** page of the Event Hubs Namespace and selecting the event hub to be used for the MedTech service device messages. In this example, the event hub is named `devicedata`.
+
+5. When you're inside the event hub, select the **Consumer groups** button under **Entities** to display the name of the consumer group to be used by your MedTech service.
+
+6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment.
+
+ > [!IMPORTANT]
 + > If you're going to allow access from multiple services to the device message event hub, it's highly recommended that each service have its own event hub consumer group (a CLI sketch for creating an additional consumer group follows this note).
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
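+
+If you do add more consumers of the same device message event hub, you can create the extra consumer groups ahead of time. The following Azure CLI snippet is an illustrative sketch only, not a required step of this quickstart; it assumes the example names used in this article (`eh-azuredocsdemo` and `devicedata`), and the consumer group name is a hypothetical placeholder:
+
+```azurecli
+# Create a dedicated consumer group for an additional consumer of the devicedata event hub.
+az eventhubs eventhub consumer-group create \
+  --resource-group <ResourceGroupName> \
+  --namespace-name eh-azuredocsdemo \
+  --eventhub-name devicedata \
+  --name second-consumer-group
+```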
+
+The Basics tab should now look like this after you've filled it out:
+
+ :::image type="content" source="media\deploy-new-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-new-config\select-device-mapping-button.png":::
+
+You're now ready to select the Device mapping tab and begin setting up the device mappings for your MedTech service.
+
+## Configure the Device mapping tab
+
+You need to configure device mappings so that your instance of the MedTech service can normalize the incoming device data. The device data will first be sent to your event hub instance and then picked up by the MedTech service.
+
+The easiest way to configure the Device mapping tab is to use the Internet of Medical Things (IoMT) Connector Data Mapper tool to visualize, edit, and test your device mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper).
+
+To begin configuring the device mapping tab, go to the Create MedTech service page and select the **Device mapping** tab. Then follow these three steps:
+
+1. Go to the IoMT Connector Data Mapper and get the appropriate JSON code.
+
+2. Return to the Create MedTech service page. Enter the JSON code for the template you want to use into the **Device mapping** tab. After you enter the template code, the Device mapping code will be displayed on the screen.
+
+3. If the device mapping code is correct, select the **Next: Destination >** tab to enter the destination properties you want to use with your MedTech service. Your device configuration data will be saved for this session.
+
+For more information regarding device mappings, see the relevant GitHub open source documentation at [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping).
+
+For Azure docs information about the device mapping, see [How to configure the MedTech service device mapping](how-to-configure-device-mappings.md).
+
+## Configure the Destination tab
+
+To configure the **Destination** tab, you can use the [Mapping debugger](how-to-use-mapping-debugger.md) tool to create, edit, and test the FHIR destination mapping. You need to configure the FHIR destination mapping so that your instance of the MedTech service can send transformed device data to the FHIR service.
+
+To begin configuring the FHIR destination mapping, go to the **Create MedTech service** page and select the **Destination mapping** tab. There are two parts of the tab you must fill out:
+
+ 1. Destination properties
+ 2. JSON template request
+
+### Destination properties
+
+Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance:
+
+- First, enter the name of your **FHIR server** using the following four steps:
+
+ 1. The **FHIR Server** name (also known as the **FHIR service**) can be located by using the **Search** bar at the top of the screen.
+ 1. To connect to your FHIR service instance, enter the name of the FHIR service you used in the manual deploy configuration article at [Deploy the FHIR service](deploy-new-manual.md#deploy-the-fhir-service).
+ 1. Then select the **Properties** button.
 + 1. Next, copy and paste the **Name** string into the **FHIR Server** text field. In this example, the **FHIR Server** name is `fs-azuredocsdemo`.
+
+- Next, enter the **Destination Name**.
+
 + The **Destination Name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination Name** is `fs-azuredocsdemo`.
+
+- Then, select the **Resolution Type**.
+
 + **Resolution Type** specifies how the MedTech service resolves missing data when reading from the FHIR service. The MedTech service reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier).
+
 + Missing data can be resolved by choosing a **Resolution Type** of **Create** or **Lookup**:
+
+ - **Create**
+
+ If **Create** was selected, and device or patient resources are missing when you're reading data, new resources will be created, containing just the identifier.
+
+ - **Lookup**
+
 + If **Lookup** was selected, and device or patient resources are missing, an error occurs and the data isn't processed. A **DeviceNotFoundException** and/or a **PatientNotFoundException** error is generated, depending on the type of resource that isn't found.
+
+For more information regarding destination mapping, see the FHIR service GitHub documentation at [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
+
+For Azure docs information about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+
+### JSON template request
+
+Before you can complete the FHIR destination mapping, you must get a FHIR destination mapping code. Follow these four steps:
+
+1. Go to the [Mapping debugger](how-to-use-mapping-debugger.md) and get the JSON template for your FHIR destination.
+1. Go back to the Destination tab of the Create MedTech service page.
+1. Go to the large box below the boxes for FHIR server name, Destination name, and Resolution type. Enter the JSON template request in that box.
+1. You'll then receive the FHIR Destination mapping code, which will be saved as part of your configuration.
+
+## Configure the Tags tab (optional)
+
+Before you complete your configuration in the **Review + create** tab, you may want to configure tags. You can do this step by selecting the **Next: Tags >** tab.
+
+Tags are name and value pairs used for categorizing resources. This optional step is useful when you have many resources and want to sort them. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
+
+Follow these steps if you want to create tags:
+
+1. Under the **Tags** tab, enter the tag properties associated with the MedTech service.
+
+ - Enter a **Name**.
+ - Enter a **Value**.
+
+2. Once you've entered your tag(s), you're ready to do the last step of your configuration.
+
+## Select the Review + create tab to validate your deployment request
+
+To begin the validation process of your MedTech service deployment, select the **Review + create** tab. There will be a short delay and then you should see a screen that displays a **Validation success** message. Below the message, you should see the following values for your deployment.
+
+**Basics**
+- MedTech service name
+- Event Hubs name
+- Consumer group
+- Event Hubs namespace
+++
+**Destination**
+- FHIR server
+- Destination name
+- Resolution type
+
+Your validation screen should look something like this:
+
+ :::image type="content" source="media\deploy-new-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-new-config\validate-and-review-medtech-service.png":::
+
+If your MedTech service configuration didn't pass validation, review the validation failure message and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured, and then try again.
+
+## Continue on to Part 3: Deployment and post-deployment
+
+After your configuration is successfully completed, you can go on to Part 3: Deployment and post-deployment. See **Next steps**.
+
+## Next steps
+
+When you're ready to begin Part 3 of Manual Deployment, see
+
+> [!div class="nextstepaction"]
+> [Part 3: Manual deployment and post-deployment of MedTech service](deploy-new-deploy.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Manual Post https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-post.md
+
+ Title: Manual deployment and post-deployment of the MedTech service using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to manually create a deployment and post-deployment of the MedTech service in the Azure portal.
++++ Last updated : 03/10/2023+++
+# Quickstart: Part 3: Manual deployment and post-deployment of the MedTech service
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+When you're satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process.
+
+## Create your manual deployment
+
+1. Select the **Create** button to begin the deployment.
+
+2. The deployment process may take several minutes. The screen will display a message saying that your deployment is in progress.
+
+3. When Azure has finished deploying, a message appears saying "Your deployment is complete" and also displays the following information:
+
+- Deployment name
+- Subscription
+- Resource group
+- Deployment details
+
+Your screen should look something like this:
+
+ :::image type="content" source="media\deploy-new-deploy\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-new-deploy\created-medtech-service.png":::
+
+## Manual post-deployment requirements
+
+There are two post-deployment steps you must perform. Otherwise, the MedTech service can't:
+
+1. Read device data from the device message event hub.
+2. Read or write to the FHIR service.
+
+These steps are:
+
+1. Grant access to the device message event hub.
+2. Grant access to the FHIR service.
+
+These steps are needed because the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
+
+### Grant access to the device message event hub
+
+Follow these steps to grant access to the device message event hub:
+
+1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages.
+
+2. Select the **Event Hubs** button under **Entities**.
+
+3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named **devicedata**.
+
+4. Select the **Access control (IAM)** button.
+
+5. Select the **Add role assignment** button.
+
+6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role. The Azure Event Hubs Data Receiver role allows the MedTech service to receive device message data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
+
+7. Select the **Select role** button.
+
+8. Select the **Next** button.
+
+9. In the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**.
+
+10. When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service**, and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button.
+
 + The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service, using the format: **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**. For example: **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**.
+
+11. On the **Add role assignment** page, select the **Review + assign** button.
+
+12. On the **Add role assignment** confirmation page, select the **Review + assign** button.
+
+13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this:
+
+ :::image type="content" source="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-new-deploy\validate-medtech-service-managed-identity-added-to-event-hub.png":::
+
+For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
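+
+If you prefer to script role assignments instead of using the portal, the same access can be granted with the Azure CLI. The following snippet is a sketch only; the managed identity object ID and the event hub resource ID are placeholders that you'd look up for your own environment:
+
+```azurecli
+# Grant the MedTech service system-assigned managed identity the
+# Azure Event Hubs Data Receiver role, scoped to the device message event hub.
+az role assignment create \
+  --assignee-object-id <MedTechServicePrincipalId> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Azure Event Hubs Data Receiver" \
+  --scope <EventHubResourceId>
+```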
+
+### Grant access to the FHIR service
+
+The process for granting your MedTech service system-assigned managed identity access to your **FHIR service** requires the same 13 steps that you used to grant access to your device message event hub, with two exceptions. The first is that, instead of navigating to the **Access Control (IAM)** menu from within your event hub (as outlined in steps 1-4), you navigate to the equivalent **Access Control (IAM)** menu from within your **FHIR service**. The second is that, in step 6, you select the **View** button directly across from the **FHIR Data Writer** role instead of the button across from **Azure Event Hubs Data Receiver**.
+
+The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized.
+
+For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md).
+
+For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md).
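+
+The equivalent Azure CLI sketch for the FHIR service changes only the role name and the scope. Again, the IDs are placeholders for your own values:
+
+```azurecli
+# Grant the MedTech service system-assigned managed identity the
+# FHIR Data Writer role, scoped to the FHIR service.
+az role assignment create \
+  --assignee-object-id <MedTechServicePrincipalId> \
+  --assignee-principal-type ServicePrincipal \
+  --role "FHIR Data Writer" \
+  --scope <FhirServiceResourceId>
+```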
+
+Now that you have granted access to the device message event hub and the FHIR service, your manual deployment is complete. Your MedTech service is now ready to receive data from a device and process it into a FHIR Observation resource.
+
+## Next steps
+
+In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Manual Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-prerequisites.md
+
+ Title: Deploy the MedTech service manually using the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to deploy the MedTech service manually using the Azure portal.
++++ Last updated : 04/19/2022+++
+# Quickstart: Deploy the MedTech service manually using the Azure portal
+
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+You may prefer to manually deploy the MedTech service if you need to track every step of the deployment process. Manual deployment might be necessary if you have to customize or troubleshoot your deployment. Manual deployment helps you by providing all the details for implementing each task.
+
+The explanation of the MedTech service manual deployment using the Azure portal is divided into three parts that cover each of the key tasks required:
+
+- Part 1: Prerequisites (see Prerequisites below)
+- Part 2: Configuration (see [Configure for manual deployment](deploy-new-config.md))
+- Part 3: Deployment and Post Deployment (see [Manual deployment and post-deployment](deploy-new-deploy.md))
+
+If you need a diagram with information on the MedTech service deployment, there's an overview at [Choose a deployment method](deploy-new-choose.md#deployment-overview). This diagram shows the steps of deployment and how MedTech service processes device data into FHIR Observations.
+
+## Part 1: Prerequisites
+
+Before you can begin configuring and deploying the MedTech service, you need the following five prerequisites:
+
+- A valid Azure subscription
+- A resource group deployed in the Azure portal
+- A workspace deployed in Azure Health Data Services
+- An event hub deployed in a namespace
+- A FHIR service deployed in Azure Health Data Services
+
+## Open your Azure account
+
+The first thing you need to do is determine if you have a valid Azure subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
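+
+If you already use the Azure CLI, one optional way to confirm which subscription you're working in is to show the active account. This is just a quick check, not a required step:
+
+```azurecli
+# Display the subscription your Azure CLI session is currently using.
+az account show --output table
+```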
+
+## Deploy a resource group in the Azure portal
+
+When you sign in to your Azure account, go to the Azure portal and select the **Create a resource** button. Enter "Azure Health Data Services" in the "Search services and marketplace" box. This step should take you to the Azure Health Data Services page.
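+
+If you'd rather create the resource group from the command line than in the portal, a minimal Azure CLI sketch looks like the following. The resource group name and region are placeholders; choose values that fit your environment:
+
+```azurecli
+# Create a resource group to contain the Azure Health Data Services resources.
+az group create --name <ResourceGroupName> --location <Region>
+```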
+
+## Deploy a workspace in Azure Health Data Services
+
+The first resource you must create is a workspace to contain your Azure Health Data Services resources. Start by selecting Create from the Azure Health Data Services resource page. This step takes you to the first page of Create Azure Health Data Services workspace, where you need to do the following eight steps:
+
+1. Fill in the resource group you want to use or create a new one.
+
+2. Give the workspace a unique name.
+
+3. Select the region you want to use.
+
+4. Select the Networking button at the bottom to continue.
+
+5. Choose whether you want a public or private endpoint.
+
+6. Create tags if you want to use them. They're optional.
+
+7. When you're ready to continue, select the Review + create tab.
+
+8. Select the Create button to deploy your workspace.
+
+After a short delay, you'll start to see information about your new workspace. Make sure you wait until all parts of the screen are displayed. If your initial deployment was successful, you should see:
+
+- "Your deployment is complete"
+- Deployment name
+- Subscription name
+- Resource group name
+
+## Deploy an event hub in the Azure portal using a namespace
+
+An event hub is the next prerequisite you need to create. It's an important step because the event hub receives the data flow from a device and stores it until the MedTech service picks up the device data. Once the MedTech service picks up the device data, it can begin the transformation of the device data into a FHIR service Observation resource. Because Internet propagation times are indeterminate, the event hub is needed to buffer the data and store it for up to 24 hours before it expires.
+
+Before you can create an event hub, you must create a namespace in the Azure portal to contain it. For more information on how to create a namespace and an event hub, see [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md).
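+
+As an illustration only, the namespace and event hub can also be created from the Azure CLI instead of the portal. This sketch uses the example names from this deployment series (`eh-azuredocsdemo` and `devicedata`); substitute your own resource group, names, and region:
+
+```azurecli
+# Create the Event Hubs namespace that contains the device message event hub.
+az eventhubs namespace create --resource-group <ResourceGroupName> --name eh-azuredocsdemo --location <Region>
+
+# Create the event hub that the MedTech service reads device messages from.
+az eventhubs eventhub create --resource-group <ResourceGroupName> --namespace-name eh-azuredocsdemo --name devicedata
+```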
+
+## Deploy the FHIR service
+
+The last prerequisite you need to complete before you can configure and deploy the MedTech service is to deploy the FHIR service.
+
+There are three ways to deploy FHIR service:
+
+1. Using portal. See [Deploy a FHIR service within Azure Health Data Services - using portal](../fhir/fhir-portal-quickstart.md).
+
+2. Using Bicep. See [Deploy a FHIR service within Azure Health Data Services using Bicep](../fhir/fhir-service-bicep.md).
+
+3. Using an ARM template. See [Deploy a FHIR service within Azure Health Data Services - using ARM template](../fhir/fhir-service-resource-manager-template.md).
+
+After you have deployed the FHIR service, it's ready to receive the data processed by the MedTech service and persist it as a FHIR Observation resource.
+
+## Continue on to Part 2: Configuration
+
+After your prerequisites are successfully completed, you can go on to Part 2: Configuration. See **Next steps**.
+
+## Next steps
+
+When you're ready to begin Part 2 of Manual Deployment, see
+
+> [!div class="nextstepaction"]
+> [Part 2: Configure the MedTech service for manual deployment using the Azure portal](deploy-new-config.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Title: Get started with the MedTech service - Azure Health Data Services
-description: This article describes how to get started with the MedTech service.
+description: This article describes the basic steps for deploying the MedTech service.
Previously updated : 04/21/2023 Last updated : 04/25/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article and diagram outlines the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These basic steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you.
+This article and diagram outline the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you.
-As a prerequisite, you need an Azure subscription and have been granted proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, and REST API scripts.
+As a prerequisite, you need an Azure subscription and must have been granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts.
:::image type="content" source="media/get-started/get-started-with-medtech-service.png" alt-text="Diagram showing the MedTech service deployment overview." lightbox="media/get-started/get-started-with-medtech-service.png"::: > [!TIP]
-> See the MedTech service article, [Quickstart: Choose a deployment method for the MedTech service](deploy-new-choose.md), for a description of the different deployment methods that can help to simply and automate the deployment of the MedTech service.
+> See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service.
## Deploy resources
Deploy a [resource group](../../azure-resource-manager/management/manage-resourc
### Deploy an Event Hubs namespace and event hub
-Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
+Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
### Deploy a workspace
Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource gr
### Deploy a MedTech service
-If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-new-manual.md) using your workspace.
+If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-manual-prerequisites.md) using your workspace.
## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## April 2023
+#### FHIR Service
+
+**Fixed performance for Search Queries with identifiers**
+This bug fix addresses timeout issues observed for search queries with identifiers by using the OPTIMIZE clause.
+For more details, see [#3207](https://github.com/microsoft/fhir-server/pull/3207).
+
+**Fixed transient issues associated with loading custom search parameters**
+This bug fix addresses an issue where the FHIR service wouldn't load the latest SearchParameter status in the event of a failure.
+For more details, see [#3222](https://github.com/microsoft/fhir-server/pull/3222).
+ ## March 2023 #### Azure Health Data Services
Azure Health Data Services is a set of managed API services based on open standa
General availability (GA) of Azure Health Data services in Japan East region. - ## February 2023 #### FHIR service
internet-peering Howto Subscription Association Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-powershell.md
Title: Associate peer ASN to Azure subscription - PowerShell
-description: Associate peer ASN to Azure subscription using PowerShell.
+description: Learn how to associate peer ASN to Azure subscription using PowerShell.
Previously updated : 01/23/2023 Last updated : 04/24/2023
> - [Azure portal](howto-subscription-association-portal.md) > - [PowerShell](howto-subscription-association-powershell.md)
-Before you submit a peering request, you should first associate your ASN with Azure subscription using the steps below.
+Before you submit a peering request, you should first associate your ASN with your Azure subscription using the steps in this article.
If you prefer, you can complete this guide using the [Azure portal](howto-subscription-association-portal.md).
If you prefer, you can complete this guide using the [Azure portal](howto-subscr
[!INCLUDE [Account](./includes/account-powershell.md)] ### Register for peering resource provider
-Register for peering resource provider in your subscription using the command below. If you don't execute this, then Azure resources required to set up peering aren't accessible.
+Register the peering resource provider in your subscription using [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider). If you don't register the provider, the Azure resources required to set up peering aren't accessible.
```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.Peering ```
-You can check the registration status using the commands below:
+You can check the registration status using [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider):
```powershell Get-AzResourceProvider -ProviderNamespace Microsoft.Peering ```
Get-AzResourceProvider -ProviderNamespace Microsoft.Peering
Below is an example to update peer information. ```powershell
-New-AzPeerAsn `
- -Name "Contoso_1234" `
- -PeerName "Contoso" `
- -PeerAsn 1234 `
- -Email noc@contoso.com, support@contoso.com `
- -Phone "+1 (555) 555-5555"
+$contactDetails = New-AzPeerAsnContactDetail -Role Noc -Email "noc@contoso.com" -Phone "+1 (555) 555-5555"
+New-AzPeerAsn -Name "Contoso_1234" -PeerName "Contoso" -PeerAsn 1234 -ContactDetail $contactDetails
``` > [!NOTE]
A subscription can have multiple ASNs. Update the peering information for each A
Peers are expected to have a complete and up-to-date profile on [PeeringDB](https://www.peeringdb.com). We use this information during registration to validate the peer's details such as NOC information, technical contact information, and their presence at the peering facilities etc.
-Note that in place of **{subscriptionId}** in the output above, actual subscription ID will be displayed.
+In place of **{subscriptionId}** in the output, the actual subscription ID is displayed.
## View status of a PeerASN
-Check for ASN Validation state using the command below:
+Check the ASN validation state using [Get-AzPeerAsn](/powershell/module/az.peering/get-azpeerasn):
```powershell Get-AzPeerAsn
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
The tutorial uses a predefined Postman collection that includes some scripts to
## Import the Postman collection
-To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link** and paste in the following [URL](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/postman-collection/IoT%20Central.postman_collection.json), <!-- TODO: Add link here --> Select **Continue**.
+To import the collection, open Postman and select **Import**. In the **Import** dialog, select **Link**, paste in the following [URL](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/postman-collection/IoT%20Central%20REST%20tutorial.postman_collection.json), and then select **Continue**.
Your workspace now contains the **IoT Central REST tutorial** collection. This collection includes all the APIs you use in the tutorial.
iot-develop Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md
The following diagram shows the key elements of an IoT Plug and Play solution:
## Model repository
-The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md).
+The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md).
The web UI lets you manage the models and interfaces.
iot-develop Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md
IoT Plug and Play devices should follow a set of conventions when they exchange
A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime.
-You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) _model_. There are two types of model referred to in this article:
+You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) _model_. There are two types of model referred to in this article:
- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. - **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands.
A read-only property is set by the device and reported to the back-end applicati
### Sample no component read-only property
-A device or module can send any valid JSON that follows the DTDL V2 rules.
+A device or module can send any valid JSON that follows the DTDL rules.
DTDL that defines a property on an interface:
The device responds with an acknowledgment that looks like the following example
When a device receives multiple desired properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload.
-A device or module can send any valid JSON that follows the DTDL V2 rules.
+A device or module can send any valid JSON that follows the DTDL rules.
DTDL:
On a device or module, multiple component interfaces use command names with the
Now that you've learned about IoT Plug and Play conventions, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)
+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md)
- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md
This guide describes the basic steps required to create a device, module, or IoT
To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.
-1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
+1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md)
Once your device or module implementation is ready, use the [Azure IoT explorer]
Now that you've learned about IoT Plug and Play device development, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)
+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md)
- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md
The service SDKs let you access device information from a solution component suc
Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md)
+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md)
- [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md
Title: Understand IoT Plug and Play digital twins
description: Understand how IoT Plug and Play uses digital twins Previously updated : 11/17/2022 Last updated : 04/25/2023
# Understand IoT Plug and Play digital twins
-An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
-
-IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) specification on GitHub.
+An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
> [!NOTE] > DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks.
iot-develop Concepts Model Parser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md
Title: Understand the Azure Digital Twins model parser | Microsoft Docs
description: As a developer, learn how to use the DTDL parser to validate models. Previously updated : 11/17/2022 Last updated : 04/25/2023
# Understand the digital twins model parser
-The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files.
+The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files.
## Install the DTDL model parser
-The parser is available in NuGet.org with the ID: [Microsoft.Azure.DigitalTwins.Parser](https://www.nuget.org/packages/Microsoft.Azure.DigitalTwins.Parser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI.
+The parser is available in NuGet.org with the ID: [DTDLParser](https://www.nuget.org/packages/DTDLParser). To install the parser, use any compatible NuGet package manager such as the one in Visual Studio or in the `dotnet` CLI.
```bash
-dotnet add package Microsoft.Azure.DigitalTwins.Parser
+dotnet add package DTDLParser
``` > [!NOTE]
-> At the time of writing, the parser version is `3.12.7`.
-
-## Use the parser to validate a model
-
-A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files in a given folder and then validate all the files as a whole, including any references between the files:
-
-1. Create an `IEnumerable<string>` with a list of all model contents:
-
- ```csharp
- using System.IO;
-
- string folder = @"c:\myModels\";
- string filespec = "*.json";
-
- List<string> modelJson = new List<string>();
- foreach (string filename in Directory.GetFiles(folder, filespec))
- {
- using StreamReader modelReader = new StreamReader(filename);
- modelJson.Add(modelReader.ReadToEnd());
- }
- ```
-
-1. Instantiate the `ModelParser` and call `ParseAsync`:
-
- ```csharp
- using Microsoft.Azure.DigitalTwins.Parser;
-
- ModelParser modelParser = new ModelParser();
- IReadOnlyDictionary<Dtmi, DTEntityInfo> parseResult = await modelParser.ParseAsync(modelJson);
- ```
-
-1. Check for validation errors. If the parser finds any errors, it throws an `ParsingException` with a list of errors:
-
- ```csharp
- try
- {
- IReadOnlyDictionary<Dtmi, DTEntityInfo> parseResult = await modelParser.ParseAsync(modelJson);
- }
- catch (ParsingException pex)
- {
- Console.WriteLine(pex.Message);
- foreach (var err in pex.Errors)
- {
- Console.WriteLine(err.PrimaryID);
- Console.WriteLine(err.Message);
- }
- }
- ```
-
-1. Inspect the `Model`. If the validation succeeds, you can use the model parser API to inspect the model. The following code snippet shows how to iterate over all the models parsed and display the existing properties:
-
- ```csharp
- foreach (var item in parseResult)
- {
- Console.WriteLine($"\t{item.Key}");
- Console.WriteLine($"\t{item.Value.DisplayName?.Values.FirstOrDefault()}");
- }
- ```
+> At the time of writing, the parser version is `1.0.52`.
+
+## Use the parser to validate and inspect a model
+
+The DTDLParser is a library that you can use to:
+
+- Determine whether one or more models are valid according to the language v2 or v3 specifications.
+- Identify specific modeling errors.
+- Inspect model contents.
+
+A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files that define a model and then validate all the files as a whole, including any references between the files.
+
+The [DTDLParser for .NET](https://github.com/digitaltwinconsortium/DTDLParser) repository includes the following samples that illustrate the use of the parser:
+
+- [DTDLParserResolveSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserResolveSample) shows how to parse an interface with external references and resolve the dependencies by using the `Azure.IoT.ModelsRepository` client.
+- [DTDLParserJSInteropSample](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/samples/DTDLParserJSInteropSample) shows how to use the DTDL Parser from JavaScript running in the browser, using .NET JSInterop.
+
+The DTDLParser for .NET repository also includes a [collection of tutorials](https://github.com/digitaltwinconsortium/DTDLParser/blob/main/tutorials/README.md) that show you how to use the parser to validate and inspect models.
## Next steps
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md
# Device models repository
-The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md).
+The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md).
The DMR defines a pattern to store DTDL interfaces in a folder structure based on the device twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json).
iot-develop Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md
At the core of IoT Plug and Play, is a device _model_ that describes a device's
To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
-To define a model, you use the Digital Twins Definition Language (DTDL) V2. DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
+To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
- Has a unique model ID: `dtmi:com:example:Thermostat;1`. - Sends temperature telemetry.
The thermostat model has a single interface. Later examples in this article show
This article describes how to design and author your own models and covers topics such as data types, model structure, and tools.
-To learn more, see the [Digital Twins Definition Language V2](https://github.com/Azure/opendigitaltwins-dtdl) specification.
+To learn more, see the [Digital Twins Definition Language](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) specification.
## Model structure
There's a DTDL authoring extension for VS Code.
To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code.
-When you've installed the extension, use it to help you author DTDL model files in VS code:
+When you've installed the extension, use it to help you author DTDL model files in VS Code:
- The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot:
The following list summarizes some key constraints and limits on models:
Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language V2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md)
- [Model repositories](./concepts-model-repository.md)
iot-develop Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md
In summary, the sample implements the following capabilities:
## Design a model
-Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) to describe the device capabilities.
+Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) to describe the device capabilities.
For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements.
iot-develop Howto Manage Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md
IoT Plug and Play supports **Get digital twin** and **Update digital twin** oper
## Update a digital twin
-An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin.
+An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin.
The IoT Plug and Play device used as an example in this article implements the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) with [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components.
The following JSON Patch sample shows how to add, replace, or remove a property
**Name**
-The name of a component or property must be valid DTDL V2 name.
+The name of a component or property must be a valid DTDL name.
Allowed characters are a-z, A-Z, 0-9 (not as the first character), and underscore (not as the first or last character).
A name can be 1-64 characters long.
**Property value**
-The value must be a valid [DTDL V2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property).
+The value must be a valid [DTDL Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#property).
-All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL V2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#schema).
+All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md#schema).
Properties don't support array or any complex schema with an array. A maximum depth of a five levels is supported for a complex object.
-All field names within complex object should be valid DTDL V2 names.
+All field names within complex object should be valid DTDL names.
-All map keys should be valid DTDL V2 names.
+All map keys should be valid DTDL names.
## Troubleshoot update digital twin API errors
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md
IoT Plug and Play enables solution builders to integrate IoT devices with their
You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development.
-To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
+To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same.
iot-develop Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md
This tutorial shows you how to connect a generic IoT Plug and Play [module](../i
A device is an IoT Plug and Play device if it: * Publishes its model ID when it connects to an IoT hub.
-* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID.
+* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID.
To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
iot-develop Tutorial Use Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md
if (rc != MOSQ_ERR_SUCCESS)
printf("Publish returned OK\r\n"); ```
-To learn more, see [Sending device-to-cloud messages](../iot-hub/iot-hub-mqtt-support.md#sending-device-to-cloud-messages).
+To learn more, see [Sending device-to-cloud messages](../iot/iot-mqtt-connect-to-iot-hub.md#sending-device-to-cloud-messages).
## Receive a cloud-to-device message
void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_
} ```
-To learn more, see [Use MQTT to receive cloud-to-device messages](../iot-hub/iot-hub-mqtt-support.md#receiving-cloud-to-device-messages).
+To learn more, see [Use MQTT to receive cloud-to-device messages](../iot/iot-mqtt-connect-to-iot-hub.md#receiving-cloud-to-device-messages).
## Update a device twin
void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_
} ```
-To learn more, see [Use MQTT to update a device twin reported property](../iot-hub/iot-hub-mqtt-support.md#update-device-twins-reported-properties) and [Use MQTT to retrieve a device twin property](../iot-hub/iot-hub-mqtt-support.md#retrieving-a-device-twins-properties).
+To learn more, see [Use MQTT to update a device twin reported property](../iot/iot-mqtt-connect-to-iot-hub.md#update-device-twins-reported-properties) and [Use MQTT to retrieve a device twin property](../iot/iot-mqtt-connect-to-iot-hub.md#retrieving-a-device-twins-properties).
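For the twin update above, the device publishes a JSON patch of reported properties to the twin topic and, if it wants confirmation, listens on `$iothub/twin/res/#` for the status. A minimal sketch, assuming an already-connected Mosquitto client; the property name is a placeholder, and the `$rid` value is an arbitrary request identifier used to correlate the response.

```c
#include <string.h>
#include <mosquitto.h>

/* Assumes 'mosq' is already connected to the IoT hub. */
static int report_firmware_version(struct mosquitto *mosq)
{
    /* Reported-property patches are published to the twin PATCH topic. */
    const char *topic = "$iothub/twin/PATCH/properties/reported/?$rid=1";
    const char *patch = "{\"firmwareVersion\": \"1.2.3\"}";

    return mosquitto_publish(mosq, NULL, topic,
                             (int)strlen(patch), patch, 1, false);
}
```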
## Clean up resources
To learn more, see [Use MQTT to update a device twin reported property](../iot-h
Now that you've learned how to use the Mosquitto MQTT library to communicate with IoT Hub, a suggested next step is to review: > [!div class="nextstepaction"]
-> [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md)
+> [Communicate with your IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md)
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
Title: Quickstart - Provision a simulated symmetric key device to Microsoft Azur
description: Learn how to provision a device that authenticates with a symmetric key in the Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 09/29/2021 Last updated : 04/06/2023 zone_pivot_groups: iot-dps-set1
-#Customer intent: As a new IoT developer, I want to connect a device to an IoT Hub using the SDK, to learn how secure provisioning works with symmetric keys.
+#Customer intent: As a new IoT developer, I want to connect a device to an IoT hub using the SDK, to learn how secure provisioning works with symmetric keys.
# Quickstart: Provision a simulated symmetric key device
-In this quickstart, you'll create a simulated device on your Windows machine. The simulated device will be configured to use the [symmetric key attestation](concepts-symmetric-key-attestation.md) mechanism for authentication. After you've configured your device, you'll then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service.
+In this quickstart, you create a simulated device on your Windows machine. The simulated device is configured to use the [symmetric key attestation](concepts-symmetric-key-attestation.md) mechanism for authentication. After you've configured your device, you then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service.
If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
-This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: provision for geolatency](how-to-provision-multitenant.md).
+This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: provision for geo latency](how-to-provision-multitenant.md).
## Prerequisites
Once you create the individual enrollment, a **primary key** and **secondary key
1. Copy the value of the generated **Primary key**.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-device-enrollment-primary-key.png" alt-text="Copy the primary key of the device enrollment":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-device-enrollment-primary-key.png" alt-text="Screenshot showing the enrollment details, highlighting the Copy button for the primary key of the device enrollment":::
<a id="firstbootsequence"></a>
To update and run the provisioning sample with your device information:
2. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the ID Scope value for the instance.":::
3. In Visual Studio, open the *azure_iot_sdks.sln* solution file that was generated by running CMake. The solution file should be in the following location:
To update and run the provisioning sample with your device information:
static const char* id_scope = "0ne00002193"; ```
-6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown below:
+6. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_SYMMETRIC_KEY` as shown in the following example:
```c SECURE_DEVICE_TYPE hsm_type;
To update and run the provisioning sample with your device information:
2. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Extract Device Provisioning Service endpoint information":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/extract-dps-endpoints.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the ID Scope value for the instance.":::
3. Open a command prompt and go to the *SymmetricKeySample* in the cloned sdk repository:
To update and run the provisioning sample with your device information:
cd '.\azure-iot-sdk-csharp\provisioning\device\samples\how to guides\SymmetricKeySample\' ```
-4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the parameters that are supported by the sample. Only the first three required parameters are used in this article when running the sample. Review the code in this file. No changes are needed.
+4. In the *SymmetricKeySample* folder, open *Parameters.cs* in a text editor. This file shows the available parameters for the sample. Only the first three required parameters are used in this article when running the sample. Review the code in this file. No changes are needed.
| Parameter | Required | Description | | :-- | :- | :-- |
To update and run the provisioning sample with your device information:
2. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance.":::
3. Open a command prompt for executing Node.js commands, and go to the following directory:
To update and run the provisioning sample with your device information:
provisioningClient.setProvisioningPayload({a: 'b'}); ```
- You may comment out this code, as it is not needed with for this quick start. A custom payload would be required you wanted to use a custom allocation function to assign your device to an IoT Hub. For more information, see [Tutorial: Use custom allocation policies](tutorial-custom-allocation-policies.md).
+ You may comment out this code, as it's not needed for this quickstart. A custom payload would be required if you wanted to use a custom allocation function to assign your device to an IoT hub. For more information, see [Tutorial: Use custom allocation policies](tutorial-custom-allocation-policies.md).
The `provisioningClient.register()` method attempts the registration of your device.
To update and run the provisioning sample with your device information:
7. You should now see something similar to the following output. A "Hello World" string is sent to the hub as a test message. ```output
- D:\azure-iot-samples-csharp\provisioning\Samples\device\SymmetricKeySample>dotnet run --s 0ne00000A0A --i symm-key-csharp-device-01 --p sbDDeEzRuEuGKag+kQKV+T1QGakRtHpsERLP0yPjwR93TrpEgEh/Y07CXstfha6dhIPWvdD1nRxK5T0KGKA+nQ==
-
- Initializing the device provisioning client...
- Initialized for registration Id symm-key-csharp-device-01.
- Registering with the device provisioning service...
- Registration status: Assigned.
- Device csharp-device-01 registered to ExampleIoTHub.azure-devices.net.
- Creating symmetric key authentication for IoT Hub...
- Testing the provisioned device with IoT Hub...
- Sending a telemetry message...
- Finished.
- Enter any key to exit.
+ D:\azure-iot-samples-csharp\provisioning\device\samples>node register_symkey.js
+ registration succeeded
+ assigned hub=ExampleIoTHub.azure-devices.net
+ deviceId=nodejs-device-01
+ payload=undefined
+ Client connected
+ send status: MessageEnqueued
``` ::: zone-end
To update and run the provisioning sample with your device information:
2. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance.":::
3. Open a command prompt and go to the directory where the sample file, _provision_symmetric_key.py_, is located.
To update and run the provisioning sample with your device information:
1. In the main menu of your Device Provisioning Service, select **Overview**.
-2. Copy the **ID Scope** and **Global device endpoint** values. These are your `SCOPE_ID` and `GLOBAL_ENDPOINT` respectively.
+2. Copy the **ID Scope** and **Global device endpoint** values. These values are your `SCOPE_ID` and `GLOBAL_ENDPOINT` parameters, respectively.
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Extract Device Provisioning Service endpoint information":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot showing the overview of the Device Provisioning Service instance, highlighting the global device endpoint and ID Scope values for the instance.":::
3. Open the Java device sample code for editing. The full path to the device sample code is:
To update and run the provisioning sample with your device information:
3. Select the IoT hub to which your device was assigned.
-4. In the **Explorers** menu, select **IoT Devices**.
+4. In the **Device management** menu, select **Devices**.
-5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh** at the top of the page.
+5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *Enabled*. If you don't see your device, select **Refresh** at the top of the page.
:::zone pivot="programming-language-ansi-c"
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration.png" alt-text="Device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the C example.":::
::: zone-end :::zone pivot="programming-language-csharp"
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-csharp.png" alt-text="CSharp device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-csharp.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the C# example.":::
::: zone-end :::zone pivot="programming-language-nodejs"
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-nodejs.png" alt-text="Node.js device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-nodejs.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Node.js example.":::
::: zone-end :::zone pivot="programming-language-python"
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-python.png" alt-text="Python device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-python.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Python example.":::
::: zone-end ::: zone pivot="programming-language-java"
- :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-java.png" alt-text="Java device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-symm-key/hub-registration-java.png" alt-text="Screenshot showing that the device is registered with the IoT hub and enabled for the Java example.":::
::: zone-end
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
Title: Quickstart - Provision an X.509 certificate simulated device to Microsoft
description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service Previously updated : 11/01/2022 Last updated : 04/06/2023
zone_pivot_groups: iot-dps-set1
# Quickstart: Provision an X.509 certificate simulated device
-In this quickstart, you'll create a simulated device on your Windows machine. The simulated device will be configured to use the [X.509 certificate attestation](concepts-x509-attestation.md) mechanism for authentication. After you've configured your device, you'll then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service.
+In this quickstart, you create a simulated device on your Windows machine. The simulated device is configured to use the [X.509 certificate attestation](concepts-x509-attestation.md) mechanism for authentication. After you've configured your device, you then provision it to your IoT hub using the Azure IoT Hub Device Provisioning Service.
If you're unfamiliar with the process of provisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview. Also make sure you've completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md) before continuing.
-This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
+This quickstart demonstrates a solution for a Windows-based workstation. However, you can also perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geo latency](how-to-provision-multitenant.md).
## Prerequisites
The following prerequisites are for a Windows development environment. For Linux
* Open both a Windows command prompt and a Git Bash prompt.
- The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
+ The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
## Prepare your development environment ::: zone pivot="programming-language-ansi-c"
-In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device, during the device's boot sequence.
+In this section, you prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device, during the device's boot sequence.
1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
## Create a self-signed X.509 device certificate
-In this section, you'll use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate will be uploaded to your provisioning service instance and verified by the service.
+In this section, you use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate is uploaded to your provisioning service instance and verified by the service.
> [!CAUTION] > Use certificates created with OpenSSL in this quickstart for development testing only.
Perform the steps in this section in your Git Bash prompt.
7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
-Keep the Git Bash prompt open. You'll need it later in this quickstart.
+Keep the Git Bash prompt open. You need it later in this quickstart.
::: zone-end ::: zone pivot="programming-language-csharp"
-The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
1. To generate the PKCS12 formatted file expected by the sample, enter the following command:
The C# sample code is set up to use X.509 certificates that are stored in a pass
cp certificate.pfx ./azure-iot-sdk-csharp/provisioning/device/samples/"Getting Started"/X509Sample ```
-You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
+You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
::: zone-end
You won't need the Git Bash prompt for the rest of this quickstart. However, you
cp unencrypted-device-key.pem ./azure-iot-sdk-node/provisioning/device/samples ```
-You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
+You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
::: zone-end
You won't need the Git Bash prompt for the rest of this quickstart. However, you
cp device-key.pem ./azure-iot-sdk-python/samples/async-hub-scenarios ```
-You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
+You don't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
::: zone-end ::: zone pivot="programming-language-java"
You won't need the Git Bash prompt for the rest of this quickstart. However, you
7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
-Keep the Git Bash prompt open. You'll need it later in this quickstart.
+Keep the Git Bash prompt open. You need it later in this quickstart.
::: zone-end
In this section, you update the sample code with your Device Provisioning Servic
### Configure the custom HSM stub code
-The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart will be hardcoded in the custom Hardware Security Module (HSM) stub code.
+The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart are hardcoded in the custom Hardware Security Module (HSM) stub code.
To update the custom HSM stub code to simulate the identity of the device with ID `my-x509-device`:
To update the custom HSM stub code to simulate the identity of the device with I
"--END CERTIFICATE--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `CERTIFICATE` string constant value and writes it to the output.
```Bash sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' device-cert.pem
To update the custom HSM stub code to simulate the identity of the device with I
"--END RSA PRIVATE KEY--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `PRIVATE_KEY` string constant value and writes it to the output.
```Bash sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' unencrypted-device-key.pem
To update the custom HSM stub code to simulate the identity of the device with I
::: zone pivot="programming-language-csharp"
-In this section, you'll use your Windows command prompt.
+In this section, you use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
In this section, you'll use your Windows command prompt.
3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer.
-4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section. The certificate file will default to *./certificate.pfx* and prompt for the .pfx password.
+4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section). The certificate file defaults to *./certificate.pfx*, and the sample prompts for the .pfx password.
```cmd dotnet run -- -s <IDScope>
In this section, you'll use your Windows command prompt.
dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234 ```
-5. The device connects to DPS and is assigned to an IoT hub. Then, the device will send a telemetry message to the IoT hub.
+5. The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub.
```output Loading the certificate...
In this section, you'll use your Windows command prompt.
::: zone pivot="programming-language-nodejs"
-In this section, you'll use your Windows command prompt.
+In this section, you use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
In this section, you'll use your Windows command prompt.
::: zone pivot="programming-language-python"
-In this section, you'll use your Windows command prompt.
+In this section, you use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
In this section, you'll use your Windows command prompt.
1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/v2/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/v2/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes.
-1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+1. Run the sample. The sample connects to DPS, which provisions the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
```cmd $ python azure-iot-sdk-python/samples/async-hub-scenarios/provision_x509.py
In this section, you use both your Windows command prompt and your Git Bash prom
"--END CERTIFICATE--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPublicPem` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' device-cert.pem
In this section, you use both your Windows command prompt and your Git Bash prom
"--END PRIVATE KEY--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPrivateKey` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' unencrypted-device-key.pem
In this section, you use both your Windows command prompt and your Git Bash prom
java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar ```
- The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+ The sample connects to DPS, which provisions the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
```output Starting...
iot-dps Quick Setup Auto Provision Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-cli.md
# Quickstart: Set up the IoT Hub Device Provisioning Service with Azure CLI
-The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart details using the Azure CLI to create an IoT hub and an IoT Hub Device Provisioning Service, and to link the two services together.
+The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart details using the Azure CLI to create an IoT hub and an IoT Hub Device Provisioning Service instance, and to link the two services together.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
az group create --name my-sample-resource-group --location westus
Create an IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command.
-The following example creates an IoT hub named *my-sample-hub* in the *westus* location. An IoT hub name must be globally unique in Azure, so you may want to add a unique prefix or suffix to the example name, or choose a new name altogether. Make sure your name follows proper naming conventions for an IoT hub: it should be 3-50 characters in length, and can contain only upper or lower case alphanumeric characters or hyphens ('-').
+The following example creates an IoT hub named *my-sample-hub* in the *westus* location. An IoT hub name must be globally unique in Azure, so either add a unique prefix or suffix to the example name or choose a new name altogether. Make sure your name follows proper naming conventions for an IoT hub: it should be 3-50 characters in length, and can contain only upper or lower case alphanumeric characters or hyphens ('-').
```azurecli-interactive az iot hub create --name my-sample-hub --resource-group my-sample-resource-group --location westus ```
-## Create a Device Provisioning Service
+## Create a Device Provisioning Service instance
-Create a Device Provisioning Service with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command.
+Create a Device Provisioning Service instance with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command.
-The following example creates a provisioning service named *my-sample-dps* in the *westus* location. You'll also choose a globally unique name for your own provisioning service. Make sure it follows proper naming conventions for an IoT Hub Device Provisioning Service: it should be 3-64 characters in length and can contain only upper or lower case alphanumeric characters or hyphens ('-').
+The following example creates a Device Provisioning Service instance named *my-sample-dps* in the *westus* location. You must also choose a globally unique name for your own instance. Make sure it follows proper naming conventions for an IoT Hub Device Provisioning Service: it should be 3-64 characters in length and can contain only upper or lower case alphanumeric characters or hyphens ('-').
```azurecli-interactive az iot dps create --name my-sample-dps --resource-group my-sample-resource-group --location westus
az iot dps create --name my-sample-dps --resource-group my-sample-resource-group
## Get the connection string for the IoT hub
-You need your IoT hub's connection string to link it with the Device Provisioning Service. Use the [az iot hub show-connection-string](/cli/azure/iot/hub#az-iot-hub-show-connection-string) command to get the connection string and use its output to set a variable that you'll use when you link the two resources.
+You need your IoT hub's connection string to link it with the Device Provisioning Service. Use the [az iot hub show-connection-string](/cli/azure/iot/hub#az-iot-hub-show-connection-string) command to get the connection string and use its output to set a variable that's used later, when you link the two resources.
The following example sets the *hubConnectionString* variable to the value of the connection string for the primary key of the hub's *iothubowner* policy (the `--policy-name` parameter can be used to specify a different policy). Replace *my-sample-hub* with the unique IoT hub name you chose earlier. The command uses the Azure CLI [query](/cli/azure/query-azure-cli) and [output](/cli/azure/format-output-azure-cli#tsv-output-format) options to extract the connection string from the command output.
The linked IoT hub is shown in the *properties.iotHubs* collection.
## Clean up resources
-Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the following commands to delete the provisioning service, the IoT hub or the resource group and all of its resources. Replace the names of the resources written below with the names of your own resources.
+Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the following commands to delete the provisioning service, the IoT hub, or the resource group and all of its resources. Replace the names of the resources included in the following commands with the names of your own resources.
To delete the provisioning service, run the [az iot dps delete](/cli/azure/iot/dps#az-iot-dps-delete) command:
iot-dps Quick Setup Auto Provision Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-rm.md
Title: Quickstart - Create an Azure IoT Hub Device Provisioning Service (DPS) us
description: Azure quickstart - Learn how to create an Azure IoT Hub Device Provisioning Service (DPS) using Azure Resource Manager template (ARM template). Previously updated : 01/27/2021 Last updated : 04/06/2023
You can use an [Azure Resource Manager](../azure-resource-manager/management/ove
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-This quickstart uses [Azure portal](../azure-resource-manager/templates/deploy-portal.md), and the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the template, but you can easily use the [PowerShell](../azure-resource-manager/templates/deploy-powershell.md), .NET, Ruby, or other programming languages to perform these steps and deploy your template.
+This quickstart uses [Azure portal](../azure-resource-manager/templates/deploy-portal.md) and the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the template. However, you can also use [PowerShell](../azure-resource-manager/templates/deploy-powershell.md), .NET, Ruby, or other programming languages to perform these steps and deploy your template.
-If your environment meets the prerequisites, and you're already familiar with using ARM templates, selecting the **Deploy to Azure** button below will open the template for deployment in the Azure portal.
+If your environment meets the prerequisites, and you're already familiar with using ARM templates, selecting the **Deploy to Azure** button opens the template for deployment in the Azure portal.
[![Deploy to Azure in overview](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-device-provisioning%2fazuredeploy.json)
The template used in this quickstart is from [Azure Quickstart Templates](https:
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.devices/iothub-device-provisioning/azuredeploy.json":::
-Two Azure resources are defined in the template above:
+Two Azure resources are defined in the previous template:
-* [**Microsoft.Devices/iothubs**](/azure/templates/microsoft.devices/iothubs): Creates a new Azure IoT Hub.
-* [**Microsoft.Devices/provisioningservices**](/azure/templates/microsoft.devices/provisioningservices): Creates a new Azure IoT Hub Device Provisioning Service with the new IoT Hub already linked to it.
+* [**Microsoft.Devices/IotHubs**](/azure/templates/microsoft.devices/iothubs): Creates a new Azure IoT hub.
+* [**Microsoft.Devices/provisioningServices**](/azure/templates/microsoft.devices/provisioningservices): Creates a new Azure IoT Hub Device Provisioning Service with the new IoT hub already linked to it.
## Deploy the template #### Deploy with the Portal
-1. Select the following image to sign in to Azure and open the template for deployment. The template creates a new Iot Hub and DPS resource. The hub will be linked in the DPS resource.
+1. Select the following image to sign in to Azure and open the template for deployment. The template creates a new IoT hub and DPS resource. The new IoT hub is linked to the DPS resource.
[![Deploy to Azure in Portal Steps](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-device-provisioning%2fazuredeploy.json)
Two Azure resources are defined in the template above:
![ARM template deployment parameters on the portal](./media/quick-setup-auto-provision-rm/arm-template-deployment-parameters-portal.png)
- Unless it's specified below, use the default value to create the Iot Hub and DPS resource.
+ Unless otherwise specified for the following fields, use the default value to create the IoT hub and DPS resource.
| Field | Description | | :- | :- |
Two Azure resources are defined in the template above:
3. On the next screen, read the terms. If you agree to all terms, select **Create**.
- The deployment will take a few moments to complete.
+ The deployment takes a few moments to complete.
In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
Sign in to your Azure account and select your subscription.
az account set --subscription {your subscription name or id} ```
-3. Copy and paste the following commands into your CLI prompt. Then execute the commands by pressing **ENTER**.
+3. Copy and paste the following commands into your CLI prompt. Then execute the commands by selecting the Enter key.
> [!TIP]
- > The commands will prompt for a resource group location.
+ > The commands prompt for a resource group location.
> You can view a list of available locations by first running the command: > > `az account list-locations -o table`
Sign in to your Azure account and select your subscription.
read ```
-4. The commands will prompt you for the following information. Provide each value and press **ENTER**.
+4. The commands prompt you for the following information. Provide each value and select the Enter key.
| Parameter | Description | | :-- | :- |
- | **Project name** | The value of this parameter will be used to create a resource group to hold all resources. The string `rg` will be added to the end of the value for your resource group name. |
- | **location** | This value is the region where all resources will reside. |
+ | **Project name** | The value of this parameter is used to create a resource group to hold all resources. The string `rg` is added to the end of the value for your resource group name. |
+ | **location** | This value is the region where all resources are created. |
| **iotHubName** | Enter a name for the IoT Hub that must be globally unique within the *.azure-devices.net* namespace. You need the hub name in the next section when you validate the deployment. | | **provisioningServiceName** | Enter a name for the new Device Provisioning Service (DPS) resource. The name must be globally unique within the *.azure-devices-provisioning.net* namespace. You need the DPS name in the next section when you validate the deployment. |
- The AzureCLI is used to deploy the template. In addition to the Azure CLI, you can also use the Azure PowerShell, Azure portal, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
+ The Azure CLI is used to deploy the template. In addition to the Azure CLI, you can also use the Azure PowerShell, Azure portal, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
## Review deployed resources
Sign in to your Azure account and select your subscription.
Notice the hubs that are linked on the `iotHubs` member. - ## Clean up resources Other quickstarts in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use the Azure portal or Azure CLI to delete the resource group and all of its resources.
To delete the resource group deployed using the Azure CLI:
az group delete --name "${projectName}rg" ```
-You can also delete resource groups and individual resources using the Azure portal, PowerShell, or REST APIs, as well as with supported platform SDKs published for Azure Resource Manager or IoT Hub Device Provisioning Service.
+You can also delete resource groups and individual resources using any of the following options:
+
+- Azure portal
+- PowerShell
+- REST APIs
+- Supported platform SDKs published for Azure Resource Manager or IoT Hub Device Provisioning Service
## Next steps
iot-dps Quick Setup Auto Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision.md
Title: Quickstart - Set up Device Provisioning Service in portal+ description: Quickstart - Set up the Azure IoT Hub Device Provisioning Service (DPS) in the Microsoft Azure portal Previously updated : 08/06/2021 Last updated : 04/06/2023
# Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal
-In this quickstart, you will learn how to set up the IoT Hub Device Provisioning Service in the Azure portal. The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentications. For more information, please refer to [IoT Hub Device Provisioning Service overview](about-iot-dps.md).
+In this quickstart, you learn how to set up the IoT Hub Device Provisioning Service in the Azure portal. The IoT Hub Device Provisioning Service enables zero-touch, just-in-time device provisioning to any IoT hub. The Device Provisioning Service enables customers to provision millions of IoT devices in a secure and scalable manner, without requiring human intervention. Azure IoT Hub Device Provisioning Service supports IoT devices with TPM, symmetric key, and X.509 certificate authentications. For more information, see [IoT Hub Device Provisioning Service overview](about-iot-dps.md).
-To provision your devices, you will:
+To provision your devices, you first perform the following steps:
-* Use the Azure portal to create an IoT Hub
-* Use the Azure portal to create an IoT Hub Device Provisioning Service
-* Link the IoT hub to the Device Provisioning Service
+> [!div class="checklist"]
+> * Use the Azure portal to create an IoT hub
+> * Use the Azure portal to create an IoT Hub Device Provisioning Service instance
+> * Link the IoT hub to the Device Provisioning Service instance
## Prerequisites
-You'll need an Azure subscription to begin with this article. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F), if you haven't already.
+If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
## Create an IoT hub [!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub.md)]
-## Create a new IoT Hub Device Provisioning Service
+## Create a new IoT Hub Device Provisioning Service instance
-1. In the Azure portal, select **+ Create a resource** .
+1. In the Azure portal, select the **+ Create a resource** button.
-2. From the **Categories** menu, select **Internet of Things** then **IoT Hub Device Provisioning Service**.
+1. From the **Categories** menu, select **Internet of Things**, and then select **IoT Hub Device Provisioning Service**.
-3. Select **Create**.
+1. On the **Basics** tab, provide the following information:
+
+ | Property | Value |
+ | | |
+ | **Subscription** | Select the subscription to use for your Device Provisioning Service instance. |
+ | **Resource group** | This field allows you to create a new resource group, or choose an existing one to contain the new instance. Choose the same resource group that contains the IoT hub you created in the previous steps. By putting all related resources in a group together, you can manage them together. For example, deleting the resource group deletes all resources contained in that group. For more information, see [Manage Azure Resource Manager resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md). |
+ | **Name** | Provide a unique name for your new Device Provisioning Service instance. If the name you enter is available, a green check mark appears. |
+ | **Region** | Select a location that's close to your devices. For resiliency and reliability, we recommend deploying to one of the regions that support [Availability Zones](iot-dps-ha-dr.md). |
-4. Enter the following information:
+ :::image type="content" source="./media/quick-setup-auto-provision/create-iot-dps-portal.png" alt-text="Screenshot showing the Basics tab of the IoT Hub device provisioning service. Enter basic information about your Device Provisioning Service instance in the portal blade.":::
- * **Name:** Provide a unique name for your new Device Provisioning Service instance. If the name you enter is available, a green check mark appears.
- * **Subscription:** Choose the subscription that you want to use to create this Device Provisioning Service instance.
- * **Resource group:** This field allows you to create a new resource group, or choose an existing one to contain the new instance. Choose the same resource group that contains the Iot hub you created in the previous steps. By putting all related resources in a group together, you can manage them together. For example, deleting the resource group deletes all resources contained in that group. For more information, see [Manage Azure Resource Manager resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md).
- * **Location:** Select a location that's close to your devices. For resiliency and reliability, we recommend deploying to one of the regions that support [Availability Zones](iot-dps-ha-dr.md).
+1. Select **Review + create** to validate your provisioning service.
- :::image type="content" source="./media/quick-setup-auto-provision/create-iot-dps-portal.png" alt-text="Enter basic information about your Device Provisioning Service instance in the portal blade":::
+1. Select **Create** to start the deployment of your Device Provisioning Service instance.
-5. Select **Review + Create** to validate your provisioning service.
+1. After the deployment successfully completes, select **Go to resource** to view your Device Provisioning Service instance.
-6. Select **Create**.
+## Link the IoT hub and your Device Provisioning Service instance
-7. After the deployment successfully completes, select **Go to resource** to view your Device Provisioning Service instance.
+In this section, you add a configuration to the Device Provisioning Service instance. This configuration sets the IoT hub for which the instance provisions IoT devices.
-## Link the IoT hub and your Device Provisioning Service
+1. In the **Settings** menu, select **Linked IoT hubs**.
-In this section, you'll add a configuration to the Device Provisioning Service instance. This configuration sets the IoT hub for which devices will be provisioned.
+1. Select **+ Add**.
-1. In the *Settings* menu, select **Linked IoT hubs**.
+1. On the **Add link to IoT hub** panel, provide the following information:
-2. Select **+ Add**.
+ | Property | Value |
+ | | |
+ | **Subscription** | Select the subscription containing the IoT hub that you want to link with your new Device Provisioning Service instance. |
+ | **IoT hub** | Select the IoT hub to link with your new Device Provisioning Service instance. |
+ | **Access Policy** | Select **iothubowner (RegistryWrite, ServiceConnect, DeviceConnect)** as the credentials for establishing the link with the IoT hub. |
-3. On the **Add link to IoT hub** panel, provide the following information:
+ :::image type="content" source="./media/quick-setup-auto-provision/link-iot-hub-to-dps-portal.png" alt-text="Screenshot showing how to link an IoT hub to the Device Provisioning Service instance in the portal blade.":::
- * **Subscription:** Select the subscription containing the IoT hub that you want to link with your new Device Provisioning Service instance.
- * **Iot hub:** Select the IoT hub to link with your new Device Provisioning Service instance.
- * **Access Policy:** Select **iothubowner** as the credentials for establishing the link with the IoT hub.
+1. Select **Save**.
- :::image type="content" source="./media/quick-setup-auto-provision/link-iot-hub-to-dps-portal.png" alt-text="Link the hub name to link to the Device Provisioning Service instance in the portal blade":::
-
-4. Select **Save**.
-
-5. Select **Refresh**. Now you should see the selected hub under the **Linked IoT hubs** blade.
+1. Select **Refresh**. You should now see the selected hub under the **Linked IoT hubs** blade.
## Clean up resources
-The rest of the Device Provisioning Service quickstarts and tutorials use the resources that you created in this quickstart. However, if you don't plan on doing any more quickstarts or tutorials, you'll want to delete those resources.
+The rest of the Device Provisioning Service quickstarts and tutorials use the resources that you created in this quickstart. However, if you don't plan on doing any more quickstarts or tutorials, delete these resources.
To clean up resources in the Azure portal: 1. From the left-hand menu in the Azure portal, select **All resources**.
-2. Select your Device Provisioning Service.
+1. Select your Device Provisioning Service instance.
-3. At the top of the device detail pane, select **Delete**.
+1. At the top of the device detail pane, select **Delete**.
-4. From the left-hand menu in the Azure portal, select **All resources**.
+1. From the left-hand menu in the Azure portal, select **All resources**.
-5. Select your IoT hub.
+1. Select your IoT hub.
-6. At the top of the hub detail pane, select **Delete**.
+1. At the top of the hub detail pane, select **Delete**.
## Next steps
-In this quickstart, you deployed an IoT hub and a Device Provisioning Service instance, and linked the two resources. To learn how to use this setup to provision a device, continue to the quickstart for creating a device.
+In this quickstart, you deployed an IoT hub and a Device Provisioning Service instance, and then linked the two resources. To learn how to use this setup to provision a device, continue to the quickstart for creating a device.
> [!div class="nextstepaction"] > [Quickstart: Provision a simulated symmetric key device](./quick-create-simulated-device-symm-key.md)
iot-hub C2d Messaging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-dotnet.md
You can find more information on cloud-to-device messages in [D2C and C2D Messag
* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article. This cloud-to-device article builds on the quickstart.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Receive messages in the device app
iot-hub C2d Messaging Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-ios.md
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* The latest version of [CocoaPods](https://guides.cocoapods.org/using/getting-started.html).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Simulate an IoT device
iot-hub C2d Messaging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-java.md
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* [Maven 3](https://maven.apache.org/download.cgi)
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Receive messages in the simulated device app
iot-hub C2d Messaging Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-node.md
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Receive messages in the simulated device app
iot-hub C2d Messaging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-python.md
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Receive messages in the simulated device app
iot-hub Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-cli.md
This article shows you how to create two Azure CLI sessions:
* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Prepare the Cloud Shell
iot-hub Device Management Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-dotnet.md
This article shows you how to create:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create a device app with a direct method
iot-hub Device Management Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-java.md
This article shows you how to create:
* [Maven 3](https://maven.apache.org/download.cgi)
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create a device app with a direct method
iot-hub Device Management Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-node.md
This article shows you how to create:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create a device app with a direct method
iot-hub Device Management Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-python.md
This article shows you how to create:
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Register a new device in the IoT hub
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
This article shows you how to create two Azure CLI sessions:
* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Prepare the Cloud Shell
iot-hub Device Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-dotnet.md
In this article, you create two .NET console apps:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Get the IoT hub connection string
iot-hub Device Twins Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-java.md
In this article, you create two Java console apps:
* [Maven 3](https://maven.apache.org/download.cgi)
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Get the IoT hub connection string
iot-hub Device Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-node.md
To complete this article, you need:
* Node.js version 10.0.x or later.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Get the IoT hub connection string
iot-hub Device Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-python.md
In this article, you create two Python console apps:
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Get the IoT hub connection string
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
At the end of this article, you run two .NET console apps:
* Download the Azure IoT C# SDK from [Download sample](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip) and extract the ZIP archive.
-* Port 8883 should be open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
These files are typically batch processed in the cloud, using tools such as [Azu
* [Maven 3](https://maven.apache.org/download.cgi)
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
iot-hub File Upload Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-node.md
At the end of this article, you run two Node.js console apps:
* Node.js version 10.0.x or later. The LTS version is recommended. You can download Node.js from [nodejs.org](https://nodejs.org).
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
iot-hub File Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-python.md
At the end of this article, you run the Python console app **FileUpload.py**, wh
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
iot-hub Iot Hub Amqp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-amqp-support.md
To learn more about IoT Hub messaging, see:
* [Cloud-to-device messages](./iot-hub-devguide-messages-c2d.md)
* [Support for additional protocols](../iot-edge/iot-edge-as-gateway.md)
-* [Support for the Message Queuing Telemetry Transport (MQTT) Protocol](./iot-hub-mqtt-support.md)
+* [Support for the Message Queuing Telemetry Transport (MQTT) Protocol](../iot/iot-mqtt-connect-to-iot-hub.md)
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
The result, which would grant access to read all device identities, would be:
### Supported X.509 certificates
-You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. To learn more, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md). For information about how to upload and verify a certificate authority with your IoT hub, see [Set up X.509 security in your Azure IoT hub](./tutorial-x509-prove-possession.md).
+You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. To learn more, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md). For information about how to upload and verify a certificate authority with your IoT hub for testing, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
### Enforcing X.509 authentication
Other reference topics in the IoT Hub developer guide include:
* [IoT Hub query language](iot-hub-devguide-query-language.md) describes the query language you can use to retrieve information from IoT Hub about your device twins and jobs.
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides more information about IoT Hub support for the MQTT protocol.
+* [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) provides more information about IoT Hub support for the MQTT protocol.
* [RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2](https://www.rfc-editor.org/rfc/rfc5246) provides more information about TLS authentication.
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Other reference topics in the IoT Hub developer guide include:
* The [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) article describes the IoT Hub query language you can use to retrieve information from IoT Hub about your device twins and jobs.
-* The [IoT Hub MQTT support](iot-hub-mqtt-support.md) article provides more information about IoT Hub support for the MQTT protocol.
+* The [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) article provides more information about IoT Hub support for the MQTT protocol.
## Next steps
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
On an IoT device, direct methods can be received over MQTT, AMQP, or either of t
### MQTT
-The following section is for the MQTT protocol. To learn more about using the MQTT protocol directly with IoT Hub, see [MQTT protocol support](iot-hub-mqtt-support.md).
+The following section is for the MQTT protocol. To learn more about using the MQTT protocol directly with IoT Hub, see [MQTT protocol support](../iot/iot-mqtt-connect-to-iot-hub.md).
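For context while testing a device's MQTT method handler, a direct method can be invoked from the service side with the Azure CLI. This sketch isn't from the linked article; the method name and payload are placeholders, and the azure-iot CLI extension is assumed.

```bash
# Invoke a direct method named "reboot" on a device and print its response
# (placeholder hub and device names; requires the azure-iot CLI extension).
az iot hub invoke-device-method \
  --hub-name your-hub \
  --device-id your-device \
  --method-name reboot \
  --method-payload '{"delaySeconds": 5}'
```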
#### Method invocation
Other reference topics in the IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) describes the IoT Hub query language you can use to retrieve information from IoT Hub about your device twins and jobs.
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides more information about IoT Hub support for the MQTT protocol.
+* [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) provides more information about IoT Hub support for the MQTT protocol.
## Next steps
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
Other reference topics in this IoT Hub developer guide include:
* [IoT Hub query language for device and module twins, jobs, and message routing](iot-hub-devguide-query-language.md)
* [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md)
-* [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md)
+* [Communicate with your IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md)
* [IoT Hub IP addresses](iot-hub-understand-ip-address.md)
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Other reference articles in the IoT Hub developer guide include:
* [IoT Hub query language](iot-hub-devguide-query-language.md) describes the query language you can use to retrieve information from IoT Hub about your device twins and jobs.
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides more information about IoT Hub support for the MQTT protocol.
+* [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) provides more information about IoT Hub support for the MQTT protocol.
## Next steps
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-jobs.md
Other reference topics in the IoT Hub developer guide include:
* [IoT Hub query language for device twins, jobs, and message routing](iot-hub-devguide-query-language.md) describes the IoT Hub query language. Use this query language to retrieve information from IoT Hub about your device twins and jobs.
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides more information about IoT Hub support for the MQTT protocol.
+* [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) provides more information about IoT Hub support for the MQTT protocol.
## Next steps
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
An IoT Hub message consists of:
* A message body, which can be any type of data.
-Each device protocol implements setting properties in different ways. Please see the related [MQTT](./iot-hub-mqtt-support.md) and [AMQP](./iot-hub-amqp-support.md) developer guides for details.
+Each device protocol implements setting properties in different ways. Please see the related [MQTT](../iot/iot-mqtt-connect-to-iot-hub.md) and [AMQP](./iot-hub-amqp-support.md) developer guides for details.
Property names and values can only contain ASCII alphanumeric characters, plus ``{'!', '#', '$', '%', '&', "'", '*', '+', '-', '.', '^', '_', '`', '|', '~'}`` when you send device-to-cloud messages using the HTTPS protocol or send cloud-to-device messages.
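As a quick illustration (hedged; the hub and device names are placeholders and the azure-iot CLI extension is assumed), application properties whose names follow these rules can be attached to a test device-to-cloud message:

```bash
# Send a test device-to-cloud message with two application properties whose
# names use only the allowed characters (placeholder hub and device names).
az iot device send-d2c-message \
  --hub-name your-hub \
  --device-id your-device \
  --data '{"temperature": 21.5}' \
  --props 'temperature-alert=false;sensor.id=abc123'
```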
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
For example, if a route is created with the data source set to **Device Twin Cha
### Limitations for device connection state events
-Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these operations equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](../iot/iot-mqtt-connect-to-iot-hub.md). Over AMQP these operations equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic, 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.
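For reference (not part of the linked article), one way to consume these connection state events is an Event Grid subscription scoped to the IoT hub; the resource ID and webhook endpoint below are placeholders.

```bash
# Subscribe a webhook to device connected/disconnected events from an IoT hub
# (all IDs and the endpoint URL are placeholders).
az eventgrid event-subscription create \
  --name device-connection-state \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<hub-name>" \
  --included-event-types Microsoft.Devices.DeviceConnected Microsoft.Devices.DeviceDisconnected \
  --endpoint "https://contoso.example.com/api/updates"
```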
iot-hub Iot Hub Devguide No Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-no-sdk.md
For help using the following protocols without an Azure IoT SDK:
* Device or back-end apps on **AMQP**, see [AMQP support](iot-hub-amqp-support.md).
-* Device apps on **MQTT**, see [MQTT support](iot-hub-mqtt-support.md). Most of this topic treats using the MQTT protocol directly. It also contains information about using the [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample). This repository contains C samples that use the Eclipse Mosquitto library to send messages to IoT Hub.
+* Device apps on **MQTT**, see [MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md). Most of this topic treats using the MQTT protocol directly. It also contains information about using the [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample). This repository contains C samples that use the Eclipse Mosquitto library to send messages to IoT Hub.
* Device or back-end apps on **HTTPS**, consult the [Azure IoT Hub REST APIs](/rest/api/iothub/). Be aware, as noted in [Development prerequisites](#development-prerequisites), that you can't use X.509 certificate authority (CA) authentication with HTTPS.
For devices, we strongly recommend using MQTT if your device supports it.
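As a hedged sketch of that direct-MQTT path (the hub name, device ID, and SAS token are placeholders; the token can be produced, for example, with `az iot hub generate-sas-token`):

```bash
# Publish one telemetry message straight over MQTT with the Mosquitto client,
# using the documented IoT Hub username and topic conventions (placeholders).
mosquitto_pub \
  -h your-hub.azure-devices.net -p 8883 \
  -i your-device \
  -u 'your-hub.azure-devices.net/your-device/?api-version=2021-04-12' \
  -P 'SharedAccessSignature sr=...' \
  -t 'devices/your-device/messages/events/' \
  -m '{"temperature": 21.5}' \
  --capath /etc/ssl/certs
```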
## Next steps
-* [MQTT support](iot-hub-mqtt-support.md)
+* [MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md)
iot-hub Iot Hub Devguide Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-pricing.md
Use the following table to help determine which operations are charged. All bill
| Cloud-to-device messages | Successfully sent messages are charged in 4-KB chunks. For example, a 6-KB message is charged as two messages. <br/><br/> [Receive Device Bound Notification](/rest/api/iothub/device/receive-device-bound-notification): *Cloud To Device Command* |
| File uploads | File transfer to Azure Storage isn't metered by IoT Hub. File transfer initiation and completion messages are charged as messages metered in 4-KB increments. For example, transferring a 10-MB file is charged as two messages in addition to the Azure Storage cost. <br/><br/> [Create File Upload Sas Uri](/rest/api/iothub/device/create-file-upload-sas-uri): *Device To Cloud File Upload* <br/> [Update File Upload Status](/rest/api/iothub/device/update-file-upload-status): *Device To Cloud File Upload* |
| Direct methods | Successful method requests are charged in 4-KB chunks, and responses are charged in 4-KB chunks as additional messages. Requests or responses with no payload are charged as one message. For example, a method with a 4-KB body that results in a response with no payload from the device is charged as two messages. A method with a 6-KB body that results in a 1-KB response from the device is charged as two messages for the request plus another message for the response. Requests to disconnected devices are charged as messages in 4-KB chunks plus one message for a response that indicates the device isn't online. <br/><br/> [Device - Invoke Method](/rest/api/iothub/service/devices/invoke-method): *Device Direct Invoke Method*, <br/> [Module - Invoke Method](/rest/api/iothub/service/modules/invoke-method): *Module Direct Invoke Method* |
-| Device and module twin reads | Twin reads from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, reading an 8-KB twin is charged as two messages. <br/><br/> [Get Twin](/rest/api/iothub/service/devices/get-twin): *Get Twin* <br/> [Get Module Twin](/rest/api/iothub/service/modules/get-twin): *Get Module Twin* <br/><br/> Read device and module twins from a device: <br/> **Endpoint**: `/devices/{id}/twin` ([MQTT](iot-hub-mqtt-support.md#retrieving-a-device-twins-properties), AMQP only): *D2C Get Twin* <br/> **Endpoint**: `/devices/{deviceid}/modules/{moduleid}/twin` (MQTT, AMQP only): *Module D2C Get Twin* |
-| Device and module twin updates (tags and properties) | Twin updates from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, a 12-KB update to a twin is charged as three messages. <br/><br/> [Update Twin](/rest/api/iothub/service/devices/update-twin): *Update Twin* <br/> [Update Module Twin](/rest/api/iothub/service/modules/update-twin): *Update Module Twin* <br/> [Replace Twin](/rest/api/iothub/service/devices/replace-twin): *Replace Twin* <br/> [Replace Module Twin](/rest/api/iothub/service/modules/replace-twin): *Replace Module Twin* <br/><br/> Update device or module twin reported properties from a device: <br/> **Endpoint**: `/twin/PATCH/properties/reported/` ([MQTT](iot-hub-mqtt-support.md#update-device-twins-reported-properties), AMQP only): *D2 Patch ReportedProperties* or *Module D2 Patch ReportedProperties* <br/><br/> Receive desired properties update notifications on a device: <br/> **Endpoint**: `/twin/PATCH/properties/desired/` ([MQTT](iot-hub-mqtt-support.md#receiving-desired-properties-update-notifications), AMQP only): *D2C Notify DesiredProperties* or *Module D2C Notify DesiredProperties* |
+| Device and module twin reads | Twin reads from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, reading an 8-KB twin is charged as two messages. <br/><br/> [Get Twin](/rest/api/iothub/service/devices/get-twin): *Get Twin* <br/> [Get Module Twin](/rest/api/iothub/service/modules/get-twin): *Get Module Twin* <br/><br/> Read device and module twins from a device: <br/> **Endpoint**: `/devices/{id}/twin` ([MQTT](../iot/iot-mqtt-connect-to-iot-hub.md#retrieving-a-device-twins-properties), AMQP only): *D2C Get Twin* <br/> **Endpoint**: `/devices/{deviceid}/modules/{moduleid}/twin` (MQTT, AMQP only): *Module D2C Get Twin* |
+| Device and module twin updates (tags and properties) | Twin updates from the device or module and from the solution back end are charged as messages in 4-KB chunks. For example, a 12-KB update to a twin is charged as three messages. <br/><br/> [Update Twin](/rest/api/iothub/service/devices/update-twin): *Update Twin* <br/> [Update Module Twin](/rest/api/iothub/service/modules/update-twin): *Update Module Twin* <br/> [Replace Twin](/rest/api/iothub/service/devices/replace-twin): *Replace Twin* <br/> [Replace Module Twin](/rest/api/iothub/service/modules/replace-twin): *Replace Module Twin* <br/><br/> Update device or module twin reported properties from a device: <br/> **Endpoint**: `/twin/PATCH/properties/reported/` ([MQTT](../iot/iot-mqtt-connect-to-iot-hub.md#update-device-twins-reported-properties), AMQP only): *D2 Patch ReportedProperties* or *Module D2 Patch ReportedProperties* <br/><br/> Receive desired properties update notifications on a device: <br/> **Endpoint**: `/twin/PATCH/properties/desired/` ([MQTT](../iot/iot-mqtt-connect-to-iot-hub.md#receiving-desired-properties-update-notifications), AMQP only): *D2C Notify DesiredProperties* or *Module D2C Notify DesiredProperties* |
| Device and module twin queries | Queries against **devices** or **devices.modules** are charged as messages depending on the result size in 4-KB chunks. Queries against **jobs** aren't charged. <br/><br/> [Get Twins](/rest/api/iothub/service/query/get-twins) (query against **devices** or **devices.modules** collections): *Query Devices* |
| Digital twin reads | Digital twin reads from the solution back end are charged as messages in 4-KB chunks. For example, reading an 8-KB twin is charged as two messages. <br/><br/> [Get Digital Twin](/rest/api/iothub/service/digital-twin/get-digital-twin): *Get Digital Twin* |
| Digital twin updates | Digital twin updates from the solution back end are charged as messages in 4-KB chunks. For example, a 12-KB update to a twin is charged as three messages. <br/><br/> [Update Digital Twin](/rest/api/iothub/service/digital-twin/update-digital-twin): *Patch Digital Twin* |
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-protocols.md
The IP address of an IoT hub is subject to change without notice. To learn how t
## Next steps
-For more information about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md).
+For more information about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md).
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide.md
The following articles can help you get started exploring IoT Hub features in mo
* [Azure IoT Hub SDKs](iot-hub-devguide-sdks.md) lists the Azure IoT SDKs for developing device and service apps that interact with your IoT hub. The article includes links to online API documentation.
-* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides detailed information about how IoT Hub supports the MQTT protocol. The article describes the support for the MQTT protocol built in to the Azure IoT SDKs and provides information about using the MQTT protocol directly.
+* [IoT Hub MQTT support](../iot/iot-mqtt-connect-to-iot-hub.md) provides detailed information about how IoT Hub supports the MQTT protocol. The article describes the support for the MQTT protocol built in to the Azure IoT SDKs and provides information about using the MQTT protocol directly.
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
Event Grid enables [filtering](../event-grid/event-filtering.md) on event types,
Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications.
-* For devices connecting using Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [MQTT protocol](iot-hub-mqtt-support.md) will have connection states sent automatically.
+* Devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md) have their connection states sent automatically.
* For devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [AMQP protocol](iot-hub-amqp-support.md), a cloud-to-device link should be created to reduce any delay in accurate connection states.
-* For devices connecting using the .NET [Azure IoT SDK](iot-hub-devguide-sdks.md) with the [MQTT](iot-hub-mqtt-support.md) or [AMQP](iot-hub-amqp-support.md) protocol won't send a device connected event until an initial device-to-cloud or cloud-to-device message is sent/received.
-* Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+* Devices connecting using the .NET [Azure IoT SDK](iot-hub-devguide-sdks.md) with the [MQTT](../iot/iot-mqtt-connect-to-iot-hub.md) or [AMQP](iot-hub-amqp-support.md) protocol won't send a device connected event until an initial device-to-cloud or cloud-to-device message is sent or received.
+* Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](../iot/iot-mqtt-connect-to-iot-hub.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
### Device state interval
iot-hub Iot Hub Preview Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-preview-mode.md
Do *not* use an IoT hub in preview mode for production. Preview mode is intended
## Next steps
-- To preview the MQTT 5 support, see [IoT Hub MQTT 5 support overview (preview)](iot-hub-mqtt-5.md)
+- To preview the MQTT 5 support, see [IoT Hub MQTT 5 support overview (preview)](../iot/iot-mqtt-5-preview.md)
- To preview the ECC server certificate, see [Elliptic Curve Cryptography (ECC) server TLS certificate (preview)](iot-hub-tls-support.md#elliptic-curve-cryptography-ecc-server-tls-certificate-preview)
- To preview TLS fragment size negotiation, see [TLS maximum fragment length negotiation (preview)](iot-hub-tls-support.md#tls-maximum-fragment-length-negotiation-preview)
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
After a successful TLS handshake, IoT Hub can authenticate a device using a symm
## Mutual TLS support
-Mutual TLS authentication ensures that the client _authenticates_ the server (IoT Hub) certificate and the server (IoT Hub) _authenticates_ the [X.509 client certificate or X.509 thumbprint](tutorial-x509-prove-possession.md). _Authorization_ is performed by IoT Hub after _authentication_ is complete.
+Mutual TLS authentication ensures that the client _authenticates_ the server (IoT Hub) certificate and the server (IoT Hub) _authenticates_ the [X.509 client certificate or X.509 thumbprint](tutorial-x509-test-certs.md#create-a-client-certificate-for-a-device). _Authorization_ is performed by IoT Hub after _authentication_ is complete.
For AMQP and MQTT protocols, IoT Hub requests a client certificate in the initial TLS handshake. If one is provided, IoT Hub _authenticates_ the client certificate and the client _authenticates_ the IoT Hub certificate. This process is called mutual TLS authentication. When IoT Hub receives an MQTT connect packet or an AMQP link opens, IoT Hub performs _authorization_ for the requesting client and determines whether the client requires X.509 authentication. If mutual TLS authentication was completed and the client is authorized to connect as the device, the connection is allowed. However, if the client requires X.509 authentication and client authentication was not completed during the TLS handshake, then IoT Hub rejects the connection.
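As a hedged illustration (the hub name and certificate file names are placeholders, not from this article), you can exercise this handshake by presenting a device certificate with the OpenSSL client:

```bash
# Present a device client certificate during the TLS handshake on port 8883
# (placeholder hub name and file names; for connection testing only).
openssl s_client -connect your-hub.azure-devices.net:8883 \
  -cert device.crt -key device.key </dev/null
```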
iot-hub Iot Hub Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-connectivity.md
If the previous steps didn't help, try:
* Verify that your devices are **Enabled** in the Azure portal > your IoT hub > IoT devices.
-* If your device uses MQTT protocol, verify that port 8883 is open. For more information, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* If your device uses MQTT protocol, verify that port 8883 is open. For more information, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
* Get help from [Microsoft Q&A question page for Azure IoT Hub](/answers/topics/azure-iot-hub.html), [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-iot-hub), or [Azure support](https://azure.microsoft.com/support/options/).
iot-hub Iot Hub X509 Certificate Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509-certificate-concepts.md
To learn more about the fields that make up an X.509 certificate, see [X.509 cer
If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-* [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md)
+* [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md)
* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).

>[!IMPORTANT]
>We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
-If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
+If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
The X.509 CA feature enables device authentication to IoT Hub using a certificat
The X.509 CA certificate is at the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
-For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
+For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services provider. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services.
-You may also create a self-signed X.509 CA for experimentation or for use in closed IoT networks.
+You may also create a self-signed X.509 CA certificate for testing purposes. For more information about creating certificates for testing, see [Create and upload certificates for testing](tutorial-x509-test-certs.md).
-Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected always. This precaution is necessary for building trust in the X.509 CA authentication.
+>[!NOTE]
+>We do not recommend the use of self-signed certificates for production environments.
-Learn how to [create a self-signed CA certificate](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md), which you can use for testing.
+Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected always. This precaution is necessary for building trust in the X.509 CA authentication.
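If you do create a test CA yourself, the following OpenSSL sketch shows one way to generate a self-signed root CA for testing; the file names and subject are placeholders and aren't part of this article.

```bash
# Create a private key and a self-signed root CA certificate for testing only.
# Keep the .key file secret; never use a self-signed CA in production.
openssl genrsa -out testRootCA.key 4096
openssl req -x509 -new -nodes -key testRootCA.key -sha256 -days 30 \
  -subj "/CN=Test Root CA" -out testRootCA.pem
```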
## Sign devices into the certificate chain of trust
Register your X.509 CA certificate to IoT Hub, which uses it to authenticate you
The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
-The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. It does so by generating a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step by uploading a file containing the results.
+The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. You can choose to either automatically or manually verify ownership. For manual verification, Azure IoT Hub generates a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step and manually verify your certificate by uploading a file containing the results.
-Learn how to [register your CA certificate](./tutorial-x509-prove-possession.md)
+Learn how to [register your CA certificate](tutorial-x509-test-certs.md#register-your-subordinate-ca-certificate-to-your-iot-hub).
## Create a device on IoT Hub
With your X.509 CA certificate registered and devices signed into a certificate
A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
-Learn how to [complete this device connection step](./tutorial-x509-prove-possession.md).
- ## Revoke a device certificate
IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub).
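For illustration, disabling a device can also be done from the Azure CLI; this is a hedged sketch in which the hub and device names are placeholders and the azure-iot extension is assumed.

```bash
# Block a device with a potentially compromised certificate by disabling it
# in the identity registry (placeholder hub and device names).
az iot hub device-identity update \
  --hub-name your-hub \
  --device-id your-device \
  --set status=disabled
```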
iot-hub Module Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md
This article shows you how to create an Azure CLI session in which you:
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Prepare the Cloud Shell
iot-hub Reference X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/reference-x509-certificates.md
You can use [OpenSSL](https://www.openssl.org/) to create self-signed certificat
openssl x509 -in {CrtFile} -noout -fingerprint
```
+### Verify certificate manually after upload
+
+When you upload your root certificate authority (CA) certificate or subordinate CA certificate to your IoT hub, you can choose to automatically verify the certificate. If you didn't choose to automatically verify your certificate during upload, your certificate is shown with its status set to **Unverified**. You must perform the following steps to manually verify your certificate.
+
+1. Select the certificate to view the **Certificate Details** dialog.
+
+1. Select **Generate Verification Code** in the dialog.
+
+ :::image type="content" source="media/reference-x509-certificates/certificate-details.png" alt-text="Screenshot showing the certificate details dialog.":::
+
+1. Copy the verification code to the clipboard. You must use this verification code as the certificate subject in subsequent steps. For example, if the verification code is `75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3`, add that as the subject of your certificate as shown in the next step.
+
+1. There are three ways to generate a verification certificate:
+
+ - If you're using the PowerShell script supplied by Microsoft, run `New-CACertsVerificationCert "<verification code>"` to create a certificate named `VerifyCert4.cer`, replacing `<verification code>` with the previously generated verification code. For more information, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) in the GitHub repository for the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c).
+
+ - If you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named verification-code.cert.pem, replacing `<verification code>` with the previously generated verification code. For more information, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) in the GitHub repository for the Azure IoT Hub Device SDK for C.
+
+ - If you're using OpenSSL to generate your certificates, you must first generate a private key, then generate a certificate signing request (CSR) file. In the following example, replace `<verification code>` with the previously generated verification code:
+
+ ```bash
+ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+
+ openssl req -new -key pop.key -out pop.csr
+
+ --
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:.
+ Common Name (eg, your name or your server hostname) []:<verification code>
+ Email Address []:
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:
+ An optional company name []:
+ ```
+
+ Then, create a certificate using the appropriate configuration file for either the root CA or the subordinate CA, and the CSR file. The following example demonstrates how to use OpenSSL to create the certificate from a root CA configuration file and the CSR file.
+
+ ```bash
+ openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
+ ```
+
+ For more information, see [Tutorial - Create and upload certificates for testing](tutorial-x509-test-certs.md).
+
+1. Select the new certificate in the **Certificate Details** view.
+
+1. After the certificate uploads, select **Verify**. The certificate status should change to **Verified**.
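If you prefer the CLI to the portal for the preceding steps, something like the following sketch may work; the certificate name, file path, and etag are placeholders, and the exact parameters should be confirmed against the Azure CLI reference.

```bash
# Generate the verification code for an uploaded CA certificate, then upload
# the signed verification certificate to mark it Verified (placeholder values).
az iot hub certificate generate-verification-code \
  --hub-name your-hub --name your-root-ca --etag <etag>
az iot hub certificate verify \
  --hub-name your-hub --name your-root-ca --path pop.crt --etag <etag>
```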
+ ## For more information
For more information about X.509 certificates and how they're used in IoT Hub, see the following articles:
iot-hub Schedule Jobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-cli.md
This article shows you how to create two Azure CLI sessions:
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Prepare the Cloud Shell
iot-hub Schedule Jobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-dotnet.md
This article shows you how to create two .NET (C#) console apps:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create a simulated device app
iot-hub Schedule Jobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-java.md
This article shows you how to create two Java apps:
* [Maven 3](https://maven.apache.org/download.cgi)
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
> [!NOTE]
> To keep things simple, this article does not implement a retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
iot-hub Schedule Jobs Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-node.md
This article shows you how to create two Node.js apps:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create a simulated device app
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
For device developers, if the volume of errors is a concern, switch to the C SDK
In general, the error message presented should explain how to fix the error. If for some reason you don't have access to the error message detail, make sure: * The SAS or other security token you use isn't expired.
-* For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Set up X.509 security in your Azure IoT hub](tutorial-x509-prove-possession.md).
+* For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Tutorial: Create and upload certificates for testing](tutorial-x509-test-certs.md).
* For X.509 certificate thumbprint authentication, the thumbprint of the device certificate is registered with IoT Hub. * The authorization credential is well formed for the protocol that you use. To learn more, see [Control access to IoT Hub](iot-hub-devguide-security.md). * The authorization rule used has the permission for the operation requested.
This error can occur because the [SAS token used to connect to IoT Hub](iot-hub-
Some other possibilities include:
-* The device lost underlying network connectivity longer than the [MQTT keep-alive](iot-hub-mqtt-support.md#default-keep-alive-timeout), resulting in a remote idle timeout. The MQTT keep-alive setting can be different per device.
+* The device lost underlying network connectivity longer than the [MQTT keep-alive](../iot/iot-mqtt-connect-to-iot-hub.md#default-keep-alive-timeout), resulting in a remote idle timeout. The MQTT keep-alive setting can be different per device.
* The device sent a TCP/IP-level reset but didn't send an application-level `MQTT DISCONNECT`. Basically, the device abruptly closed the underlying socket connection. Sometimes, this issue is caused by bugs in older versions of the Azure IoT SDK. * The device side application crashed.
iot-hub Tutorial Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-connectivity.md
In this tutorial, you learn how to:
* Clone or download the sample Node.js project from [Azure IoT samples for Node.js](https://github.com/Azure-Samples/azure-iot-samples-node).
-* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Create an IoT hub
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
If you don't have an Azure subscription, create a [free account](https://azure.m
* Clone or download the sample Node.js project from [Azure IoT samples for Node.js](https://github.com/Azure-Samples/azure-iot-samples-node).
-* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
## Set up Azure resources
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
In this tutorial, you perform the following tasks:
* You must have completed [Tutorial: Send device data to Azure Storage using IoT Hub message routing](tutorial-routing.md) and maintained the resources you created for it.
-* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
# [Azure portal](#tab/portal)
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
In this tutorial, you perform the following tasks:
* Download or clone the SDK repo to your development machine. * Have .NET Core 3.0.0 or greater on your development machine. Check your version by running `dotnet --version` and [Download .NET](https://dotnet.microsoft.com/download) if necessary.
-* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
* Optionally, install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). This tool helps you observe the messages as they arrive at your IoT hub. This article uses Azure IoT Explorer.
iot-hub Tutorial Use Metrics And Diags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md
In this tutorial, you perform the following tasks:
* An email account capable of receiving mail.
-* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-openssl.md
- Title: Tutorial - Use OpenSSL to create X.509 test certificates for Azure IoT Hub| Microsoft Docs
-description: Tutorial - Use OpenSSL to create CA and device certificates for Azure IoT hub
----- Previously updated : 02/24/2022--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to OpenSSL that I can use to generate test certificates.
--
-# Tutorial: Use OpenSSL to create test certificates
-
-For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority (CA). However, creating your own test certificate hierarchy is adequate for testing IoT Hub device authentication. For more information about getting an X.509 CA certificate from a public root CA, see the [Get an X.509 CA certificate](iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) section of [Authenticate devices using X.509 CA certificates](iot-hub-x509ca-overview.md).
-
-The following example uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL Cookbook](https://www.feistyduck.com/library/openssl-cookbook/online/ch-openssl.html) to create a certificate authority (CA), a subordinate CA, and a device certificate. The example then signs the subordinate CA and the device certificate into a certificate hierarchy. This example is presented for demonstration purposes only.
-
->[!NOTE]
->Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT hub. The scripts are included with the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c). The scripts are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. You must use your own best practices for certificate creation and lifetime management in a production environment. For more information, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) in the GitHub repository for the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c).
-
-## Step 1 - Create the root CA directory structure
-
-Create a directory structure for the certificate authority.
-
-* The *certs* directory stores new certificates.
-* The *db* directory stores the certificate database.
-* The *private* directory stores the CA private key.
-
-```bash
- mkdir rootca
- cd rootca
- mkdir certs db private
- touch db/index
- openssl rand -hex 16 > db/serial
- echo 1001 > db/crlnumber
-```
-
-## Step 2 - Create a root CA configuration file
-
-Before creating a CA, create a configuration file and save it as *rootca.conf* in the *rootca* directory.
-
-```xml
-[default]
-name = rootca
-domain_suffix = example.com
-aia_url = http://$name.$domain_suffix/$name.crt
-crl_url = http://$name.$domain_suffix/$name.crl
-default_ca = ca_default
-name_opt = utf8,esc_ctrl,multiline,lname,align
-
-[ca_dn]
-commonName = "Test Root CA"
-
-[ca_default]
-home = ../rootca
-database = $home/db/index
-serial = $home/db/serial
-crlnumber = $home/db/crlnumber
-certificate = $home/$name.crt
-private_key = $home/private/$name.key
-RANDFILE = $home/private/random
-new_certs_dir = $home/certs
-unique_subject = no
-copy_extensions = none
-default_days = 3650
-default_crl_days = 365
-default_md = sha256
-policy = policy_c_o_match
-
-[policy_c_o_match]
-countryName = optional
-stateOrProvinceName = optional
-organizationName = optional
-organizationalUnitName = optional
-commonName = supplied
-emailAddress = optional
-
-[req]
-default_bits = 2048
-encrypt_key = yes
-default_md = sha256
-utf8 = yes
-string_mask = utf8only
-prompt = no
-distinguished_name = ca_dn
-req_extensions = ca_ext
-
-[ca_ext]
-basicConstraints = critical,CA:true
-keyUsage = critical,keyCertSign,cRLSign
-subjectKeyIdentifier = hash
-
-[sub_ca_ext]
-authorityKeyIdentifier = keyid:always
-basicConstraints = critical,CA:true,pathlen:0
-extendedKeyUsage = clientAuth,serverAuth
-keyUsage = critical,keyCertSign,cRLSign
-subjectKeyIdentifier = hash
-
-[client_ext]
-authorityKeyIdentifier = keyid:always
-basicConstraints = critical,CA:false
-extendedKeyUsage = clientAuth
-keyUsage = critical,digitalSignature
-subjectKeyIdentifier = hash
-
-```
-
-## Step 3 - Create a root CA
-
-First, generate a private key and the certificate signing request (CSR) in the *rootca* directory.
-
-```bash
- openssl req -new -config rootca.conf -out rootca.csr -keyout private/rootca.key
-```
-
-Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the `ca_ext` configuration file extensions on the command line. These extensions indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
-
-```bash
- openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt -extensions ca_ext
-```
-
-## Step 4 - Create the subordinate CA directory structure
-
-Create a directory structure for the subordinate CA at the same level as the *rootca* directory.
-
-```bash
- mkdir subca
- cd subca
- mkdir certs db private
- touch db/index
- openssl rand -hex 16 > db/serial
- echo 1001 > db/crlnumber
-```
-
-## Step 5 - Create a subordinate CA configuration file
-
-Create a configuration file and save it as *subca.conf* in the *subca* directory.
-
-```bash
-[default]
-name = subca
-domain_suffix = example.com
-aia_url = http://$name.$domain_suffix/$name.crt
-crl_url = http://$name.$domain_suffix/$name.crl
-default_ca = ca_default
-name_opt = utf8,esc_ctrl,multiline,lname,align
-
-[ca_dn]
-commonName = "Test Subordinate CA"
-
-[ca_default]
-home = .
-database = $home/db/index
-serial = $home/db/serial
-crlnumber = $home/db/crlnumber
-certificate = $home/$name.crt
-private_key = $home/private/$name.key
-RANDFILE = $home/private/random
-new_certs_dir = $home/certs
-unique_subject = no
-copy_extensions = copy
-default_days = 365
-default_crl_days = 90
-default_md = sha256
-policy = policy_c_o_match
-
-[policy_c_o_match]
-countryName = optional
-stateOrProvinceName = optional
-organizationName = optional
-organizationalUnitName = optional
-commonName = supplied
-emailAddress = optional
-
-[req]
-default_bits = 2048
-encrypt_key = yes
-default_md = sha256
-utf8 = yes
-string_mask = utf8only
-prompt = no
-distinguished_name = ca_dn
-req_extensions = ca_ext
-
-[ca_ext]
-basicConstraints = critical,CA:true
-keyUsage = critical,keyCertSign,cRLSign
-subjectKeyIdentifier = hash
-
-[sub_ca_ext]
-authorityKeyIdentifier = keyid:always
-basicConstraints = critical,CA:true,pathlen:0
-extendedKeyUsage = clientAuth,serverAuth
-keyUsage = critical,keyCertSign,cRLSign
-subjectKeyIdentifier = hash
-
-[client_ext]
-authorityKeyIdentifier = keyid:always
-basicConstraints = critical,CA:false
-extendedKeyUsage = clientAuth
-keyUsage = critical,digitalSignature
-subjectKeyIdentifier = hash
-```
-
-## Step 6 - Create a subordinate CA
-
-This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isn't strictly necessary. Having a subordinate CA does, however, mimic real-world certificate hierarchies in which the root CA is kept offline and a subordinate CA issues client certificates.
-
-From the *subca* directory, use the configuration file to generate a private key and a certificate signing request (CSR).
-
-```bash
- openssl req -new -config subca.conf -out subca.csr -keyout private/subca.key
-```
-
-Submit the CSR to the root CA and use the root CA to issue and sign the subordinate CA certificate. Specify `sub_ca_ext` for the extensions switch on the command line. The extensions indicate that the certificate is for a CA that can sign certificates and certificate revocation lists (CRLs). When prompted, sign the certificate, and commit it to the database.
-
-```bash
- openssl ca -config ../rootca/rootca.conf -in subca.csr -out subca.crt -extensions sub_ca_ext
-```
-
-## Step 7 - Demonstrate proof of possession
-
-You now have both a root CA certificate and a subordinate CA certificate. You can use either one to sign device certificates. The one you choose must be uploaded to your IoT Hub. The following steps assume that you're using the subordinate CA certificate. To upload and register your subordinate CA certificate to your IoT Hub:
-
-1. In the Azure portal, navigate to your IoT hub and select **Settings > Certificates**.
-
-2. Select **Add** to add your new subordinate CA certificate.
-
-3. Enter a display name in the **Certificate Name** field, and select the PEM certificate file you created previously.
-
- > [!NOTE]
- > The .crt certificates created above are the same as .pem certificates. You can simply change the extension when uploading a certificate to prove possession, or you can use the following OpenSSL command:
- >
- > ```bash
- > openssl x509 -in mycert.crt -out mycert.pem -outform PEM
- > ```
-
-4. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. The verification process will prove that you own the certificate.
-
-5. Select the certificate to view the **Certificate Details** dialog.
-
-6. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
-
-7. Copy the verification code to the clipboard. You must set the verification code as the certificate subject. For example, if the verification code is BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A, add that as the subject of your certificate as shown in step 9.
-
-8. Generate a private key.
-
- ```bash
- openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
- ```
-
-9. Generate a certificate signing request (CSR) from the private key. Add the verification code as the subject of your certificate.
-
- ```bash
- openssl req -new -key pop.key -out pop.csr
-
- --
- Country Name (2 letter code) [XX]:.
- State or Province Name (full name) []:.
- Locality Name (eg, city) [Default City]:.
- Organization Name (eg, company) [Default Company Ltd]:.
- Organizational Unit Name (eg, section) []:.
- Common Name (eg, your name or your server hostname) []:BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A
- Email Address []:
-
- Please enter the following 'extra' attributes
- to be sent with your certificate request
- A challenge password []:
- An optional company name []:
-
- ```
-
-10. Create a certificate using the subordinate CA configuration file and the CSR for the proof of possession certificate.
-
- ```bash
- openssl ca -config subca.conf -in pop.csr -out pop.crt -extensions client_ext
- ```
-
-11. Select the new certificate in the **Certificate Details** view. To find the PEM file, navigate to the *certs* folder.
-
-12. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
-
-## Step 8 - Create a device in your IoT Hub
-
-Navigate to your IoT Hub in the Azure portal and create a new IoT device identity with the following values:
-
-1. Provide the **Device ID** that matches the subject name of your device certificates.
-
-1. Select the **X.509 CA Signed** authentication type.
-
-1. Select **Save**.
-
-## Step 9 - Create a client device certificate
-
-To generate a client certificate, you must first generate a private key. The following command shows how to use OpenSSL to create a private key. Create the key in the *subca* directory.
-
-```bash
-openssl genpkey -out device.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
-```
-
-Create a certificate signing request (CSR) for the key. You don't need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field. You can also enter your own values for the other parameters such as **Country Name**, **Organization Name**, and so on.
-
-```bash
-openssl req -new -key device.key -out device.csr
-
-Country Name (2 letter code) [XX]:.
-State or Province Name (full name) []:.
-Locality Name (eg, city) [Default City]:.
-Organization Name (eg, company) [Default Company Ltd]:.
-Organizational Unit Name (eg, section) []:
-Common Name (eg, your name or your server hostname) []:`<your device ID>`
-Email Address []:
-
-Please enter the following 'extra' attributes
-to be sent with your certificate request
-A challenge password []:
-An optional company name []:
-
-```
-
-Check that the CSR is what you expect.
-
-```bash
-openssl req -text -in device.csr -noout
-```
-
-Send the CSR to the subordinate CA for signing into the certificate hierarchy. Specify `client_ext` in the `-extensions` switch. Notice that the `Basic Constraints` in the issued certificate indicate that this certificate isn't for a CA. If you're signing multiple certificates, be sure to update the serial number before generating each certificate by using the `openssl rand -hex 16 > db/serial` command.
-
-```bash
-openssl ca -config subca.conf -in device.csr -out device.crt -extensions client_ext
-```
-
-## Next Steps
-
-Go to [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT Hub. The code on that page requires that you use a PFX certificate. Use the following OpenSSL command to convert your device .crt certificate to .pfx format.
-
-```bash
-openssl pkcs12 -export -in device.crt -inkey device.key -out device.pfx
-```
iot-hub Tutorial X509 Prove Possession https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-prove-possession.md
- Title: Tutorial - Upload and verify CA certificates in Azure IoT Hub | Microsoft Docs
-description: Tutorial - Upload and verify a CA certificate to Azure IoT Hub
---- Previously updated : 01/09/2023--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to upload and verify CA certificates to IoT hub.
--
-# Tutorial: Upload and verify a CA certificate to IoT Hub
-
-When you upload your root certificate authority (CA) certificate or subordinate CA certificate to your IoT hub, you can set it to verified automatically, or manually prove that you own the certificate.
-
-## Verify certificate automatically
-
-1. In the Azure portal, navigate to your IoT hub and select **Certificates** from the resource menu, under **Security settings**.
-
-1. Select **Add** from the command bar to add a new CA certificate.
-
-1. Enter a display name in the **Certificate name** field.
-
-1. Select the certificate file to add in the **Certificate .pem or .cer file** field.
-
-1. To automatically verify the certificate, check the box next to **Set certificate status to verified on upload**.
-
- :::image type="content" source="media/tutorial-x509-prove-possession/skip-pop.png" alt-text="Screenshot showing how to automatically verify the certificate status on upload.":::
-
-1. Select **Save**.
-
-If you chose to automatically verify your certificate during upload, your certificate is shown with its status set to **Verified** on the **Certificates** tab of the working pane.
-
-## Verify certificate manually after upload
-
-If you didn't choose to automatically verify your certificate during upload, your certificate is shown with its status set to **Unverified**. You must perform the following steps to manually verify your certificate.
-
-1. Select the certificate to view the **Certificate Details** dialog.
-
-1. Select **Generate Verification Code** in the dialog.
-
- :::image type="content" source="media/tutorial-x509-prove-possession/certificate-details.png" alt-text="Screenshot showing the certificate details dialog.":::
-
-1. Copy the verification code to the clipboard. You must use this verification code as the certificate subject in subsequent steps. For example, if the verification code is 75B86466DA34D2B04C0C4C9557A119687ADAE7D4732BDDB3, add that as the subject of your certificate as shown in the next step.
-
-5. There are three ways to generate a verification certificate:
-
- * If you're using the PowerShell script supplied by Microsoft, run `New-CACertsVerificationCert "<verification code>"` to create a certificate named `VerifyCert4.cer`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md).
-
- * If you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named `verification-code.cert.pem`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md).
-
- * If you're using OpenSSL to generate your certificates, you must first generate a private key, then generate a certificate signing request (CSR) file. In the following example, replace `<verification code>` with the previously generated verification code:
-
- ```bash
- $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
-
- $ openssl req -new -key pop.key -out pop.csr
-
- --
- Country Name (2 letter code) [XX]:.
- State or Province Name (full name) []:.
- Locality Name (eg, city) [Default City]:.
- Organization Name (eg, company) [Default Company Ltd]:.
- Organizational Unit Name (eg, section) []:.
- Common Name (eg, your name or your server hostname) []:<verification code>
- Email Address []:
-
- Please enter the following 'extra' attributes
- to be sent with your certificate request
- A challenge password []:
- An optional company name []:
-
- ```
-
- Then, create a certificate using the appropriate configuration file for either the root CA or the subordinate CA, and the CSR file. The following example demonstrates how to use OpenSSL to create the certificate from a root CA configuration file and the CSR file.
-
- ```bash
- openssl ca -config rootca.conf -in pop.csr -out pop.crt -extensions client_ext
-
- ```
-
- For more information, see [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md).
-
-1. Select the new certificate in the **Certificate Details** view.
-
-1. After the certificate uploads, select **Verify**. The certificate status should change to **Verified**.
iot-hub Tutorial X509 Test Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-test-certificate.md
- Title: Tutorial - Test ability of X.509 certificates to authenticate devices to an Azure IoT Hub | Microsoft Docs
-description: Tutorial - Test your X.509 certificates to authenticate to Azure IoT Hub
----- Previously updated : 02/26/2021--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to test that my certificate authenticates my device.
--
-# Tutorial: Test certificate authentication
-
-You can use the following C# code example to test that your certificate can authenticate your device to your IoT hub. Note that you must complete the following prerequisites before you run the test code:
-
-* Create a root CA or subordinate CA certificate.
-* Upload your CA certificate to your IoT hub.
-* Prove that you possess the CA certificate.
-* Add a device to your IoT hub.
-* Create a device certificate with the same device ID as your device.
-
->[!IMPORTANT]
->The authentication process checks that your device is associated with the correct IoT hub name.
-
-## Code Example
-
-The following code example shows how to create a C# application to simulate the X.509 device registered for your IoT hub. The example sends temperature and humidity values from the simulated device to your hub. In this tutorial, you create only the device application. Creating an IoT Hub service application that responds to the events sent by this simulated device is left as an exercise for the reader.
-
-1. Open Visual Studio, select **Create a new project**, and then choose the **Console App (.NET Framework)** project template. Select **Next**.
-
-1. In **Configure your new project**, name the project *SimulateX509Device*, and then select **Create**.
-
- ![Create X.509 device project in Visual Studio](./media/iot-hub-security-x509-get-started/create-device-project-vs2019.png)
-
-1. In Solution Explorer, right-click the **SimulateX509Device** project, and then select **Manage NuGet Packages**.
-
-1. In the **NuGet Package Manager**, select **Browse** and search for and choose **Microsoft.Azure.Devices.Client**. Select **Install**.
-
- ![Add device SDK NuGet package in Visual Studio](./media/iot-hub-security-x509-get-started/device-sdk-nuget.png)
-
- This step downloads, installs, and adds a reference to the Azure IoT device SDK NuGet package and its dependencies.
-
- Input and run the following code:
-
-```csharp
-using System;
-using Microsoft.Azure.Devices.Client;
-using System.Security.Cryptography.X509Certificates;
-using System.Threading.Tasks;
-using System.Text;
-
-namespace SimulateX509Device
-{
- class Program
- {
- private static int MESSAGE_COUNT = 5;
-
- // Temperature and humidity variables.
- private const int TEMPERATURE_THRESHOLD = 30;
- private static float temperature;
- private static float humidity;
- private static Random rnd = new Random();
-
- // Set the device ID to the name (device identifier) of your device.
- private static String deviceId = "{your-device-id}";
-
- static async Task SendEvent(DeviceClient deviceClient)
- {
- string dataBuffer;
- Console.WriteLine("Device sending {0} messages to IoTHub...\n", MESSAGE_COUNT);
-
- // Iterate MESSAGE_COUNT times to set random temperature and humidity values.
- for (int count = 0; count < MESSAGE_COUNT; count++)
- {
- // Set random values for temperature and humidity.
- temperature = rnd.Next(20, 35);
- humidity = rnd.Next(60, 80);
- dataBuffer = string.Format("{{\"deviceId\":\"{0}\",\"messageId\":{1},\"temperature\":{2},\"humidity\":{3}}}", deviceId, count, temperature, humidity);
- Message eventMessage = new Message(Encoding.UTF8.GetBytes(dataBuffer));
- eventMessage.Properties.Add("temperatureAlert", (temperature > TEMPERATURE_THRESHOLD) ? "true" : "false");
- Console.WriteLine("\t{0}> Sending message: {1}, Data: [{2}]", DateTime.Now.ToLocalTime(), count, dataBuffer);
-
- // Send to IoT Hub.
- await deviceClient.SendEventAsync(eventMessage);
- }
- }
- static void Main(string[] args)
- {
- try
- {
- // Create an X.509 certificate object.
- var cert = new X509Certificate2(@"{full path to pfx certificate.pfx}", "{your certificate password}");
-
- // Create an authentication object using your X.509 certificate.
- var auth = new DeviceAuthenticationWithX509Certificate("{your-device-id}", cert);
-
- // Create the device client.
- var deviceClient = DeviceClient.Create("{your-IoT-Hub-name}.azure-devices.net", auth, TransportType.Mqtt);
-
- if (deviceClient == null)
- {
- Console.WriteLine("Failed to create DeviceClient!");
- }
- else
- {
- Console.WriteLine("Successfully created DeviceClient!");
- SendEvent(deviceClient).Wait();
- }
-
- Console.WriteLine("Exiting...\n");
- }
- catch (Exception ex)
- {
- Console.WriteLine("Error in sample: {0}", ex.Message);
- }
- }
- }
-}
-```
-## Next steps
-
-Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot-hub Tutorial X509 Test Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-test-certs.md
+
+ Title: Tutorial - Create and upload certificates for testing
+
+description: Tutorial - Create a root certificate authority and use it to create subordinate CA and client certificates that you can use for testing purposes with Azure IoT Hub
+++++ Last updated : 03/03/2023++
+#Customer intent: As a developer, I want to create and use X.509 certificates to authenticate my devices on an IoT hub for testing purposes.
++
+# Tutorial: Create and upload certificates for testing
+
+You can use X.509 certificates to authenticate devices to your IoT hub. For production environments, we recommend that you purchase an X.509 CA certificate from a professional certificate services vendor. You can then issue certificates within your organization from an internal, self-managed certificate authority (CA) chained to the purchased CA certificate as part of a comprehensive public key infrastructure (PKI) strategy. For more information about getting an X.509 CA certificate from a professional certificate services vendor, see the [Get an X.509 CA certificate](iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) section of [Authenticate devices using X.509 CA certificates](iot-hub-x509ca-overview.md).
+
+However, creating your own self-managed, private CA that uses an internal root CA as the trust anchor is adequate for testing environments. A self-managed private CA that has at least one subordinate CA chained to your internal root CA, and that signs your device client certificates with those subordinate CAs, lets you simulate a recommended production environment.
+
+>[!NOTE]
+>We do not recommend the use of self-signed certificates for production environments. This tutorial is presented for demonstration purposes only.
+
+The following tutorial uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL Cookbook](https://www.feistyduck.com/library/openssl-cookbook/online/ch-openssl.html) to describe how to accomplish the following tasks:
+
+> [!div class="checklist"]
+> * Create an internal root certificate authority (CA) and root CA certificate
+> * Create an internal subordinate CA and subordinate CA certificate, signed by your internal root CA certificate
+> * Upload your subordinate CA certificate to your IoT hub for testing purposes
+> * Use the subordinate CA to create client certificates for the IoT devices you want to test with your IoT hub
+
+>[!NOTE]
+>Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT hub. The scripts are included with the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c). The scripts are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. You must use your own best practices for certificate creation and lifetime management in a production environment. For more information, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) in the GitHub repository for the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c).
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
+
+* The latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
+
+* An [OpenSSL](https://www.openssl.org/) installation. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`.
+
+ >[!NOTE]
+    > Unless you're familiar with OpenSSL and already have it installed on your Windows machine, we recommend using OpenSSL from the Git Bash prompt. Alternatively, you can choose to download the source code and build OpenSSL. To learn more, see the [OpenSSL Downloads](https://www.openssl.org/source/) page. Or, you can download OpenSSL prebuilt from a third party. To learn more, see the [OpenSSL wiki](https://wiki.openssl.org/index.php/Binaries). Microsoft makes no guarantees about the validity of packages downloaded from third parties. If you do choose to build or download OpenSSL, make sure that the OpenSSL binary is accessible in your path and that the `OPENSSL_CNF` environment variable is set to the path of your *openssl.cnf* file.
+
+## Create a root CA
+
+You must first create an internal root certificate authority (CA) and a self-signed root CA certificate to serve as a trust anchor from which you can create other certificates for testing. The files used to create and maintain your internal root CA are stored in a folder structure and initialized as part of this process. Perform the following steps to:
+
+> [!div class="checklist"]
+> * Create and initialize the folders and files used by your root CA
+> * Create a configuration file used by OpenSSL to configure your root CA and certificates created with your root CA
+> * Request and create a self-signed CA certificate that serves as your root CA certificate
+
+1. Start a Git Bash window and run the following command, replacing *{base_dir}* with the desired directory in which to create the root CA.
+
+ ```bash
+ cd {base_dir}
+ ```
+
+1. In the Git Bash window, run the following commands, one at a time. This step creates the following directory structure and support files for the root CA.
+
+ | Directory or file | Description |
+ | | |
+ | rootca | The root directory of the root CA. |
+ | rootca/certs | The directory in which CA certificates for the root CA are created and stored. |
+ | rootca/db | The directory in which the certificate database and support files for the root CA are stored. |
+ | rootca/db/index | The certificate database for the root CA. The `touch` command creates a file without any content, for later use. The certificate database is a plain text file managed by OpenSSL that contains information about issued certificates. For more information about the certificate database, see the [openssl-ca](https://www.openssl.org/docs/man3.1/man1/openssl-ca.html) manual page in [OpenSSL documentation](https://www.openssl.org/docs/). |
+ | rootca/db/serial | A file used to store the serial number of the next certificate to be created for the root CA. The `openssl` command creates a 16-byte random number in hexadecimal format, then stores it in this file to initialize the file for creating the root CA certificate. |
+ | rootca/db/crlnumber | A file used to store serial numbers for revoked certificates issued by the root CA. The `echo` command pipes a sample serial number, 1001, into the file. |
+ | rootca/private | The directory in which private files for the root CA, including the private key, are stored.<br/>The files in this directory must be secured and protected. |
+
+ ```bash
+ mkdir rootca
+ cd rootca
+ mkdir certs db private
+ chmod 700 private
+ touch db/index
+ openssl rand -hex 16 > db/serial
+ echo 1001 > db/crlnumber
+ ```
+
+1. Create a text file named *rootca.conf* in the *rootca* directory created in the previous step. Open that file in a text editor, and then copy and save the following OpenSSL configuration settings into that file, replacing the following placeholders with their corresponding values.
+
+ | Placeholder | Description |
+ | | |
+ | {rootca_name} | The name of the root CA. For example, `rootca`. |
+ | {domain_suffix} | The suffix of the domain name for the root CA. For example, `example.com`. |
+ | {rootca_common_name} | The common name of the root CA. For example, `Test Root CA`. |
+
+ The file provides OpenSSL with the values needed to configure your test root CA. For this example, the file configures a root CA using the directories and files created in previous steps. The file also provides configuration settings for:
+
+ - The CA policy used by the root CA for certificate Distinguished Name (DN) fields
+ - Certificate requests created by the root CA
+ - X.509 extensions applied to root CA certificates, subordinate CA certificates, and client certificates issued by the root CA
+
+ For more information about the syntax of OpenSSL configuration files, see the [config](https://www.openssl.org/docs/manmaster/man5/config.html) manual page in OpenSSL documentation.
+
+ ```bash
+ [default]
+ name = {rootca_name}
+ domain_suffix = {domain_suffix}
+ aia_url = http://$name.$domain_suffix/$name.crt
+ crl_url = http://$name.$domain_suffix/$name.crl
+ default_ca = ca_default
+ name_opt = utf8,esc_ctrl,multiline,lname,align
+
+ [ca_dn]
+ commonName = "{rootca_common_name}"
+
+ [ca_default]
+ home = ../rootca
+ database = $home/db/index
+ serial = $home/db/serial
+ crlnumber = $home/db/crlnumber
+ certificate = $home/$name.crt
+ private_key = $home/private/$name.key
+ RANDFILE = $home/private/random
+ new_certs_dir = $home/certs
+ unique_subject = no
+ copy_extensions = none
+ default_days = 3650
+ default_crl_days = 365
+ default_md = sha256
+ policy = policy_c_o_match
+
+ [policy_c_o_match]
+ countryName = optional
+ stateOrProvinceName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [req]
+ default_bits = 2048
+ encrypt_key = yes
+ default_md = sha256
+ utf8 = yes
+ string_mask = utf8only
+ prompt = no
+ distinguished_name = ca_dn
+ req_extensions = ca_ext
+
+ [ca_ext]
+ basicConstraints = critical,CA:true
+ keyUsage = critical,keyCertSign,cRLSign
+ subjectKeyIdentifier = hash
+
+ [sub_ca_ext]
+ authorityKeyIdentifier = keyid:always
+ basicConstraints = critical,CA:true,pathlen:0
+ extendedKeyUsage = clientAuth,serverAuth
+ keyUsage = critical,keyCertSign,cRLSign
+ subjectKeyIdentifier = hash
+
+ [client_ext]
+ authorityKeyIdentifier = keyid:always
+ basicConstraints = critical,CA:false
+ extendedKeyUsage = clientAuth
+ keyUsage = critical,digitalSignature
+ subjectKeyIdentifier = hash
+ ```
+
+1. In the Git Bash window, run the following command to generate a certificate signing request (CSR) in the *rootca* directory and a private key in the *rootca/private* directory. For more information about the OpenSSL `req` command, see the [openssl-req](https://www.openssl.org/docs/man3.1/man1/openssl-req.html) manual page in OpenSSL documentation.
+
+ > [!NOTE]
+ > Even though this root CA is for testing purposes and won't be exposed as part of a public key infrastructure (PKI), we recommend that you do not copy or share the private key.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl req -new -config rootca.conf -out rootca.csr \
+ -keyout private/rootca.key
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -new -config rootca.conf -out rootca.csr \
+ -keyout private/rootca.key
+ ```
+
+
+
+ You're prompted to enter a PEM pass phrase, as shown in the following example, for the private key file. Enter and confirm a pass phrase to generate your private key and CSR.
+
+ ```bash
+ Enter PEM pass phrase:
+ Verifying - Enter PEM pass phrase:
+ --
+ ```
+
+ Confirm that the CSR file, *rootca.csr*, is present in the *rootca* directory and the private key file, *rootca.key*, is present in the *private* subdirectory before continuing. For more information about the formats of the CSR and private key files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats).
+
+1. In the Git Bash window, run the following command to create a self-signed root CA certificate. The command applies the `ca_ext` configuration file extensions to the certificate. These extensions indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). For more information about the OpenSSL `ca` command, see the [openssl-ca](https://www.openssl.org/docs/man3.1/man1/openssl-ca.html) manual page in OpenSSL documentation.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt \
+ -extensions ca_ext
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt \
+ -extensions ca_ext
+ ```
+
+
+
+ You're prompted to provide the PEM pass phrase, as shown in the following example, for the private key file. After providing the pass phrase, OpenSSL generates a certificate, then prompts you to sign and commit the certificate for your root CA. Specify *y* for both prompts to generate the self-signed certificate for your root CA.
+
+ ```bash
+ Using configuration from rootca.conf
+ Enter pass phrase for ../rootca/private/rootca.key:
+ Check that the request matches the signature
+ Signature ok
+ Certificate Details:
+ {Details omitted from output for clarity}
+ Certificate is to be certified until Mar 24 18:51:41 2033 GMT (3650 days)
+ Sign the certificate? [y/n]:
++
+ 1 out of 1 certificate requests certified, commit? [y/n]
+ Write out database with 1 new entries
+ Data Base Updated
+ ```
+
+    After OpenSSL updates the certificate database, confirm that both the certificate file, *rootca.crt*, is present in the *rootca* directory and the PEM certificate (.pem) file for the certificate is present in the *rootca/certs* directory. For more information about the formats of certificate files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats). You can optionally inspect the new root CA certificate, as shown in the example after these steps.
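+
+As an optional check, you can decode the new root CA certificate with OpenSSL to confirm that it's self-signed and that it carries the CA extensions from the `ca_ext` section of the configuration file. The following commands are an illustrative sketch that assumes the *rootca.crt* file created in the previous steps and is run from the *rootca* directory.
+
+```bash
+# Display the subject, issuer, and extensions of the root CA certificate.
+openssl x509 -in rootca.crt -text -noout
+
+# For a self-signed root CA certificate, the subject and issuer are identical.
+openssl x509 -in rootca.crt -subject -issuer -noout
+```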
+
+## Create a subordinate CA
+
+After you've created your internal root CA, you should create a subordinate CA to use as an *intermediate CA* with which to sign client certificates for your devices. In theory, you don't need to create a subordinate CA; you can upload your root CA certificate to your IoT hub and sign client certificates directly from your root CA. However, using a subordinate CA as an intermediate CA to sign client certificates more closely simulates a recommended production environment, in which your root CA is kept offline. You can also use a subordinate CA to sign another subordinate CA, which in turn can sign another subordinate CA, and so on. Using subordinate CAs to sign other subordinate CAs creates a hierarchy of intermediate CAs as part of a *certificate chain of trust.* In a production environment, the certificate chain of trust allows a delegation of trust towards signing devices. For more information about signing devices into a certificate chain of trust, see [Authenticate devices using X.509 CA certificates](iot-hub-x509ca-overview.md#sign-devices-into-the-certificate-chain-of-trust).
+
+Similar to your root CA, the files used to create and maintain your subordinate CA are stored in a folder structure and initialized as part of this process. Perform the following steps to:
+
+> [!div class="checklist"]
+> * Create and initialize the folders and files used by your subordinate CA
+> * Create a configuration file used by OpenSSL to configure your subordinate CA and certificates created with your subordinate CA
+> * Request and create a CA certificate signed by your root CA that serves as your subordinate CA certificate
+
+1. Start a Git Bash window and run the following command, replacing *{base_dir}* with the directory that contains your previously created root CA.
+
+ ```bash
+ cd {base_dir}
+ ```
+
+1. In the Git Bash window, run the following commands, one at a time, replacing the following placeholders with their corresponding values.
+
+ | Placeholder | Description |
+ | | |
+ | {subca_dir} | The name of the directory for the subordinate CA. For example, `subca`. |
+
+ This step creates a directory structure and support files for the subordinate CA similar to the folder structure and files created for the root CA in [Create a root CA](#create-a-root-ca).
+
+ ```bash
+ mkdir {subca_dir}
+ cd {subca_dir}
+ mkdir certs db private
+ chmod 700 private
+ touch db/index
+ openssl rand -hex 16 > db/serial
+ echo 1001 > db/crlnumber
+ ```
+
+1. Create a text file named *subca.conf* in the directory for the subordinate CA created in the previous step. Open that file in a text editor, and then copy and save the following OpenSSL configuration settings into that file, replacing the following placeholders with their corresponding values.
+
+ | Placeholder | Description |
+ | | |
+ | {subca_name} | The name of the subordinate CA. For example, `subca`. |
+ | {domain_suffix} | The suffix of the domain name for the subordinate CA. For example, `example.com`. |
+ | {subca_common_name} | The common name of the subordinate CA. For example, `Test Subordinate CA`. |
+
+ As with the configuration file for your test root CA, this file provides OpenSSL with the values needed to configure your test subordinate CA. You can create multiple subordinate CAs, for managing testing scenarios or environments.
+
+    For more information about the syntax of OpenSSL configuration files, see the [config](https://www.openssl.org/docs/manmaster/man5/config.html) manual page in OpenSSL documentation.
+
+ ```bash
+ [default]
+ name = {subca_name}
+ domain_suffix = {domain_suffix}
+ aia_url = http://$name.$domain_suffix/$name.crt
+ crl_url = http://$name.$domain_suffix/$name.crl
+ default_ca = ca_default
+ name_opt = utf8,esc_ctrl,multiline,lname,align
+
+ [ca_dn]
+ commonName = "{subca_common_name}"
+
+ [ca_default]
+ home = ../{subca_name}
+ database = $home/db/index
+ serial = $home/db/serial
+ crlnumber = $home/db/crlnumber
+ certificate = $home/$name.crt
+ private_key = $home/private/$name.key
+ RANDFILE = $home/private/random
+ new_certs_dir = $home/certs
+ unique_subject = no
+ copy_extensions = copy
+ default_days = 365
+ default_crl_days = 90
+ default_md = sha256
+ policy = policy_c_o_match
+
+ [policy_c_o_match]
+ countryName = optional
+ stateOrProvinceName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [req]
+ default_bits = 2048
+ encrypt_key = yes
+ default_md = sha256
+ utf8 = yes
+ string_mask = utf8only
+ prompt = no
+ distinguished_name = ca_dn
+ req_extensions = ca_ext
+
+ [ca_ext]
+ basicConstraints = critical,CA:true
+ keyUsage = critical,keyCertSign,cRLSign
+ subjectKeyIdentifier = hash
+
+ [sub_ca_ext]
+ authorityKeyIdentifier = keyid:always
+ basicConstraints = critical,CA:true,pathlen:0
+ extendedKeyUsage = clientAuth,serverAuth
+ keyUsage = critical,keyCertSign,cRLSign
+ subjectKeyIdentifier = hash
+
+ [client_ext]
+ authorityKeyIdentifier = keyid:always
+ basicConstraints = critical,CA:false
+ extendedKeyUsage = clientAuth
+ keyUsage = critical,digitalSignature
+ subjectKeyIdentifier = hash
+ ```
+
+1. In the Git Bash window, run the following commands to generate a private key and a certificate signing request (CSR) in the subordinate CA directory.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl req -new -config subca.conf -out subca.csr \
+ -keyout private/subca.key
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -new -config subca.conf -out subca.csr \
+ -keyout private/subca.key
+ ```
+
+
+
+ You're prompted to enter a PEM pass phrase, as shown in the following example, for the private key file. Enter and verify a pass phrase to generate your private key and CSR.
+
+ ```bash
+ Enter PEM pass phrase:
+ Verifying - Enter PEM pass phrase:
+ --
+ ```
+
+ Confirm that the CSR file, *subca.csr*, is present in the subordinate CA directory and the private key file, *subca.key*, is present in the *private* subdirectory before continuing. For more information about the formats of the CSR and private key files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats).
+
+1. In the Git Bash window, run the following command to create a subordinate CA certificate in the subordinate CA directory. The command applies the `sub_ca_ext` configuration file extensions to the certificate. These extensions indicate that the certificate is for a subordinate CA and can also be used to sign certificates and certificate revocation lists (CRLs). Unlike the root CA certificate, this certificate isn't self-signed. Instead, the subordinate CA certificate is signed with the root CA certificate, establishing a certificate chain similar to what you would use for a public key infrastructure (PKI). The subordinate CA certificate is then used to sign client certificates for testing your devices.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl ca -config ../rootca/rootca.conf -in subca.csr -out subca.crt \
+ -extensions sub_ca_ext
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl ca -config ../rootca/rootca.conf -in subca.csr -out subca.crt \
+ -extensions sub_ca_ext
+ ```
+
+
+
+ You're prompted to enter the pass phrase, as shown in the following example, for the private key file of your root CA. After you enter the pass phrase, OpenSSL generates and displays the details of the certificate, then prompts you to sign and commit the certificate for your subordinate CA. Specify *y* for both prompts to generate the certificate for your subordinate CA.
+
+ ```bash
+ Using configuration from rootca.conf
+ Enter pass phrase for ../rootca/private/rootca.key:
+ Check that the request matches the signature
+ Signature ok
+ Certificate Details:
+ {Details omitted from output for clarity}
+ Certificate is to be certified until Mar 24 18:55:00 2024 GMT (365 days)
+ Sign the certificate? [y/n]:
++
+ 1 out of 1 certificate requests certified, commit? [y/n]
+ Write out database with 1 new entries
+ Data Base Updated
+ ```
+
+    After OpenSSL updates the certificate database, confirm that the certificate file, *subca.crt*, is present in the subordinate CA directory and that the PEM certificate (.pem) file for the certificate is present in the *rootca/certs* directory. For more information about the formats of certificate files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats). You can optionally confirm that the subordinate CA certificate chains to your root CA certificate, as shown in the example after these steps.
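+
+As an optional check, you can confirm that the subordinate CA certificate chains to your root CA certificate. The following command is an illustrative sketch that assumes the directory layout used in the previous steps, with the subordinate CA directory (for example, *subca*) created alongside the *rootca* directory, and is run from the subordinate CA directory.
+
+```bash
+# Verify that the subordinate CA certificate is signed by the root CA certificate.
+openssl verify -CAfile ../rootca/rootca.crt subca.crt
+```
+
+If the chain is valid, the command reports `subca.crt: OK`.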
+
+## Register your subordinate CA certificate to your IoT hub
+
+After you've created your subordinate CA certificate, register it to your IoT hub, which uses it to authenticate your devices during registration and connection. Registering the certificate is a two-step process: you upload the certificate file and then establish proof of possession. However, when you upload your subordinate CA certificate, you can set it to be verified automatically so that you don't need to manually establish proof of possession. The following steps describe how to upload and automatically verify your subordinate CA certificate to your IoT hub.
+
+1. In the Azure portal, navigate to your IoT hub and select **Certificates** from the resource menu, under **Security settings**.
+
+1. Select **Add** from the command bar to add a new CA certificate.
+
+1. Enter a display name for your subordinate CA certificate in the **Certificate name** field.
+
+1. Select the PEM certificate (.pem) file of your subordinate CA certificate from the *rootca/certs* directory to add in the **Certificate .pem or .cer file** field.
+
+1. Check the box next to **Set certificate status to verified on upload**.
+
+ :::image type="content" source="media/tutorial-x509-test-certs/skip-pop.png" alt-text="Screenshot showing how to automatically verify the certificate status on upload.":::
+
+1. Select **Save**.
+
+Your uploaded subordinate CA certificate is shown with its status set to **Verified** on the **Certificates** tab of the working pane.
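+
+If you prefer scripting over the portal, you can also upload the certificate with the Azure CLI. The following command is an illustrative sketch that uses placeholder values for the hub and certificate names; check `az iot hub certificate create --help` for the parameters available in your CLI version, including whether it supports setting the certificate status to verified at upload time.
+
+```bash
+# Upload the subordinate CA certificate (PEM format) to the IoT hub.
+# Replace {hub_name} and {certificate_name} with your own values.
+az iot hub certificate create --hub-name {hub_name} \
+    --name {certificate_name} \
+    --path ./subca.crt
+```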
+
+## Create a client certificate for a device
+
+After you've created your subordinate CA, you can create client certificates for your devices. The files and folders created for your subordinate CA are used to store the CSR, private key, and certificate files for your client certificates.
+
+The client certificate must have the value of its Subject Common Name (CN) field set to the value of the device ID that was used when registering the corresponding device in Azure IoT Hub. For more information about certificate fields, see the [Certificate fields](reference-x509-certificates.md#certificate-fields) section of [X.509 certificates](reference-x509-certificates.md).
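+
+The device identity must already exist in your IoT hub before its client certificate can be used. If you haven't registered a device yet, the following Azure CLI command is an illustrative sketch of one way to create a device identity that uses X.509 CA authentication. It assumes the `azure-iot` CLI extension is installed and uses placeholder values; the device ID you choose here is the value to use for the certificate's Common Name in the following steps.
+
+```bash
+# Create a device identity that authenticates with an X.509 CA-signed certificate.
+# Replace {hub_name} and {device_id} with your own values.
+az iot hub device-identity create --hub-name {hub_name} \
+    --device-id {device_id} \
+    --auth-method x509_ca
+```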
+
+Perform the following steps to:
+
+> [!div class="checklist"]
+> * Create a private key and certificate signing request (CSR) for a client certificate
+> * Create a client certificate signed by your subordinate CA certificate
+
+1. Start a Git Bash window and run the following command, replacing *{base_dir}* with the directory that contains your previously created root CA and subordinate CA.
+
+ ```bash
+ cd {base_dir}
+ ```
+
+1. In the Git Bash window, run the following commands, one at a time, replacing the following placeholders with their corresponding values. This step creates the private key and CSR for your client certificate.
+
+ | Placeholder | Description |
+   | --- | --- |
+ | {subca_dir} | The name of the directory for the subordinate CA. For example, `subca`. |
+ | {device_name} | The name of the IoT device. For example, `testdevice`. |
+
+ This step creates a 2048-bit RSA private key for your client certificate, and then generates a certificate signing request (CSR) using that private key.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ cd {subca_dir}
+ winpty openssl genpkey -out private/{device_name}.key -algorithm RSA \
+ -pkeyopt rsa_keygen_bits:2048
+ winpty openssl req -new -key private/{device_name}.key -out {device_name}.csr
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ cd {subca_dir}
+ openssl genpkey -out private/{device_name}.key -algorithm RSA \
+ -pkeyopt rsa_keygen_bits:2048
+ openssl req -new -key private/{device_name}.key -out {device_name}.csr
+ ```
+
+
+
+ You're prompted to provide certificate details, as shown in the following example. Replace the following placeholders with the corresponding values.
+
+ | Placeholder | Description |
+   | --- | --- |
+ | {device_id} | The identifier of the IoT device. For example, `testdevice`. <br/><br/>This value must match the device ID specified for the corresponding device identity in your IoT hub for your device. |
+
+   You can optionally enter your own values for the other fields, such as **Country Name**, **Organization Name**, and so on. You don't need to enter a challenge password or an optional company name. After you provide the certificate details, OpenSSL generates the certificate signing request (CSR) for your client certificate.
+
+ ```bash
+ --
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:
+ Common Name (eg, your name or your server hostname) []:'{device_id}'
+ Email Address []:
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:
+ An optional company name []:
+
+ ```
+
+ Confirm that the CSR file is present in the subordinate CA directory and the private key file is present in the *private* subdirectory before continuing. For more information about the formats of the CSR and private key files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats).
+
+1. In the Git Bash window, run the following command, replacing the following placeholders with their corresponding values. This step creates a client certificate in the subordinate CA directory. The command applies the `client_ext` configuration file extensions to the certificate. These extensions indicate that the certificate is for a client certificate, which can't be used as a CA certificate. The client certificate is signed with the subordinate CA certificate.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl ca -config subca.conf -in {device_name}.csr -out {device_name}.crt \
+ -extensions client_ext
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl ca -config subca.conf -in {device_name}.csr -out {device_name}.crt \
+ -extensions client_ext
+ ```
+
+
+
+ You're prompted to enter the pass phrase, as shown in the following example, for the private key file of your subordinate CA. After you enter the pass phrase, OpenSSL generates and displays the details of the certificate, then prompts you to sign and commit the client certificate for your device. Specify *y* for both prompts to generate the client certificate.
+
+ ```bash
+ Using configuration from subca.conf
+ Enter pass phrase for ../subca/private/subca.key:
+ Check that the request matches the signature
+ Signature ok
+ Certificate Details:
+ {Details omitted from output for clarity}
+ Certificate is to be certified until Mar 24 18:51:41 2024 GMT (365 days)
+ Sign the certificate? [y/n]:
+
+ 1 out of 1 certificate requests certified, commit? [y/n]
+ Write out database with 1 new entries
+ Data Base Updated
+ ```
+
+ After OpenSSL updates the certificate database, confirm that the certificate file for the client certificate is present in the subordinate CA directory and that the PEM certificate (.pem) file for the client certificate is present in the *certs* subdirectory of the subordinate CA directory. The file name of the .pem file matches the serial number of the client certificate. For more information about the formats of the certificate files, see [X.509 certificates](reference-x509-certificates.md#certificate-formats).
+
+## Next steps
+
+You can register your device with your IoT hub for testing the client certificate that you've created for that device. For more information about registering a device, see the [Register a new device in the IoT hub](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub) section in [Create an IoT hub using the Azure portal](iot-hub-create-through-portal.md).
+
+If you have multiple related devices to test, you can use the Azure IoT Hub Device Provisioning Service to provision multiple devices in an enrollment group. For more information about using enrollment groups in the Device Provisioning Service, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot Iot Mqtt 5 Preview Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-5-preview-reference.md
+
+ Title: Azure IoT Hub MQTT 5 API reference (preview)
+ description: Learn about the IoT Hub MQTT 5 preview API
+
+
+
+
+
+ Last updated 04/24/2023
+++
+# IoT Hub data plane MQTT 5 API reference (preview)
+
+This document defines operations available in version 2.0 (api-version: `2020-10-01-preview`) of IoT Hub data plane API.
+
+## Operations
+
+### Get Twin
+
+Get Twin state
+
+#### Request
+
+**Topic name:** `$iothub/twin/get`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+#### Success Response
+
+**Properties**:
+ none
+
+**Payload**: Twin
+
+#### Alternative Responses
+
+| Status | Name | Description |
+| :--- | :--- | :--- |
+| 0100 | Bad Request | Operation message is malformed and can't be processed. |
+| 0101 | Not Authorized | Client isn't authorized to perform the operation. |
+| 0102 | Not Allowed | Operation isn't allowed. |
+| 0501 | Throttled | request rate is too high per SKU |
+| 0502 | Quota Exceeded | daily quota per current SKU is exceeded |
+| 0601 | Server Error | internal server error |
+| 0602 | Timeout | operation timed out before it could be completed |
+| 0603 | Server Busy | server busy |
+
+#### Pseudo-code Sample
+
+```
+
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/twin/get
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+
+```
+
+### Patch Twin Reported
+
+Patch Twin's reported state
+
+#### Request
+
+**Topic name:** `$iothub/twin/patch/reported`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| if-version | u64 | no | |
+
+**Payload**: TwinState
+
+#### Success Response
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| version | u64 | yes | Version of reported state after patch was applied |
+
+**Payload**: empty
+
+#### Alternative Responses
+
+| Status | Name | Description |
+| :--- | :--- | :--- |
+| 0104 | Precondition Failed | precondition wasn't met resulting in request being canceled |
+| 0100 | Bad Request | Operation message is malformed and can't be processed. |
+| 0101 | Not Authorized | Client isn't authorized to perform the operation. |
+| 0102 | Not Allowed | Operation isn't allowed. |
+| 0501 | Throttled | request rate is too high per SKU |
+| 0502 | Quota Exceeded | daily quota per current SKU is exceeded |
+| 0601 | Server Error | internal server error |
+| 0602 | Timeout | operation timed out before it could be completed |
+| 0603 | Server Busy | server busy |
+
+#### Pseudo-code Sample
+
+```
+
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/twin/patch/reported
+ [if-version: <u64>]
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+
+```
+
+### Receive Commands
+
+Receive and handle commands
+
+#### Message
+
+**Topic name:** `$iothub/commands`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| sequence-no | u64 | yes | Sequence number of the message |
+| enqueued-time | time | yes | Timestamp of when the message entered the system |
+| delivery-count | u32 | yes | Number of times the message delivery was attempted |
+| creation-time | time | no | Timestamp of when the message was created (provided by sender) |
+| message-id | string | no | Message identity (provided by sender) |
+| user-id | string | no | User identity (provided by sender) |
+| correlation-id | string | no | Correlation identity (provided by sender) |
+| Content Type | string | no | determines Content Type of the payload |
+| content-encoding | string | no | determines Content Encoding of the payload |
+
+**Payload**: any byte sequence
+
+#### Success Acknowledgment
+
+Indicates command was accepted for handling by the client
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+#### Alternative Acknowledgments
+
+| Reason Code | Status | Name | Description |
+| :--- | :--- | :--- | :--- |
+| 131 | 0603 | Abandon | Indicates command won't be processed at this time and should be redelivered in the future. |
+| 131 | 0100 | Reject | Indicates the client rejected the command and it shouldn't be attempted again. |
+
+#### Pseudo-code Sample
+
+```
+
+-> SUBSCRIBE
+ - Topic: $iothub/commands
+ QoS: 1
+<- PUBLISH
+ QoS: 1
+ Topic: $iothub/commands
+  sequence-no: <u64>
+  enqueued-time: <time>
+  delivery-count: <u32>
+  [creation-time: <time>]
+  [message-id: <string>]
+  [user-id: <string>]
+  [correlation-id: <string>]
+  [Content Type: <string>]
+  [content-encoding: <string>]
+ Payload: ...
+
+-> PUBACK
+
+```
+
+### Receive Direct Methods
+
+Receive and handle Direct Method calls
+
+#### Request
+
+**Topic name:** `$iothub/methods/{name}`
+
+**Properties**:
+ none
+
+**Payload**: any byte sequence
+
+#### Success Response
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| response-code | u32 | yes | |
+
+**Payload**: any byte sequence
+
+#### Alternative Responses
+
+| Status | Name | Description |
+| :--- | :--- | :--- |
+| 06A0 | Unavailable | Indicates that client isn't reachable through this connection. |
+
+#### Pseudo-code Sample
+
+```
+
+-> SUBSCRIBE
+  - Topic: $iothub/methods/{name}
+ QoS: 0
+<- SUBACK
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/methods/{name}
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+
+```
+
+### Receive Twin Desired State Changes
+
+Receive updates to Twin's desired state
+
+#### Message
+
+**Topic name:** `$iothub/twin/patch/desired`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| version | u64 | yes | Version of desired state matching this update |
+
+**Payload**: TwinState
+
+#### Pseudo-code Sample
+
+```
+
+-> SUBSCRIBE
+ - Topic: $iothub/twin/patch/desired
+ QoS: 0
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/twin/patch/desired
+ version: <u64>
+ Payload: ...
+
+```
+
+### Send Telemetry
+
+Post message to telemetry channel - Event Hubs by default or other endpoint via routing configuration.
+
+#### Message
+
+**Topic name:** `$iothub/telemetry`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| Content Type | string | no | translates into `content-type` system property on posted message |
+| content-encoding | string | no | translates into `content-encoding` system property on posted message |
+| message-id | string | no | translates into `message-id` system property on posted message |
+| user-id | string | no | translates into `user-id` system property on posted message |
+| correlation-id | string | no | translates into `correlation-id` system property on posted message |
+| creation-time | time | no | translates into `iothub-creation-time-utc` property on posted message |
+
+> [!TIP]
+> The format of `creation-time` must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid.
+
+**Payload**: any byte sequence
+
+#### Success Acknowledgment
+
+Message has been successfully posted to telemetry channel
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+#### Alternative Acknowledgments
+
+| Reason Code | Status | Name | Description |
+| :--- | :--- | :--- | :--- |
+| 131 | 0100 | Bad Request | Operation message is malformed and can't be processed. |
+| 135 | 0101 | Not Authorized | Client isn't authorized to perform the operation. |
+| 131 | 0102 | Not Allowed | Operation isn't allowed. |
+| 131 | 0601 | Server Error | internal server error |
+| 151 | 0501 | Throttled | request rate is too high per SKU |
+| 151 | 0502 | Quota Exceeded | daily quota per current SKU is exceeded |
+| 131 | 0602 | Timeout | operation timed out before it could be completed |
+| 131 | 0603 | Server Busy | server busy |
+
+#### Pseudo-code Sample
+
+```
+-> PUBLISH
+ QoS: 1
+ Topic: $iothub/telemetry
+ [Content Type: <string>]
+ [content-encoding: <string>]
+ [message-id: <string>]
+ [user-id: <string>]
+ [correlation-id: <string>]
+ [creation-time: <time>]
+
+<- PUBACK
+
+```
+
+## Responses
+
+### Bad Request
+
+Operation message is malformed and can't be processed.
+
+**Reason Code:** `131`
+
+**Status:** `0100`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Conflict
+
+Operation is in conflict with another ongoing operation.
+
+**Reason Code:** `131`
+
+**Status:** `0103`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Not Allowed
+
+Operation isn't allowed.
+
+**Reason Code:** `131`
+
+**Status:** `0102`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Not Authorized
+
+Client isn't authorized to perform the operation.
+
+**Reason Code:** `135`
+
+**Status:** `0101`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+
+**Payload**: empty
+
+### Not Found
+
+requested resource doesn't exist
+
+**Reason Code:** `131`
+
+**Status:** `0504`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Not Modified
+
+Resource wasn't modified based on provided precondition.
+
+**Reason Code:** `0`
+
+**Status:** `0001`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+### Precondition Failed
+
+Precondition wasn't met resulting in request being canceled
+
+**Reason Code:** `131`
+
+**Status:** `0104`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+### Quota Exceeded
+
+daily quota per current SKU is exceeded
+
+**Reason Code:** `151`
+
+**Status:** `0502`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+### Resource Exhausted
+
+resource has no capacity to complete the operation
+
+**Reason Code:** `131`
+
+**Status:** `0503`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Server Busy
+
+server busy
+
+**Reason Code:** `131`
+
+**Status:** `0603`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+
+**Payload**: empty
+
+### Server Error
+
+internal server error
+
+**Reason Code:** `131`
+
+**Status:** `0601`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+
+**Payload**: empty
+
+### Target Failed
+
+Target responded but the response was invalid or malformed
+
+**Reason Code:** `131`
+
+**Status:** `06A2`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Target Timeout
+
+timed out waiting for target to complete the request
+
+**Reason Code:** `131`
+
+**Status:** `06A1`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+| reason | string | no | contains information on what specifically isn't valid about the message |
+
+**Payload**: empty
+
+### Target Unavailable
+
+Target is unreachable to complete the request
+
+**Reason Code:** `131`
+
+**Status:** `06A0`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+### Throttled
+
+request rate is too high per SKU
+
+**Reason Code:** `151`
+
+**Status:** `0501`
+
+**Properties**:
+ none
+
+**Payload**: empty
+
+### Timeout
+
+operation timed out before it could be completed
+
+**Reason Code:** `131`
+
+**Status:** `0602`
+
+**Properties**:
+
+| Name | Type | Required | Description |
+| :--- | :--- | :--- | :--- |
+| trace-id | string | no | trace ID for correlation with other diagnostics for the error |
+
+**Payload**: empty
+
iot Iot Mqtt 5 Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-5-preview.md
+
+ Title: Azure IoT Hub MQTT 5 support (preview)
+ description: Learn about MQTT 5 support in IoT Hub
+
+
+
+
+
+ Last updated 04/24/2023
++
+# IoT Hub MQTT 5 support (preview)
+
+**Version:** 2.0
+**api-version:** 2020-10-01-preview
+
+This document defines IoT Hub data plane API over MQTT version 5.0 protocol. See [API Reference](iot-mqtt-5-preview-reference.md) for complete definitions in this API.
+
+## Prerequisites
+
+- [Enable preview mode](../iot-hub/iot-hub-preview-mode.md) on a brand new IoT hub to try MQTT 5.
+- Prior knowledge of [MQTT 5 specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html) is required.
+
+## Level of support and limitations
+
+IoT Hub support for MQTT 5 is in preview and limited in the following ways (communicated to the client via `CONNACK` properties unless explicitly noted otherwise):
+
+- No official [Azure IoT device SDKs](iot-sdks.md) support yet.
+- Subscription identifiers aren't supported.
+- Shared subscriptions aren't supported.
+- `RETAIN` isn't supported.
+- `Maximum QoS` is `1`.
+- `Maximum Packet Size` is `256 KiB` (subject to further restrictions per operation).
+- Assigned Client IDs aren't supported.
+- `Keep Alive` is limited to `19 min` (max delay for liveness check - `28.5 min`).
+- `Topic Alias Maximum` is `10`.
+- `Response Information` isn't supported; `CONNACK` doesn't return `Response Information` property even if `CONNECT` contains `Request Response Information` property.
+- `Receive Maximum` (maximum number of allowed outstanding unacknowledged `PUBLISH` packets (in client-server direction) with `QoS: 1`) is `16`.
+- Single client can have no more than `50` subscriptions.
+ When the limit's reached, `SUBACK` returns `0x97` (Quota exceeded) reason code for subscriptions.
+
+## Connection lifecycle
+
+### Connection
+
+To connect a client to IoT Hub using this API, establish connection per MQTT 5 specification.
+Client must send `CONNECT` packet within 30 seconds following successful TLS handshake, or the server closes the connection.
+Here's an example of `CONNECT` packet:
+
+```yaml
+-> CONNECT
+ Protocol_Version: 5
+ Clean_Start: 0
+ Client_Id: D1
+ Authentication_Method: SAS
+ Authentication_Data: {SAS bytes}
+ api-version: 2020-10-10
+ host: abc.azure-devices.net
+ sas-at: 1600987795320
+ sas-expiry: 1600987195320
+ client-agent: artisan;Linux
+```
+
+- `Authentication Method` property is required and identifies which authentication method is used. For more information about authentication method, see [Authentication](#authentication).
+- `Authentication Data` property handling depends on `Authentication Method`. If `Authentication Method` is set to `SAS`, then `Authentication Data` is required and must contain valid signature. For more information about authentication data, see [Authentication](#authentication).
+- `api-version` property is required and must be set to API version value provided in this specification's header for this specification to apply.
+- `host` property defines the host name of the tenant. It's required unless the SNI extension was presented in the Client Hello record during the TLS handshake.
+- `sas-at` defines time of connection.
+- `sas-expiry` defines expiration time for the provided SAS.
+- `client-agent` optionally communicates information about the client creating the connection.
+
+> [!NOTE]
+> `Authentication Method` and other properties throughout the specification with capitalized names are first-class properties in MQTT 5 - they're described in detail in the MQTT 5 specification. `api-version` and other properties in dash case are user properties specific to the IoT Hub API.
+
+IoT Hub responds with `CONNACK` packet once it finishes with authentication and fetching data to support the connection. If connection is established successfully, `CONNACK` looks like:
+
+```yaml
+<- CONNACK
+ Session_Present: 1
+ Reason_Code: 0x00
+ Session_Expiry_Interval: 0xFFFFFFFF # included only if CONNECT specified value less than 0xFFFFFFFF and more than 0x00
+ Receive_Maximum: 16
+ Maximum_QoS: 1
+ Retain_Available: 0
+ Maximum_Packet_Size: 262144
+ Topic_Alias_Maximum: 10
+ Subscription_Identifiers_Available: 0
+ Shared_Subscriptions_Available: 0
+ Server_Keep_Alive: 1140 # included only if client did not specify Keep Alive or if it specified a bigger value
+```
+
+These `CONNACK` packet properties follow [MQTT 5 specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901080). They reflect IoT Hub's capabilities.
+
+### Authentication
+
+The `Authentication Method` property in the client's `CONNECT` packet defines what kind of authentication the client uses for this connection:
+
+- `SAS` - Shared Access Signature is provided in `CONNECT`'s `Authentication Data` property,
+- `X509` - client relies on client certificate authentication.
+
+ Authentication fails if authentication method doesn't match the client's configured method in IoT Hub.
+
+> [!NOTE]
+> This API requires `Authentication Method` property to be set in `CONNECT` packet. If `Authentication Method` property isn't provided, connection fails with `Bad Request` response.
+
+Username/password authentication used in previous API versions isn't supported.
+
+#### SAS
+
+With SAS-based authentication, a client must provide the signature of the connection context. The signature proves authenticity of the MQTT connection. The signature must be based on one of two authentication keys in the client's configuration in IoT Hub. Or it must be based on one of two shared access keys of a [shared access policy](../iot-hub/iot-hub-dev-guide-sas.md).
+
+String to sign must be formed as follows:
+
+```text
+{host name}\n
+{Client Id}\n
+{sas-policy}\n
+{sas-at}\n
+{sas-expiry}\n
+```
+
+- `host name` is derived either from SNI extension (presented by client in Client Hello record during TLS handshake) or `host` user property in `CONNECT` packet.
+- `Client Id` is Client Identifier in `CONNECT` packet.
+- `sas-policy` - if present, defines IoT Hub access policy used for authentication. It's encoded as user property on `CONNECT` packet. Optional: omitting it means authentication settings in device registry are used instead.
+- `sas-at` - if present, specifies time of connection - current time. It's encoded as user property of `time` type on `CONNECT` packet.
+- `sas-expiry` defines expiration time for the authentication. It's a `time`-typed user property on `CONNECT` packet. This property is required.
+
+If an optional parameter is omitted, an empty string MUST be used in its place in the string to sign.
+
+HMAC-SHA256 is used to hash the string based on one of the device's symmetric keys. The hash value is then set as the value of the `Authentication Data` property.
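+
+As an illustration, the following sketch builds the string to sign and computes the signature with the Python standard library. It's a hedged example rather than an official client: the host name, client ID, times, and key are placeholders, the device key is assumed to be the base64-encoded symmetric key from the device registry, and you should confirm the exact byte encoding expected for `Authentication Data` against the service.
+
+```python
+# Sketch: compute the SAS signature for the MQTT 5 preview CONNECT packet.
+# Assumptions: base64-encoded device symmetric key; raw HMAC-SHA256 digest
+# bytes are sent as the Authentication Data property.
+import base64, hashlib, hmac, time
+
+host_name = "abc.azure-devices.net"       # placeholder IoT hub host name
+client_id = "D1"                          # placeholder client (device) ID
+sas_policy = ""                           # empty: use device registry credentials
+sas_at = int(time.time() * 1000)          # connection time, ms since epoch
+sas_expiry = sas_at + 60 * 60 * 1000      # signature valid for one hour
+device_key = "base64-encoded-device-key"  # placeholder
+
+string_to_sign = "\n".join(
+    [host_name, client_id, sas_policy, str(sas_at), str(sas_expiry)]
+) + "\n"
+signature = hmac.new(
+    base64.b64decode(device_key),
+    string_to_sign.encode("utf-8"),
+    hashlib.sha256,
+).digest()
+# `signature` goes in the CONNECT packet's Authentication Data property;
+# sas-at and sas-expiry go in the matching user properties.
+```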
+
+#### X509
+
+If `Authentication Method` property is set to `X509`, IoT Hub authenticates the connection based on the provided client certificate.
+
+#### Reauthentication
+
+If SAS-based authentication is used, we recommend using short-lived authentication tokens. To keep connection authenticated and prevent disconnection because of expiration, client must reauthenticate by sending `AUTH` packet with `Reason Code: 0x19` (reauthentication):
+
+```yaml
+-> AUTH
+ Reason_Code: 0x19
+ Authentication_Method: SAS
+ Authentication_Data: {SAS bytes}
+ sas-at: {current time}
+ sas-expiry: {SAS expiry time}
+```
+
+Rules:
+
+- `Authentication Method` must be the same as the one used for initial authentication
+- if connection was originally authenticated using SAS based on Shared Access Policy, signature used in reauthentication must be based on the same policy.
+
+If reauthentication succeeds, IoT Hub sends `AUTH` packet with `Reason Code: 0x00` (success). Otherwise, IoT Hub sends `DISCONNECT` packet with `Reason Code: 0x87` (Not authorized) and closes the connection.
+
+### Disconnection
+
+Server can disconnect client for a few reasons:
+
+- client is misbehaving in a way that is impossible to respond to with negative acknowledgment (or response) directly,
+- server is failing to keep state of the connection up to date,
+- client with the same identity has connected.
+
+Server may disconnect with any reason code defined in MQTT 5.0 specification. Notable mentions:
+
+- `135` (Not authorized) when reauthentication fails, current SAS token expires or device's credentials change
+- `142` (Session taken over) when new connection with the same client identity has been opened.
+- `159` (Connection rate exceeded) when the connection rate for the IoT hub exceeds the limit.
+- `131` (Implementation-specific error) is used for any custom errors defined in this API. `status` and `reason` properties are used to communicate further details about the cause for disconnection (see [Response](#response) for details).
+
+## Operations
+
+All functionalities in this API are expressed as operations. Here's an example of Send Telemetry operation:
+
+```yaml
+-> PUBLISH
+ QoS: 1
+ Packet_Id: 3
+ Topic: $iothub/telemetry
+ Payload: Hello
+
+<- PUBACK
+ Packet_Id: 3
+ Reason_Code: 0
+```
+
+For complete specification of operations in this API, see [IoT Hub data plane MQTT 5 API reference](iot-mqtt-5-preview-reference.md).
+
+> [!NOTE]
+> All the samples in this specification are shown from the client's perspective. The sign `->` means the client sends a packet; `<-` means the client receives one.
+
+### Message topics and subscriptions
+
+Topics used in operations' messages in this API start with `$iothub/`.
+MQTT broker semantics don't apply to these operations (see "[Topics beginning with \$](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901246)" for details).
+Topics starting with `$iothub/` that aren't defined in this API aren't supported:
+
+- Sending messages to undefined topic results in `Not Found` response (see [Response](#response) for details),
+- Subscribing to undefined topic results in `SUBACK` with `Reason Code: 0x8F` (Topic Filter Invalid).
+
+Topic names and property names are case-sensitive and must be an exact match. For example, `$iothub/telemetry/` isn't supported while `$iothub/telemetry` is.
+
+> [!NOTE]
+> Wildcards in subscriptions under `$iothub/..` aren't supported. That is, a client can't subscribe to `$iothub/+` or `$iothub/#`. Attempting to do so results in `SUBACK` with `Reason Code: 0xA2` (Wildcard Subscriptions not supported). Only single-segment wildcards (`+`) are supported instead of path parameters in topic name for operations that have them.
+
+### Interaction types
+
+All the operations in this API are based on one of two interaction types:
+
+- Message with optional acknowledgment (MessageAck)
+- Request-Response (ReqRep)
+
+Operations also vary by direction (determined by direction of initial message in exchange):
+
+- Client-to-Server (c2s)
+- Server-to-Client (s2c)
+
+For example, Send Telemetry is Client-to-Server operation of "Message with acknowledgment" type, while Handle Direct Method is Server-to-Client operation of Request-Response type.
+
+#### Message-acknowledgement interactions
+
+Message with optional Acknowledgment (MessageAck) interaction is expressed as an exchange of `PUBLISH` and `PUBACK` packets in MQTT. Acknowledgment is optional and sender may choose to not request it by sending `PUBLISH` packet with `QoS: 0`.
+
+> [!NOTE]
+> If properties in `PUBACK` packet must be truncated due to `Maximum Packet Size` declared by the client, IoT Hub will retain as many User properties as it can fit within the given limit. User properties listed first have higher chance to be sent than those listed later; `Reason String` property has the least priority.
+
+##### Example of simple MessageAck interaction
+
+Message:
+
+```yaml
+PUBLISH
+ QoS: 1
+ Packet_Id: 34
+ Topic: $iothub/{request.path}
+ Payload: <any>
+```
+
+Acknowledgment (success):
+
+```yaml
+PUBACK
+ Packet_Id: 34
+ Reason_Code: 0
+```
+
+#### Request-Response Interactions
+
+In Request-Response (ReqRep) interactions, both Request and Response translate into `PUBLISH` packets with `QoS: 0`.
+
+`Correlation Data` property must be set in both and is used to match Response packet to Request packet.
+
+This API uses the single response topic `$iothub/responses` for all ReqRep operations. Subscribing to or unsubscribing from this topic for client-to-server operations isn't required - the server assumes all clients are subscribed.
+
+##### Example of simple ReqRep interaction
+
+Request:
+
+```yaml
+PUBLISH
+ QoS: 0
+ Topic: $iothub/{request.path}
+ Correlation_Data: 0x01 0xFA
+ Payload: ...
+```
+
+Response (success):
+
+```yaml
+PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: 0x01 0xFA
+ Payload: ...
+```
+
+ReqRep interactions don't support `PUBLISH` packets with `QoS: 1` as request or response messages. Sending Request `PUBLISH` results in `Bad Request` response.
+
+Maximum length supported in `Correlation Data` property is 16 bytes. If `Correlation Data` property on `PUBLISH` packet is set to a value longer than 16 bytes, IoT Hub sends `DISCONNECT` with `Bad Request` outcome, and closes the connection. This behavior only applies to packets exchanged within this API.
+
+> [!NOTE]
+> Correlation Data is an arbitrary byte sequence; for example, it isn't guaranteed to be a UTF-8 string.
+>
+> ReqRep interactions use a predefined response topic; the Response Topic property in the Request `PUBLISH` packet (if set by the sender) is ignored.
+
+IoT Hub automatically subscribes client to response topics for all client-to-server ReqRep operations. Even if client explicitly unsubscribes from response topic, IoT Hub reinstates the subscription automatically. For server-to-client ReqRep interactions, it's still necessary for device to subscribe.
+
+### Message Properties
+
+Operation properties - system or user-defined - are expressed as packet properties in MQTT 5.
+
+User property names are case-sensitive and must be spelled exactly as defined. For example, `Trace-ID` isn't supported while `trace-id` is.
+
+Requests with user properties that aren't defined in the specification and don't have the `@` prefix result in an error.
+
+System properties are encoded either as first class properties (for example, `Content Type`) or as User properties. Specification provides exhaustive list of supported system properties.
+All first class properties are ignored unless support for them is explicitly stated in the specification.
+
+Where user-defined properties are allowed, their names must follow the format `@{property name}`. User-defined properties only support valid UTF-8 string values. For example, a `MyProperty1` property with value `15` must be encoded as a user property with name `@MyProperty1` and value `15`.
+
+If IoT Hub doesn't recognize User property, it's considered an error, and IoT Hub responds with `PUBACK` with `Reason Code: 0x83` (Implementation-specific error) and `status: 0100` (Bad Request). If acknowledgment wasn't requested (QoS: 0), `DISCONNECT` packet with the same error is sent back and connection is terminated.
+
+This API defines the following data types besides `string` (a small conversion example follows the list):
+
+- `time`: number of milliseconds since `1970-01-01T00:00:00.000Z`. For example, `1600987195320` for `2020-09-24T22:39:55.320Z`,
+- `u32`: unsigned 32-bit integer number,
+- `u64`: unsigned 64-bit integer number,
+- `i32`: signed 32-bit integer number.
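+
+For instance, a `time` value can be produced like this (shown only to illustrate the encoding; the timestamp is the example above):
+
+```python
+# Convert a UTC timestamp to the `time` type used by this API
+# (milliseconds since 1970-01-01T00:00:00.000Z).
+from datetime import datetime, timezone
+
+dt = datetime(2020, 9, 24, 22, 39, 55, 320000, tzinfo=timezone.utc)
+time_value = int(dt.timestamp()) * 1000 + dt.microsecond // 1000
+print(time_value)  # 1600987195320
+```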
+
+### Response
+
+Interactions can result in different outcomes: `Success`, `Bad Request`, `Not Found`, and others.
+Outcomes are distinguished from each other by `status` user property. `Reason Code` in `PUBACK` packets (for MessageAck interactions) matches `status` in meaning where possible.
+
+> [!NOTE]
+> If the client specifies `Request Problem Information: 0` in the CONNECT packet, no user properties are sent on `PUBACK` packets, including the `status` property, to comply with the MQTT 5 specification. In this case, the client can still rely on `Reason Code` to determine whether the acknowledgment is positive or negative.
+
+Every interaction has a default (success) outcome with a `Reason Code` of `0` and no `status` property set. Otherwise:
+
+- For MessageAck interactions, `PUBACK` gets `Reason Code` other than 0x0 (Success). `status` property may be present to further clarify the outcome.
+- For ReqRep interactions, Response `PUBLISH` gets `status` property set.
+- Since there's no way to respond to MessageAck interactions with `QoS: 0` directly, `DISCONNECT` packet is sent instead with response information, followed by disconnect.
+
+Examples:
+
+Bad Request (MessageAck):
+
+```yaml
+PUBACK
+ Reason_Code: 131
+ status: 0100
+ reason: Unknown property `test`
+```
+
+Not Authorized (MessageAck):
+
+```yaml
+PUBACK
+ Reason_Code: 135
+ status: 0101
+```
+
+Not Authorized (ReqRep):
+
+```yaml
+PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: ...
+ status: 0101
+```
+
+When needed, IoT Hub sets the following user properties:
+
+- `status` - IoT Hub's extended code for operation's status. This code can be used to differentiate outcomes.
+- `trace-id` - trace ID for the operation; IoT Hub may keep more diagnostics concerning the operation that could be used for internal investigation.
+- `reason` - human-readable message providing further information on why operation ended up in a state indicated by `status` property.
+
+> [!NOTE]
+> If the client sets the `Maximum Packet Size` property in the CONNECT packet to a very small value, some user properties may not fit and won't appear in the packet.
+>
+> `reason` is meant only for people and should not be used in client logic. This API allows for messages to be changed at any point without warning or change of version.
+>
+> If client sends `RequestProblemInformation: 0` in CONNECT packet, user properties won't be included in acknowledgements per [MQTT 5 specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901053).
+
+#### Status code
+
+The `status` property carries the status code for the operation. It's optimized for machine reading efficiency.
+It consists of a two-byte unsigned integer encoded as a hex string, for example `0501`.
+Code structure (bit map):
+
+```text
+7 6 5 4 3 2 1 0 | 7 6 5 4 3 2 1 0
+0 0 0 0 0 R T T | C C C C C C C C
+```
+
+First byte is used for flags:
+
+- bits 0 and 1 indicate type of outcomes:
+ - `00` - success
+ - `01` - client error
+ - `10` - server error
+- bit 2: `1` indicates error is retryable
+- bits 3 through 7 are reserved and must be set to `0`
+
+Second byte contains actual distinct response code. Error codes with different flags can have the same second byte value. For example, there can be `0001`, `0101`, `0201`, `0301` error codes having distinct meaning.
+
+For example, `Too Many Requests` is a client, retryable error with its own code of `1`. Its value is
+`0000 0101 0000 0001` or `0x0501`.
+
+Clients may use type bits to identify whether operation concluded successfully. Clients may also use retryable bit to decide whether it's sensible to retry operation.
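+
+To make the layout concrete, here's a small decoding sketch (illustration only; the helper name isn't part of the API):
+
+```python
+# Decode a `status` value such as "0501" using the bit map described above.
+def decode_status(status: str) -> dict:
+    value = int(status, 16)
+    flags, code = value >> 8, value & 0xFF
+    outcomes = {0b00: "success", 0b01: "client error", 0b10: "server error"}
+    return {
+        "outcome": outcomes.get(flags & 0b11, "unknown"),
+        "retryable": bool(flags & 0b100),
+        "code": code,
+    }
+
+print(decode_status("0501"))
+# {'outcome': 'client error', 'retryable': True, 'code': 1}
+```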
+
+## Recommendations
+
+### Session management
+
+`CONNACK` packet carries `Session Present` property to indicate whether server restored previously created session. Use this property to figure out whether to subscribe to topics or skip subscribing since subscription was done earlier.
+
+To rely on `Session Present`, client must keep track of subscriptions it's made (that is, sent `SUBSCRIBE` packet and received `SUBACK` with successful reason code), or make sure to subscribe to all topics in a single `SUBSCRIBE`/`SUBACK` exchange. Otherwise, if client sends two `SUBSCRIBE` packets, and the server processes only one of them successfully, the server communicates `Session Present: 1` in `CONNACK` while having only part of client's subscriptions accepted.
+
+To prevent the case where an older version of client didn't subscribe to all the topics, it's better to subscribe unconditionally when client behavior changes (for example, as part of firmware update). Also, to ensure no stale subscriptions are left behind (taking from maximum allowed number of subscriptions), explicitly unsubscribe from subscriptions that are no longer in use.
+
+### Batching
+
+There's no special format to send a batch of messages. To reduce the overhead of resource-intensive operations in TLS and networking, bundle packets (`PUBLISH`, `PUBACK`, `SUBSCRIBE`, and so on) together before handing them over to the underlying TLS/TCP stack. The client can also make use of topic aliases within the "batch":
+
+- Put complete topic name in the first `PUBLISH` packet for the connection and associate topic alias with it,
+- Put following packets for the same topic with empty topic name and topic alias property.
+
+## Migration
+
+This section lists the changes in the API compared to [previous MQTT support](iot-mqtt-connect-to-iot-hub.md).
+
+- Transport protocol is MQTT 5. Previously - MQTT 3.1.1.
+- Context information for SAS Authentication is contained in `CONNECT` packet directly instead of being encoded along with signature.
+- Authentication Method is used to indicate authentication method used.
+- Shared Access Signature is put in Authentication Data property. Previously Password field was used.
+- Topics for operations are different:
+ - Telemetry: `$iothub/telemetry` instead of `devices/{Client Id}/messages/events`,
+ - Commands: `$iothub/commands` instead of `devices/{Client Id}/messages/devicebound`,
+ - Patch Twin Reported: `$iothub/twin/patch/reported` instead of `$iothub/twin/PATCH/properties/reported`,
+ - Notify Twin Desired State Changed: `$iothub/twin/patch/desired` instead of `$iothub/twin/PATCH/properties/desired`.
+- Subscription for client-server request-response operations' response topic isn't required.
+- User properties are used instead of encoding properties in topic name segment.
+- Property names are spelled in the "dash-case" naming convention instead of abbreviations with a special prefix. User-defined properties now require a prefix instead. For instance, `$.mid` is now `message-id`, while `myProperty1` becomes `@myProperty1`.
+- Correlation Data property is used to correlate request and response messages for request-response operations instead of `$rid` property encoded in topic.
+- `iothub-connection-auth-method` property is no longer stamped on telemetry events.
+- C2D commands aren't purged in absence of subscription from device. They remain queued up until device subscribes or they expire.
+
+## Examples
+
+### Send telemetry
+
+Message:
+
+```yaml
+-> PUBLISH
+ QoS: 1
+ Packet_Id: 31
+ Topic: $iothub/telemetry
+ @myProperty1: My String Value # optional
+ creation-time: 1600987195320 # optional
+ @ No_Rules-ForUser-PROPERTIES: Any UTF-8 string value # optional
+ Payload: <data>
+```
+
+Acknowledgment:
+
+```yaml
+<- PUBACK
+ Packet_Id: 31
+ Reason_Code: 0
+```
+
+Alternative acknowledgment (throttled):
+
+```yaml
+<- PUBACK
+ Packet_Id: 31
+ Reason_Code: 151
+ status: 0501
+```
+++
+### Send get twin's state
+
+Request:
+
+```yaml
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/twin/get
+ Correlation_Data: 0x01 0xFA
+ Payload: <empty>
+```
+
+Response (success):
+
+```yaml
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: 0x01 0xFA
+ Payload: <twin/desired state>
+```
+
+Response (not allowed):
+
+```yaml
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: 0x01 0xFA
+ status: 0102
+ reason: Operation not allowed for `B2` SKU
+ Payload: <empty>
+```
+++
+### Handle direct method call
+
+Request:
+
+```yaml
+<- PUBLISH
+ QoS: 0
+ Topic: $iothub/methods/abc
+ Correlation_Data: 0x0A 0x10
+ Payload: <data>
+```
+
+Response (success):
+
+```yaml
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: 0x0A 0x10
+ response-code: 200 # user defined response code
+ Payload: <data>
+```
+
+> [!NOTE]
+> `status` isn't set - it's a success response.
+
+Device Unavailable Response:
+
+```yaml
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/responses
+ Correlation_Data: 0x0A 0x10
+ status: 0603
+```
+++
+### Error while using QoS 0, part 1
+
+Request:
+
+```yaml
+-> PUBLISH
+ QoS: 0
+ Topic: $iothub/twin/gett # misspelled topic name - server won't recognize it as Request-Response interaction
+ Correlation_Data: 0x0A 0x10
+ Payload: <data>
+```
+
+Response:
+
+```yaml
+<- DISCONNECT
+ Reason_Code: 144
+ reason: "Unsupported topic: `$iothub/twin/gett`"
+```
+++
+### Error while using QoS 0, part 2
+
+Request:
+
+```yaml
+-> PUBLISH # missing Correlation Data
+ QoS: 0
+ Topic: $iothub/twin/get
+ Payload: <data>
+```
+
+Response:
+
+```yaml
+<- DISCONNECT
+ Reason_Code: 131
+ status: 0100
+ reason: "`Correlation Data` property is missing"
+```
+## Next steps
+
+- To review the MQTT 5 preview API reference, see [IoT Hub data plane MQTT 5 API reference (preview)](iot-mqtt-5-preview-reference.md).
+- To follow a C# sample, see [GitHub sample repository](https://github.com/Azure-Samples/iot-hub-mqtt-5-preview-samples-csharp).
iot Iot Mqtt Connect To Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-dps.md
+
+ Title: Use MQTT to communicate with Azure IoT DPS
+
+description: Support for devices that use MQTT to connect to the Azure IoT Device Provisioning Service (DPS) device-facing endpoint.
+ Last updated : 04/24/2023
+# Communicate with DPS using the MQTT protocol
+
+DPS enables devices to communicate with the DPS device endpoint using:
+
+* [MQTT v3.1.1](https://mqtt.org/) on port 8883
+* [MQTT v3.1.1](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718127) over WebSocket on port 443.
+
+DPS isn't a full-featured MQTT broker and doesn't support all the behaviors specified in the MQTT v3.1.1 standard. This article describes how devices can use supported MQTT behaviors to communicate with DPS.
+
+All device communication with DPS must be secured using TLS/SSL. Therefore, DPS doesn't support nonsecure connections over port 1883.
+
+ > [!NOTE]
+ > DPS does not currently support devices using TPM [attestation mechanism](../iot-dps/concepts-service.md#attestation-mechanism) over the MQTT protocol.
+
+## Connecting to DPS
+
+A device can use the MQTT protocol to connect to a DPS instance using any of the following options.
+
+* Libraries in the [Azure IoT Provisioning SDKs](iot-sdks.md#dps-device-sdks).
+* The MQTT protocol directly.
+
+## Using the MQTT protocol directly (as a device)
+
+If a device can't use the device SDKs, it can still connect to the public device endpoints using the MQTT protocol on port 8883. In the **CONNECT** packet, the device should use the following values:
+
+* For the **ClientId** field, use **registrationId**.
+
+* For the **Username** field, use `{idScope}/registrations/{registration_id}/api-version=2019-03-31`, where `{idScope}` is the [ID scope](../iot-dps/concepts-service.md#id-scope) of the DPS and `{registration_id}` is the [Registration ID](../iot-dps/concepts-service.md#registration-id) for your device.
+
+ > [!NOTE]
+ > If you use X.509 certificate authentication, the registration ID is provided by the subject common name (CN) of your device leaf (end-entity) certificate. `{registration_id}` in the **Username** field must match the common name.
+
+* For the **Password** field, use a SAS token. The format of the SAS token is the same as for both the HTTPS and AMQP protocols:
+
+ `SharedAccessSignature sr={URL-encoded-resourceURI}&sig={signature-string}&se={expiry}&skn=registration`
+ The resourceURI should be in the format `{idScope}/registrations/{registration_id}`. The policy name (`skn`) should be set to `registration`.
+
+ > [!NOTE]
+ > If you use X.509 certificate authentication, SAS token passwords are not required.
+
+ For more information about how to generate SAS tokens, see the security tokens section of [Control access to DPS](../iot-dps/how-to-control-access.md#security-tokens).
+
+The following list contains DPS implementation-specific behaviors:
+
+ * DPS doesn't support persistent sessions. It treats every session as nonpersistent, regardless of the value of the **CleanSession** flag. We recommend setting **CleanSession** to true.
+
+ * When a device app subscribes to a topic with **QoS 2**, DPS grants maximum QoS level 1 in the **SUBACK** packet. After that, DPS delivers messages to the device using QoS 1.
+
+## TLS/SSL configuration
+
+To use the MQTT protocol directly, your client *must* connect over TLS 1.2. Attempts to skip this step fail with connection errors.
++
+## Registering a device
+
+To register a device through DPS, a device should subscribe using `$dps/registrations/res/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive more properties in the topic name. DPS doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since DPS isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
+
+The device should publish a register message to DPS using `$dps/registrations/PUT/iotdps-register/?$rid={request_id}` as a **Topic Name**. The payload should contain the [Device Registration](/rest/api/iot-dps/device/runtime-registration/register-device) object in JSON format.
+In a successful scenario, the device receives a response on the `$dps/registrations/res/202/?$rid={request_id}&retry-after=x` topic name where x is the retry-after value in seconds. The payload of the response contains the [RegistrationOperationStatus](/rest/api/iot-dps/device/runtime-registration/register-device#registrationoperationstatus) object in JSON format.
+
+## Polling for registration operation status
+
+The device must poll the service periodically to receive the result of the device registration operation. Assuming that the device has already subscribed to the `$dps/registrations/res/#` topic, it can publish a get operation status message to the `$dps/registrations/GET/iotdps-get-operationstatus/?$rid={request_id}&operationId={operationId}` topic name. The operation ID in this message should be the value received in the RegistrationOperationStatus response message in the previous step. In the successful case, the service responds on the `$dps/registrations/res/200/?$rid={request_id}` topic. The payload of the response contains the RegistrationOperationStatus object. The device should keep polling the service if the response code is 202 after a delay equal to the retry-after period. The device registration operation is successful if the service returns a 200 status code.
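+
+The following sketch ties the registration and polling topics together using the Python `paho-mqtt` package. It's an illustration only, not an official sample: it assumes the paho-mqtt 1.x-style constructor, symmetric key attestation for an individual enrollment, and placeholder values for the ID scope, registration ID, and device key.
+
+```python
+# Sketch of the DPS registration flow over MQTT described above.
+import base64, hashlib, hmac, json, time, urllib.parse
+import paho.mqtt.client as mqtt
+
+id_scope = "0ne00000000"          # placeholder ID scope
+registration_id = "testdevice"    # placeholder registration ID
+device_key = "base64-device-key"  # placeholder symmetric key (individual enrollment)
+host = "global.azure-devices-provisioning.net"
+
+# Build the SAS token described in the Password field above.
+resource_uri = urllib.parse.quote(f"{id_scope}/registrations/{registration_id}", safe="")
+expiry = int(time.time()) + 3600
+signature = base64.b64encode(
+    hmac.new(base64.b64decode(device_key),
+             f"{resource_uri}\n{expiry}".encode(), hashlib.sha256).digest()
+).decode()
+password = (f"SharedAccessSignature sr={resource_uri}"
+            f"&sig={urllib.parse.quote(signature, safe='')}&se={expiry}&skn=registration")
+
+def on_connect(client, userdata, flags, rc):
+    client.subscribe("$dps/registrations/res/#")
+    client.publish("$dps/registrations/PUT/iotdps-register/?$rid=1",
+                   json.dumps({"registrationId": registration_id}))
+
+def on_message(client, userdata, msg):
+    # Inspect the response; publish to GET/iotdps-get-operationstatus to poll.
+    print(msg.topic, msg.payload)
+
+client = mqtt.Client(client_id=registration_id)
+client.username_pw_set(f"{id_scope}/registrations/{registration_id}/api-version=2019-03-31",
+                       password)
+client.tls_set()  # TLS 1.2 is required
+client.on_connect = on_connect
+client.on_message = on_message
+client.connect(host, 8883)
+client.loop_forever()
+```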
+
+## Connecting over Websocket
+When connecting over Websocket, specify the subprotocol as `mqtt`. Follow [RFC 6455](https://tools.ietf.org/html/rfc6455).
+
+## Next steps
+
+To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt.org/).
+
+To further explore the capabilities of DPS, see:
+
+> [!div class="nextstepaction"]
+> [What is Azure IoT Hub Device Provisioning Service?](../iot-dps/about-iot-dps.md)
iot Iot Mqtt Connect To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-hub.md
+
+ Title: Use MQTT to communicate with Azure IoT Hub
+
+description: Support for devices that use MQTT to connect to an IoT Hub device-facing endpoint. Includes information about built-in MQTT support in the Azure IoT device SDKs.
+ Last updated : 04/24/2023
+# Communicate with an IoT hub using the MQTT protocol
+
+IoT Hub enables devices to communicate with the IoT Hub device endpoints using:
+
+* [MQTT v3.1.1](https://mqtt.org/) on TCP port 8883
+* MQTT v3.1.1 over WebSocket on TCP port 443.
+
+IoT Hub isn't a full-featured MQTT broker and doesn't support all the behaviors specified in the MQTT v3.1.1 standard. This article describes how devices can use supported MQTT behaviors to communicate with IoT Hub.
++
+All device communication with IoT Hub must be secured using TLS/SSL. Therefore, IoT Hub doesn't support nonsecure connections over TCP port 1883.
+
+## Connecting to IoT Hub
+
+A device can use the MQTT protocol to connect to an IoT hub using any of the following options:
+
+* Libraries in the [Azure IoT SDKs](https://github.com/Azure/azure-iot-sdks).
+* The MQTT protocol directly.
+
+The MQTT port (TCP port 8883) is blocked in many corporate and educational networking environments. If you can't open port 8883 in your firewall, we recommend using MQTT over WebSockets. MQTT over WebSockets communicates over port 443, which is almost always open in networking environments. To learn how to specify the MQTT and MQTT over WebSockets protocols when using the Azure IoT SDKs, see [Using the device SDKs](#using-the-device-sdks).
+
+## Using the device SDKs
+
+[Device SDKs](https://github.com/Azure/azure-iot-sdks) that support the MQTT protocol are available for Java, Node.js, C, C#, and Python. The device SDKs use the chosen [authentication mechanism](../iot-hub/iot-concepts-and-iot-hub.md#device-identity-and-authentication) to establish a connection to an IoT hub. To use the MQTT protocol, the client protocol parameter must be set to **MQTT**. You can also specify MQTT over WebSockets in the client protocol parameter. By default, the device SDKs connect to an IoT Hub with the **CleanSession** flag set to **0** and use **QoS 1** for message exchange with the IoT hub. While it's possible to configure **QoS 0** for faster message exchange, you should note that the delivery isn't guaranteed or acknowledged. For this reason, **QoS 0** is often referred to as "fire and forget".
+
+When a device is connected to an IoT hub, the device SDKs provide methods that enable the device to exchange messages with an IoT hub.
+
+The following table contains links to code samples for each supported language and specifies the parameter to use to establish a connection to IoT Hub using the MQTT or the MQTT over WebSockets protocol.
+
+| Language | MQTT protocol parameter | MQTT over WebSockets protocol parameter
+| --- | --- | --- |
+| [Node.js](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/simple_sample_device.js) | azure-iot-device-mqtt.Mqtt | azure-iot-device-mqtt.MqttWs |
+| [Java](https://github.com/Azure/azure-iot-sdk-java/blob/main/iothub/device/iot-device-samples/send-receive-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/SendReceive.java) |[IotHubClientProtocol](/java/api/com.microsoft.azure.sdk.iot.device.iothubclientprotocol).MQTT | IotHubClientProtocol.MQTT_WS |
+| [C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_mqtt_dm) | [MQTT_Protocol](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothubtransportmqtt.h) | [MQTT_WebSocket_Protocol](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/inc/iothubtransportmqtt_websockets.h) |
+| [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype).Mqtt | TransportType.Mqtt falls back to MQTT over WebSockets if MQTT fails. To specify MQTT over WebSockets only, use TransportType.Mqtt_WebSocket_Only |
+| [Python](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | Supports MQTT by default | Add `websockets=True` in the call to create the client |
+
+The following fragment shows how to specify the MQTT over WebSockets protocol when using the Azure IoT Node.js SDK:
+
+```javascript
+var Client = require('azure-iot-device').Client;
+var Protocol = require('azure-iot-device-mqtt').MqttWs;
+var client = Client.fromConnectionString(deviceConnectionString, Protocol);
+```
+
+The following fragment shows how to specify the MQTT over WebSockets protocol when using the Azure IoT Python SDK:
+
+```python
+from azure.iot.device.aio import IoTHubDeviceClient
+device_client = IoTHubDeviceClient.create_from_connection_string(deviceConnectionString, websockets=True)
+```
+
+### Default keep-alive timeout
+
+In order to ensure a client/IoT Hub connection stays alive, both the service and the client regularly send a *keep-alive* ping to each other. A client using one of the IoT SDKs sends a keep-alive ping at the interval defined in the following table:
+
+|Language |Default keep-alive interval |Configurable |
+|---|---|---|
+|Node.js | 180 seconds | No |
+|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/iothub/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) |
+|C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
+|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
+|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/v2/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L343) |
+
+*The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds. In reality, the SDK sends a ping request four times per keep-alive duration set. In other words, the SDK sends a keep-alive ping once every 75 seconds.
+
+Following the [MQTT v3.1.1 specification](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value; however, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds). This limit exists because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes.
+
+For example, a device using the Java SDK sends the keep-alive ping, then loses network connectivity. 230 seconds later, the device misses the keep-alive ping because it's offline. However, IoT Hub doesn't close the connection immediately - it waits another `(230 * 1.5) - 230 = 115` seconds before disconnecting the device with the error [404104 DeviceConnectionClosedRemotely](../iot-hub/iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md).
+
+The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds. Any traffic resets the keep-alive. For example, a successful shared access signature (SAS) token refresh resets the keep-alive.
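+
+As a rough sketch of that arithmetic (an assumed helper, not part of any SDK):
+
+```python
+# Server-side timeout implied by a client keep-alive value (seconds),
+# per the 1.5x rule and the 1767-second Azure load balancer cap.
+def server_timeout(client_keep_alive: int) -> float:
+    return min(client_keep_alive * 1.5, 1767)
+
+print(server_timeout(230))   # 345.0 -> hub waits another 115 s after a missed ping
+print(server_timeout(1177))  # 1765.5 -> largest usable client keep-alive
+```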
+
+### Migrating a device app from AMQP to MQTT
+
+If you're using the [device SDKs](https://github.com/Azure/azure-iot-sdks), switching from using AMQP to MQTT requires changing the protocol parameter in the client initialization, as stated previously.
+
+When doing so, make sure to check the following items:
+
+* AMQP returns errors for many conditions, while MQTT terminates the connection. As a result, your exception handling logic might require some changes.
+
+* MQTT doesn't support the *reject* operation when receiving [cloud-to-device messages](../iot-hub/iot-hub-devguide-messaging.md). If your back-end app needs to receive a response from the device app, consider using [direct methods](../iot-hub/iot-hub-devguide-direct-methods.md).
+
+* AMQP isn't supported in the Python SDK.
+
+## Using the MQTT protocol directly (as a device)
+
+If a device can't use the device SDKs, it can still connect to the public device endpoints using the MQTT protocol on port 8883. In the **CONNECT** packet, the device should use the following values:
+
+* For the **ClientId** field, use the **deviceId**.
+
+* For the **Username** field, use `{iotHub-hostname}/{device-id}/?api-version=2021-04-12`, where `{iotHub-hostname}` is the full `CName` of the IoT hub.
+
+ For example, if the name of your IoT hub is **contoso.azure-devices.net** and if the name of your device is **MyDevice01**, the full **Username** field should contain:
+
+ `contoso.azure-devices.net/MyDevice01/?api-version=2021-04-12`
+
+  We recommend including api-version in the field; omitting it can cause unexpected behavior.
+
+* For the **Password** field, use a SAS token. The format of the SAS token is the same as for both the HTTPS and AMQP protocols:
+
+ `SharedAccessSignature sig={signature-string}&se={expiry}&sr={URL-encoded-resourceURI}`
+
+ > [!NOTE]
+ > If you use X.509 certificate authentication, SAS token passwords are not required. For more information, see [Tutorial: Create and upload certificates for testing](../iot-hub/tutorial-x509-test-certs.md) and follow code instructions in the [TLS/SSL configuration section](#tlsssl-configuration).
+
+ For more information about how to generate SAS tokens, see the [Use SAS tokens as a device](../iot-hub/iot-hub-dev-guide-sas.md#use-sas-tokens-as-a-device) section of [Control access to IoT Hub using Shared Access Signatures](../iot-hub/iot-hub-dev-guide-sas.md).
+
+ You can also use the cross-platform [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/iot/hub#az-iot-hub-generate-sas-token) to quickly generate a SAS token. You can then copy and paste the SAS token into your own code for testing purposes.
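+
+If you prefer to generate the SAS token programmatically, the following Python fragment is a minimal sketch of building a token in the format shown previously. The `generate_sas_token` helper name, the resource URI, and the device key are illustrative placeholders, not values defined in this article:
+
+```python
+from base64 import b64encode, b64decode
+from hashlib import sha256
+from hmac import HMAC
+from time import time
+from urllib.parse import quote_plus, urlencode
+
+def generate_sas_token(resource_uri, device_key, expiry_in_seconds=3600):
+    # Sign the URL-encoded resource URI and expiry time with the device key.
+    ttl = int(time()) + expiry_in_seconds
+    sign_key = "{}\n{}".format(quote_plus(resource_uri), ttl)
+    signature = b64encode(
+        HMAC(b64decode(device_key), sign_key.encode("utf-8"), sha256).digest())
+    return "SharedAccessSignature " + urlencode(
+        {"sr": resource_uri, "sig": signature, "se": str(ttl)})
+
+# For example, for a device-scoped token:
+# generate_sas_token("contoso.azure-devices.net/devices/MyDevice01", "<device key>")
+```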
+
+### Using the Azure IoT Hub extension for Visual Studio Code
+
+1. In the side bar, expand the **Devices** node under the **Azure IoT Hub** section.
+
+1. Right-click your IoT device and select **Generate SAS Token for Device** from the context menu.
+
+1. Enter the expiration time, in hours, for the SAS token in the input box, and then select the Enter key.
+
+1. The SAS token is created and copied to your clipboard.
+
+ The SAS token that's generated has the following structure:
+
+ `HostName={iotHub-hostname};DeviceId=javadevice;SharedAccessSignature=SharedAccessSignature sr={iotHub-hostname}%2Fdevices%2FMyDevice01%2Fapi-version%3D2016-11-14&sig=vSgHBMUG.....Ntg%3d&se=1456481802`
+
+ The part of this token to use as the **Password** field to connect using MQTT is:
+
+ `SharedAccessSignature sr={iotHub-hostname}%2Fdevices%2FMyDevice01%2Fapi-version%3D2016-11-14&sig=vSgHBMUG.....Ntg%3d&se=1456481802`
+
+The device app can specify a **Will** message in the **CONNECT** packet. The device app should use `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as the **Will** topic name to define **Will** messages to be forwarded as a telemetry message. In this case, if the network connection is closed, but a **DISCONNECT** packet wasn't previously received from the device, then IoT Hub sends the **Will** message supplied in the **CONNECT** packet to the telemetry channel. The telemetry channel can be either the default **Events** endpoint or a custom endpoint defined by IoT Hub routing. The message has the **iothub-MessageType** property with a value of **Will** assigned to it.
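+
+The following fragment is a minimal sketch of registering such a **Will** message by using the Paho MQTT client. The payload is an arbitrary example, and the client is assumed to be configured and authenticated as shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+from paho.mqtt import client as mqtt
+
+device_id = "<device id from device registry>"
+client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)
+
+# Register the Will message before calling connect(). If the connection drops
+# without a DISCONNECT packet, IoT Hub forwards this payload as telemetry.
+client.will_set(
+    "devices/" + device_id + "/messages/events/",
+    payload='{"status":"unexpected-disconnect"}',
+    qos=1)
+
+# Set the username, password, and TLS options, then connect, as shown in the
+# TLS/SSL configuration section.
+```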
+
+## Using the MQTT protocol directly (as a module)
+
+You can connect to IoT Hub over MQTT using a module identity, similar to connecting to IoT Hub as a device. For more information about connecting to IoT Hub over MQTT as a device, see [Using the MQTT protocol directly (as a device)](#using-the-mqtt-protocol-directly-as-a-device). However, you need to use the following values:
+
+* Set the client ID to `{device-id}/{module-id}`.
+
+* If authenticating with username and password, set the username to `<hubname>.azure-devices.net/{device_id}/{module_id}/?api-version=2021-04-12` and use the SAS token associated with the module identity as your password.
+
+* Use `devices/{device-id}/modules/{module-id}/messages/events/` as a topic for publishing telemetry.
+
+* Use `devices/{device-id}/modules/{module-id}/messages/events/` as the **Will** topic.
+
+* Use `devices/{device-id}/modules/{module-id}/#` as a topic for receiving messages.
+
+* The twin GET and PATCH topics are identical for modules and devices.
+
+* The twin status topic is identical for modules and devices.
+
+For more information about using MQTT with modules, see [Publish and subscribe with IoT Edge](../iot-edge/how-to-publish-subscribe.md) and learn more about the [IoT Edge hub MQTT endpoint](https://github.com/Azure/iotedge/blob/main/doc/edgehub-api.md#edge-hub-mqtt-endpoint).
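+
+As an illustration, the following fragment sketches how the module-specific client ID and username might be set by using the Paho MQTT client. The device ID, module ID, and SAS token are placeholders, and the TLS configuration and connection steps are the same as in the device example later in this article:
+
+```python
+from paho.mqtt import client as mqtt
+
+iot_hub_name = "<iot hub name>"
+device_id = "<device id>"
+module_id = "<module id>"
+module_sas_token = "<SAS token for the module identity>"
+
+# The client ID for a module connection is {device-id}/{module-id}.
+client = mqtt.Client(client_id=device_id + "/" + module_id, protocol=mqtt.MQTTv311)
+client.username_pw_set(
+    username=iot_hub_name + ".azure-devices.net/" + device_id + "/" + module_id +
+    "/?api-version=2021-04-12",
+    password=module_sas_token)
+
+# Configure TLS and call connect() as shown in the TLS/SSL configuration
+# section, then publish telemetry to the module-specific events topic:
+# client.publish("devices/" + device_id + "/modules/" + module_id + "/messages/events/", '{"id":123}', qos=1)
+```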
+
+## Samples using MQTT without an Azure IoT SDK
+
+The [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSample) contains C/C++, Python, and CLI samples that show you how to send telemetry messages, receive cloud-to-device messages, and use device twins without using the Azure device SDKs.
+
+The C/C++ samples use the [Eclipse Mosquitto](https://mosquitto.org) library, the Python sample uses [Eclipse Paho](https://www.eclipse.org/paho/), and the CLI samples use `mosquitto_pub`.
+
+To learn more, see [Tutorial - Use MQTT to develop an IoT device client](../iot-develop/tutorial-use-mqtt.md).
+
+## TLS/SSL configuration
+
+To use the MQTT protocol directly, your client *must* connect over TLS/SSL. Attempts to skip this step fail with connection errors.
+
+To establish a TLS connection, you may need to download and reference the DigiCert Baltimore Root Certificate. This certificate is the one that Azure uses to secure the connection. You can find this certificate in the [Azure-iot-sdk-c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) repository. More information about these certificates can be found on [DigiCert's website](https://www.digicert.com/digicert-root-certificates.htm).
+
+The following example demonstrates how to implement this configuration by using the Python version of the [Paho MQTT library](https://pypi.python.org/pypi/paho-mqtt) from the Eclipse Foundation.
+
+First, install the Paho library from your command-line environment:
+
+```cmd/sh
+pip install paho-mqtt
+```
+
+Then, implement the client in a Python script. Replace these placeholders in the following code snippet:
+
+* `<local path to digicert.cer>` is the path to a local file that contains the DigiCert Baltimore Root certificate. You can create this file by copying the certificate information from [certs.c](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c) in the Azure IoT SDK for C. Include the lines `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----`, remove the `"` marks at the beginning and end of every line, and remove the `\r\n` characters at the end of every line.
+
+* `<device id from device registry>` is the ID of a device you added to your IoT hub.
+
+* `<generated SAS token>` is a SAS token for the device created as described previously in this article.
+
+* `<iot hub name>` is the name of your IoT hub.
+
+```python
+from paho.mqtt import client as mqtt
+import ssl
+
+path_to_root_cert = "<local path to digicert.cer file>"
+device_id = "<device id from device registry>"
+sas_token = "<generated SAS token>"
+iot_hub_name = "<iot hub name>"
++
+def on_connect(client, userdata, flags, rc):
+ print("Device connected with result code: " + str(rc))
++
+def on_disconnect(client, userdata, rc):
+ print("Device disconnected with result code: " + str(rc))
++
+def on_publish(client, userdata, mid):
+ print("Device sent message")
++
+client = mqtt.Client(client_id=device_id, protocol=mqtt.MQTTv311)
+
+client.on_connect = on_connect
+client.on_disconnect = on_disconnect
+client.on_publish = on_publish
+
+client.username_pw_set(username=iot_hub_name+".azure-devices.net/" +
+ device_id + "/?api-version=2021-04-12", password=sas_token)
+
+client.tls_set(ca_certs=path_to_root_cert, certfile=None, keyfile=None,
+ cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2, ciphers=None)
+client.tls_insecure_set(False)
+
+client.connect(iot_hub_name+".azure-devices.net", port=8883)
+
+client.publish("devices/" + device_id + "/messages/events/", '{"id":123}', qos=1)
+client.loop_forever()
+```
+
+To authenticate using a device certificate, update the previous code snippet with the changes specified in the following code snippet. For more information about how to prepare for certificate-based authentication, see the [Get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) section of [Authenticate devices using X.509 CA certificates](../iot-hub/iot-hub-x509ca-overview.md).
+
+```python
+# Create the client as before
+# ...
+
+# Set the username but not the password on your client
+client.username_pw_set(username=iot_hub_name+".azure-devices.net/" +
+ device_id + "/?api-version=2021-04-12", password=None)
+
+# Set the certificate and key paths on your client
+cert_file = "<local path to your certificate file>"
+key_file = "<local path to your device key file>"
+client.tls_set(ca_certs=path_to_root_cert, certfile=cert_file, keyfile=key_file,
+ cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2, ciphers=None)
+
+# Connect as before
+client.connect(iot_hub_name+".azure-devices.net", port=8883)
+```
+
+## Sending device-to-cloud messages
+
+After a device connects, it can send messages to IoT Hub using `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as a **Topic Name**. The `{property-bag}` element enables the device to send messages with other properties in a url-encoded format. For example:
+
+```text
+RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-encoded(<PropertyName2>)=RFC 2396-encoded(<PropertyValue2>)…
+```
+
+> [!NOTE]
+> This `{property_bag}` element uses the same encoding as query strings in the HTTPS protocol.
+
+> [!NOTE]
+> If you're routing D2C messages to an Azure Storage account and you want to leverage JSON encoding, you must specify the Content Type and Content Encoding information, including `$.ct=application%2Fjson&$.ce=utf-8`, as part of the `{property_bag}` mentioned in the previous note.
+>
+> The format of these attributes is protocol-specific. IoT Hub translates these attributes into their corresponding system properties. For more information, see the [System properties](../iot-hub/iot-hub-devguide-routing-query-syntax.md#system-properties) section of [IoT Hub message routing query syntax](../iot-hub/iot-hub-devguide-routing-query-syntax.md#system-properties).
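+
+The following fragment is a minimal sketch of appending a URL-encoded property bag, including the content type and content encoding system properties, to the telemetry topic with the Paho MQTT client. The property `prop1`, its value, and the payload are examples only, and `client` is assumed to be an authenticated Paho client like the one shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+from urllib.parse import quote
+
+device_id = "<device id from device registry>"
+
+# Build the property bag: system properties ($.ct, $.ce) plus an application property.
+property_bag = ("$.ct=" + quote("application/json;charset=utf-8", safe="") +
+                "&$.ce=" + quote("utf-8", safe="") +
+                "&prop1=" + quote("a string", safe=""))
+
+topic = "devices/" + device_id + "/messages/events/" + property_bag
+
+# 'client' is an authenticated Paho client, set up as shown in the TLS/SSL section.
+client.publish(topic, '{"temperature": 21.5}', qos=1)
+```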
+
+The following list describes IoT Hub implementation-specific behaviors:
+
+* IoT Hub doesn't support QoS 2 messages. If a device app publishes a message with **QoS 2**, IoT Hub closes the network connection.
+
+* IoT Hub doesn't persist Retain messages. If a device sends a message with the **RETAIN** flag set to 1, IoT Hub adds the **mqtt-retain** application property to the message. In this case, instead of persisting the retain message, IoT Hub passes it to the backend app.
+
+* IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection, and **400027 ConnectionForcefullyClosedOnNewConnection** is logged in the IoT Hub logs.
+
+* To route messages based on message body, you must first add property 'contentType' (`ct`) to the end of the MQTT topic and set its value to be `application/json;charset=utf-8` as shown in the following example. For more information about routing messages either based on message properties or message body, see the [IoT Hub message routing query syntax documentation](../iot-hub/iot-hub-devguide-routing-query-syntax.md).
+
+ ```devices/{device-id}/messages/events/$.ct=application%2Fjson%3Bcharset%3Dutf-8```
+
+For more information, see [Send device-to-cloud and cloud-to-device messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
+
+## Receiving cloud-to-device messages
+
+To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive more properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
+
+The device doesn't receive any messages from IoT Hub until it has successfully subscribed to its device-specific endpoint, represented by the `devices/{device-id}/messages/devicebound/#` topic filter. After a subscription has been established, the device receives cloud-to-device messages that were sent to it after the time of the subscription. If the device connects with **CleanSession** flag set to **0**, the subscription is persisted across different sessions. In this case, the next time the device connects with **CleanSession 0** it receives any outstanding messages sent to it while disconnected. If the device uses **CleanSession** flag set to **1** though, it doesn't receive any messages from IoT Hub until it subscribes to its device-endpoint.
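+
+The following fragment sketches one way to subscribe to the device-bound topic and handle incoming messages with the Paho MQTT client. The callback logic is illustrative, and `client` is assumed to be an authenticated Paho client like the one shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+device_id = "<device id from device registry>"
+
+def on_message(client, userdata, msg):
+    # The topic name carries any property bag; the payload is the message body.
+    print("Received cloud-to-device message on topic: " + msg.topic)
+    print("Payload: " + msg.payload.decode("utf-8"))
+
+client.on_message = on_message
+client.subscribe("devices/" + device_id + "/messages/devicebound/#", qos=1)
+```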
+
+IoT Hub delivers messages with the **Topic Name** `devices/{device-id}/messages/devicebound/`, or `devices/{device-id}/messages/devicebound/{property-bag}` when there are message properties. `{property-bag}` contains url-encoded key/value pairs of message properties. Only application properties and user-settable system properties (such as **messageId** or **correlationId**) are included in the property bag. System property names have the prefix **$**, application properties use the original property name with no prefix. For more information about the format of the property bag, see [Sending device-to-cloud messages](#sending-device-to-cloud-messages).
+
+In cloud-to-device messages, values in the property bag are represented as in the following table:
+
+| Property value | Representation | Description |
+|-|-|-|
+| `null` | `key` | Only the key appears in the property bag |
+| empty string | `key=` | The key followed by an equal sign with no value |
+| non-null, nonempty value | `key=value` | The key followed by an equal sign and the value |
+
+The following example shows a property bag that contains three application properties: **prop1** with a value of `null`; **prop2**, an empty string (""); and **prop3** with a value of "a string".
+
+```mqtt
+/?prop1&prop2=&prop3=a%20string
+```
+
+When a device app subscribes to a topic with **QoS 2**, IoT Hub grants maximum QoS level 1 in the **SUBACK** packet. After that, IoT Hub delivers messages to the device using QoS 1.
+
+## Retrieving a device twin's properties
+
+First, a device subscribes to `$iothub/twin/res/#` to receive the operation's responses. Then, it sends an empty message to the topic `$iothub/twin/GET/?$rid={request-id}`, with a populated value for **request ID**. The service then sends a response message containing the device twin data on topic `$iothub/twin/res/{status}/?$rid={request-id}`, using the same **request ID** as the request.
+
+The request ID can be any valid value for a message property value, and status is validated as an integer. For more information, see [Send device-to-cloud and cloud-to-device messages with IoT Hub](../iot-hub/iot-hub-devguide-messaging.md).
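+
+The following fragment is a minimal sketch of requesting the device twin document with the Paho MQTT client. The request ID value is arbitrary, and `client` is assumed to be an authenticated Paho client like the one shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+request_id = "10"
+
+# Subscribe to the response topic first, then publish an empty GET request.
+client.subscribe("$iothub/twin/res/#")
+client.publish("$iothub/twin/GET/?$rid=" + request_id, payload=None, qos=0)
+
+# The twin document arrives on $iothub/twin/res/{status}/?$rid=10 and can be
+# handled in the client's on_message callback.
+```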
+
+The response body contains the properties section of the device twin, as shown in the following response example:
+
+```json
+{
+ "desired": {
+ "telemetrySendFrequency": "5m",
+ "$version": 12
+ },
+ "reported": {
+ "telemetrySendFrequency": "5m",
+ "batteryLevel": 55,
+ "$version": 123
+ }
+}
+```
+
+The possible status codes are:
+
+|Status | Description |
+| -- | -- |
+| 200 | Success |
+| 429 | Too many requests (throttled). For more information, see [IoT Hub throttling](../iot-hub/iot-hub-devguide-quotas-throttling.md) |
+| 5** | Server errors |
+
+For more information, see [Understand and use device twins in IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md).
+
+## Update device twin's reported properties
+
+To update reported properties, the device issues a request to IoT Hub via a publication over a designated MQTT topic. After IoT Hub processes the request, it responds with the success or failure status of the update operation via a publication to another topic. The device can subscribe to this topic to be notified about the result of its twin update request. To implement this type of request/response interaction in MQTT, we use the notion of a request ID (`$rid`) provided initially by the device in its update request. This request ID is also included in the response from IoT Hub to allow the device to correlate the response with its particular earlier request.
+
+The following sequence describes how a device updates the reported properties in the device twin in IoT Hub:
+
+1. A device must first subscribe to the `$iothub/twin/res/#` topic to receive the operation's responses from IoT Hub.
+
+2. A device sends a message that contains the device twin update to the `$iothub/twin/PATCH/properties/reported/?$rid={request-id}` topic. This message includes a **request ID** value.
+
+3. The service then sends a response message that contains the new ETag value for the reported properties collection on topic `$iothub/twin/res/{status}/?$rid={request-id}`. This response message uses the same **request ID** as the request.
+
+The request message body contains a JSON document with new values for reported properties. Each member in the JSON document updates or adds the corresponding member in the device twin's document. A member set to `null` deletes the member from the containing object. For example:
+
+```json
+{
+ "telemetrySendFrequency": "35m",
+ "batteryLevel": 60
+}
+```
+
+The possible status codes are:
+
+|Status | Description |
+| -- | -- |
+| 204 | Success (no content is returned) |
+| 400 | Bad Request. Malformed JSON |
+| 429 | Too many requests (throttled), as per [IoT Hub throttling](../iot-hub/iot-hub-devguide-quotas-throttling.md) |
+| 5** | Server errors |
+
+The following Python code snippet demonstrates the twin reported properties update process over MQTT using the Paho MQTT client:
+
+```python
+from paho.mqtt import client as mqtt
+
+# authenticate the client with IoT Hub (not shown here)
+
+client.subscribe("$iothub/twin/res/#")
+rid = "1"
+twin_reported_property_patch = "{\"firmware_version\": \"v1.1\"}"
+client.publish("$iothub/twin/PATCH/properties/reported/?$rid=" +
+ rid, twin_reported_property_patch, qos=0)
+```
+
+If the twin reported properties update in the previous code snippet succeeds, the publication message from IoT Hub has the following topic: `$iothub/twin/res/204/?$rid=1&$version=6`, where `204` is the status code indicating success, `$rid=1` corresponds to the request ID provided by the device in the code, and `$version` corresponds to the version of the reported properties section of the device twin after the update.
+
+For more information, see [Understand and use device twins in IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md).
+
+## Receiving desired properties update notifications
+
+When a device is connected, IoT Hub sends notifications to the topic `$iothub/twin/PATCH/properties/desired/?$version={new-version}`, which contain the content of the update performed by the solution back end. For example:
+
+```json
+{
+ "telemetrySendFrequency": "5m",
+ "route": null,
+ "$version": 8
+}
+```
+
+As for property updates, `null` values mean that the JSON object member is being deleted. Also, `$version` indicates the new version of the desired properties section of the twin.
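+
+The following fragment sketches how a device might subscribe to desired property update notifications with the Paho MQTT client. The callback logic is illustrative, and `client` is assumed to be an authenticated Paho client like the one shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+def on_message(client, userdata, msg):
+    if msg.topic.startswith("$iothub/twin/PATCH/properties/desired/"):
+        # The payload contains the changed desired properties and $version.
+        print("Desired properties update: " + msg.payload.decode("utf-8"))
+
+client.on_message = on_message
+client.subscribe("$iothub/twin/PATCH/properties/desired/#")
+```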
+
+> [!IMPORTANT]
+> IoT Hub generates change notifications only when devices are connected. Make sure to implement the [device reconnection flow](../iot-hub/iot-hub-devguide-device-twins.md#device-reconnection-flow) to keep the desired properties synchronized between IoT Hub and the device app.
+
+For more information, see [Understand and use device twins in IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md).
+
+## Respond to a direct method
+
+First, a device has to subscribe to `$iothub/methods/POST/#`. IoT Hub sends method requests to the topic `$iothub/methods/POST/{method-name}/?$rid={request-id}`, with either a valid JSON or an empty body.
+
+To respond, the device sends a message with a valid JSON or empty body to the topic `$iothub/methods/res/{status}/?$rid={request-id}`. In this message, the **request ID** must match the one in the request message, and **status** must be an integer.
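+
+The following fragment is a minimal sketch of handling a direct method request and returning a response with the Paho MQTT client. The topic parsing and the 200 status code are illustrative, and `client` is assumed to be an authenticated Paho client like the one shown in the [TLS/SSL configuration](#tlsssl-configuration) section:
+
+```python
+def on_message(client, userdata, msg):
+    if msg.topic.startswith("$iothub/methods/POST/"):
+        # Topic format: $iothub/methods/POST/{method-name}/?$rid={request-id}
+        request_id = msg.topic.split("?$rid=")[1]
+        # Respond with status 200 and an empty JSON body.
+        client.publish("$iothub/methods/res/200/?$rid=" + request_id, "{}", qos=0)
+
+client.on_message = on_message
+client.subscribe("$iothub/methods/POST/#")
+```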
+
+For more information, see [Understand and invoke direct methods from IoT Hub](../iot-hub/iot-hub-devguide-direct-methods.md).
+
+## Next steps
+
+To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt.org/).
+
+To learn more about planning your IoT Hub deployment, see:
+
+* [Azure Certified Device Catalog](https://devicecatalog.azure.com/)
+* [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md)
+* [Connecting IoT Devices to Azure: IoT Hub and Event Hubs](../iot-hub/iot-hub-compare-event-hubs.md)
+* [Choose the right IoT Hub tier for your solution](../iot-hub/iot-hub-scaling.md)
+
+To further explore the capabilities of IoT Hub, see:
+
+* [Azure IoT Hub concepts overview](../iot-hub/iot-hub-devguide.md)
+* [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](../iot-edge/quickstart-linux.md)
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
An IoT device can use one of several network protocols when it connects to an Io
To learn more about how to choose a protocol for your devices to connect to the cloud, see: - [Protocol support in Azure IoT Hub](../iot-hub/iot-hub-devguide-protocols.md)-- [Communicate with DPS using the MQTT protocol](../iot-dps/iot-dps-mqtt-support.md)
+- [Communicate with DPS using the MQTT protocol](iot-mqtt-connect-to-iot-dps.md)
- [Communicate with DPS using the HTTPS protocol (symmetric keys)](../iot-dps/iot-dps-https-sym-key-support.md) - [Communicate with DPS using the HTTPS protocol (X.509)](../iot-dps/iot-dps-https-x509-support.md)
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
Although you're recommended to use one of the device SDKS, there may be scenario
For more information, see: -- [Using the MQTT protocol directly (as a device)](../iot-hub/iot-hub-mqtt-support.md#using-the-mqtt-protocol-directly-as-a-device)
+- [Using the MQTT protocol directly (as a device)](iot-mqtt-connect-to-iot-hub.md#using-the-mqtt-protocol-directly-as-a-device)
- [Using the AMQP protocol directly (as a device)](../iot-hub/iot-hub-amqp-support.md#device-client) ## Device modeling
key-vault Developers Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/developers-guide.md
For tutorials on how to authenticate to Key Vault in applications, see:
- [Use a managed identity to connect Key Vault to an Azure web app in .NET](./tutorial-net-create-vault-azure-web-app.md) ## Manage keys, certificates, and secrets
+> [!NOTE]
+> SDKs for .NET, Python, Java, JavaScript, PowerShell, and the Azure CLI are part of the Key Vault feature release process through public preview and general availability, with support from the Key Vault service team. Other SDK clients for Key Vault are available, but they're built and supported by individual SDK teams on GitHub and released on those teams' schedules.
The data plane controls access to keys, certificates, and secrets. You can use local vault access policies or Azure RBAC for access control through the data plane.
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/howto-logging.md
To complete this tutorial, you'll need an Azure key vault. You can create a new
You'll also need a destination for your logs. The destination can be an existing or new Azure storage account and/or Log Analytics workspace.
-> [!IMPORTANT]
-> If you use an existing Azure storage account or Log Analytics workspace, it must be in the same subscription as your key vault. It must also use the Azure Resource Manager deployment model, rather than the classic deployment model.
->
-> If you create a new Azure storage account or Log Analytics workspace, we recommend you create it in the same resource group as your key vault, for ease of management.
- You can create a new Azure storage account using one of these methods: - [Create a storage account using the Azure CLI](../../storage/common/storage-account-create.md?tabs=azure-cli) - [Create a storage account using Azure PowerShell](../../storage/common/storage-account-create.md?tabs=azure-powershell)
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/network-security.md
By default, when you create a new key vault, the Azure Key Vault firewall is dis
### Key Vault Firewall Enabled (Trusted Services Only)
-When you enable the Key Vault Firewall, you'll be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps isn't on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
+When you enable the Key Vault Firewall, you'll be given an option to 'Allow Trusted Microsoft Services to bypass this firewall.' The trusted services list does not cover every single Azure service. For example, Azure DevOps isn't on the trusted services list. **This does not imply that services that do not appear on the trusted services list are not trusted or are insecure.** The trusted services list encompasses services where Microsoft controls all of the code that runs on the service. Since users can write custom code in Azure services such as Azure DevOps, Microsoft does not provide the option to create a blanket approval for the service. Furthermore, just because a service appears on the trusted service list, doesn't mean it is allowed for all scenarios.
To determine if a service you are trying to use is on the trusted service list, see [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md#trusted-services).
-For how-to guide, follow the instructions here for [Portal, Azure CLI and PowerShell](how-to-azure-key-vault-network-security.md)
+For a how-to guide, follow the instructions for [Portal, Azure CLI, and PowerShell](how-to-azure-key-vault-network-security.md).
### Key Vault Firewall Enabled (IPv4 Addresses and Ranges - Static IPs)
In this case, you should create the resource within a virtual network, and then
To understand how to configure a private link connection on your key vault, please see the document [here](./private-link-service.md). > [!IMPORTANT]
-> After firewall rules are in effect, users can only perform Key Vault [data plane](security-features.md#privileged-access) operations when their requests originate from allowed virtual networks or IPv4 address ranges. This also applies to accessing Key Vault from the Azure portal. Although users can browse to a key vault from the Azure portal, they might not be able to list keys, secrets, or certificates if their client machine is not in the allowed list. This also affects the Key Vault Picker by other Azure services. Users might be able to see list of key vaults, but not list keys, if firewall rules prevent their client machine.
+> After firewall rules are in effect, users can only perform Key Vault [data plane](security-features.md#privileged-access) operations when their requests originate from allowed virtual networks or IPv4 address ranges. This also applies to accessing Key Vault from the Azure portal. Although users can browse to a key vault from the Azure portal, they might not be able to list keys, secrets, or certificates if their client machine is not in the allowed list. This also affects the Key Vault Picker used by other Azure services. Users might be able to see a list of key vaults, but not list keys, if firewall rules block access from their client machine.
> [!NOTE] > Be aware of the following configuration limitations:
lab-services Approaches For Custom Image Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/approaches-for-custom-image-creation.md
Title: Recommended approaches for creating custom images for labs
-description: Describes approaches for creating custom images for labs.
Previously updated : 07/27/2021-
+ Title: Recommendation for creating custom images
+
+description: Describes approaches for creating custom virtual machine images for labs in Azure Lab Services.
++++ Last updated : 04/24/2023+
-# Recommended approaches for creating custom images
-This article describes the following recommended approaches for creating a custom image:
+# Recommended approaches for creating custom images for Azure Lab Services labs
+
+This article describes recommended approaches for creating a custom image for Azure Lab Services labs. Learn how you can create and save a custom image from an existing lab template virtual machine, or import a virtual machine image from an Azure VM or a physical lab environment.
- Create and save a custom image from a [lab's template virtual machine (VM)](how-to-create-manage-template.md). - Bring a custom image from outside of the context of a lab by using: - An [Azure VM](https://azure.microsoft.com/services/virtual-machines/). - A VHD in your physical lab environment.
-## Save a custom image from a lab's template VM
+## Save a custom image from a lab template virtual machine
+
+The easiest way to create a custom virtual machine image for labs is to export an existing lab template virtual machine in the Azure portal.
-Using a lab's template VM to create and save a custom image is the simplest way to create an image because it's supported by using the Azure Lab Services portal. As a result, both IT departments and educators can create custom images by using a lab's template VM.
+For example, you can create a new lab with one of the Azure Marketplace images, and then install the extra software applications and tooling that are needed for a class in the [template VM](./how-to-create-manage-template.md). After you've finished setting up the template VM, you can save it in the [connected compute gallery](how-to-attach-detach-shared-image-gallery.md) for others to use for creating new labs.
-For example, you can start with one of the Azure Marketplace images and then install the software applications and tooling that are needed for a class. After you've finished setting up the image, you can save it in the [connected compute gallery](how-to-attach-detach-shared-image-gallery.md) so that you and other educators can use the image to create new labs.
+You can use a lab's template VM to create either Windows or Linux custom images. For more information, see [Save an image to a compute gallery](how-to-use-shared-image-gallery.md#save-an-image-to-a-compute-gallery).
There are a few key points to be aware of with this approach: -- Lab Services automatically saves a *specialized* image when you export the image from the template VM. In most cases, specialized images are well suited for creating new labs because the image retains machine-specific information and user profiles. Using a specialized image helps to ensure that the installed software will run the same when you use the image to create new labs. If you need to create a *generalized* image, you must use one of the other recommended approaches in this article to create a custom image.
+- Azure Lab Services automatically saves a *specialized* image when you export the image from the template VM. In most cases, specialized images are well suited for creating new labs because the image retains machine-specific information and user profiles. Using a specialized image helps to ensure that the installed software runs the same when you use the image to create new labs. If you need to create a *generalized* image, you must use one of the other recommended approaches in this article to create a custom image.
You can create labs based on both generalized and specialized images in Azure Lab Services. For more information about the differences, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images). -- For more advanced scenarios with setting up your image, you might find it helpful to instead create an image outside of labs by using either an Azure VM or a VHD from your physical lab environment. Read the next sections for more information.-
-### Use a lab's template VM to save a custom image
-
-You can use a lab's template VM to create either Windows or Linux custom images. For more information, see [Save an image to a compute gallery](how-to-use-shared-image-gallery.md#save-an-image-to-a-compute-gallery)
+- For more advanced scenarios with setting up your image, you might instead create an image outside of Azure Lab Services by using either an Azure VM or a VHD from your physical lab environment. For example, if you need to use virtual machine extensions.
## Bring a custom image from an Azure VM
-Another approach is to use an Azure VM to set up a custom image. After you've finished setting up the image, you can save it to a compute gallery so that you and your colleagues can use the image to create new labs.
+Another approach to set up a custom image is to use an Azure VM. After you've finished setting up the image, you can save it to a compute gallery so that you can use the image to create new labs.
Using an Azure VM gives you more flexibility: -- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images. Otherwise, if you use a lab's template VM to [export an image](how-to-use-shared-image-gallery.md) the image is always specialized.-- You have access to more advanced features of an Azure VM that might be helpful for setting up an image. For example, you can use [extensions](../virtual-machines/extensions/overview.md) to do post-deployment configuration and automation. Also, you can access the VM's [boot diagnostics](../virtual-machines/boot-diagnostics.md) and [serial console](/troubleshoot/azure/virtual-machines/serial-console-overview).
+- You can create either [generalized or specialized](/azure/virtual-machines/shared-image-galleries#generalized-and-specialized-images) images. Otherwise, if you use a lab template VM to [export an image](how-to-use-shared-image-gallery.md) the image is always specialized.
-Setting up an image by using an Azure VM is more complex. As a result, IT departments are typically responsible for creating custom images on Azure VMs.
+- You have access to more advanced features of an Azure VM that might be helpful for setting up an image. For example, you can use [extensions](/azure/virtual-machines/extensions/overview) to do post-deployment configuration and automation. Also, you can access the VM's [boot diagnostics](/azure/virtual-machines/boot-diagnostics) and [serial console](/troubleshoot/azure/virtual-machines/serial-console-overview).
+
+The process for setting up an image by using an Azure VM is more complex. As a result, IT departments are typically responsible for creating custom images on Azure VMs.
### Use an Azure VM to set up a custom image
-Here are the high-level steps to bring a custom image from an Azure VM:
+To create a custom image from an Azure virtual machine:
+
+1. Create an [Azure VM](https://azure.microsoft.com/services/virtual-machines/) by using a Windows or Linux Azure Marketplace image.
-1. Create an [Azure VM](https://azure.microsoft.com/services/virtual-machines/) by using a Windows or Linux Marketplace image.
1. Connect to the Azure VM and install more software. You can also make other customizations that are needed for your lab.
-1. When you've finished setting up the image, [save the VM's image to a compute gallery](../virtual-machines/image-version.md). As part of this step, you'll also need to create the image's definition and version.
-1. After the custom image is saved in the gallery, you can use your image to create new labs.
+1. When you've finished setting up the image, [save the VM image to a compute gallery](/azure/virtual-machines/image-version). As part of this step, you also need to create the image definition and version.
-The steps vary depending on if you're creating a custom Windows or Linux image. Read the following articles for the detailed steps:
+1. After you save the custom image in the gallery, use your image to create new labs.
+
+The steps might vary depending on if you're creating a custom Windows or Linux image. Read the following articles for the detailed steps:
- [Bring a custom Windows image from an Azure VM](how-to-bring-custom-windows-image-azure-vm.md) - [Bring a custom Linux image from an Azure VM](how-to-bring-custom-linux-image-azure-vm.md) ## Bring a custom image from a VHD in your physical lab environment
-The third approach to consider is to bring a custom image from a VHD in your physical lab environment to a compute gallery. After the image is in a compute gallery, you and other educators can use the image to create new labs.
+Another approach is to import a custom image from a virtual hard drive (VHD) in your physical lab environment to an Azure compute gallery. After the image is in a compute gallery, you can use it to create new labs.
-Here are a few reasons why you might want to use this approach:
+The reasons you might import a custom image from a physical environment are:
- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images to use in your labs. Otherwise, if you use a [lab's template VM](how-to-use-shared-image-gallery.md) to export an image, the image is always specialized.-- You can access resources that exist within your on-premises environment. For example, you might have large installation files in your on-premises environment that are too time consuming to copy to a lab's template VM.+
+- You can access resources that exist within your on-premises environment during the VM configuration. For example, you might have large installation files in your on-premises environment that are too time-consuming to copy to a lab template VM.
+ - You can upload images created by using other tools, such as [Microsoft Configuration Manager](/mem/configmgr/core/understand/introduction), so that you don't have to manually set up an image by using a lab's template VM.
-Bringing a custom image from a VHD is the most advanced approach because you must ensure that the image is set up properly so that it works within Azure. As a result, IT departments are typically responsible for creating custom images from VHDs.
+Bringing a custom image from a VHD is the most advanced approach because you must ensure that the image is set up properly to function in Azure. As a result, IT departments are typically responsible for creating custom images from VHDs.
### Bring a custom image from a VHD
-Here are the high-level steps to bring a custom image from a VHD:
+Follow these steps to import a custom image from a VHD:
1. Use [Windows Hyper-V](/virtualization/hyper-v-on-windows/about/) on your on-premises machine to create a Windows or Linux VHD.+ 1. Connect to the Hyper-V VM and install more software. You can also make other customizations that are needed for your lab.+ 1. When you've finished setting up the image, upload the VHD to create a [managed disk](../virtual-machines/managed-disks-overview.md) in Azure.+ 1. From the managed disk, create the [image's definition](../virtual-machines/shared-image-galleries.md#image-definitions) and version in a compute gallery.
-1. After the custom image is saved in the gallery, you can use the image to create new labs.
+
+1. After you save the custom image in the gallery, you can use the image to create new labs.
The steps vary depending on if you're creating a custom Windows or Linux image. Read the following articles for the detailed steps:
lab-services Class Type Adobe Creative Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md
To fix this issue:
This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units.
-25 students \* (20 scheduled hours + 10 quota hours) \* 160 Lab Units * 0.01 USD per hour = 1200.00 USD
+25 lab users \* (20 scheduled hours + 10 quota hours) \* 160 Lab Units * 0.01 USD per hour = 1200.00 USD
>[!IMPORTANT] > This cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Arcgis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-arcgis.md
The steps in this section show how to set up the template VM:
Let's cover a possible cost estimate for this class. This estimate doesn't include the cost of running the license server. We'll use a class of 25 students. There are 20 hours of scheduled class time. Also, each student gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we selected was **Medium**, which is 42 lab units.
-25 students \* (20 scheduled hours + 10 quota hours) \* 42 Lab Units * 0.01 USD per hour = 315.00 USD
+25 lab users \* (20 scheduled hours + 10 quota hours) \* 42 Lab Units * 0.01 USD per hour = 315.00 USD
> [!IMPORTANT] > Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Autodesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-autodesk.md
Follow these steps to [enable these Azure Marketplace images available to lab cr
This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units. This estimate doesnΓÇÖt include the cost of running a license server. -- 25 students &times; (20 scheduled hours + 10 quota hours) &times; 160 lab units
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 160 lab units
> [!IMPORTANT] > The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Big Data Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-big-data-analytics.md
Title: Set up a lab to teach big data analytics using Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab to teach the big data analytics using Docker deployment of Hortonworks Data Platform (HDP).
-- Previously updated : 03/08/2022-
+ Title: Set up big data analytics lab
+
+description: Learn how to set up a lab in Azure Lab Services to teach big data analytics using a Docker deployment of Hortonworks Data Platform (HDP).
+ +++ Last updated : 04/25/2023
-# Set up a lab for big data analytics using Docker deployment of HortonWorks Data Platform
+# Set up a lab for big data analytics in Azure Lab Services using Docker deployment of HortonWorks Data Platform
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-This article shows you how to set up a lab to teach a big data analytics class. A big data analytics class teaches students to learn how to handle large volumes of data. It also teaches them to apply machine and statistical learning algorithms to derive data insights. A key objective for students is to learn how to use data analytics tools, such as [Apache Hadoop's open-source software package](https://hadoop.apache.org/). The software package provides tools for storing, managing, and processing big data.
+This article shows you how to set up a lab to teach a big data analytics class. A big data analytics class teaches users how to handle large volumes of data. It also teaches them to apply machine and statistical learning algorithms to derive data insights. A key objective is to learn how to use data analytics tools, such as [Apache Hadoop's open-source software package](https://hadoop.apache.org/). The software package provides tools for storing, managing, and processing big data.
-In this lab, students will use a popular commercial version of Hadoop provided by [Cloudera](https://www.cloudera.com/), called [Hortonworks Data Platform (HDP)](https://www.cloudera.com/products/hdp.html). Specifically, students will use [HDP Sandbox 3.0.1](https://www.cloudera.com/tutorials/getting-started-with-hdp-sandbox/1.html) that's a simplified, easy-to-use version of the platform. HDP Sandbox 3.0.1 is also free of cost and is intended for learning and experimentation. Although this class may use either Windows or Linux virtual machines (VM) with HDP Sandbox deployed. This article will show you how to use Windows.
+In this lab, lab users work with a popular commercial version of Hadoop provided by [Cloudera](https://www.cloudera.com/), called [Hortonworks Data Platform (HDP)](https://www.cloudera.com/products/hdp.html). Specifically, lab users use [HDP Sandbox 3.0.1](https://www.cloudera.com/tutorials/getting-started-with-hdp-sandbox/1.html), which is a simplified, easy-to-use version of the platform. HDP Sandbox 3.0.1 is also free of cost and is intended for learning and experimentation. Although this class can use either Windows or Linux virtual machines (VMs) with HDP Sandbox deployed, this article shows you how to use Windows.
-Another interesting aspect is that we'll deploy HDP Sandbox on the lab VMs using [Docker](https://www.docker.com/) containers. Each Docker container provides its own isolated environment for software applications to run inside. Conceptually, Docker containers are like nested VMs and can be used to easily deploy and run a wide variety of software applications based on container images provided on [Docker Hub](https://www.docker.com/products/docker-hub). Cloudera's deployment script for HDP Sandbox automatically pulls the [HDP Sandbox 3.0.1 Docker image](https://hub.docker.com/r/hortonworks/sandbox-hdp) from Docker Hub and runs two Docker containers:
+Another interesting aspect is that you deploy the HDP Sandbox on the lab VMs using [Docker](https://www.docker.com/) containers. Each Docker container provides its own isolated environment for software applications to run inside. Conceptually, Docker containers are like nested VMs and can be used to easily deploy and run a wide variety of software applications based on container images provided on [Docker Hub](https://www.docker.com/products/docker-hub). Cloudera's deployment script for HDP Sandbox automatically pulls the [HDP Sandbox 3.0.1 Docker image](https://hub.docker.com/r/hortonworks/sandbox-hdp) from Docker Hub and runs two Docker containers:
- sandbox-hdp - sandbox-proxy
-## Lab configuration
+## Prerequisites
-To set up this lab, you need an Azure subscription to get started. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Lab configuration
### Lab plan settings
-Once you've an Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). You can also use an existing lab plan.
-Enable your lab plan settings as described in the following table. For more information about how to enable Azure Marketplace images, see [Specify the Azure Marketplace images available to lab creators](./specify-marketplace-images.md).
+This lab uses a Windows 10 Pro Azure Marketplace image as the base VM image. You first need to enable this image in your lab plan, so that lab creators can select the image as a base image for their lab.
-| Lab plan setting | Instructions |
-| - | |
-|Marketplace image| Enable the **Windows 10 Pro** image.|
+Follow these steps to [enable these Azure Marketplace images available to lab creators](specify-marketplace-images.md). Select one of the **Windows 10** Azure Marketplace images.
### Lab settings
-For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-setup-lab.md). Use the following settings when creating the lab.
+Create a lab for your lab plan. [!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)] Use the following settings when creating the lab.
| Lab settings | Value/instructions | | | | |Virtual Machine Size| **Medium (Nested Virtualization)**. This VM size is best suited for relational databases, in-memory caching, and analytics. The size also supports nested virtualization.|
-|Virtual Machine Image| Windows 10 Pro|
-
+|Virtual Machine Image| **Windows 10 Pro**|
+
> [!NOTE]
-> We need to use Medium (Nested Virtualization) since deploying HDP Sandbox using Docker requires Windows Hyper-V with nested virtualization and at least 10 GB of RAM.
+> Use the Medium (Nested Virtualization) VM size because the HDP Sandbox using Docker requires Windows Hyper-V with nested virtualization and at least 10 GB of RAM.
## Template machine configuration
-To set up the template machine, we'll:
+To set up the template machine:
-- Install Docker-- Deploy HDP Sandbox-- Use PowerShell and Windows Task Scheduler to automatically start the Docker containers
+1. Install Docker
+1. Deploy HDP Sandbox
+1. Use PowerShell and Windows Task Scheduler to automatically start the Docker containers
### Install Docker
To use Docker containers, you must first install Docker Desktop on the template
### Deploy HDP Sandbox
-In this section, you'll deploy HDP Sandbox and then access HDP Sandbox using the browser.
+Next, deploy HDP Sandbox and then access HDP Sandbox using the browser.
1. Ensure that you have installed [Git Bash](https://gitforwindows.org/) as listed in the [Prerequisites section](https://www.cloudera.com/tutorials/sandbox-deployment-and-install-guide/3.html#prerequisites) of the guide. It's recommended for completing the next steps.
In this section, you'll deploy HDP Sandbox and then access HDP Sandbox using the
> [!NOTE] > These instructions assume that you have first mapped the local IP address of the sandbox environment to the sandbox-hdp.hortonworks.com in the host file on your template VM. If you **don't** do this mapping, you can access the Sandbox Welcome page by navigating to `http://localhost:8080`.
-### Automatically start Docker containers when students log in
+### Automatically start Docker containers when lab users sign in
-To provide an easy to use, experience for students, we'll use a PowerShell script that automatically:
+To provide an easy-to-use experience for lab users, create a PowerShell script that automatically:
-- Starts the HDP Sandbox Docker containers when a student starts and connects to their lab VM.-- Launches the browser and navigates to the Sandbox Welcome Page.
+1. Starts the HDP Sandbox Docker containers when a lab user starts and connects to their lab VM.
+1. Launches the browser and navigates to the Sandbox Welcome page.
-We'll also use Windows Task Scheduler to automatically run this script when a student logs into their VM.
-To set up a Task Scheduler, follow these steps: [Big Data Analytics scripting](https://aka.ms/azlabs/scripts/BigDataAnalytics).
+Use Windows Task Scheduler to automatically run this script when a lab user signs in to their VM. To set up Task Scheduler, follow these steps: [Big Data Analytics scripting](https://aka.ms/azlabs/scripts/BigDataAnalytics).
## Cost estimate
-If you would like to estimate the cost of this lab, you can use the following example:
+This section provides a cost estimate for running this class for 25 lab users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Medium (Nested Virtualization)**, which is 55 lab units.
-For a class of 25 students with 20 hours of scheduled class time and 10 hours of quota for homework or assignments, the price for the lab would be:
-
-25 students \* (20 + 10) hours \* 55 Lab Units \* 0.01 USD per hour = 412.50 USD
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 55 lab units
->[!IMPORTANT]
->Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
+> [!IMPORTANT]
+> The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Conclusion
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
Follow these steps to configure an SSH tunnel between a user's local machine and
## Cost estimate
-This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The VM size we chose was small GPU (compute), which is 139 lab units. If you want to use the Small (20 lab units) or Medium size (42 lab units), you can replace the lab unit part in the equation below with the correct number.
+This section provides a cost estimate for running this class for 25 lab users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The VM size we chose was small GPU (compute), which is 139 lab units. If you want to use the Small (20 lab units) or Medium size (42 lab units), you can replace the lab unit part in the equation below with the correct number.
Here's an example of a possible cost estimate for this class:
-25 students \* (20 scheduled hours + 10 quota hours) \* 139 lab units \* 0.01 USD per hour = 1042.5 USD
+25 lab users \* (20 scheduled hours + 10 quota hours) \* 139 lab units \* 0.01 USD per hour = 1042.5 USD
>[!IMPORTANT] >This cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Networking Gns3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-networking-gns3.md
Title: Set up a networking lab with GNS3
+ Title: Set up a GNS3 networking lab
+ description: Learn how to set up a lab using Azure Lab Services to teach networking with GNS3. ++++ Previously updated : 04/19/2022 Last updated : 04/24/2023
-# Set up a lab to teach a networking class
+# Set up a lab to teach a networking class with GNS3 in Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-This article shows you how to set up a class that focuses on allowing students to emulate, configure, test, and troubleshoot virtual and real networks using [GNS3](https://www.gns3.com/) software.
+This article shows you how to set up a class for emulating, configuring, testing, and troubleshooting virtual and real networks using [GNS3](https://www.gns3.com/) software.
-This article has two main sections. The first section covers how to create the lab. The second section covers how to create the template machine with nested virtualization enabled and with GNS3 installed and configured.
+This article has two main sections. The first section covers how to create the lab. The second section covers how to create the [template machine](./classroom-labs-concepts.md#template-virtual-machine) with nested virtualization enabled and with GNS3 installed and configured.
-## Lab configuration
+## Prerequisites
[!INCLUDE [must have subscription](./includes/lab-services-class-type-subscription.md)] [!INCLUDE [must have lab plan](./includes/lab-services-class-type-lab-plan.md)]
-### Lab settings
+## Lab configuration
[!INCLUDE [create lab](./includes/lab-services-class-type-lab.md)] Use the following settings when creating the lab.
This article has two main sections. The first section covers how to create the l
[!INCLUDE [configure template vm](./includes/lab-services-class-type-template-vm.md)]
-To configure the template VM, we'll complete the following major tasks.
+To configure the template VM, complete the following tasks:
1. Prepare the template machine for nested virtualization.
-2. Install GNS3.
-3. Create nested GNS3 VM in Hyper-V.
-4. Configure GNS3 to use Windows Hyper-V VM.
-5. Add appropriate appliances.
-6. Publish template.
+1. Install GNS3.
+1. Create nested GNS3 VM in Hyper-V.
+1. Configure GNS3 to use Windows Hyper-V VM.
+1. Add appropriate appliances.
+1. Publish the template.
### Prepare template machine for nested virtualization
-Follow instructions to [enable nested virtualization](how-to-enable-nested-virtualization-template-vm.md) to prepare your template virtual machine for nested virtualization.
+To prepare the template virtual machine for nested virtualization, follow the detailed steps in [enable nested virtualization](how-to-enable-nested-virtualization-template-vm.md).
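The linked article is the authoritative procedure. As an illustrative sketch only, the core step on the template VM is enabling the Hyper-V feature from an elevated PowerShell session; the exact command depends on whether the template image is a Windows client or Windows Server edition.

```powershell
# On a Windows 10/11 template VM (assumption), enable the Hyper-V feature, then restart.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# On a Windows Server template VM, use the role-based installer instead:
# Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```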
### Install GNS3 -- Follow the instructions for [installing GNS3 on Windows](https://docs.gns3.com/docs/getting-started/installation/windows). Make sure to include installing the **GNS3 VM** in the component dialog, see below.
+1. Connect to the template VM by using remote desktop.
+
+1. Follow the detailed instructions on the GNS3 website to [install GNS3 on Windows](https://docs.gns3.com/docs/getting-started/installation/windows).
-![SelectGNS3vm](./media/class-type-networking-gns3/gns3-select-vm.png)
+ 1. Make sure to select **GNS3 VM** in the component dialog:
-Eventually you'll reach the GNS3 VM selection. Make sure to select the **Hyper-V** option.
+ :::image type="content" source="./media/class-type-networking-gns3/gns3-select-vm.png" alt-text="Screenshot that shows the Choose Components page in the GNS3 installation wizard, with the GNS3 VM option selected.":::
-![SelectHyperV](./media/class-type-networking-gns3/gns3-vm-hyper-v.png)
+ 1. On the **GNS3 VM** page, select the **Hyper-V** option:
- This option will download the PowerShell script and VHD files to create the GNS3 VM in the Hyper-V manager. Continue installation using the default values.
+ :::image type="content" source="./media/class-type-networking-gns3/gns3-vm-hyper-v.png" alt-text="Screenshot that shows the GNS3 VM page in the GNS3 installation wizard, with the Hyper-V option selected.":::
- > [!IMPORTANT]
- > Once the setup is complete, don't start GNS3.
+ When you select the Hyper-V option, the installer downloads the PowerShell script and VHD files to create the GNS3 VM in the Hyper-V manager.
+
+ 1. Continue the installation with the default values.
+
+> [!IMPORTANT]
+> After the setup completes, don't start GNS3.
### Create GNS3 VM
-Once the setup has completed, a zip file **"GNS3.VM.Hyper-V.2.2.17.zip"** is downloaded to the same folder as the installation file, containing the drives and the PowerShell script to create the Hyper-V vm.
+When the setup finishes, a zip file `GNS3.VM.Hyper-V.2.2.17.zip` is downloaded to the same folder as the installation file. The zip file contains the virtual disks and the PowerShell script to create the Hyper-V virtual machine.
+
+To create the GNS3 VM:
+
+1. Connect to the template VM by using remote desktop.
+
+1. Extract all files in the `GNS3.VM.Hyper-V.2.2.17.zip` file.
-- **Extract all** on the GNS3.VM.Hyper-V.2.2.17.zip. This action will extract out the drives and the PowerShell script to create the VM.-- **Run with PowerShell** on the "create-vm.ps1" PowerShell script by right-clicking on the file.-- An Execution Policy Change request may show up. Enter "Y" to execute the script.
+1. Right-click the `create-vm.ps1` PowerShell script, and then select **Run with PowerShell**.
-![PSExecutionPolicy](./media/class-type-networking-gns3/powershell-execution-policy-change.png)
+1. When the `Execution Policy Change` request shows, enter **Y** to execute the script.
-- Once the script has completed, you can confirm the VM "GNS3 VM" has been created in the Hyper-V Manager.
+ :::image type="content" source="./media/class-type-networking-gns3/powershell-execution-policy-change.png" alt-text="Screenshot that shows the PowerShell command line, asking for an Execution Policy change.":::
+
+1. After the script completes, confirm that the **GNS3 VM** virtual machine is available in Hyper-V Manager.
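If you prefer to run everything from a PowerShell session instead of the right-click flow, a minimal sketch looks like the following. The folder names are placeholders, the version number in the zip file name may differ from `2.2.17`, and `Get-VM` assumes the Hyper-V PowerShell module is available on the template VM.

```powershell
# Extract the GNS3 VM package next to the installer (version may differ).
Expand-Archive -Path '.\GNS3.VM.Hyper-V.2.2.17.zip' -DestinationPath '.\GNS3-VM'

# Allow the script to run for this session only, then create the nested VM.
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force
Set-Location '.\GNS3-VM'
.\create-vm.ps1

# Confirm the nested VM exists in Hyper-V.
Get-VM -Name 'GNS3 VM'
```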
### Configure GNS3 to use Hyper-V VM
-Now that GNS3 is installed and the GNS3 VM is added, start up GNS3 to link the two together. The [GNS3 Setup wizard will start automatically.](https://docs.gns3.com/docs/getting-started/setup-wizard-gns3-vm#local-gns3-vm-setup-wizard).
+Now that you've installed GNS3 and added the GNS3 VM, configure GNS3 to use the Hyper-V virtual machine.
+
+1. Connect to the template VM by using remote desktop.
+
+1. Start GNS3. The [GNS3 Setup wizard](https://docs.gns3.com/docs/getting-started/setup-wizard-gns3-vm#local-gns3-vm-setup-wizard) starts automatically.
+
+1. Select the **Run appliances from virtual machine** option, and select **Next**.
+
+1. Use the default values on the following pages.
-- Use the **Run appliances from virtual machine** option. Use the defaults for the rest of the wizard until you hit the **VMware vmrun tool cannot be found** error.
+1. When you get the **VMware vmrun tool cannot be found** error, select **OK**, and then select **Cancel** to exit the wizard.
-![VMWareError](./media/class-type-networking-gns3/gns3-vmware-vmrun-tool-not-found.png)
+ :::image type="content" source="./media/class-type-networking-gns3/gns3-vmware-vmrun-tool-not-found.png" alt-text="Screenshot that shows a VMware error message in the GNS3 Setup wizard.":::
-- Choose **Ok**, and **Cancel** out of the wizard.-- To complete the connection to the Hyper-V vm, open the **Edit** -> **Preferences** -> **GNS3 VM** and select **Enable the GNS3 VM** and select the **Hyper-V** option.
+1. To complete the connection to the Hyper-V VM, select **Edit** > **Preferences** > **GNS3 VM**.
-![EnableGNS3VMs](./media/class-type-networking-gns3/gns3-preference-vm.png)
+1. Select **Enable the GNS3 VM**, and then select the **Hyper-V** option.
+
+ :::image type="content" source="./media/class-type-networking-gns3/gns3-preference-vm.png" alt-text="Screenshot that shows the GNS3 VM preferences page, showing the GNS3 VM option enabled, and Hyper-V selected.":::
### Add appropriate appliances
-At this point, you'll want to add the appropriate [appliances for the class.](https://docs.gns3.com/docs/using-gns3/beginners/install-from-marketplace)
+Next, you can add appliances for the class. Follow the detailed steps from the GNS3 documentation to [install appliances from the GNS3 marketplace](https://docs.gns3.com/docs/using-gns3/beginners/install-from-marketplace).
### Prepare to publish template
-Now that the template VM is set up properly, and ready for publishing there are a few key points to check.
+Now that you set up the template virtual machine, verify the following key points before you publish the template:
-- Make sure that the GNS3 VM is shut down or turned off. Publishing while the VM is still running will corrupt the VM.-- Close down GNS3, publishing while and running can lead to unintended side effects.-- Clean up any installation files or other unnecessary files.
+- Make sure that the GNS3 VM is shut down or turned off. Publishing while the VM is still running corrupts the virtual machine. For one way to check this from PowerShell, see the sketch after the note below.
+- Stop GNS3. Publishing while GNS3 is running can lead to unintended side effects.
+- Clean up any installation files or other unnecessary files from the template VM.
>[!IMPORTANT]
->Publishing while the VM is still running will corrupt the template VMs and create unusable lab VMs.
+>Publishing while the VM is still running corrupts the template virtual machine and creates unusable lab virtual machines.
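As referenced in the checklist above, here's a small sketch for verifying that the nested VM and the GNS3 client are stopped before you publish. The VM name matches the default created by the script; the GNS3 process name is an assumption.

```powershell
# The nested VM should report a state of 'Off' before you publish the template.
Get-VM -Name 'GNS3 VM' | Select-Object Name, State

# Shut the nested VM down if it's still running.
Stop-VM -Name 'GNS3 VM'

# Close the GNS3 client if it's still open (process name is an assumption).
Get-Process -Name 'gns3*' -ErrorAction SilentlyContinue | Stop-Process -Force
```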
## Cost
-If you would like to estimate the cost of this lab, you can use the following example:
-
-For a class of 25 students with 20 hours of scheduled class time and 10 hours of quota for homework or assignments, the price for the lab would be:
+This section provides a cost estimate for running this class for 25 lab users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose is **Large (Nested Virtualization)**, which is 84 lab units.
-25 students \* (20 + 10) hours \* 84 Lab Units \* 0.01 USD per hour = 630 USD.
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 84 lab units
> [!IMPORTANT]
-> Cost estimate is for example purposes only. For current details on pricing, see [Azure Lab Services Pricing](https://azure.microsoft.com/pricing/details/lab-services/).
-
-## Conclusion
-
-This article walked you through the steps to create a lab for network configuration using GNS3.
+> The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
## Next steps
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
Title: Set up a lab with R and RStudio on Windows using Azure Lab Services
-description: Learn how to set up labs to teach R using RStudio on Windows
- Previously updated : 08/26/2021
+ Title: Set up RStudio lab on Windows
+
+description: Learn how to set up a lab in Azure Lab Services to teach R using RStudio on Windows.
+ +++ Last updated : 04/24/2023
-# Set up a lab to teach R on Windows
+# Set up a lab to teach R on Windows with Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-[R](https://www.r-project.org/https://docsupdatetracker.net/about.html) is an open-source language used for statistical computing and graphics. It's used in the statistical analysis of genetics to natural language processing to analyzing financial data. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code-editing tools, an integrated debugging experience, and package development tools.
+This article shows you how to set up a class in Azure Lab Services for teaching R and RStudio.
-This article will focus on solely RStudio and R as a building block for a class that requires the use of statistical computing. The [deep learning](class-type-deep-learning-natural-language-processing.md) and [Python and Jupyter Notebooks](class-type-jupyter-notebook.md)
-class types set up RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
+[R](https://www.r-project.org/about.html) is an open-source language used for statistical computing and graphics. The R language is used in fields ranging from the statistical analysis of genetics to natural language processing to the analysis of financial data. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code-editing tools, an integrated debugging experience, and package development tools.
-## Lab configuration
+This article focuses on using R and RStudio for statistical computing. The [deep learning](class-type-deep-learning-natural-language-processing.md) and [Python and Jupyter Notebooks](class-type-jupyter-notebook.md)
+class types set up RStudio differently. Each article describes how to use the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) marketplace image, which has many [data science related tools](../machine-learning/data-science-virtual-machine/tools-included.md), including RStudio, pre-installed.
++
+## Prerequisites
-To set up this lab, you need an Azure subscription and lab plan to get started. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Lab configuration
### External resource configuration
If you choose to use any external resources, you'll need to [Connect to your v
### Lab plan settings
-Once you get have Azure subscription, you can create a new lab plan in Azure Lab Services. For more information about creating a new lab plan, see the tutorial on [how to set up a lab plan](./quick-create-resources.md). You can also use an existing lab plan.
### Lab settings
For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-s
| Lab setting | Value and description | | | |
-| Virtual Machine Size | Small GPU (Compute)|
-| VM image | Windows 10 Pro. Version 2004 |
+| Virtual Machine Size | **Small GPU (Compute)** |
+| VM image | **Windows 10 Pro** |
## Template configuration
+After the template virtual machine is created, perform the following steps to configure the lab:
+
+1. Start the template virtual machine and connect to the machine using RDP.
+
+1. [Install R](https://docs.rstudio.com/resources/install-r/) in the template VM.
-After the template machine is created, start the machine, and connect to it to [install R](https://docs.rstudio.com/resources/install-r/) and [RStudio Desktop](https://www.rstudio.com/products/rstudio/download/).
+1. [Install RStudio](https://www.rstudio.com/products/rstudio/download/) in the template VM.
### Install R
+To install R in the template virtual machine:
+
+1. Download the [latest installer for R for Windows](https://cran.r-project.org/bin/windows/base/release.html).
+
+ For a full list of versions available, see the [R for Windows download page](https://cran.r-project.org/bin/windows/base/).
-1. Download the [latest installer for R for Windows](https://cran.r-project.org/bin/windows/base/release.html). For a full list of versions available, see the [R for Windows download page](https://cran.r-project.org/bin/windows/base/).
2. Run the installer.+ 1. For the **Select Setup Language** prompt, choose the language you want and select **OK**
- 2. On the **Information** page of the installer, read the license agreement. Select **Next** to accept agreement and continue on.
- 3. On the **Select Destination Location** page, accept the default install location and select **Next**.
- 4. On the **Select Components** page, optionally uncheck **32-bit files** option. For more information about running both 32-bit and 62-bit versions of R, see [Can both 32-bit and 64-bit R be installed on the same machine?](https://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-both-32_002d-and-64_002dbit-R-be-installed-on-the-same-machine_003f) frequently asked question.
- 5. On the **Startup options** page, leave startup options as **No (accept defaults)**. If you want the R graphical user interface (GUI) to use separate windows (SDI) or plain text help, choose **Yes (customize startup)** radio button and change startup options in the following to pages of the wizard.
- 6. On the **Select Start Menu Folder** page, select **Next**.
- 7. On the **Select Additional Tasks** page, optionally select **Create a desktop shortcut**. Select **Next**.
- 8. On the **Installing** page, wait for the installation to finish.
- 9. On the **Completing the R for Windows** page, select **Finish**.
+ 1. On the **Information** page of the installer, read the license agreement. Select **Next** to accept agreement and continue on.
+ 1. On the **Select Destination Location** page, accept the default install location and select **Next**.
+ 1. On the **Select Components** page, optionally uncheck the **32-bit files** option. For more information about running both 32-bit and 64-bit versions of R, see the [Can both 32-bit and 64-bit R be installed on the same machine?](https://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-both-32_002d-and-64_002dbit-R-be-installed-on-the-same-machine_003f) frequently asked question.
+ 1. On the **Startup options** page, leave startup options as **No (accept defaults)**. If you want the R graphical user interface (GUI) to use separate windows (SDI) or plain text help, choose the **Yes (customize startup)** radio button and change the startup options in the following two pages of the wizard.
+ 1. On the **Select Start Menu Folder** page, select **Next**.
+ 1. On the **Select Additional Tasks** page, optionally select **Create a desktop shortcut**. Select **Next**.
+ 1. On the **Installing** page, wait for the installation to finish.
+ 1. On the **Completing the R for Windows** page, select **Finish**.
-You can also execute the installation of R using PowerShell. The code example shows how to install R without the 32-bit component and adds a desktop icon for the latest version of R. To see a full list of command-line options for the installer, see [setup command-line parameters](https://jrsoftware.org/ishelp/index.php?topic=setupcmdline).
+You can also perform the installation of R by using PowerShell. The following code example shows how to install R without the 32-bit component and add a desktop icon for the latest version of R. To see a full list of command-line options for the installer, see [setup command-line parameters](https://jrsoftware.org/ishelp/index.php?topic=setupcmdline).
```powershell #Avoid prompt to setup Internet Explorer if we must parse download page
Start-Process -FilePath $installPath.FullName -ArgumentList "/VERYSILENT /LOG=r-
### Install RStudio
-Now that we have R installed locally, we can install the RStudio IDE. We'll install the free version of RStudio Desktop. For all available versions, see [RStudio downloads](https://www.rstudio.com/products/rstudio/download/).
+After you install R in the template VM, install the RStudio IDE. In this article, you install the free version of RStudio Desktop. For all available versions, see [RStudio downloads](https://www.rstudio.com/products/rstudio/download/).
+
+1. Download the [installer for RStudio](https://www.rstudio.com/products/rstudio/download/#download) for Windows 10. The installer file is in the format `rstudio-{version}.exe`.
+
+1. Run the RStudio installer.
-1. Download the [installer for R Studio](https://www.rstudio.com/products/rstudio/download/#download) for Windows 10. The installer file will be in the format `rstudio-{version}.exe`.
-2. Run the RStudio installer.
1. On the **Welcome to RStudio Setup** page of the **RStudio Setup** wizard, select **Next**.
- 2. On the **Choose Install Location** page, select **Next**.
- 3. On the **Choose Start Menu Folder** page, select **Install**.
- 4. On the **Installing** page, wait for the installation to finish.
- 5. On the **Completing RStudio Setup** page, select **Finish**.
+ 1. On the **Choose Install Location** page, select **Next**.
+ 1. On the **Choose Start Menu Folder** page, select **Install**.
+ 1. On the **Installing** page, wait for the installation to finish.
+ 1. On the **Completing RStudio Setup** page, select **Finish**.
-To execute the RStudio installation steps using PowerShell, run the following commands. See [RStudio downloads](https://www.rstudio.com/products/rstudio/download/) to verify the RStudio version is available before executing the commands.
+To perform the RStudio installation steps by using PowerShell, run the following commands. See [RStudio downloads](https://www.rstudio.com/products/rstudio/download/) to verify the RStudio version is available before executing the commands.
```powershell $rstudiover="1.4.1717"
$installPath = Get-Item -Path $outputfile
Start-Process -FilePath $installPath.FullName -ArgumentList "/S" -NoNewWindow -Wait ```
-### CRAN packages
+### Install CRAN packages
+
+The Comprehensive R Archive Network (CRAN) is R's central software repository. Among other things, the repository contains R packages, which you can use to extend your R programs.
+
+To install CRAN packages on the template virtual machine:
+
+- Use the `install.packages("package name")` command in an R interactive session, as shown in the [quick list of useful R packages](https://support.rstudio.com/hc/articles/201057987-Quick-list-of-useful-R-packages) article.
-Use the `install.packages(ΓÇ£package nameΓÇ¥)` command in an R interactive session as shown in [quick list of useful R packages](https://support.rstudio.com/hc/articles/201057987-Quick-list-of-useful-R-packages) article. Alternately, use Tools -> Install Packages menu item in RStudio.
+- Alternately, use the **Tools** > **Install Packages** menu item in RStudio.
-If you need help with finding a package, see a [list of packages by task](https://cran.r-project.org/web/views/) or [alphabetic list of packages](https://cloud.r-project.org/web/packages/available_packages_by_name.html).
+To find a package, see the [list of packages by task](https://cran.r-project.org/web/views/) or the [alphabetic list of packages](https://cloud.r-project.org/web/packages/available_packages_by_name.html).
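If you'd rather script package installation on the template VM than use the RStudio menus, a minimal sketch with `Rscript` follows. The R installation path and the package names are placeholders; adjust them to the R version you installed and to what the class actually needs.

```powershell
# Placeholder path - adjust to the R version installed on the template VM.
$rscript = 'C:\Program Files\R\R-4.3.0\bin\Rscript.exe'

# Example packages only; install whatever the class requires.
& $rscript -e "install.packages(c('tidyverse', 'data.table'), repos = 'https://cloud.r-project.org')"
```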
## Cost
-LetΓÇÖs cover an example cost estimate for this class. Suppose you have a class of 25 students. Each student has 20 hours of scheduled class time. Another 10 quota hours for homework or assignments outside of scheduled class time is given to each student. The virtual machine size we chose was **Small GPU (Compute)**, which is 139 lab units.
+This section provides a cost estimate for running this class for 25 lab users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Compute)**, which is 139 lab units.
-25 students &times; (20 scheduled hours + 10 quota hours) &times; 139 Lab Units &times; 0.01 USD per hour = 1042.5 USD
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 139 lab units
> [!IMPORTANT] > The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Solidworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-solidworks.md
SOLIDWORKS supports other versions of Windows besides Windows 10. See [SOLIDWOR
| Lab setting | Value and description | | | | | Virtual Machine Size | **Small GPU (Visualization)**. Best suited for remote visualization, streaming, gaming, and encoding with frameworks such as OpenGL and DirectX. |
- | Virtual Machine Image | Windows 10 Pro |
+ | Virtual Machine Image | **Windows 10 Pro** |
1. When you create a lab with the **Small GPU (Visualization)** size, follow these steps to [set up a lab with GPUs](./how-to-setup-lab-gpu.md).
The steps in this section show how to set up your template virtual machine by do
## Cost
-This section provides a cost estimate for running this class for 25 users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units. This estimate doesnΓÇÖt include the cost of running a license server.
+This section provides a cost estimate for running this class for 25 lab users. There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Small GPU (Visualization)**, which is 160 lab units. This estimate doesn't include the cost of running a license server.
-- 25 students &times; (20 scheduled hours + 10 quota hours) &times; 160 lab units
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 160 lab units
> [!IMPORTANT] > The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
Now that you installed SSMS, you can [connect and query a SQL Server](/sql/ssms/
## Cost estimate
-This section provides a cost estimate for running this class for 25 users. The estimate doesn't include the cost of running the Azure SQL database. See [SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database) for current details on database pricing.
+This section provides a cost estimate for running this class for 25 lab users. The estimate doesn't include the cost of running the Azure SQL database. See [SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database) for current details on database pricing.
There are 20 hours of scheduled class time. Also, each user gets 10 hours quota for homework or assignments outside scheduled class time. The virtual machine size we chose was **Medium**, which is 42 lab units. -- 25 students &times; (20 scheduled hours + 10 quota hours) &times; 42 lab units
+- 25 lab users &times; (20 scheduled hours + 10 quota hours) &times; 42 lab units
> [!IMPORTANT] > The cost estimate is for example purposes only. For current pricing information, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/).
lab-services Class Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-types.md
Title: Example class types on Azure Lab Services | Microsoft Docs
-description: Provides some types of classes for which you can set up labs using Azure Lab Services.
- Previously updated : 04/04/2022
+ Title: Example lab class types
+
+description: Learn about different example class types for which you can set up labs using Azure Lab Services.
+++++ Last updated : 04/24/2023
-# Class types overview - Azure Lab Services
+# Class types in Azure Lab Services
-Azure Lab Services enables you to quickly set up lab environments in the cloud. Articles in this section provide guidance on how to set up several types of labs using Azure Lab Services.
+Azure Lab Services enables you to quickly set up lab environments in the cloud. Learn about the different class types for which you can set up labs by using Azure Lab Services.
## Adobe Creative Cloud
For detailed information on how to set up this type of lab, see [Set up a lab fo
## Big data analytics
-You can set up a GPU lab to teach a big data analytics class. With this type of class, students learn how to handle large volumes of data, and apply machine and statistical learning algorithms to derive data insights. A key goal for students is to learn to use data analytics tools, such as Apache Hadoop's open-source software package that provides tools for storing, managing, and processing big data.
+You can set up a GPU lab to teach a big data analytics class. With this type of class, users learn how to handle large volumes of data, and apply machine and statistical learning algorithms to derive data insights. A key goal for users is to learn to use data analytics tools, such as Apache Hadoop's open-source software package that provides tools for storing, managing, and processing big data.
For detailed information on how to set up this type of lab, see [Set up a lab for big data analytics using Docker deployment of HortonWorks Data Platform](class-type-big-data-analytics.md). ## Database management
-Databases concepts are one of the introductory courses taught in most of the Computer Science departments in college. You can set up a lab for a basic databases management class in Azure Lab Services. For example, you can set up a virtual machine template in a lab with a [MySQL](https://www.mysql.com/) Database Server or a [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) server.
+Database concepts is one of the introductory courses taught in most college Computer Science departments. You can set up a lab for a basic database management class in Azure Lab Services. For example, you can set up a virtual machine template in a lab with a [MySQL](https://www.mysql.com/) database server or a [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) server.
For detailed information on how to set up this type of lab, see [Set up a lab to teach database management for relational databases](class-type-database-management.md). ## Deep learning in natural language processing
-You can set up a lab focused on deep learning in natural language processing (NLP) using Azure Lab Services. Natural language processing (NLP) is a form of artificial intelligence (AI) that enables computers with translation, speech recognition, and other language understanding capabilities. Students taking an NLP class get a Linux virtual machine (VM) to learn how to apply neural network algorithms to develop deep learning models that are used for analyzing written human language.
+You can set up a lab focused on deep learning in natural language processing (NLP) using Azure Lab Services. Natural language processing (NLP) is a form of artificial intelligence (AI) that enables computers with translation, speech recognition, and other language understanding capabilities. Users taking an NLP class get a Linux virtual machine (VM) to learn how to apply neural network algorithms to develop deep learning models that are used for analyzing written human language.
For detailed information on how to set up this type of lab, see [Set up a lab focused on deep learning in natural language processing using Azure Lab Services](class-type-deep-learning-natural-language-processing.md). ## Ethical hacking with Hyper-V
-You can set up a lab for a class that focuses on forensics side of ethical hacking. Penetration testing, a practice used by the ethical hacking community, occurs when someone attempts to gain access to the system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
+You can set up a lab for a class that focuses on the forensics side of ethical hacking. Penetration testing, a practice used by the ethical hacking community, occurs when someone attempts to gain access to the system or network to demonstrate vulnerabilities that a malicious attacker may exploit.
-In an ethical hacking class, students can learn modern techniques for defending against vulnerabilities. Each student gets a Windows Server host virtual machine that has two nested virtual machines ΓÇô one virtual machine with [Metasploitable3](https://github.com/rapid7/metasploitable3) image and another machine with [Kali Linux](https://www.kali.org/) image. The Metasploitable virtual machine is used for exploiting purposes. The Kali Linux virtual machine provides access to the tools needed to execute forensic tasks.
+In an ethical hacking class, users can learn modern techniques for defending against vulnerabilities. Each user gets a Windows Server host virtual machine that has two nested virtual machines: one virtual machine with a [Metasploitable3](https://github.com/rapid7/metasploitable3) image and another machine with a [Kali Linux](https://www.kali.org/) image. The Metasploitable virtual machine is used for exploiting purposes. The Kali Linux virtual machine provides access to the tools needed to run forensic tasks.
For detailed information on how to set up this type of lab, see [Set up a lab to teach ethical hacking class](class-type-ethical-hacking.md). ## MATLAB
-[MATLAB](https://www.mathworks.com/products/matlab.html), which stands for Matrix laboratory, is programming platform from [MathWorks](https://www.mathworks.com/). It combines computational power and visualization making it popular tool in the fields of math, engineering, physics, and chemistry.
+[MATLAB](https://www.mathworks.com/products/matlab.html), which stands for Matrix laboratory, is a programming platform from [MathWorks](https://www.mathworks.com/). It combines computational power and visualization, making it a popular tool in the fields of math, engineering, physics, and chemistry.
For detailed information on how to set up this type of lab, see [Setup a lab to teach MATLAB](class-type-matlab.md). ## Networking with GNS3
-You can set up a lab for a class that focuses on allowing students to emulate, configure, test, and troubleshoot virtual and real networks using [GNS3](https://www.gns3.com/) software.
+You can set up a lab for a class that focuses on emulating, configuring, testing, and troubleshooting virtual and real networks by using [GNS3](https://www.gns3.com/) software.
For detailed information on how to set up this type of lab, see [Setup a lab to teach a networking class](class-type-networking-gns3.md). ## Project Lead the Way (PLTW)
-[Project Lead the Way (PLTW)](https://www.pltw.org/) is a nonprofit organization that provides PreK-12 curriculum across the United States in computer science, engineering, and biomedical science. In each PLTW class, students use various software applications as part of their hands-on learning experience.
+[Project Lead the Way (PLTW)](https://www.pltw.org/) is a nonprofit organization that provides PreK-12 curriculum across the United States in computer science, engineering, and biomedical science. In each PLTW class, users work with various software applications as part of their hands-on learning experience.
For detailed information on how to set up these types of labs, see [Set up labs for Project Lead the Way classes](class-type-pltw.md). ## Python and Jupyter notebooks
-You can set up a template machine in Azure Lab Services with the tools needed to teach students how to use [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io). Jupyter Notebooks is an open-source project that lets you easily combine rich text and executable [Python](https://www.python.org/) source code on a single canvas called a notebook. Running a notebook results in a linear record of inputs and outputs. Those outputs can include text, tables of information, scatter plots, and more.
+You can set up a template machine in Azure Lab Services with the tools needed to teach users how to use [Jupyter Notebooks](http://jupyter-notebook.readthedocs.io). Jupyter Notebooks is an open-source project that lets you easily combine rich text and executable [Python](https://www.python.org/) source code on a single canvas called a notebook. Running a notebook results in a linear record of inputs and outputs. Those outputs can include text, tables of information, scatter plots, and more.
For detailed information on how to set up this type of lab, see [Set up a lab to teach data science with Python and Jupyter Notebooks](class-type-jupyter-notebook.md). ## React
-[React](https://reactjs.org/) is a popular JavaScript library for building user interfaces (UI). React is a declarative way to create reusable components for your website. There are many popular libraries for JavaScript-based front-end development. We'll use a few of these libraries while creating our lab. [Redux](https://redux.js.org/) is a library that provides predictable state container for JavaScript apps and is often used in compliment with React. [JSX](https://reactjs.org/docs/introducing-jsx.html) is a library syntax extension to JavaScript often used with React to describe what the UI should look like. [NodeJS](https://nodejs.org/) is a convenient way to run a webserver for your React application.
+[React](https://reactjs.org/) is a popular JavaScript library for building user interfaces (UI). React is a declarative way to create reusable components for your website. There are many popular libraries for JavaScript-based front-end development. [Redux](https://redux.js.org/) is a library that provides a predictable state container for JavaScript apps and is often used in combination with React. [JSX](https://reactjs.org/docs/introducing-jsx.html) is a syntax extension to JavaScript often used with React to describe what the UI should look like. [NodeJS](https://nodejs.org/) is a convenient way to run a webserver for your React application.
For detailed information on how to set up this type of lab on Linux using [Visual Studio Code](https://code.visualstudio.com/) for your development environment, see [Set up lab for React on Linux](class-type-react-linux.md). For detailed information on how to set up this type of lab on Windows using [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) for your development environment, see [Set up lab for React on Windows](class-type-react-windows.md). ## RStudio
-[R](https://www.r-project.org/https://docsupdatetracker.net/about.html) is an open-source language used for statistical computing and graphics. It's used in the statistical analysis of genetics, natural language processing, analyzing financial data, and more. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code-editing tools, an integrated debugging experience, and package development tools. This class type will focus on solely RStudio and R as a building block for a class that requires the use of statistical computing.
+[R](https://www.r-project.org/about.html) is an open-source language used for statistical computing and graphics. The language is used in the statistical analysis of genetics, natural language processing, analyzing financial data, and more. R provides an [interactive command line](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Invoking-R-from-the-command-line) experience. [RStudio](https://www.rstudio.com/products/rstudio/) is an interactive development environment (IDE) available for the R language. The free version provides code-editing tools, an integrated debugging experience, and package development tools. This class type focuses solely on RStudio and R as a building block for a class that requires the use of statistical computing.
For detailed information on how to set up this type of lab, see [Set up a lab to teach R on Linux](class-type-rstudio-linux.md) or [Set up a lab to teach R on Windows](class-type-rstudio-windows.md).
For detailed information on how to set up this type of lab, see [Set up a lab to
You can set up a lab to teach shell scripting on Linux. Scripting is a useful part of system administration that allows administrators to avoid repetitive tasks. In this sample scenario, the class covers traditional bash scripts and enhanced scripts. Enhanced scripts are scripts that combine bash commands and Ruby. This approach allows Ruby to pass data around and bash commands to interact with the shell.
-Students taking these scripting classes get a Linux virtual machine to learn the basics of Linux, and also get familiar with the bash shell scripting. The Linux virtual machine comes with remote desktop access enabled and with [Gedit](https://help.gnome.org/users/gedit/stable/) and [Visual Studio Code](https://code.visualstudio.com/) text editors installed.
+Users taking these scripting classes get a Linux virtual machine to learn the basics of Linux, and also get familiar with bash shell scripting. The Linux virtual machine has remote desktop access enabled, and has the [Gedit](https://help.gnome.org/users/gedit/stable/) and [Visual Studio Code](https://code.visualstudio.com/) text editors installed.
For detailed information on how to set up this type of lab, see [Set up a lab for Shell scripting on Linux](class-type-shell-scripting-linux.md). ## SolidWorks computer-aided design (CAD)
-You can set up a GPU lab that gives engineering students access to [SolidWorks](https://www.solidworks.com/). SolidWorks provides a 3D CAD environment for modeling solid objects. With SolidWorks, engineers can easily create, visualize, simulate, and document their designs.
+You can set up a GPU lab that gives engineering users access to [SolidWorks](https://www.solidworks.com/). SolidWorks provides a 3D CAD environment for modeling solid objects. With SolidWorks, engineers can easily create, visualize, simulate, and document their designs.
For detailed information on how to set up this type of lab, see [Set up a lab for engineering classes using SolidWorks](class-type-solidworks.md). ## SQL database and management
-Structured Query Language (SQL) is the standard language for relational database management including adding, accessing, and managing content in a database. You can set up a lab to teach database concepts using both [MySQL](https://www.mysql.com/) Database server and [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) server.
+Structured Query Language (SQL) is the standard language for relational database management including adding, accessing, and managing content in a database. You can set up a lab to teach database concepts using both [MySQL](https://www.mysql.com/) and [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) server.
For detailed information on how to set up this type of lab, see [Set up a lab to teach database management for relational databases](class-type-database-management.md).
lab-services Classroom Labs Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md
A lab can use either quota time, [scheduled time](#schedule), or a combination o
- [Create the resources to get started](./quick-create-resources.md) - [Tutorial: Set up a lab for classroom training](./tutorial-setup-lab.md)
+- Learn about the [architecture fundamentals of Azure Lab Services](./classroom-labs-fundamentals.md)
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
Title: Architecture Fundamentals in Azure Lab Services | Microsoft Docs
-description: This article will cover the fundamental resources used by Lab Services and basic architecture of a lab.
- Previously updated : 05/30/2022
+ Title: Architecture fundamentals
+
+description: This article covers the fundamental resources used by Azure Lab Services and the basic architecture of a lab environment.
+ +++ Last updated : 04/24/2023
-# Architecture Fundamentals in Azure Lab Services
+# Architecture fundamentals in Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-Azure Lab Services is a SaaS (software as a service) solution, which means that the resources needed by Lab Services are handled for you. This article will cover the fundamental resources used by Lab Services and basic architecture of a lab.
+Azure Lab Services is a SaaS (software as a service) solution, which means that the infrastructure resources needed by Azure Lab Services are managed for you. This article covers the fundamental resources that Azure Lab Services uses and the basic architecture of a lab.
-Azure Lab Services does provide a couple of areas that allow you to use your own resources with Lab Services. For more information about using VMs on your own network, see [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) to use virtual network injection instead of virtual network peering. To reuse images from an Azure Compute Gallery, see how to [attach a compute gallery](how-to-attach-detach-shared-image-gallery.md).
+While Azure Lab Services is a managed service, you can configure the service to integrate with your own resources. For example, [connect lab virtual machines to your own network with virtual network injection](how-to-connect-vnet-injection.md) instead of using virtual network peering. Or reuse your own custom virtual machine images by [attaching an Azure compute gallery](./how-to-attach-detach-shared-image-gallery.md).
-Below is the basic architecture of a lab without advanced networking enabled. The lab plan is hosted in your subscription. The student VMs, along with the resources needed to support the VMs are hosted in a subscription owned by Azure Lab Services. LetΓÇÖs talk about what is in Azure Lab Service's subscriptions in more detail.
+The following diagram shows the basic architecture of a lab without advanced networking enabled. The [lab plan](./classroom-labs-concepts.md#lab-plan) is hosted in your subscription. The lab virtual machines, along with the resources needed to support the virtual machines, are hosted in a subscription owned by Azure Lab Services.
:::image type="content" source="./media/classroom-labs-fundamentals/labservices-basic-architecture.png" alt-text="Architecture diagram of basic lab in Azure Lab Services.":::
-## Hosted Resources
+## Hosted resources
-The resources required to run a lab are hosted in one of the Microsoft-managed Azure subscriptions. Resources include:
+Azure Lab Services hosts the resources to run a lab in one of the Microsoft-managed Azure subscriptions. These resources include:
-- template virtual machine for the educator-- virtual machine for each student-- network-related items such as a load balancer, virtual network, and network security group
+- [template virtual machine](./classroom-labs-concepts.md#template-virtual-machine) for the lab creator to configure the lab
+- [lab virtual machine](./classroom-labs-concepts.md#lab-virtual-machine) for each lab user to remotely connect to
+- network-related items, such as a load balancer, virtual network, and network security group
-These subscriptions are monitored for suspicious activity. It's important to note that this monitoring is done externally to the virtual machines through VM extension or network pattern monitoring. If [shutdown on disconnect](how-to-enable-shutdown-disconnect.md) is enabled, a diagnostic extension is enabled on the virtual machine. The extension allows Lab Services to be informed of the remote desktop protocol (RDP) session disconnect event.
+Azure monitors these managed subscriptions for suspicious activity. It's important to note that this monitoring is done externally to the virtual machines through VM extensions or network pattern monitoring. If you enable [shutdown on disconnect](how-to-enable-shutdown-disconnect.md), a diagnostic extension is enabled on the virtual machine. The extension allows Azure Lab Services to be informed of the remote desktop protocol (RDP) session disconnect event.
-## Virtual Network
+## Virtual network
By default, each lab is isolated by its own virtual network.
-Students connect to their virtual machine through a load balancer. No student virtual machines have a public IP address; they only have a private IP address. The connection string for the student will be the public IP address of the load balancer and a random port between:
+Lab users connect to their lab virtual machine through a load balancer. Lab virtual machines don't have a public IP address and only have a private IP address. The connection string to remotely connect to the lab virtual machine uses the public IP address of the load balancer and a random port between:
- 4980-4989 and 5000-6999 for SSH connections - 4990-4999 and 7000-8999 for RDP connections
-Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the appropriate virtual machine. An NSG prevents outside traffic on any other ports.
+Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the lab virtual machine. A network security group (NSG) blocks external traffic to any other port.
-If the lab is using [advanced networking](how-to-connect-vnet-injection.md), then each lab is using the same subnet that has been delegated to Azure Lab Services and connected to the lab plan. You'll also be responsible for creating an [NSG with an inbound security rule to allow RDP and SSH traffic](how-to-connect-vnet-injection.md#associate-delegated-subnet-with-nsg) so students can connect to their VMs.
+If you configured the lab to use [advanced networking](how-to-connect-vnet-injection.md), then each lab uses the subnet that was connected to the lab plan and delegated to Azure Lab Services. In this case, you're responsible for creating a [network security group with an inbound security rule to allow RDP and SSH traffic](how-to-connect-vnet-injection.md#associate-delegated-subnet-with-nsg) to the lab virtual machines.
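The linked article describes the exact network security group requirements. As a hedged sketch only, an inbound rule that allows SSH and RDP with Az PowerShell might look like the following; the resource group, NSG name, and address prefixes are placeholders, and you still need to associate the NSG with the delegated subnet as described in the linked article.

```powershell
# Placeholder names - align with your own virtual network and resource group.
$rule = New-AzNetworkSecurityRuleConfig -Name 'AllowRdpSshToLabVMs' `
    -Description 'Allow inbound RDP and SSH to lab VMs' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1000 `
    -SourceAddressPrefix Internet -SourcePortRange '*' `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange @('22', '3389')

New-AzNetworkSecurityGroup -ResourceGroupName 'rg-labs' -Location 'eastus' `
    -Name 'nsg-lab-subnet' -SecurityRules $rule
```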
-## Access control to the virtual machines
+## Access control to the lab virtual machines
-Lab Services handles the studentΓÇÖs ability to perform actions like start and stop on their virtual machines. It also controls access to their VM connection information.
+Azure Lab Services manages access to lab virtual machines at different levels:
-Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that the students are added as user before allowing access. Nonrestricted means any user can register as long as they have the registration link and there's capacity in the lab. Nonrestricted can be useful for hackathon events.
+- Start or stop a lab VM. Azure Lab Services grants lab users permission to perform such actions on their own virtual machines. The service also controls access to the lab virtual machine connection information.
-Student VMs that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered students to choose their own password on first sign-in.
+- Register for a lab. Azure Lab Services offers two different access settings: restricted and nonrestricted. *Restricted access* means that Azure Lab Services verifies that lab users are added to the lab before allowing access. *Nonrestricted access* means that any user can register for a lab by using the lab registration link, if there's capacity in the lab. Nonrestricted access can be useful for hackathon events. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article.
+
+- Virtual machine credentials. Lab virtual machines that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered users to choose their own password on first sign-in.
## Next steps
-To learn more about features available in Lab Services, see [Azure Lab Services concepts](classroom-labs-concepts.md) and [Azure Lab Services overview](lab-services-overview.md).
+- What is [Azure Lab Services](./lab-services-overview.md)
+
+- Learn more about the [key concepts in Azure Lab Services](./classroom-labs-concepts.md)
lab-services Connect Virtual Machine Linux X2go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-linux-x2go.md
Title: Connect to a Linux VM using X2Go in Azure Lab Services | Microsoft Docs
+ Title: Connect to a Linux VM using X2Go
+ description: Learn how to use X2Go for Linux virtual machines in a lab in Azure Lab Services. ++++ Previously updated : 02/01/2022 Last updated : 04/24/2023
-# Connect to a VM using X2Go
+# Connect to a lab VM using X2Go in Azure Lab Services
-Students can use X2Go to connect to their Linux VMs after their educator sets up their lab with X2Go and the GUI packages for a Linux graphical desktop environment
+In this article, you learn how to connect to a Linux-based lab VM in Azure Lab Services by using X2Go. Before you can connect with X2Go, the lab needs to have X2Go and the GUI packages for a Linux graphical desktop environment installed.
-Students need to find out from their educator which Linux graphical desktop environment their educator has installed. This information is needed in the next steps to connect using the X2Go client.
+When you connect to the lab VM by using X2Go, you need to specify which Linux graphical desktop environment the lab uses. For example, select `XFCE` if you're using either the XFCE or Xubuntu graphical desktop environments. You can get this information from the person who created the lab.
## Install X2Go client
-Install the [X2Go client](https://wiki.x2go.org/doku.php/doc:installation:x2goclient) on your local computer. Follow the instructions that match the client OS you are using.
+Install the [X2Go client](https://wiki.x2go.org/doku.php/doc:installation:x2goclient) on your local computer. Follow the instructions that match your client OS.
## Connect to the VM using X2Go client
-1. Copy SSH connection information for VM. For instructions to get the SSH command, see [Connect to a Linux lab VM Using SSH](connect-virtual-machine.md#connect-to-a-linux-lab-vm-using-ssh). You need this information to connect using the X2Go client.
+1. Copy SSH connection information for the lab VM.
+
+ Learn how to [Connect to a Linux lab VM Using SSH](connect-virtual-machine.md#connect-to-a-linux-lab-vm-using-ssh). You need this information to connect using the X2Go client.
1. Once you have the SSH connection information, open the X2Go client and select **Session** > **New Session**. :::image type="content" source="./media/how-to-use-classroom-lab/x2go-new-session.png" alt-text="Screenshot of X 2 Go client Session menu.":::
-1. Enter the values in the **Session Preferences** pane based on your SSH connection information. For example, your connection information will look similar to following command.
+1. Enter the values in the **Session Preferences** pane based on your SSH connection information.
+
+ For example, your connection information might look similar to the following command.
```bash ssh -p 12345 student@ml-lab-00000000-0000-0000-0000-000000000000.eastus2.cloudapp.azure.com ```
- Using this example, the following values are entered:
+ Based on this sample, enter the following values:
- **Session name** - Specify a name, such as the name of your VM. - **Host** - The ID of your VM; for example, **`ml-lab-00000000-0000-0000-0000-000000000000.eastus2.cloudapp.azure.com`**. - **Login** - The username for your VM; for example, **student**. - **SSH port** - The unique port assigned to your VM; for example, **12345**.
- - **Session type** - Select the Linux graphical desktop environment that your educator configured your VM. You need to get this information from your educator. For example, select `XFCE` if you're using either XFCE or Xubuntu graphical desktop environments.
+ - **Session type** - Select the Linux graphical desktop environment that was used when setting up the lab. For example, select `XFCE` if you're using either XFCE or Xubuntu graphical desktop environments.
- Finally, select **OK** to create the session.
+1. Select **OK** to create the remote session.
:::image type="content" source="./media/how-to-use-classroom-lab/x2go-session-preferences.png" alt-text="Screenshot of new session window in X 2 Go client. The session name, server information and session type settings are highlighted.":::
-1. Select on your session in the right-hand pane.
+1. Select your session in the right-hand pane.
:::image type="content" source="./media/how-to-use-classroom-lab/x2go-start-session.png" alt-text="Screenshot of X 2 Go with saved session."::: > [!NOTE]
- > If you are prompted with a message about authenticity, select **yes** to continue to entering your password. Message will be similar to "The authenticity of host '[`00000000-0000-0000-0000-000000000000.eastus2.cloudapp.eastus.cloudapp.azure.com`]:12345' can't be established. ECDSA key fingerprint is SHA256:00000000000000000000000000000000000000000000.Are you sure you want to continue connecting (yes/no)?"
+ > If you're prompted with a message about authenticity, select **yes** to continue, and enter your password. The message is similar to "The authenticity of host '[`00000000-0000-0000-0000-000000000000.eastus2.cloudapp.eastus.cloudapp.azure.com`]:12345' can't be established. ECDSA key fingerprint is SHA256:00000000000000000000000000000000000000000000.Are you sure you want to continue connecting (yes/no)?"
+
+1. When prompted, enter your password and select **OK**.
-1. When prompted, enter your password and select **OK**. You'll now be remotely connected to your VM's GUI desktop environment.
+ You're now remotely connected to your lab VM's GUI desktop environment.
## Next steps

-- [As an educator, configure X2Go on a template VM](how-to-enable-remote-desktop-linux.md#setting-up-x2go)
-- [As a student, stop the VM](how-to-use-lab.md#start-or-stop-the-vm)
+- [Configure X2Go on a template VM](how-to-enable-remote-desktop-linux.md#setting-up-x2go)
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
Title: Use external file storage in Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab that uses external file storage in Lab Services.
- Previously updated : 03/30/2021
+ Title: Use external file storage
+
+description: Learn how to set up a lab that uses external file storage in Azure Lab Services.
+ +++ Last updated : 04/25/2023
-# Use external file storage in Lab Services
+# Use external file storage in Azure Lab Services
-This article covers some of the options for external file storage when you use Azure Lab Services. [Azure Files](https://azure.microsoft.com/services/storage/files/) offers fully managed file shares in the cloud, [accessible via SMB 2.1 and SMB 3.0](../storage/files/storage-how-to-use-files-windows.md). An Azure Files share can be connected either publicly or privately within a virtual network. You can also configure the share to use a student's Active Directory credentials for connecting to the file share. If you're on a Linux machine, you can also use Azure NetApp Files with NFS volumes for external file storage with Azure Lab Services.
+This article covers some of the options for using external file storage in Azure Lab Services. [Azure Files](https://azure.microsoft.com/services/storage/files/) offers fully managed file shares in the cloud, [accessible via SMB 2.1 and SMB 3.0](/azure/storage/files/storage-how-to-use-files-windows). An Azure Files share can be connected either publicly or privately within a virtual network. You can also configure the share to use a lab user's Active Directory credentials for connecting to the file share. If you're on a Linux machine, you can also use Azure NetApp Files with NFS volumes for external file storage with Azure Lab Services.
## Which solution to use
-Each solution has different requirements and abilities. The following table lists important points to consider for each solution.
+The following table lists important considerations for each external storage solution.
| Solution | Important to know |
| -- | -- |
-| [Azure Files share with public endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>No virtual network peering is required.</li><li>Accessible to all VMs, not just lab VMs.</li><li>If you're using Linux, students will have access to the storage account key.</li></ul> |
-| [Azure Files share with private endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>Virtual network peering is required.</li><li>Accessible only to VMs on the same network (or a peered network) as the storage account.</li><li>If you're using Linux, students will have access to the storage account key.</li></ul> |
-| [Azure Files with identity-based authorization](#azure-files-with-identity-based-authorization) | <ul><li>Either read or read/write access permissions can be set for folder or file.</li><li>Virtual network peering is required.</li><li>Storage account must be connected to Active Directory.</li><li>Lab VMs must be domain-joined.</li><li>Storage account key isn't used for students to connect to the file share.</li></ul> |
-| [Azure NetApp Files with NFS volumes](#azure-netapp-files-with-nfs-volumes) | <ul><li>Either read or read/write access can be set for volumes.</li><li>Permissions are set by using a student VM's IP address.</li><li>Virtual network peering is required.</li><li>You might need to register to use the Azure NetApp Files service.</li><li>Linux only.</li></ul>
+| [Azure Files share with public endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>No virtual network peering is required.</li><li>Accessible to all VMs, not just lab VMs.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> |
+| [Azure Files share with private endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>Virtual network peering is required.</li><li>Accessible only to VMs on the same network (or a peered network) as the storage account.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> |
+| [Azure Files with identity-based authorization](#azure-files-with-identity-based-authorization) | <ul><li>Either read or read/write access permissions can be set for folder or file.</li><li>Virtual network peering is required.</li><li>Storage account must be connected to Active Directory.</li><li>Lab VMs must be domain-joined.</li><li>Storage account key isn't used for lab users to connect to the file share.</li></ul> |
+| [Azure NetApp Files with NFS volumes](#azure-netapp-files-with-nfs-volumes) | <ul><li>Either read or read/write access can be set for volumes.</li><li>Permissions are set by using a lab VM's IP address.</li><li>Virtual network peering is required.</li><li>You might need to register to use the Azure NetApp Files service.</li><li>Linux only.</li></ul>
The cost of using external storage isn't included in the cost of using Azure Lab Services. For more information about pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) and [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/).

## Azure Files share
-Azure Files shares are accessed by using a public or private endpoint. Mount the shares by using the storage account key as the password. With this approach, everyone has read-write access to the file share.
+Azure Files shares are accessed by using a public or private endpoint. You mount the shares to a virtual machine by using the storage account key as the password. With this approach, everyone has read-write access to the file share.
-If you're using a public endpoint to the Azure Files share, it's important to remember the following:
+By default, standard file shares can span up to 5 TiB. See [Create an Azure file share](/azure/storage/files/storage-how-to-create-file-share) for information on how to create file shares that span up to 100 TiB.
+
+### Considerations for using a public endpoint
- The virtual network for the storage account doesn't have to be connected to the lab virtual network. You can create the file share anytime before the template VM is published.
-- The file share can be accessed from any machine if a user has the storage account key.
-- Linux students can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by sudo. Because students are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. If the storage account endpoint is public, students can get access to the file share outside of their student VM. Consider rotating the storage account key after class has ended, and using private file shares.
+- The file share can be accessed from any machine if a user has the storage account key.
+- Linux lab users can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by *sudo*. Because lab users are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. If the storage account endpoint is public, lab users can get access to the file share outside of their lab VM. Consider rotating the storage account key after class has ended, or using private file shares.
-If you're using a private endpoint to the Azure Files share, it's important to remember the following:
+### Considerations for using a private endpoint
-- Access is restricted to traffic originating from the private network, and can't be accessed through the public internet. Only VMs in the private virtual network, VMs in a network peered to the private virtual network, or machines connected to a VPN for the private network, can access the file share.
-- Linux students can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by sudo. Because students are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. Consider rotating the storage account key after class has ended.
- This approach requires the file share virtual network to be connected to the lab. To enable advanced networking for labs, see [Connect to your virtual network in Azure Lab Services using vnet injection](how-to-connect-vnet-injection.md). VNet injection must be done during lab plan creation.
+- Access is restricted to traffic originating from the private network, and can't be accessed through the public internet. Only VMs in the private virtual network, VMs in a network peered to the private virtual network, or machines connected to a VPN for the private network, can access the file share.
+- Linux lab users can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by *sudo*. Because lab users are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. Consider rotating the storage account key after class has ended.
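For example, you can rotate the storage account keys with the Azure CLI after the class ends, so that a key that lab users have seen no longer works. This is a minimal sketch; the resource group and storage account names are placeholders.

```bash
# Regenerate the primary storage account key so that previously exposed keys stop working.
az storage account keys renew \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --key primary
```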
-> [!NOTE]
-> By default, standard file shares can span up to 5 TiB. See [Create an Azure file share](../storage/files/storage-how-to-create-file-share.md) for information on how to create file shares than span up to 100 TiB.
+### Connect a lab VM to an Azure file share
Follow these steps to create a VM connected to an Azure file share.
-1. Create an [Azure Storage account](../storage/files/storage-how-to-create-file-share.md). On the **Connectivity method** page, choose **public endpoint** or **private endpoint**.
-2. If you've chosen the private method, create a [private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md) in order for the file shares to be accessible from the virtual network.
-3. Create an [Azure file share](../storage/files/storage-how-to-create-file-share.md). The file share is reachable by the public host name of the storage account if using a public endpoint. The file share is reachable by private IP address if using a private endpoint.
-4. Mount the Azure file share in the template VM:
- - [Windows](../storage/files/storage-how-to-use-files-windows.md)
- - [Linux](../storage/files/storage-how-to-use-files-linux.md). To avoid mounting issues on student VMs, see the [use Azure Files with Linux](#use-azure-files-with-linux) section.
-5. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM.
+1. Create an [Azure Storage account](/azure/storage/files/storage-how-to-create-file-share). On the **Connectivity method** page, choose **public endpoint** or **private endpoint**.
+
+1. If you've chosen the private method, create a [private endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal) in order for the file shares to be accessible from the virtual network.
+
+1. Create an [Azure file share](/azure/storage/files/storage-how-to-create-file-share). The file share is reachable by the public host name of the storage account if using a public endpoint. The file share is reachable by private IP address if using a private endpoint.
+
+1. Mount the Azure file share in the template VM:
+
+ - [Windows](/azure/storage/files/storage-how-to-use-files-windows)
+ - [Linux](/azure/storage/files/storage-how-to-use-files-linux). To avoid mounting issues on lab VMs, see the [use Azure Files with Linux](#use-azure-files-with-linux) section.
+
+1. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM.
> [!IMPORTANT]
> Make sure Windows Defender Firewall isn't blocking the outgoing SMB connection through port 445. By default, SMB is allowed for Azure VMs.
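From a Linux template VM, you can quickly confirm that outbound TCP port 445 is reachable before you mount the share. This is an optional check and a minimal sketch; replace the storage account name with your own.

```bash
# Test outbound connectivity to the Azure Files endpoint over TCP port 445 (SMB).
nc -zvw3 <storage-account-name>.file.core.windows.net 445
```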
### Use Azure Files with Linux

-If you use the default instructions to mount an Azure Files share, the file share will seem to disappear on student VMs after the template is published. The following modified script addresses this issue.
+If you use the default instructions to mount an Azure Files share, the file share will seem to disappear on lab VMs after the template is published. The following modified script addresses this issue.
For a file share with a public endpoint:
STORAGE_ACCOUNT_KEY=""
FILESHARE_NAME="" # Do not use 'mnt' for mount directory.
-# Using 'mnt' will cause issues on student VMs.
+# Using 'mnt' will cause issues on lab VMs.
MOUNT_DIRECTORY="prm-mnt"
sudo mkdir /$MOUNT_DIRECTORY/$FILESHARE_NAME
STORAGE_ACCOUNT_KEY=""
FILESHARE_NAME="" # Do not use 'mnt' for mount directory.
-# Using 'mnt' will cause issues on student VMs.
+# Using 'mnt' will cause issues on lab VMs.
MOUNT_DIRECTORY="prm-mnt"
sudo mkdir /$MOUNT_DIRECTORY/$FILESHARE_NAME
sudo bash -c "echo ""//$STORAGE_ACCOUNT_IP/$FILESHARE_NAME /$MOUNT_DIRECTORY/$fi
sudo mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino
```
-If the template VM that mounts the Azure Files share to the `/mnt` directory is already published, the student can either:
+If the template VM that mounts the Azure Files share to the `/mnt` directory is already published, the lab user can either:
- Move the instruction to mount `/mnt` to the top of the `/etc/fstab` file.
- Modify the instruction to mount `/mnt/{file-share-name}` to a different directory, like `/prm-mnt/{file-share-name}`.
-Students should run `mount -a` to remount directories.
+Lab users should run `mount -a` to remount directories.
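For example, the updated `/etc/fstab` entry might look like the following sketch, which mounts the share under `/prm-mnt` instead of `/mnt`. The storage account and file share names are placeholders.

```bash
# Example /etc/fstab entry that mounts the Azure Files share under /prm-mnt instead of /mnt:
# //<storage-account-name>.file.core.windows.net/<file-share-name> /prm-mnt/<file-share-name> cifs vers=3.0,credentials=/etc/smbcredentials/<storage-account-name>.cred,dir_mode=0777,file_mode=0777,serverino 0 0

# After updating /etc/fstab, remount the directories.
sudo mount -a
```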
-For more general information, see [Use Azure Files with Linux](../storage/files/storage-how-to-use-files-linux.md).
+For more general information, see [Use Azure Files with Linux](/azure/storage/files/storage-how-to-use-files-linux).
## Azure Files with identity-based authorization

Azure Files shares can also be accessed by using Active Directory authentication, if the following are both true:

-- The student's VM is domain-joined.
-- Active Directory authentication is [enabled on the Azure Storage account](../storage/files/storage-files-active-directory-overview.md) that hosts the file share.
+- The lab VM is domain-joined.
+- Active Directory authentication is [enabled on the Azure Storage account](/azure/storage/files/storage-files-active-directory-overview) that hosts the file share.
The network drive is mounted on the virtual machine by using the user's identity, not the key to the storage account. Public or private endpoints provide access to the storage account.
For a private endpoint:
To create an Azure Files share that's enabled for Active Directory authentication, and to domain-join the lab VMs, follow these steps:
-1. Create an [Azure Storage account](../storage/files/storage-how-to-create-file-share.md).
-2. If you've chosen the private method, create a [private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md) in order for the file shares to be accessible from the virtual network. Create a [private DNS zone](../dns/private-dns-privatednszone.md), or use an existing one. Private Azure DNS zones provide name resolution within a virtual network.
-3. Create an [Azure file share](../storage/files/storage-how-to-create-file-share.md).
-4. Follow the steps to enable identity-based authorization. If you're using Active Directory on-premises, and you're synchronizing it with Azure Active Directory (Azure AD), see [On-premises Active Directory Domain Services authentication over SMB for Azure file shares](../storage/files/storage-files-identity-auth-active-directory-enable.md). If you're using only Azure AD, see [Enable Azure Active Directory Domain Services authentication on Azure Files](../storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md).
+1. Create an [Azure Storage account](/azure/storage/files/storage-how-to-create-file-share).
+1. If you've chosen the private method, create a [private endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal) in order for the file shares to be accessible from the virtual network. Create a [private DNS zone](/azure/dns/private-dns-privatednszone), or use an existing one. Private Azure DNS zones provide name resolution within a virtual network.
+1. Create an [Azure file share](/azure/storage/files/storage-how-to-create-file-share).
+1. Follow the steps to enable identity-based authorization. If you're using Active Directory on-premises, and you're synchronizing it with Azure Active Directory (Azure AD), see [On-premises Active Directory Domain Services authentication over SMB for Azure file shares](/azure/storage/files/storage-files-identity-auth-active-directory-enable). If you're using only Azure AD, see [Enable Azure Active Directory Domain Services authentication on Azure Files](/azure/storage/files/storage-files-identity-auth-active-directory-domain-service-enable).
>[!IMPORTANT]
>Talk to the team that manages your Active Directory instance to verify that all prerequisites listed in the instructions are met.
-5. Assign SMB share permission roles in Azure. For details about permissions that are granted to each role, see [share-level permissions](../storage/files/storage-files-identity-ad-ds-assign-permissions.md).
- - **Storage File Data SMB Share Elevated Contributor** role must be assigned to the person or group that will set up permissions for contents of the file share.
- - **Storage File Data SMB Share Contributor** role should be assigned to students who need to add or edit files on the file share.
- - **Storage File Data SMB Share Reader** role should be assigned to students who only need to read the files from the file share.
-6. Set up directory-level and/or file-level permissions for the file share. You must set up permissions from a domain-joined machine that has network access to the file share. To modify directory-level and/or file-level permissions, mount the file share by using the storage key, not your Azure AD credentials. To assign permissions, use the [Set-Acl](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command, or [icacls](/windows-server/administration/windows-commands/icacls) in Windows.
-7. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md).
-8. [Create the lab](how-to-manage-labs.md).
-9. Save a script on the template VM that students can run to connect to the network drive. To get example script:
+1. Assign SMB share permission roles in Azure. For details about permissions that are granted to each role, see [share-level permissions](/azure/storage/files/storage-files-identity-ad-ds-assign-permissions). For an example of assigning a role with the Azure CLI, see the sketch after these steps.
+ - **Storage File Data SMB Share Elevated Contributor** role must be assigned to the person or group that grants permissions for contents of the file share.
+ - **Storage File Data SMB Share Contributor** role should be assigned to lab users who need to add or edit files on the file share.
+ - **Storage File Data SMB Share Reader** role should be assigned to lab users who only need to read the files from the file share.
+
+1. Set up directory-level and/or file-level permissions for the file share. You must set up permissions from a domain-joined machine that has network access to the file share. To modify directory-level and/or file-level permissions, mount the file share by using the storage key, not your Azure AD credentials. To assign permissions, use the [Set-Acl](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command, or [icacls](/windows-server/administration/windows-commands/icacls) in Windows.
+1. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md).
+1. [Create the lab](how-to-manage-labs.md).
+1. Save a script on the template VM that lab users can run to connect to the network drive:
1. Open the storage account in the Azure portal.
1. Under **File Service**, select **File Shares**.
1. Find the share that you want to connect to, select the ellipses button on the far right, and choose **Connect**.
- 1. You'll see instructions for Windows, Linux, and macOS. If you're using Windows, set **Authentication method** to **Active Directory**.
+ 1. The page shows instructions for Windows, Linux, and macOS. If you're using Windows, set **Authentication method** to **Active Directory**.
1. Copy the code in the example, and save it on the template machine in a `.ps1` file for Windows, or an `.sh` file for Linux.
-10. On the template machine, download and run the script to [join student machines to the domain](https://aka.ms/azlabs/scripts/ActiveDirectoryJoin). The `Join-AzLabADTemplate` script [publishes the template VM](how-to-create-manage-template.md#publish-the-template-vm) automatically.
+
+1. On the template machine, download and run the script to [join lab user machines to the domain](https://aka.ms/azlabs/scripts/ActiveDirectoryJoin).
+
+ The `Join-AzLabADTemplate` script [publishes the template VM](how-to-create-manage-template.md#publish-the-template-vm) automatically.
+ > [!NOTE]
- > The template machine isn't domain-joined. To view files on the share, educators need to use a student VM for themselves.
-11. Students using Windows can connect to the Azure Files share by using [File Explorer](../storage/files/storage-how-to-use-files-windows.md) with their credentials, after they've been given the path to the file share. Alternately, students can run the preceding script to connect to the network drive. For students who are using Linux, run the preceding script.
+ > The template machine isn't domain-joined. To view files on the share, educators need to use a lab VM for themselves.
+
+1. Connect to the Azure Files share from the lab VM.
+
+ - Lab users on Windows can connect to the Azure Files share by using [File Explorer](/azure/storage/files/storage-how-to-use-files-windows) with their credentials, after they've been given the path to the file share. Alternatively, lab users can run the script you saved earlier to connect to the network drive.
+ - For lab users who are using Linux, run the script you saved previously to connect to the network drive.
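As referenced in the role assignment step above, the following Azure CLI command is a hedged sketch for assigning a share-level role to a lab user. The user name, subscription, and scope are placeholders; verify the exact scope format for your file share before you run the command.

```bash
# Assign the SMB share contributor role to a lab user at the file share scope.
az role assignment create \
    --role "Storage File Data SMB Share Contributor" \
    --assignee "labuser@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<file-share-name>"
```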
## Azure NetApp Files with NFS volumes

[Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is an enterprise-class, high-performance, metered file storage service.

-- Access policies can be set on a per-volume basis.
-- Permission policies are IP-based for each volume.
-- If students need their own volume that other students don't have access to, permission policies must be assigned after the lab is published.
-- In the context of Azure Lab Services, only Linux machines are supported.
+- Set access policies on a per-volume basis
+- Permission policies are IP-based for each volume
+- If lab users need their own volume that other lab users don't have access to, permission policies must be assigned after the lab is published
+- Azure Lab Services supports only Linux-based lab virtual machines for connecting to Azure NetApp Files
- The virtual network for the Azure NetApp Files capacity pool must be connected to the lab. To enable advanced networking for labs, see [Connect to your virtual network in Azure Lab Services using vnet injection](how-to-connect-vnet-injection.md). VNet injection must be done during lab plan creation.

To use an Azure NetApp Files share in Azure Lab
-1. To create an Azure NetApp Files capacity pool and one or more NFS volumes, see [set up Azure NetApp Files and NFS volume](../azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md). For information about service levels, see [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md).
-2. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md)
-3. [Create the lab](how-to-manage-labs.md).
-4. On the template VM, install the components necessary to use NFS file shares.
+1. Create an Azure NetApp Files capacity pool and one or more NFS volumes by following the steps in [Set up Azure NetApp Files and NFS volume](/azure/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes).
+
+ For information about service levels, see [Service levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels).
+
+1. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md)
+
+1. [Create the lab](how-to-manage-labs.md).
+
+1. On the template VM, install the components necessary to use NFS file shares.
+ - Ubuntu: ```bash
To use an Azure NetApp Files share in Azure Lab
sudo yum install nfs-utils
```
-5. On the template VM, save the following script as `mount_fileshare.sh` to [mount the Azure NetApp Files share](../azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md). Assign the `capacity_pool_ipaddress` variable the mount target IP address for the capacity pool. Get the mount instructions for the volume to find the appropriate value. The script expects the path name of the Azure NetApp Files volume. To ensure that users can run the script, run `chmod u+x mount_fileshare.sh`.
+1. On the template VM, save the following script as `mount_fileshare.sh` to [mount the Azure NetApp Files share](/azure/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines).
+
+ Assign the `capacity_pool_ipaddress` variable the mount target IP address for the capacity pool. Get the mount instructions for the volume to find the appropriate value. The script expects the path name of the Azure NetApp Files volume.
+
+ To ensure that users can run the script, run `chmod u+x mount_fileshare.sh`.
```bash
#!/bin/bash
To use an Azure NetApp Files share in Azure Lab
CAPACITY_POOL_IP_ADDR=0.0.0.0 # IP address of capacity pool
# Do not use 'mnt' for mount directory.
-# Using 'mnt' might cause issues on student VMs.
+# Using 'mnt' might cause issues on lab VMs.
MOUNT_DIRECTORY="prm-mnt"
sudo mkdir -p /$MOUNT_DIRECTORY
To use an Azure NetApp Files share in Azure Lab
sudo bash -c "echo ""$CAPACITY_POOL_IP_ADDR:/$VOLUME_NAME /$MOUNT_DIRECTORY/$VOLUME_NAME nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0"" >> /etc/fstab" ```
-6. If all students are sharing access to the same Azure NetApp Files volume, you can run the `mount_fileshare.sh` script on the template machine before publishing. If students each get their own volume, save the script to be run later by the student.
-7. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM.
-8. [Configure the policy](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) for the file share. The export policy can allow for a single VM or multiple VMs to have access to a volume. You can grant read-only or read/write access.
-9. Students must start their VM and run the script to mount the file share. They'll only have to run the script once. The command will look like the following: `./mount_fileshare.sh myvolumename`.
+1. If all lab users are sharing access to the same Azure NetApp Files volume, you can run the `mount_fileshare.sh` script on the template machine before publishing. If lab users each get their own volume, save the script to be run later by the lab user.
-## Next steps
+1. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM.
+
+1. [Configure the policy](/azure/azure-netapp-files/azure-netapp-files-configure-export-policy) for the file share.
+
+ The export policy can allow for a single VM or multiple VMs to have access to a volume. You can grant read-only or read/write access.
-These steps are common to setting up any lab.
+1. Lab users must start their VM and run the script to mount the file share. They have to run the script only once.
+
+ The command looks like the following: `./mount_fileshare.sh myvolumename`.
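To confirm that the volume is mounted, a lab user can check the mount afterwards. This is an optional check; `myvolumename` matches the example volume name above, and the mount directory matches the script's `prm-mnt` value.

```bash
# Run the mount script once, then verify that the NFS volume is mounted.
./mount_fileshare.sh myvolumename
findmnt /prm-mnt/myvolumename
```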
+
+## Next steps
+- Learn how to [create a lab for classroom training](./tutorial-setup-lab.md)
+- Get started by following the steps in [Quickstart: Create and connect to a lab](./quick-create-connect-lab.md)
- [Create and manage a template](how-to-create-manage-template.md)
-- [Add users](tutorial-setup-lab.md#add-users-to-the-lab)
-- [Set quota](how-to-configure-student-usage.md#set-quotas-for-users)
-- [Set a schedule](tutorial-setup-lab.md#add-a-lab-schedule)
-- [Email registration links to students](how-to-configure-student-usage.md#send-invitations-to-users)
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
Title: Set up a lab with GPUs in Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab with graphics processing unit (GPU) virtual machines.
-- Previously updated : 06/09/2022
+ Title: Set up a lab with GPUs
+
+description: Learn how to set up a lab in Azure Lab Services with graphics processing unit (GPU) virtual machines.
+ +++ Last updated : 04/24/2023
-# Set up a lab with GPU virtual machines
+# Set up a lab with GPU virtual machines in Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-This article shows you how to do the following tasks:
--- Choose between *visualization* and *compute* graphics processing units (GPUs).-- Ensure that the appropriate GPU drivers are installed.
+In this article, you learn how to choose between the different GPU-based virtual machine sizes when creating a lab in Azure Lab Services. Learn how to install the necessary drivers in the lab to take advantage of the GPUs.
## Choose between visualization and compute GPU sizes
-On the first page of the lab creation wizard, in the **Virtual machine size** drop-down list, you select the size of the VMs that are needed for your class.
+When you create a lab in Azure Lab Services, you have to select a virtual machine size. Choose the right virtual machine size, based on the usage scenario or [class type](./class-types.md).
++
+Azure Lab Services has two GPU-based virtual machine size categories:
+
+- Compute GPUs
+- Visualization GPUs
-![Screenshot of the "New lab" pane for selecting a VM size](./media/how-to-setup-gpu/lab-gpu-selection.png)
+> [!NOTE]
+> You may not see some of these VM sizes in the list when you create a lab. The list of VM sizes is based on the capacity assigned to your Microsoft-managed Azure subscription. For more information about capacity, see [Capacity limits in Azure Lab Services](../lab-services/capacity-limits.md). For availability of VM sizes, see [Products available by region](https://azure.microsoft.com/regions/services/?products=virtual-machines).
-In this process, you have the option of selecting either **Visualization** or **Compute** GPUs. It's important to choose the type of GPU that's based on the software that your students will use.
+### Compute GPU sizes
-As described in the following table, the *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because students use deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) to train deep learning models with large sets of data.
+The *compute* GPU size is intended for compute-intensive applications. For example, the [Deep Learning in Natural Language Processing class type](./class-type-deep-learning-natural-language-processing.md) uses the **Small GPU (Compute)** size. The compute GPU is suitable for this type of class, because lab users apply deep learning frameworks and tools that are provided by the [Data Science Virtual Machine image](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) to train deep learning models with large sets of data.
| Size | vCPUs | RAM | Description |
| ---- | ----- | --- | ----------- |
| Small GPU (Compute) | 6 vCPUs | 112 GB RAM | [Standard_NC6s_v3](../virtual-machines/ncv3-series.md). This size supports both Windows and Linux and is best suited for compute-intensive applications such as artificial intelligence (AI) and deep learning. |
-The *visualization* GPU sizes are intended for graphics-intensive applications. For example, the [SOLIDWORKS engineering class type](./class-type-solidworks.md) shows using the **Small GPU (Visualization)** size. The visualization GPU is suitable for this type of class, because students interact with the SOLIDWORKS 3D computer-aided design (CAD) environment for modeling and visualizing solid objects.
+### Visualization GPU sizes
+
+The *visualization* GPU sizes are intended for graphics-intensive applications. For example, the [SOLIDWORKS engineering class type](./class-type-solidworks.md) shows using the **Small GPU (Visualization)** size. The visualization GPU is suitable for this type of class, because lab users interact with the SOLIDWORKS 3D computer-aided design (CAD) environment for modeling and visualizing solid objects.
| Size | vCPUs | RAM | Description |
| ---- | ----- | --- | ----------- |
| Small GPU (Visualization) | 8 vCPUs | 28 GB RAM | [Standard_NV8as_v4](../virtual-machines/nvv4-series.md). This size is best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. Currently, this size supports Windows only. |
| Medium GPU (Visualization) | 12 vCPUs | 112 GB RAM | [Standard_NV12s_v3](../virtual-machines/nvv3-series.md). This size supports both Windows and Linux. It's best suited for remote visualization, streaming, gaming, and encoding that use frameworks such as OpenGL and DirectX. |
-> [!NOTE]
-> You may not see some of these VM sizes in the list when creating a lab. This list is populated based on the capacity assigned to your Microsoft-managed Azure subscription. For more information about capacity, see [Capacity limits in Azure Lab Services](../lab-services/capacity-limits.md). For availability of VM sizes, see [Products available by region](https://azure.microsoft.com/regions/services/?products=virtual-machines).
-
## Ensure that the appropriate GPU drivers are installed

To take advantage of the GPU capabilities of your lab VMs, ensure that the appropriate GPU drivers are installed. In the lab creation wizard, when you select a GPU VM size, you can select the **Install GPU drivers** option. This option is enabled by default.
-![Screenshot of the "New lab" showing the "Install GPU drivers" option](./media/how-to-setup-gpu/lab-gpu-drivers.png)
-Selecting **Install GPU drivers** ensures that recently released drivers are installed for the type of GPU and image that you selected.
+When you select **Install GPU drivers**, it ensures that recently released drivers are installed for the type of GPU and image that you selected.
- When you select the Small GPU *(Compute)* size, your lab VMs are powered by the [NVIDIA Tesla V100 GPU](https://www.nvidia.com/en-us/data-center/v100/). In this case, recent Compute Unified Device Architecture (CUDA) drivers are installed, which enables high-performance computing.
-- When you select the Small GPU *(Visualization)* size, your lab VMs are powered by the [AMD Raedon Instinct MI25 Accelerator GPU](https://www.amd.com/en/products/professional-graphics/instinct-mi25). In this case, recent AMD GPU drivers are installed, which enables the use of graphics-intensive applications.
+- When you select the Small GPU *(Visualization)* size, your lab VMs are powered by the [AMD Radeon Instinct MI25 Accelerator GPU](https://www.amd.com/en/products/professional-graphics/instinct-mi25). In this case, recent AMD GPU drivers are installed, which enables the use of graphics-intensive applications.
- When you select the Medium GPU *(Visualization)* size, your lab VMs are powered by the [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPU and [GRID technology](https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/NVIDIA_GRID_vPC_Solution_Overview.pdf). In this case, recent GRID drivers are installed, which enables the use of graphics-intensive applications.

> [!IMPORTANT]
-> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, NVIDIA GPU drivers are already installed on the Azure marketplace's [Data Science image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a Small GPU (Compute) lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install them as explained in the next section.
+> The **Install GPU drivers** option only installs the drivers when they aren't present on your lab's image. For example, NVIDIA GPU drivers are already installed on the Azure marketplace's [Data Science Virtual Machine image](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). If you create a Small GPU (Compute) lab using the Data Science image and choose to **Install GPU drivers**, the drivers won't be updated to a more recent version. To update the drivers, you will need to manually install the drivers.
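To decide whether you need to update the drivers manually, you can check which NVIDIA driver version is currently installed on the template VM for the NVIDIA-based sizes. This is an optional check; it assumes the NVIDIA drivers and the `nvidia-smi` utility are already present on the image.

```bash
# Print the installed NVIDIA driver version.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```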
-### Install the drivers manually
+### Install GPU drivers manually
You might need to install a different version of the drivers than the version that Azure Lab Services installs for you. This section shows how to manually install the appropriate drivers.
You might need to install a different version of the drivers than the version th
To manually install drivers for the Small GPU *(Compute)* size, follow these steps:
-1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
+1. In the lab creation wizard, when you [create your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
-1. After your lab is created, connect to the template VM to install the appropriate drivers. Read [NVIDIA Tesla (CUDA) drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-tesla-cuda-drivers) for more information about specific driver versions that are recommended depending on the Windows OS version being used. Otherwise, follow the below steps to install the latest NVIDIA drivers:
+1. After your lab is created, connect to the template VM to install the appropriate drivers.
- ![Screenshot of the NVIDIA Driver Downloads page](./media/how-to-setup-gpu/nvidia-driver-download.png)
+ - Follow the detailed installation steps in [NVIDIA Tesla (CUDA) drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-tesla-cuda-drivers), which describe the recommended driver versions for each Windows OS version.
- Otherwise, follow the below steps to install the latest NVIDIA drivers:
+ :::image type="content" source="./media/how-to-setup-gpu/nvidia-driver-download.png" alt-text="Screenshot of the NVIDIA Driver Downloads page.":::
- a. In a browser, go to the [NVIDIA Driver Downloads page](https://www.nvidia.com/Download/index.aspx).
- b. Set the **Product Type** to **Tesla**.
- c. Set the **Product Series** to **V-Series**.
- d. Set the **Operating System** according to the type of base image you selected when you created your lab.
- e. Set the **CUDA Toolkit** to the version of CUDA driver that you need.
- f. Select **Search** to look for your drivers.
- g. Select **Download** to download the installer.
- h. Run the installer so that the drivers are installed on the template VM.
+ - Alternatively, follow these steps to install the latest NVIDIA drivers:
+ 1. Go to the [NVIDIA Driver Downloads page](https://www.nvidia.com/Download/index.aspx).
+ 1. Set the **Product Type** to **Tesla**.
+ 1. Set the **Product Series** to **V-Series**.
+ 1. Set the **Operating System** according to the type of base image you selected when you created your lab.
+ 1. Set the **CUDA Toolkit** to the version of CUDA driver that you need.
+ 1. Select **Search** to look for your drivers.
+ 1. Select **Download** to download the installer.
+ 1. Run the installer so that the drivers are installed on the template VM.
1. Validate that the drivers are installed correctly by following instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
-1. After you've installed the drivers and other software that is required for your class, select **Publish** to create your students' VMs.
+
+1. After you've installed the drivers and other software that is required for your class, select **Publish** to create the lab virtual machines.
> [!NOTE]
> If you're using a Linux image, after you've downloaded the installer, install the drivers by following the instructions in [Install CUDA drivers on Linux](../virtual-machines/linux/n-series-driver-setup.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#install-cuda-drivers-on-n-series-vms).

#### Install the Small GPU (Visualization) drivers
-To manually install drivers for the Small GPU *(visualization)* size, doing the following steps:
+To manually install drivers for the Small GPU *(visualization)* size, follow these steps:
+
+1. In the lab creation wizard, when you [create your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
-1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
1. After your lab is created, connect to the template VM to install the appropriate drivers.
-1. Install the AMD drivers template VM by following instructions in the [Install AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-amd-driver-setup.md) article.
+
+1. Install the AMD drivers on the template VM by following the instructions in [Install AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-amd-driver-setup.md).
1. Restart the template VM.
+
1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](./how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
-1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
+
+1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your lab virtual machines.
#### Install the Medium GPU (Visualization) drivers
-1. In the lab creation wizard, when you're [creating your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
+To manually install drivers for the Medium GPU *(visualization)* size, follow these steps:
+
+1. In the lab creation wizard, when you [create your lab](./how-to-manage-labs.md), disable the **Install GPU drivers** setting.
1. After your lab is created, connect to the template VM to install the appropriate drivers.
+
1. Install the GRID drivers that are provided by Microsoft on the template VM by following the instructions for your operating system:
+
   - [Windows NVIDIA GRID drivers](../virtual-machines/windows/n-series-driver-setup.md#nvidia-grid-drivers)
   - [Linux NVIDIA GRID drivers](../virtual-machines/linux/n-series-driver-setup.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#nvidia-grid-drivers)

1. Restart the template VM.
+
1. Validate that the drivers are installed correctly by following the instructions in the [Validate the installed drivers](how-to-setup-lab-gpu.md#validate-the-installed-drivers) section.
-1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your students' VMs.
+
+1. After you've installed the drivers and other software that are required for your class, select **Publish** to create your lab virtual machines.
### Validate the installed drivers
To verify driver installation for Linux images, see [verify driver installation
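As a quick check on the NVIDIA-based sizes (Small GPU Compute and Medium GPU Visualization), you can also run `nvidia-smi` on the template VM. This is an optional sketch; it assumes the NVIDIA drivers installed successfully.

```bash
# Show the detected GPU, driver version, and CUDA version if the NVIDIA drivers are working.
nvidia-smi
```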
## Next steps
-See the following articles:
-
-- As an administrator, [create and manage labs](how-to-manage-labs.md).
-- As an educator, create a class using [SOLIDWORKS computer-aided design (CAD)](class-type-solidworks.md) software.
-- As an educator, create a class using [MATLAB (matrix laboratory)](class-type-matlab.md) software.
+- Learn how to [create and manage labs](how-to-manage-labs.md).
+- Create a lab with the [SOLIDWORKS computer-aided design (CAD)](class-type-solidworks.md) software.
+- Create a lab with the [MATLAB (matrix laboratory)](class-type-matlab.md) software.
lab-services Upload Custom Image Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/upload-custom-image-shared-image-gallery.md
Title: How to bring a Windows custom image from your physical lab environment
-description: Describes how to bring a Windows custom image from your physical lab environment.
Previously updated : 07/27/2021
+ Title: Import a Windows image from a physical lab
+
+description: Learn how to import a Windows custom image from your physical lab environment into Azure Lab Services.
++++ Last updated : 04/24/2023
-# Bring a Windows custom image from a physical lab environment
+# Bring a Windows custom image from a physical lab environment to Azure Lab Services
-The steps in this article show how to import a custom image that starts from your physical lab environment. With this approach, you create a VHD from your physical environment and import the VHD into a compute gallery so that it can be used within Lab Services. Before you use this approach for creating a custom image, read the article [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide the best approach for your scenario.
+This article describes how to import a custom image from a physical lab environment for creating a lab in Azure Lab Services.
+
+The import process consists of the following steps:
+
+1. Create a virtual hard disk (VHD) from your physical environment
+1. Import the VHD into an Azure compute gallery
+1. [Attach the compute gallery to your lab plan](/azure/lab-services/how-to-attach-detach-shared-image-gallery)
+1. Create a lab by using the image in the compute gallery
+
+Before you import an image from a physical lab, learn more about [recommended approaches for creating custom images](approaches-for-custom-image-creation.md).
## Prerequisites
-You will need permission to create an [Azure managed disk](../virtual-machines/managed-disks-overview.md) in your school's Azure subscription to complete the steps in this article.
+- Your Azure account has permission to create an [Azure managed disk](/azure/virtual-machines/managed-disks-overview). Learn about the [Azure RBAC roles you need to create a managed disk](/azure/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell#assign-rbac-role).
-When moving images from a physical lab environment to Lab Services, you should restructure each image so that it only includes software needed for a lab's class. For more information, read the [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931) blog post.
+- Restructure each virtual machine image so that it only includes the software that is needed for a lab's class. Learn more about [moving from a Physical Lab to Azure Lab Services](./concept-migrating-physical-labs.md).
## Prepare a custom image using Hyper-V Manager
-The following steps show how to create a Windows image from a Windows Hyper-V virtual machine (VM) using Hyper-V
+First, create a virtual hard disk (VHD) for the physical environment. The following steps describe how to create a VHD from a Windows Hyper-V virtual machine (VM) by using Hyper-V Manager.
+
+1. Create a Hyper-V virtual machine in your physical lab environment based on your custom image.
-1. Start with a Hyper-V VM in your physical lab environment that has been created from your image. Read the article on [how to create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for more information.
- The VM must be created as a **Generation 1** VM.
- Use the **Default Switch** network configuration option to allow the VM to connect to the internet.
- - The VM's virtual disk must be a fixed size VHD. The disk size must *not* be greater than 128 GB. When you create the VM, enter the size of the disk as shown in the below image.
+ - The VM's virtual disk must be a fixed size VHD. The disk size must *not* be greater than 128 GB. When you create the VM, enter the size of the disk as shown in the below image.
+
+ :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Screenshot of the Connect virtual hard disk screen in Hyper-V Manager, highlighting the option for fixed disk size.":::
+
+ Azure Lab Services does *not* support images with disk size greater than 128 GB.
+
+ Learn more about [how to create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v).
+
+1. Connect to the Hyper-V VM and [prepare it for Azure](/azure/virtual-machines/windows/prepare-for-upload-vhd-image) by following these steps:
+
+ 1. [Set Windows configurations for Azure](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#set-windows-configurations-for-azure).
+ 1. [Check the Windows services that are needed to ensure VM connectivity](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#check-the-windows-services).
+ 1. [Update remote desktop registry settings](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#update-remote-desktop-registry-settings).
+ 1. [Configure Windows Firewall rules](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#configure-windows-firewall-rules).
+ 1. [Install Windows updates](/azure/virtual-machines/windows/prepare-for-upload-vhd-image).
+ 1. [Install Azure VM Agent and extra configuration](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#complete-the-recommended-configurations)
+
+ You can upload either specialized or generalized images to a compute gallery and use them to create labs. The previous steps create a specialized image. If you need a generalized image, you also have to [run SysPrep](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#determine-when-to-use-sysprep).
- :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Connect virtual hard disk":::
+ You should create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](/azure/virtual-machines/shared-image-galleries#generalized-and-specialized-images).
- Images with disk size greater than 128 GB are *not* supported by Lab Services.
+1. Convert the default Hyper-V `VHDX` hard disk file format to `VHD`:
-1. Connect to the Hyper-V VM and [prepare it for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md) by following these steps:
- 1. [Set Windows configurations for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md#set-windows-configurations-for-azure).
- 1. [Check the Windows Services that are needed to ensure VM connectivity](../virtual-machines/windows/prepare-for-upload-vhd-image.md#check-the-windows-services).
- 1. [Update remote desktop registry settings](../virtual-machines/windows/prepare-for-upload-vhd-image.md#update-remote-desktop-registry-settings).
- 1. [Configure Windows Firewall rules](../virtual-machines/windows/prepare-for-upload-vhd-image.md#configure-windows-firewall-rules).
- 1. [Install Windows Updates](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
- 1. [Install Azure VM Agent and additional configuration as shown here](../virtual-machines/windows/prepare-for-upload-vhd-image.md#complete-the-recommended-configurations)
+ 1. In Hyper-V Manager, select the virtual machine, and then select **Action** > **Edit Disk**.
- You can upload either specialized or generalized images to a compute gallery and use them to create labs. The steps above will create a specialized image. If you need to instead create a generalized image, you also will need to [run SysPrep](../virtual-machines/windows/prepare-for-upload-vhd-image.md#determine-when-to-use-sysprep).
+ 1. Next, select **Convert** to convert the disk from a VHDX to a VHD.
- You should create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+ :::image type="content" source="./media/upload-custom-image-shared-image-gallery/choose-action.png" alt-text="Screenshot that shows the Choose Action screen when editing a virtual machine in Hyper-V Manager.":::
-1. Since **Hyper-V** creates a **VHDX** file by default, you need to convert this to a VHD file.
- 1. Navigate to **Hyper-V Manager** -> **Action** -> **Edit Disk**.
- 1. Next, **Convert** the disk from a VHDX to a VHD.
- - If you expand the disk size, make sure that you do *not* exceed 128 GB.
- :::image type="content" source="./media/upload-custom-image-shared-image-gallery/choose-action.png" alt-text="Choose action":::
+ If you expand the disk size, make sure that you do *not* exceed 128 GB.
- For more information, read the article that shows how to [convert the virtual disk to a fixed size VHD](../virtual-machines/windows/prepare-for-upload-vhd-image.md#convert-the-virtual-disk-to-a-fixed-size-vhd).
+ Learn more about how to [convert a virtual disk to a fixed size VHD](/azure/virtual-machines/windows/prepare-for-upload-vhd-image#convert-the-virtual-disk-to-a-fixed-size-vhd).
-To help with resizing the VHD and converting to a VHDX, you can also use the following PowerShell cmdlets:
+Alternatively, you can resize and convert a VHDX by using PowerShell:
- [Resize-VHD](/powershell/module/hyper-v/resize-vhd)
- [Convert-VHD](/powershell/module/hyper-v/convert-vhd)

## Upload the custom image to a compute gallery
+Next, you upload the VHD file from your physical environment to an Azure compute gallery.
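The following commands are a hedged sketch of the upload flow that the steps below describe in more detail: create an empty managed disk that's ready for upload, copy the VHD into it with AzCopy, and then revoke the upload access. All names and the SAS URI are placeholders, and the upload size in bytes must match the exact size of your VHD file.

```bash
# Create an empty managed disk that accepts an upload of the VHD.
az disk create --resource-group <resource-group> --name <disk-name> \
    --for-upload --upload-size-bytes <vhd-size-in-bytes> --sku standard_lrs

# Generate a writable SAS for the disk (valid for 24 hours).
az disk grant-access --resource-group <resource-group> --name <disk-name> \
    --access-level Write --duration-in-seconds 86400

# Copy the VHD into the managed disk by using the SAS URI returned by the previous command.
azcopy copy "custom-image.vhd" "<disk-sas-uri>" --blob-type PageBlob

# Revoke the SAS so that the disk leaves the 'Active Upload' state.
az disk revoke-access --resource-group <resource-group> --name <disk-name>
```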
+ 1. Upload the VHD to Azure to create a managed disk.
- 1. You can use either Storage Explorer or AzCopy from the command line, as shown in [Upload a VHD to Azure or copy a managed disk to another region](../virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md).
- 1. After you've uploaded the VHD, you should now have a managed disk that you can see in the Azure portal.
- If your machine goes to sleep or locks, the upload process may get interrupted and fail. Also, make sure after AzCopy completes, that you revoke the SAS access to the disk. Otherwise, when you attempt to create an image from the disk, you will see an error: **Operation 'Create Image' is not supported with disk 'your disk name' in state 'Active Upload'. Error Code: OperationNotAllowed**
+ You can use either Storage Explorer or AzCopy from the command line, as shown in [Upload a VHD to Azure or copy a managed disk to another region](/azure/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell).
+
+    If your machine goes to sleep or locks, the upload process may be interrupted and fail. Also, make sure that you revoke the SAS access to the disk after AzCopy completes. Otherwise, when you attempt to create an image from the disk, you encounter an error: **Operation 'Create Image' is not supported with disk 'your disk name' in state 'Active Upload'. Error Code: OperationNotAllowed**.
+
+ After you've uploaded the VHD, you should now have a managed disk that you can see in the Azure portal.
The Azure portal's **Size+Performance** tab for the managed disk allows you to change your disk size. As mentioned before, the size must *not* be greater than 128 GB.
-1. In a compute gallery, create an image definition and version:
- 1. [Create an image definition](../virtual-machines/image-version.md).
+1. In a compute gallery, follow these steps to [create an image definition and version](/azure/virtual-machines/image-version).
+ - Choose **Gen 1** for the **VM generation**.
- - Choose whether you are creating a **specialized** or **generalized** image for the **Operating system state**.
- For more information about the values you can specify for an image definition, see [Image definitions](../virtual-machines/shared-image-galleries.md#image-definitions).
+    - Choose whether you're creating a **specialized** or **generalized** image for the **Operating system state**.
+
+ For more information about the values you can specify for an image definition, see [Image definitions](/azure/virtual-machines/shared-image-galleries#image-definitions).
You can also choose to use an existing image definition and create a new version for your custom image.
-1. [Create an image version](../virtual-machines/image-version.md).
- - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*. When you use Lab Services to create a lab and choose a custom image, the most recent version of the image is automatically used. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch.
+1. Follow these steps to [create an image version](/azure/virtual-machines/image-version).
+
+    - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*. When you use Azure Lab Services to create a lab and choose a custom image, the most recent version of the image is automatically used. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. A short illustration of this ordering follows this list.
+    - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+    - For the **OS disk** property, choose the disk that you created in previous steps.
- For more information about the values you can specify for an image definition, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions).
+    For more information about the values you can specify for an image version, see [Image versions](/azure/virtual-machines/shared-image-galleries#image-versions).
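
As a small, hypothetical illustration of that ordering (not part of the Lab Services tooling), version strings compare numerically field by field:

```python
# Illustrates how MajorVersion.MinorVersion.Patch values are ordered (hypothetical example).
versions = ["1.9.9", "1.10.0", "2.0.1"]
newest = max(versions, key=lambda v: tuple(int(part) for part in v.split(".")))
print(newest)  # 2.0.1 -- the highest MajorVersion wins, then MinorVersion, then Patch
```
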
## Create a lab
-1. [Create the lab](tutorial-setup-lab.md) in Lab Services and select the custom image from the compute gallery.
+Now that the custom image is available in an Azure compute gallery, you can create a lab by using the image.
+
+1. [Attach the compute gallery to your lab plan](./how-to-attach-detach-shared-image-gallery.md).
+
+1. [Create the lab](tutorial-setup-lab.md) and select the custom image from the compute gallery.
- If you expanded the disk *after* the OS was installed on the original Hyper-V VM, you may also need to extend the C drive in Windows to use the unallocated disk space:
- - Log into the lab's template VM and follow steps similar to what is shown in [Extend a basic volume](/windows-server/storage/disk-management/extend-a-basic-volume).
+ If you expanded the disk *after* the OS was installed on the original Hyper-V VM, you may also need to extend the C drive in Windows to use the unallocated disk space. Log into the lab's template VM and follow these steps to [extend a basic volume](/windows-server/storage/disk-management/extend-a-basic-volume).
## Next steps

-- [Azure Compute Gallery overview](../virtual-machines/shared-image-galleries.md)
- [Attach or detach a compute gallery](how-to-attach-detach-shared-image-gallery.md)
- [Use a compute gallery](how-to-use-shared-image-gallery.md)
+- [Azure Compute Gallery overview](/azure/virtual-machines/shared-image-galleries)
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Azure has a growing ecosystem of partners offering their network appliances for
**Gopinath Durairaj - Sr. Director, Product Management**
-[Learn more](https://www.citrix.com/blogs/2021/11/02/citrix-adc-integration-with-azure-gw-load-balancer/)
### cPacket Networks

:::image type="content" source="./media/gateway-partners/cpacket.png" alt-text="Screenshot of cPacket Networks logo.":::
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
ms.suite: integration Previously updated : 04/18/2023 Last updated : 04/25/2023 # Add maps for transformations in workflows with Azure Logic Apps
This article shows how to add a map to your integration account. If you're worki
* Standard workflows
- * References to external assemblies from maps are currently in preview. To configure support for external assemblies, see [.NET Framework assembly support for XSLT transformations added to Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120).
+ * Supports references to external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code. To configure support for external assemblies, see [.NET Framework assembly support for XSLT transformations added to Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120).
* No limits apply to map file sizes.
The following example shows a map that references an assembly named `XslUtilitie
<a name="add-assembly"></a>
-## Add referenced assemblies (Consumption workflows only)
+## Add referenced assemblies
+
+### [Consumption](#tab/consumption)
+
+A Consumption logic app resource supports referencing external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code.
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
Based on your assembly file's size, follow the steps for uploading an assembly t
<a name="smaller-assembly"></a>
-### Add assemblies up to 2 MB
+#### Add assemblies up to 2 MB
1. Under **Add Assembly**, enter a name for your assembly. Keep **Small file** selected. Next to the **Assembly** box, select the folder icon. Find and select the assembly you're uploading.
Based on your assembly file's size, follow the steps for uploading an assembly t
<a name="larger-assembly"></a>
-### Add assemblies more than 2 MB
+#### Add assemblies more than 2 MB
To add larger assemblies, you can upload your assembly to an Azure blob container in your Azure storage account. Your steps for adding assemblies differ based on whether your blob container has public read access. So first, check whether your blob container has public read access by following these steps: [Set public access level for blob container](../vs-azure-tools-storage-explorer-blobs.md#set-the-public-access-level-for-a-blob-container)
To add larger assemblies, you can upload your assembly to an Azure blob containe
<a name="public-access-assemblies"></a>
-#### Upload to containers with public access
+##### Upload to containers with public access
1. Upload the assembly to your storage account. In the right-side window, select **Upload**.
To add larger assemblies, you can upload your assembly to an Azure blob containe
<a name="no-public-access-assemblies"></a>
-#### Upload to containers without public access
+##### Upload to containers without public access
1. Upload the assembly to your storage account. In the right-side window, select **Upload**.
To add larger assemblies, you can upload your assembly to an Azure blob containe
After your assembly finishes uploading, the assembly appears in the **Assemblies** list. On your integration account's **Overview** page, under **Artifacts**, your uploaded assembly also appears.
+### [Standard](#tab/standard)
+
+A Standard logic app resource supports referencing external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code. To configure this support, see [.NET Framework assembly support for XSLT transformations added to Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120).
+++

<a name="add-map"></a>

## Add maps
logic-apps Logic Apps Enterprise Integration Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-transform.md
Previously updated : 08/23/2022 Last updated : 04/25/2023 # Transform XML in workflows with Azure Logic Apps
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
* Exists in the same location or Azure region as your logic app resource where you plan to use the **Transform XML** action.
- * If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences), your integration account requires the following items:
+ * If you're working on a [Consumption logic app resource and workflow](logic-apps-overview.md#resource-environment-differences), your integration account requires the following items:
* The [map](logic-apps-enterprise-integration-maps.md) to use for transforming XML content. * A [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account).
- * If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences), you don't store maps in your integration account. Instead, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
+ * If you're working on a [Standard logic app resource and workflow](logic-apps-overview.md#resource-environment-differences), you don't store maps in your integration account. Instead, you can [directly add maps to your logic app resource](logic-apps-enterprise-integration-maps.md) using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
- > [!NOTE]
- > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
- > The **Logic App (Standard)** resource type doesn't include [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
- ## Add Transform XML action
-1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in designer view.
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in designer view.
-1. If you have a blank logic app that doesn't have a trigger, add any trigger you want. This example uses the Request trigger. Otherwise, continue to the next step.
+1. If you have a blank workflow that doesn't have a trigger, add any trigger you want. This example uses the Request trigger. Otherwise, continue to the next step.
To add the Request trigger, in the designer search box, enter `HTTP request`, and select the Request trigger named **When an HTTP request is received**.

1. Under the step in your workflow where you want to add the **Transform XML** action, choose one of the following steps:
- For a Consumption or ISE plan-based logic app, choose a step:
+ For a Consumption or ISE-based logic app workflow, choose a step:
* To add the **Transform XML** action at the end of your workflow, select **New step**.

* To add the **Transform XML** action between existing steps, move your pointer over the arrow that connects those steps so that the plus sign (**+**) appears. Select that plus sign, and then select **Add an action**.
- For a Standard plan-based logic app, choose a step:
+ For a Standard-based logic app workflow, choose a step:
* To add the **Transform XML** action at the end of your workflow, select the plus sign (**+**), and then select **Add an action**.
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
The dynamic content list shows property tokens that represent the outputs from the previous steps in the workflow. If the list doesn't show an expected property, check the trigger or action heading in the list and whether you can select **See more**.
- For a Consumption or ISE plan-based logic app, the designer looks like this example:
+ For a Consumption or ISE-based logic app workflow, the designer looks like this example:
![Screenshot showing multi-tenant designer with opened dynamic content list, cursor in "Content" box, and opened dynamic content list.](./media/logic-apps-enterprise-integration-transform/open-dynamic-content-list-multi-tenant.png)
- For a Standard plan-based logic app, the designer looks like this example:
+ For a Standard logic app workflow, the designer looks like this example:
![Screenshot showing single-tenant designer with opened dynamic content list, cursor in "Content" box, and opened dynamic content list](./media/logic-apps-enterprise-integration-transform/open-dynamic-content-list-single-tenant.png)
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
### Reference assembly or custom code from maps
-In **Logic App (Consumption)** workflows, the **Transform XML** action supports maps that reference an external assembly. For more information, review [Add XSLT maps for workflows in Azure Logic Apps](logic-apps-enterprise-integration-maps.md#add-assembly).
+The **Transform XML** action supports maps that reference an external assembly. For more information, review [Add XSLT maps for workflows in Azure Logic Apps](logic-apps-enterprise-integration-maps.md#add-assembly).
### Byte order mark
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
Title: Frequently asked questions about forecasting in AutoML
-description: Read answers to frequently asked questions about forecasting in AutoML
+description: Read answers to frequently asked questions about forecasting in AutoML.
Last updated 01/27/2023
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-This article answers common questions about forecasting in AutoML. See the [methods overview article](./concept-automl-forecasting-methods.md) for more general information about forecasting methodology in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+This article answers common questions about forecasting in automated machine learning (AutoML). For general information about forecasting methodology in AutoML, see the [Overview of forecasting methods in AutoML](./concept-automl-forecasting-methods.md) article.
## How do I start building forecasting models in AutoML?
-You can start by reading our guide on [setting up AutoML to train a time-series forecasting model with Python](./how-to-auto-train-forecast.md). We've also provided hands-on examples in several Jupyter notebooks:
-1. [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb)
-2. [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)
-3. [Many models](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)
-4. [Forecasting Recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)
-5. [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
+
+You can start by reading the [Set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md) article. You can also find hands-on examples in several Jupyter notebooks, followed by a minimal job-configuration sketch:
+
+- [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb)
+- [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)
+- [Many Models solution](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)
+- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)
+- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
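
The following sketch shows the typical shape of a forecasting job configured with the Azure Machine Learning Python SDK v2 (`azure-ai-ml`). The workspace details, compute name, data path, and column names are placeholders, so treat it as a starting point rather than a copy-paste recipe.

```python
from azure.ai.ml import MLClient, Input, automl
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace (subscription, resource group, and workspace names are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Define a basic AutoML forecasting job; training data is an MLTable asset.
forecasting_job = automl.forecasting(
    compute="cpu-cluster",
    experiment_name="automl-forecasting-quickstart",
    training_data=Input(type=AssetTypes.MLTABLE, path="./training-mltable-folder"),
    target_column_name="demand",
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations=5,
)

# The time column and forecast horizon are required forecast settings.
forecasting_job.set_forecast_settings(time_column_name="timestamp", forecast_horizon=24)

# Keep the experiment bounded; early termination also helps control runtime.
forecasting_job.set_limits(
    timeout_minutes=120,
    trial_timeout_minutes=30,
    max_trials=20,
    enable_early_termination=True,
)

# Submit the job and print a link to it in the studio.
returned_job = ml_client.jobs.create_or_update(forecasting_job)
print(returned_job.studio_url)
```
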
## Why is AutoML slow on my data?
-We're always working to make it faster and more scalable! To work as a general forecasting platform, AutoML does extensive data validations, complex feature engineering, and searches over a large model space. This complexity can require a lot of time, depending on the data and the configuration.
+We're always working to make AutoML faster and more scalable. To work as a general forecasting platform, AutoML does extensive data validations and complex feature engineering, and it searches over a large model space. This complexity can require a lot of time, depending on the data and the configuration.
-One common source of slow runtime is training AutoML with default settings on data containing numerous time series. The cost of many forecasting methods scales with the number of series. For example, methods like Exponential Smoothing and Prophet [train a model for each time series](./concept-automl-forecasting-methods.md#model-grouping) in the training data. **The Many Models feature of AutoML scales to these scenarios** by distributing training jobs across a compute cluster and has been successfully applied to data with millions of time series. For more information, see the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) article. You can also read about [the success of Many Models](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/automated-machine-learning-on-the-m5-forecasting-competition/ba-p/2933391) on a high-profile competition data set.
+One common source of slow runtime is training AutoML with default settings on data that contains numerous time series. The cost of many forecasting methods scales with the number of series. For example, methods like Exponential Smoothing and Prophet [train a model for each time series](./concept-automl-forecasting-methods.md#model-grouping) in the training data.
+
+The Many Models feature of AutoML scales to these scenarios by distributing training jobs across a compute cluster. It has been successfully applied to data with millions of time series. For more information, see the [Forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) article. You can also read about [the success of Many Models](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/automated-machine-learning-on-the-m5-forecasting-competition/ba-p/2933391) on a high-profile competition dataset.
## How can I make AutoML faster?
-See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer to understand why it may be slow in your case.
-Consider the following configuration changes that may speed up your job:
-- [Block time series models](./how-to-auto-train-forecast.md#model-search-settings) like ARIMA and Prophet-- Turn off look-back features like lags and rolling windows-- Reduce
- - number of trials/iterations
- - trial/iteration timeout
- - experiment timeout
- - number of cross validation folds.
+
+See the [Why is AutoML slow on my data?](#why-is-automl-slow-on-my-data) answer to understand why AutoML might be slow in your case.
+
+Consider the following configuration changes that might speed up your job:
+
+- [Block time series models](./how-to-auto-train-forecast.md#model-search-settings) like ARIMA and Prophet.
+- Turn off look-back features like lags and rolling windows.
+- Reduce:
+ - The number of trials/iterations.
+ - Trial/iteration timeout.
+ - Experiment timeout.
+ - The number of cross-validation folds.
- Ensure that early termination is enabled.

## What modeling configuration should I use?
-There are four basic configurations supported by AutoML forecasting:
+AutoML forecasting supports four basic configurations:
|Configuration|Scenario|Pros|Cons|
|--|--|--|--|
-|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or Azure Machine Learning studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.|
-|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over [temporal convolutional neural network (TCN) models](./concept-automl-forecasting-deep-learning.md#introduction-to-tcnforecaster) during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or Azure Machine Learning studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
-|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from Azure Machine Learning studio, only the code/SDK experience is currently available.|
-|**Hierarchical Time Series**|HTS is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information.|- Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models. <br> <br> - Forecasts can be retrieved for any level of the hierarchy by aggregating or dis-aggregating forecasts from the training level.|- You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.|
+|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historical behavior.|- Simple to configure from code/SDK or Azure Machine Learning studio. <br><br> - AutoML can learn across different time series because the regression models pool all series together in training. For more information, see [Model grouping](./concept-automl-forecasting-methods.md#model-grouping).|- Regression models might be less accurate if the time series in the training data have divergent behavior. <br> <br> - Time series models might take a long time to train if the training data has a large number of series. For more information, see the [Why is AutoML slow on my data?](#why-is-automl-slow-on-my-data) answer.|
+|**AutoML with deep learning**|Recommended for datasets with more than 1,000 observations and, potentially, numerous time series that exhibit complex patterns. When it's enabled, AutoML will sweep over [temporal convolutional neural network (TCN) models](./concept-automl-forecasting-deep-learning.md#introduction-to-tcnforecaster) during training. For more information, see [Enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning).|- Simple to configure from code/SDK or Azure Machine Learning studio. <br> <br> - Cross-learning opportunities, because the TCN pools data over all series. <br> <br> - Potentially higher accuracy because of the large capacity of deep neural network (DNN) models. For more information, see [Forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl).|- Training can take much longer because of the complexity of DNN models. <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
+|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. For more information, see [Forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale).|- Scalable. <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No learning across time series. <br> <br> - You can't configure or run Many Models jobs from Azure Machine Learning studio. Only the code/SDK experience is currently available.|
+|**Hierarchical time series (HTS)**|Recommended if the series in your data have a nested, hierarchical structure, and you need to train or make forecasts at aggregated levels of the hierarchy. For more information, see [Hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting).|- Training at aggregated levels can reduce noise in the leaf-node time series and potentially lead to higher-accuracy models. <br> <br> - You can retrieve forecasts for any level of the hierarchy by aggregating or disaggregating forecasts from the training level.|- You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.|
> [!NOTE]
-> We recommend using compute nodes with GPUs when deep learning is enabled to best take advantage of high DNN capacity. Training time can be much faster in comparison to nodes with only CPUs. See the GPU optimized compute article for more information.
-
+> We recommend using compute nodes with GPUs when deep learning is enabled to best take advantage of high DNN capacity. Training time can be much faster in comparison to nodes with only CPUs. For more information, see the [GPU-optimized virtual machine sizes](/azure/virtual-machines/sizes-gpu) article.
+ > [!NOTE]
-> HTS is designed for tasks where training or prediction is required at aggregated levels in the hierarchy. For hierarchical data requiring only leaf node training and prediction, use [Many Models](./how-to-auto-train-forecast.md#many-models) instead.
+> HTS is designed for tasks where training or prediction is required at aggregated levels in the hierarchy. For hierarchical data that requires only leaf-node training and prediction, use [Many Models](./how-to-auto-train-forecast.md#many-models) instead.
-## How can I prevent over-fitting and data leakage?
+## How can I prevent overfitting and data leakage?
-AutoML uses machine learning best practices, such as cross-validated model selection, that mitigate many over-fitting issues. However, there are other potential sources of over-fitting:
+AutoML uses machine learning best practices, such as cross-validated model selection, that mitigate many overfitting issues. However, there are other potential sources of overfitting:
-- The input data contains **feature columns that are derived from the target with a simple formula**. For example, a feature that is an exact multiple of the target can result in a nearly perfect training score. The model, however, will likely not generalize to out-of-sample data. We advise you to explore the data prior to model training and to drop columns that "leak" the target information.-- The training data uses **features that are not known into the future**, up to the forecast horizon. AutoML's regression models currently assume all features are known to the forecast horizon. We advise you to explore your data prior to training and remove any feature columns that are only known historically.-- There are **significant structural differences - regime changes - between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021; this is a classic example of a regime change. Over-fitting due to regime change is the most challenging issue to address because it's highly scenario dependent and can require deep knowledge to identify. As a first line of defense, try to reserve 10 - 20% of the total history for validation, or cross-validation, data. It isn't always possible to reserve this amount of validation data if the training history is short, but is a best practice. See our guide on [configuring validation](./how-to-auto-train-forecast.md#training-and-validation-data) for more information.
+- **The input data contains feature columns that are derived from the target with a simple formula**. For example, a feature that's an exact multiple of the target can result in a nearly perfect training score. The model, however, will likely not generalize to out-of-sample data. We advise you to explore the data prior to model training and to drop columns that "leak" the target information (see the sketch after this list).
+- **The training data uses features that are not known into the future, up to the forecast horizon**. AutoML's regression models currently assume that all features are known to the forecast horizon. We advise you to explore your data prior to training and remove any feature columns that are known only historically.
+- **There are significant structural differences (regime changes) between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021. This is a classic example of a regime change. Overfitting due to regime change is the most challenging problem to address because it's highly scenario dependent and can require deep knowledge to identify.
+
+ As a first line of defense, try to reserve 10 to 20 percent of the total history for validation data or cross-validation data. It isn't always possible to reserve this amount of validation data if the training history is short, but it's a best practice. For more information, see [Training and validation data](./how-to-auto-train-forecast.md#training-and-validation-data).
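
As a quick illustration of the first point, you can scan numeric features for suspiciously strong relationships with the target before training. The following pandas sketch assumes a hypothetical CSV file and column names:

```python
import pandas as pd

# Load the training data (file and column names are hypothetical).
df = pd.read_csv("training_data.csv")
target = "demand"

# Absolute correlation of each numeric feature with the target, strongest first.
numeric_features = df.select_dtypes("number").drop(columns=[target])
correlations = numeric_features.corrwith(df[target]).abs().sort_values(ascending=False)
print(correlations.head(10))

# Features with near-perfect correlation are likely leaking the target; drop them before training.
leaky_columns = correlations[correlations > 0.99].index.tolist()
df = df.drop(columns=leaky_columns)
```
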
## What does it mean if my training job achieves perfect validation scores?
-It's possible to see perfect scores when viewing validation metrics from a training job. A perfect score means that the forecast and the actuals on the validation set are the same, or very nearly the same. For example, a root mean squared error equal to 0.0 or an R2 score of 1.0. A perfect validation score is _usually_ an indicator that the model is severely overfit, likely due to [data leakage](#how-can-i-prevent-over-fitting-and-data-leakage). The best course of action is to inspect the data for leaks and drop the column(s) that are causing the leak.
+It's possible to see perfect scores when you're viewing validation metrics from a training job. A perfect score means that the forecast and the actuals on the validation set are the same or nearly the same. For example, you have a root mean squared error equal to 0.0 or an R2 score of 1.0.
+
+A perfect validation score _usually_ indicates that the model is severely overfit, likely because of [data leakage](#how-can-i-prevent-overfitting-and-data-leakage). The best course of action is to inspect the data for leaks and drop the columns that are causing the leak.
## What if my time series data doesn't have regularly spaced observations?
-AutoML's forecasting models all require that training data have regularly spaced observations with respect to the calendar. This requirement includes cases like monthly or yearly observations where the number of days between observations may vary. There are two cases where time dependent data may not meet this requirement:
+AutoML's forecasting models all require that training data has regularly spaced observations with respect to the calendar. This requirement includes cases like monthly or yearly observations where the number of days between observations can vary. Time-dependent data might not meet this requirement in two cases:
-- The data has a well defined frequency, but **there are missing observations that create gaps in the series**. In this case, AutoML will attempt to detect the frequency, fill in new observations for the gaps, and impute missing target and feature values therein. The imputation methods can be optionally configured by the user via SDK settings or through the Web UI. See the [custom featurization](./how-to-auto-train-forecast.md#custom-featurization)
-guide for more information on configuring imputation.
+- **The data has a well-defined frequency, but missing observations are creating gaps in the series**. In this case, AutoML will try to detect the frequency, fill in new observations for the gaps, and impute missing target and feature values. Optionally, the user can configure the imputation methods via SDK settings or through the Web UI. For more information, see [Custom featurization](./how-to-auto-train-forecast.md#custom-featurization).
-- **The data doesn't have a well defined frequency**. That is, the duration between observations doesn't have a discernible pattern. Transactional data, like that from a point-of-sales system, is one example. In this case, you can set AutoML to aggregate your data to a chosen frequency. You can choose a regular frequency that best suites the data and the modeling objectives. See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) section for more information.
+- **The data doesn't have a well-defined frequency**. That is, the duration between observations doesn't have a discernible pattern. Transactional data, like that from a point-of-sales system, is one example. In this case, you can set AutoML to aggregate your data to a chosen frequency. You can choose a regular frequency that best suits the data and the modeling objectives. For more information, see [Data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation).
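
For example, aggregation is configured through the forecast settings of the job. The following sketch assumes the Python SDK v2; the data path, column names, frequency, and aggregation function are placeholders:

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Transaction-level data aggregated to a weekly series (paths and column names are placeholders).
forecasting_job = automl.forecasting(
    training_data=Input(type=AssetTypes.MLTABLE, path="./transactions-mltable-folder"),
    target_column_name="sales",
    primary_metric="normalized_root_mean_squared_error",
)
forecasting_job.set_forecast_settings(
    time_column_name="timestamp",
    forecast_horizon=12,
    frequency="W",                      # target frequency for the aggregated series
    target_aggregate_function="sum",    # how target values are combined within each period
)
```
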
## How do I choose the primary metric?
-The primary metric is very important since its value on validation data determines the best model during [ sweeping and selection](./concept-automl-forecasting-sweeping.md). **Normalized root mean squared error (NRMSE) or normalized mean absolute error (NMAE) are usually the best choices for the primary metric** in forecasting tasks. To choose between them, note that RMSE penalizes outliers in the training data more than MAE because it uses the square of the error. The NMAE may be a better choice if you want the model to be less sensitive to outliers. See the [regression and forecasting metrics](./how-to-understand-automated-ml.md#regressionforecasting-metrics) guide for more information.
+The primary metric is important because its value on validation data determines the best model during [sweeping and selection](./concept-automl-forecasting-sweeping.md). Normalized root mean squared error (NRMSE) and normalized mean absolute error (NMAE) are usually the best choices for the primary metric in forecasting tasks.
+
+To choose between them, note that NRMSE penalizes outliers in the training data more than NMAE because it uses the square of the error. NMAE might be a better choice if you want the model to be less sensitive to outliers. For more information, see [Regression and forecasting metrics](./how-to-understand-automated-ml.md#regressionforecasting-metrics).
> [!NOTE]
-> We do not recommend using the R2 score, or _R_<sup>2</sup>, as a primary metric for forecasting.
+> We don't recommend using the R2 score, or _R_<sup>2</sup>, as a primary metric for forecasting.
> [!NOTE]
-> AutoML doesn't support custom, or user-provided functions for the primary metric. You must choose one of the predefined primary metrics that AutoML supports.
+> AutoML doesn't support custom or user-provided functions for the primary metric. You must choose one of the predefined primary metrics that AutoML supports.
## How can I improve the accuracy of my model?

-- Ensure that you're configuring AutoML the best way for your data. See the [model configuration](#what-modeling-configuration-should-i-use) answer for more information.
+- Ensure that you're configuring AutoML the best way for your data. For more information, see the [What modeling configuration should I use?](#what-modeling-configuration-should-i-use) answer.
- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models.
-- Evaluate the model using back-tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. See our [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb) for an example.
-- If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) guide for more information.
-- Add new features that may help predict the target. Subject matter expertise can help greatly when selecting training data.
-- Compare validation and test metric values and determine if the selected model is under-fitting or over-fitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to over-fitting.
+- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).
+- If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. For more information, see [Frequency and target data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation).
+- Add new features that can help predict the target. Subject matter expertise can help greatly when you're selecting training data.
+- Compare validation and test metric values, and determine if the selected model is underfitting or overfitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to overfitting.
+
+## Will AutoML always select the same best model from the same training data and configuration?
-## Will AutoML always select the same best model given the same training data and configuration?
+[AutoML's model search process](./concept-automl-forecasting-sweeping.md#model-sweeping) is not deterministic, so it doesn't always select the same model from the same data and configuration.
-[AutoML's model search process](./concept-automl-forecasting-sweeping.md#model-sweeping) is not deterministic, so it does not always select the same model given the same data and configuration.
+## How do I fix an out-of-memory error?
-## How do I fix an Out-Of-Memory error?
+There are two types of memory errors:
-There are two types of memory issues:
-- RAM Out-of-Memory -- Disk Out-of-Memory
+- RAM out-of-memory
+- Disk out-of-memory
-First, ensure that you're configuring AutoML in the best way for your data. See the [model configuration](#what-modeling-configuration-should-i-use) answer for more information.
+First, ensure that you're configuring AutoML in the best way for your data. For more information, see the [What modeling configuration should I use?](#what-modeling-configuration-should-i-use) answer.
-For default AutoML settings, RAM Out-of-Memory may be fixed by using compute nodes with more RAM. A useful rule-of-thumb is that the amount of free RAM should be at least 10 times larger than the raw data size to run AutoML with default settings.
+For default AutoML settings, you can fix RAM out-of-memory errors by using compute nodes with more RAM. A general rule is that the amount of free RAM should be at least 10 times larger than the raw data size to run AutoML with default settings.
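
As a rough illustration of that rule of thumb, you can compare the raw data size against available memory before submitting the job. This sketch assumes a hypothetical CSV file and uses the third-party `psutil` package:

```python
import os

import psutil  # pip install psutil

# Compare the raw data size (file name is hypothetical) with currently free RAM.
data_bytes = os.path.getsize("training_data.csv")
free_ram_bytes = psutil.virtual_memory().available

# Rule of thumb from this FAQ: free RAM should be at least 10x the raw data size.
print(f"Data: {data_bytes / 1e9:.2f} GB, free RAM: {free_ram_bytes / 1e9:.2f} GB")
print("Likely enough RAM for default settings:", free_ram_bytes >= 10 * data_bytes)
```
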
-Disk Out-of-Memory errors may be resolved by deleting the compute cluster and creating a new one.
+You can resolve disk out-of-memory errors by deleting the compute cluster and creating a new one.
-## What advanced forecasting scenarios are supported by AutoML?
+## What advanced forecasting scenarios does AutoML support?
+
+AutoML supports the following advanced prediction scenarios:
-We support the following advanced prediction scenarios:
- Quantile forecasts
- Robust model evaluation via [rolling forecasts](./how-to-auto-train-forecast.md#evaluating-model-accuracy-with-a-rolling-forecast)
- Forecasting beyond the forecast horizon
-- Forecasting when there's a gap in time between training and forecasting periods.
+- Forecasting when there's a gap in time between training and forecasting periods
-See the [advanced forecasting scenarios notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for examples and details.
+For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
## How do I view metrics from forecasting training jobs?
-See our [metrics in studio UI](how-to-log-view-metrics.md#view-jobsruns-information-in-the-studio) guide for finding training and validation metric values. You can view metrics for any forecasting model trained in AutoML by navigating to a model from the AutoML job UI in the studio and clicking on the "metrics" tab.
+To find training and validation metric values, see [View jobs/runs information in the studio](how-to-log-view-metrics.md#view-jobsruns-information-in-the-studio). You can view metrics for any forecasting model trained in AutoML by going to a model from the AutoML job UI in the studio and selecting the **Metrics** tab.
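
You can also retrieve the same metrics programmatically through MLflow after pointing the tracking URI at your workspace (this requires the `azureml-mlflow` package). The workspace details and run ID in this sketch are placeholders:

```python
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Point MLflow at the workspace tracking store (workspace details are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)
mlflow.set_tracking_uri(ml_client.workspaces.get(name=ml_client.workspace_name).mlflow_tracking_uri)

# Fetch the logged metrics for a specific AutoML child run (the run ID is a placeholder).
run = mlflow.get_run("<automl-best-child-run-id>")
print(run.data.metrics)
```
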
## How do I debug failures with forecasting training jobs?
-If your AutoML forecasting job fails, you'll see an error message in the studio UI that may help to diagnose and fix the problem. The best source of information about the failure beyond the error message is the driver log for the job. Check out the [run logs](how-to-log-view-metrics.md#view-and-download-diagnostic-logs) guide for instructions on finding driver logs.
+If your AutoML forecasting job fails, an error message on the studio UI can help you diagnose and fix the problem. The best source of information about the failure beyond the error message is the driver log for the job. For instructions on finding driver logs, see [View jobs/runs information with MLflow](how-to-log-view-metrics.md#view-and-download-diagnostic-logs).
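
One way to collect the driver and node logs locally for inspection is to download the job's outputs and logs with the SDK. This is a sketch; the job name and workspace details are placeholders:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Download all outputs and logs for the failed job into ./debug (job name is a placeholder).
ml_client.jobs.download(name="<automl-job-name>", download_path="./debug", all=True)
```
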
> [!NOTE]
-> For Many Models or HTS job, training is usually on multi-node compute clusters. Logs for these jobs are present for each node IP address. You will need to search for error logs in each node in this case. The error logs, along with the driver logs, are in the `user_logs` folder for each node IP.
+> For a Many Models or HTS job, training is usually on multiple-node compute clusters. Logs for these jobs are present for each node IP address. In this case, you need to search for error logs in each node. The error logs, along with the driver logs, are in the *user_logs* folder for each node IP.
+
+## How do I deploy a model from forecasting training jobs?
-## How do I deploy model from forecasting training jobs?
+You can deploy a model from forecasting training jobs in either of these ways:
-Model from forecasting training jobs can be deployed in either of the two ways:
+- **Online endpoint**: Check the scoring file used in the deployment, or select the **Test** tab on the endpoint page in the studio, to understand the structure of input that the deployment expects. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced-mlflow.ipynb) for an example. For more information about online deployment, see [Deploy an AutoML model to an online endpoint](./how-to-deploy-automl-endpoint.md).
+- **Batch endpoint**: This deployment method requires you to develop a custom scoring script. Refer to [this notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb) for an example. For more information about batch deployment, see [Use batch endpoints for batch scoring](./how-to-use-batch-endpoint.md).
-- Online Endpoint
- - Please refer [this link](./how-to-deploy-automl-endpoint.md) for online deployment.
- - You can check the scoring file used in the deployment or click on the "Test" tab on the endpoint page in the studio to understand the structure of input that is expected by the deployment.
- - You can refer [this notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced-mlflow.ipynb) to see an example.
-- Batch Endpoint
- - Please refer [this link](./how-to-use-batch-endpoint.md) for batch deployment.
- - It requires you to develop a custom scoring script.
- - You can refer [this notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb) to see an example.
+For UI deployments, we encourage you to use either of these options:
-For UI deployments, we encourage to use either of the two options:
-- Real-time endpoint-- Batch endpoint
+- **Real-time endpoint**
+- **Batch endpoint**
-**Please don't use the 1st option i.e. "Real-time-endpoint (quick)"**.
+Don't use the first option, **Real-time-endpoint (quick)**.
> [!NOTE]
-> As of now, we don't support deploying MLflow model from forecasting training jobs through SDK, CLI, or UI. You will run into errors if you try this.
+> As of now, we don't support deploying the MLflow model from forecasting training jobs via SDK, CLI, or UI. You'll get errors if you try it.
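
For reference, the following sketch shows the general shape of an online-endpoint deployment with the Python SDK v2 and a custom scoring script. All names, paths, and the environment image are placeholders; the notebooks linked earlier remain the authoritative examples:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Create the endpoint (name is a placeholder).
endpoint = ManagedOnlineEndpoint(name="automl-forecast-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy a registered model behind the endpoint with a custom scoring script.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="automl-forecast-endpoint",
    model=ml_client.models.get(name="<registered-model-name>", version="1"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",  # placeholder base image
        conda_file="./environment/conda.yaml",                              # placeholder path
    ),
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),  # placeholder paths
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```
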
-## What is a workspace / environment / experiment/ compute instance / compute target?
+## What is a workspace, environment, experiment, compute instance, or compute target?
-If you aren't familiar with Azure Machine Learning concepts, start with the ["What is Azure Machine Learning"](overview-what-is-azure-machine-learning.md) article and the [workspaces](./concept-workspace.md) article.
+If you aren't familiar with Azure Machine Learning concepts, start with the [What is Azure Machine Learning?](overview-what-is-azure-machine-learning.md) and [What is an Azure Machine Learning workspace?](./concept-workspace.md) articles.
## Next steps
-* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
-* Learn about [calendar features for time series forecasting in AutoML](./concept-automl-forecasting-calendar-features.md).
-* Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
-* Learn about [AutoML Forecasting Lagged Features](./concept-automl-forecasting-lags.md).
+
+- Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+- Learn about [calendar features for time series forecasting in AutoML](./concept-automl-forecasting-calendar-features.md).
+- Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
+- Learn about [AutoML forecasting for lagged features](./concept-automl-forecasting-lags.md).
migrate Common Questions Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-business-case.md
Business case creates assessments in the background, which could take some time
### How do I build a business case?
-Currently, you can create a Business case on servers and workloads discovered using a lightweight Azure Migrate appliance in your VMware environment. The appliance discovers on-premises servers and workloads. It then sends server metadata and performance data to Azure Migrate.
+Currently, you can create a Business case on servers and workloads discovered using a lightweight Azure Migrate appliance in your VMware, Hyper-V, and physical/bare-metal environments. The appliance discovers on-premises servers and workloads. It then sends server metadata and performance data to Azure Migrate.
### Why is the Build business case feature disabled?
-The **Build business case** feature will be enabled only when you have discovery performed using an Azure Migrate appliance for servers and workloads in a VMware environment. The Business case feature is not supported for servers and/or workloads discovered only from any of the discovery sources below:
-- Servers and/or SQL Server deployments from Hyper-V environment-- Servers imported via .csv templates
+The **Build business case** feature is enabled only when you've performed discovery by using an Azure Migrate appliance for servers and workloads in your VMware, Hyper-V, and physical/bare-metal environments. The Business case feature isn't supported for servers and/or workloads imported via a .csv file.
### Why can't I build a business case from my project?
-You will not be able to create a business case if your project is in one of the 6 project regions:
+You won't be able to create a business case if your project is in one of these three project regions:
-East Asia, Germany West Central, Japan West, Korea Central, Norway East, and Switzerland North.
+Germany West Central, East Asia and Switzerland North.
To verify in an existing project:

1. You can use the https://portal.azure.com/ URL to get started
To verify in an existing project:
5. Check the Project location.
6. The Business case feature is not supported in the following regions:
- koreacentral, eastasia, germanywestcentral, japanwest, norwayeast, switzerlandnorth
+ Germany West Central, East Asia and Switzerland North.
### Why can't I change the currency during business case creation?
-Currently, the currency is defaulted to USD.
+Currently, the currency is defaulted to USD.
### What do the different migration strategies mean?

**Migration Strategy** | **Details** | **Assessment insights**
--- | --- | ---
**Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from the Azure App Service assessment.<br/><br/> For general servers, sizing and cost comes from Azure VM assessment.
**Migrate to all IaaS (Infrastructure as a Service)** | You can get a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to SQL Server on Azure VM* report.<br/><br/> For general servers and servers hosting web apps, sizing and cost comes from Azure VM assessment.
-**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment. For general servers, sizing and cost comes from Azure VM assessment.
+**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - *Modernize to PaaS* from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment. For general servers, sizing and cost comes from Azure VM assessment.
> [!NOTE] > Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness and Azure cost estimates, you can create respective assessments for the servers or workloads.
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
There are three types of migration strategies that you can choose while building
| | **Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from the Azure App Service assessment. <br/><br/>For general servers, sizing and cost comes from Azure VM assessment. **Migrate to all IaaS (Infrastructure as a Service)** | You can get a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to SQL Server on Azure VM* report. <br/><br/>For general servers and servers hosting web apps, sizing and cost comes from Azure VM assessment.
-**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets. <br/><br/>General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | <br/><br/>For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report. <br/><br/>For web apps, sizing and cost comes from Azure App Service assessment. For general servers, sizing and cost comes from Azure VM assessment. Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness, and Azure cost estimates, you can create respective assessments for the servers or workloads.
+**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation, which means the logic identifies workloads best fit for PaaS targets. <br/><br/>General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - *Modernize to PaaS* from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment. For general servers, sizing and cost comes from Azure VM assessment.
+
+Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness, and Azure cost estimates, you can create respective assessments for the servers or workloads.
## How do I build a business case?
-Currently, you can create a Business case on servers and workloads discovered using a lightweight Azure Migrate appliance in your VMware environment. The appliance discovers on-premises servers and workloads. It then sends server metadata and performance data to Azure Migrate.
+Currently, you can create a Business case for servers and workloads discovered using a lightweight Azure Migrate appliance in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as from IaaS services of other public clouds. The appliance discovers on-premises servers and workloads. It then sends server metadata and performance data to Azure Migrate.
## How do I use the appliance?
If you're deploying an Azure Migrate appliance to discover on-premises servers,
1. For your first Business case, create an Azure project and add the Discovery and assessment tool to it. 1. Deploy a lightweight Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends server metadata and performance data to Azure Migrate. Deploy the appliance as a VM. You don't need to install anything on servers that you want to assess.
-After the appliance begins server discovery, you can start building your Business case. Follow our tutorials for [VMware](./tutorial-discover-vmware.md) to try out these steps.
+After the appliance begins server discovery, you can start building your Business case. Follow our tutorials for [VMware](./tutorial-discover-vmware.md), [Hyper-V](tutorial-discover-hyper-v.md), or [Physical/Bare-metal or other clouds](tutorial-discover-physical.md) to try out these steps.
-We recommend that you wait at least a day after starting discovery before you build a Business case so that enough performance/resource utilization data points are collected. Also, review the notifications/resolve issues blades on Azure Migrate hub to identify any discovery related issues prior to Business case computation. It will ensure that the IT estate in your datacenter is represented more accurately and the Business case recommendations will be more valuable.
+We recommend that you wait at least a day after starting discovery before you build a Business case so that enough performance/resource utilization data points are collected. Also, review the notifications/resolve issues blades on the Azure Migrate hub to identify any discovery-related issues prior to Business case computation. This ensures that the IT estate in your datacenter is represented more accurately and the Business case recommendations are more valuable.
## What data does the appliance collect?
-If you're using the Azure Migrate appliance, learn about the metadata and performance data that's collected for [VMware](discovered-metadata.md#collected-metadata-for-vmware-servers).
+If you're using the Azure Migrate appliance, learn about the metadata and performance data that's collected for:
+- [VMware](discovered-metadata.md#collected-metadata-for-vmware-servers)
+- [Hyper-V](discovered-metadata.md#collected-metadata-for-hyper-v-servers)
+- [Physical](discovered-metadata.md#collected-data-for-physical-servers)
## How does the appliance calculate performance data? If you use the appliance for discovery, it collects performance data for compute settings with these steps:
-1. The appliance collects a real-time sample point. For VMware VMs, a sample point is collected every 20 seconds.
+1. The appliance collects a real-time sample point.
+ - **VMware VMs**: A sample point is collected every 20 seconds.
+ - **Hyper-V VMs**: A sample point is collected every 30 seconds.
+ - **Physical servers**: A sample point is collected every five minutes.
+ 1. The appliance combines the sample points to create a single data point every 10 minutes for VMware and Hyper-V servers, and every 5 minutes for physical servers. To create the data point, the appliance selects the peak values from all samples. It then sends the data point to Azure. 1. The assessment service stores all the 10-minute data points for the last month. 1. When you create a Business case, multiple assessments are triggered in the background.
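
The roll-up described in the preceding steps is essentially a peak aggregation over fixed time windows. Purely as an illustration of that logic (the `utilization_samples` table and its columns are hypothetical and aren't part of Azure Migrate), a sketch in SQL might look like this:

```sql
-- Illustration only: group raw samples into 10-minute windows and keep the peak value per window.
SELECT
  server_id,
  FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(sample_time) / 600) * 600) AS window_start,
  MAX(cpu_percent)    AS peak_cpu_percent,
  MAX(memory_percent) AS peak_memory_percent
FROM utilization_samples
GROUP BY server_id, window_start;
```
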
It covers which servers are ideal for cloud, servers that can be decommissioned
- **Idle servers**: These servers were on but didn't deliver business value by having their CPU and memory utilization below 5% and network utilization below 2%. - **Decommission**: These servers were expected to deliver business value, but didn't and can be decommissioned on-premises and recommended to not migrate to Azure: - **Zombie**: The CPU, memory and network utilization were 0% with no performance data collection issues.-- These servers were on but do not have adequate metrics available:
+- These servers were on but don't have adequate metrics available:
- **Unknown**: Many servers can land in this section if the discovery is still ongoing or has some unaddressed discovery issues. ## What comprises a business case?
There are four major reports that you need to review:
## What's in a business case?
-Here's what's included in a Business case:
+Here's what's included in a business case:
### Total cost of ownership (steady state) #### On-premises cost
-Cost components for running on-premises servers. For TCO calculations, a one year cost is computed for following heads:
+Cost components for running on-premises servers. For TCO calculations, an annual cost is computed for the following heads:
**Cost heads** | **Category** | **Component** | **Logic** | | | | |
-| Compute | Hardware | Server Hardware (Host machines) | Total hardware acquisition cost is calculated using a cost per core linear regression formula : Cost per core = 16.232*(Hyperthreaded core: memory in GB ratio) + 113.87. Hyperthreaded cores = 2*(cores)
+| Compute | Hardware | Server Hardware (Host machines) | Total hardware acquisition cost is calculated using a cost per core linear regression formula: Cost per core = 16.232*(Hyperthreaded core: memory in GB ratio) + 113.87. Hyperthreaded cores = 2*(cores)
| | Software - SQL Server licensing | License cost | Calculated per two core pack license pricing of 2019 Enterprise or Standard. | | | | Software Assurance | Calculated per year as in settings. |
-| | Software - Windows Server licensing | License cost | Calculated per two corepack license pricing of Windows Server. |
+| | Software - Windows Server licensing | License cost | Calculated per two core pack license pricing of Windows Server. |
| | | Software Assurance | Calculated per year as in settings. |
-| | Virtualization software | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for VSphere Standard + production support cost of management software. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.|
+| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for VSphere Standard + production support cost of management software. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.|
+| | Virtualization software for servers running in Microsoft Hyper-V environment| Virtualization Software (management software cost + software assurance) | Management software cost for System Center + software assurance. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.|
| Storage | Storage Hardware | | The total storage hardware acquisition cost is calculated by multiplying the Total volume of storage attached to per GB cost. Default is USD 2 per GB per month. | | | Storage Maintenance | | Default is 10% of storage hardware acquisition cost. | | Network | Network Hardware and software | Network equipment (Cabinets, switches, routers, load balancers etc.) and software | As an industry standard and used by sellers in Business cases, it's a % of compute and storage cost. Default is 10% of storage and compute cost. | | | Maintenance | Maintenance | Defaulted to 15% of network hardware and software cost. |
-| Facilities | Facilities & Infrastructure | DC Facilities – Lease and Power | Facilities cost has not been added by default in the on-premises cost calculations. Any lease/colocation/power cost specified here will be included as part of the Business case. |
+| Facilities | Facilities & Infrastructure | DC Facilities – Lease and Power | Facilities cost hasn't been added by default in the on-premises cost calculations. Any lease/colocation/power cost specified here will be included as part of the Business case. |
| Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 | #### Azure cost
Cost components for running on-premises servers. For TCO calculations, a one yea
| | Storage (PaaS) | N/A | N/A | | Network | Network Hardware and software | Network equipment (Cabinets, switches, routers, load balancers etc.) and software | As an industry standard and used by sellers in Business cases, it's a % of compute and storage cost. Default is 10% of storage and compute cost. | | | Maintenance | Maintenance | Defaulted to 15% of network hardware and software cost. |
-| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost is not applicable for Azure cost. |
+| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost isn't applicable for Azure cost. |
| Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 | ### Year on Year costs
Cost components for running on-premises servers. For TCO calculations, a one yea
| Server Depreciation | (Total server hardware acquisition cost)/(Depreciable life) | Depreciable life = 4 years | | | Storage Depreciation | (Total storage hardware acquisition cost)/(Depreciable life) | Depreciable life = 4 years | | | Fit out and Networking Equipment | (Total network hardware acquisition cost)/(Depreciable life) | Depreciable life = 5 years | |
-| License Amortization | (License cost for VMware + Windows Server + SQL Server + Linux OS)/(Depreciable life) | Depreciable life = 5 years | VMware licenses won't be retained; Windows + SQL. Licenses will be retained based on AHUB option in Azure). |
+| License Amortization | (virtualization cost + Windows Server + SQL Server + Linux OS)/(Depreciable life) | Depreciable life = 5 years | VMware licenses aren't retained; Windows, SQL, and Hyper-V management software licenses are retained based on the AHUB option in Azure. |
| **Operating Asset Expense (OPEX) (B)** | | | | | Network maintenance | Per year | | | | Storage maintenance | Per year | Power draw per Server, Average price per KW per month based on location. | |
-| License Support | License support cost for VMware + Windows Server + SQL Server + Linux OS | | VMware licenses won't be retained; Windows + SQL. licenses will be retained based on AHUB option in Azure). |
-| Datacenter Admin cost | Number of people * costs per hour for one hour | Cost per hour based on location. | |
+| License Support | License support cost for virtualization + Windows Server + SQL Server + Linux OS | | VMware licenses aren't retained; Windows, SQL, and Hyper-V management software licenses are retained based on the AHUB option in Azure. |
+| Datacenter Admin cost | Number of people * hourly cost * 730 hours | Cost per hour based on location. | |
#### Future state (on-premises + Azure)
It assumes that the customer does a phased migration to Azure with following % o
You can override the above values in the assumptions section of the Business case.
- In Azure, there is no CAPEX, the yearly Azure cost is an OPEX
+ In Azure, there's no CAPEX; the yearly Azure cost is an OPEX.
| **Component** | **Year 0** | **Year 1** | **Year 2** | **Year 3** | **Year 4** | | | | | | |
migrate How To Build A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-build-a-business-case.md
# Build a business case (preview)
-This article describes how to build a Business case for on-premises servers and workloads in your VMware environment with Azure Migrate: Discovery and assessment tool.
+This article describes how to build a Business case for on-premises servers and workloads in your datacenter with Azure Migrate: Discovery and assessment tool.
-[Azure Migrate](migrate-services-overview.md) helps you to plan and execute migration and modernization projects to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
+[Azure Migrate](migrate-services-overview.md) helps you to plan and execute migration and modernization projects to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, and third-party independent software vendor (ISV) offerings.
## Prerequisites - Make sure you've [created](./create-manage-projects.md) an Azure Migrate project. You can also reuse an existing project to use this capability. - Once you've created a project, the Azure Migrate: Discovery and assessment tool is automatically [added](how-to-assess.md) to the project.-- Before you build the Business case, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md). The appliance discovers on-premises servers, SQL Server instance and databases, and ASP.NET webapps and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md).
+- Before you build the Business case, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [Physical/Bare-metal or other clouds](how-to-set-up-appliance-physical.md). The appliance discovers servers, SQL Server instances and databases, and ASP.NET web apps, and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md).
## Business case overview
There are three types of migration strategies that you can choose while building
:::image type="content" source="./media/how-to-build-a-business-case/build-inline.png" alt-text="Screenshot of the Build Business case button." lightbox="./media/how-to-build-a-business-case/build-expanded.png":::
- We recommend that you wait at least a day after starting discovery before you build a Business case so that enough performance/resource utilization data points are collected. Also, review the **Notifications**/**Resolve issues** blades on the Azure Migrate hub to identify any discovery related issues prior to Business case computation. It will ensure that the IT estate in your datacenter is represented more accurately and the Business case recommendations will be more valuable.
+ We recommend that you wait at least a day after starting discovery before you build a Business case so that enough performance/resource utilization data points are collected. Also, review the **Notifications**/**Resolve issues** blades on the Azure Migrate hub to identify any discovery-related issues prior to Business case computation. This ensures that the IT estate in your datacenter is represented more accurately and the Business case recommendations are more valuable.
-1. In **Business case name**, specify a name for the Business case. Make sure the Business case name is unique within a project, else the previous Business case with the same name will get recomputed.
+1. In **Business case name**, specify a name for the Business case. Make sure the Business case name is unique within a project; otherwise, the previous Business case with the same name gets recomputed.
1. In **Target location**, specify the Azure region to which you want to migrate.
There are three types of migration strategies that you can choose while building
- With the default *Azure recommended approach to minimize cost*, you can get the most cost-efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. - With *Migrate to all IaaS (Infrastructure as a Service)*, you can get a quick lift and shift recommendation to Azure IaaS. - With *Modernize to PaaS (Platform as a Service)*, you can get cost effective recommendations for Azure IaaS and more PaaS preferred targets in Azure PaaS.
-1. In **Savings options**, specify the savings options combination that you want to be considered while optimizing your Azure costs and maximize savings. Based on the availability of the savings option in the chosen region and the targets, the business case will recommend the appropriate savings options to maximize your savings on Azure.
- - Choose 'Reserved Instance', if your datacenter comprises of most consistently running resources.
+1. In **Savings options**, specify the savings options combination that you want to be considered while optimizing your Azure costs and maximizing savings. Based on the availability of the savings option in the chosen region and the targets, the business case recommends the appropriate savings options to maximize your savings on Azure.
+ - Choose 'Reserved Instance' if your datacenter comprises mostly consistently running resources.
- Choose 'Reserved Instance + Azure Savings Plan', if you want additional flexibility and automated cost optimization for workloads applicable for Azure Savings Plan (Compute targets including Azure VM and Azure App Service).
-1. In **Discount (%) on Pay as you go**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Note that the discount isn't applicable on top of reserved instance savings option.
+1. In **Discount (%) on Pay as you go**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. The discount isn't applicable on top of the reserved instance savings option.
1. **Currency** is defaulted to USD and can't be edited. 1. Review the chosen inputs, and select **Build business case**. :::image type="content" source="./media/how-to-build-a-business-case/build-button.png" alt-text="Screenshot of the button to initiate the Business case creation.":::
-1. You'll be directed to the newly created Business case with a banner that says that your Business case is computing. The computation might take some time, depending on the number of servers and workloads in the project. You can come back to the Business case page after ~30 minutes and select **Refresh**.
+1. You are directed to the newly created Business case with a banner that says that your Business case is computing. The computation might take some time, depending on the number of servers and workloads in the project. You can come back to the Business case page after ~30 minutes and select **Refresh**.
:::image type="content" source="./media/how-to-build-a-business-case/refresh-inline.png" alt-text="Screenshot of the refresh button to refresh the Business case." lightbox="./media/how-to-build-a-business-case/refresh-expanded.png":::
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
# View a business case (preview)
-This article describes how to review the business case reports for on-premises servers and workloads in your VMware environment with Azure Migrate: Discovery and assessment tool.
+This article describes how to review the business case reports for on-premises servers and workloads in your datacenter with Azure Migrate: Discovery and assessment tool.
[Azure Migrate](migrate-services-overview.md) helps you to plan and execute migration and modernization projects to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party Independent Software Vendor (ISV) offerings.
This card covers your potential total cost of ownership savings based on the cho
It covers the cost of running all the servers scoped in the business case using some of the industry benchmarks. It doesn't cover Facilities (lease/colocation/power) cost by default, but you can edit it in the on-premises cost assumptions section. It includes one time cost for some of the capital expenditures like hardware acquisition etc., and annual cost for other components that you might pay as operating expenses like maintenance etc. ### Estimated Azure cost
-It covers the cost of all servers and workloads that have been identified as ready for migration/modernization as per the recommendation. Refer the respective *Azure IaaS* and *Azure PaaS* report for details. The Azure cost is calculated based on the right sized Azure configuration, ideal migration target and most suitable pricing offers for your workloads. You can override the migration strategy, target location or other settings in the 'Azure cost' assumptions to see how your savings could change by migrating to Azure.
+It covers the cost of all servers and workloads that have been identified as ready for migration/modernization as per the recommendation. Refer to the respective *Azure IaaS* and *Azure PaaS* report for details. The Azure cost is calculated based on the right sized Azure configuration, ideal migration target and most suitable pricing offers for your workloads. You can override the migration strategy, target location or other settings in the 'Azure cost' assumptions to see how your savings could change by migrating to Azure.
### YoY estimated current vs future state cost As you plan to migrate to Azure in phases, this line chart shows your cashflow per year based on the estimated migration completed that year. By default, it's assumed that you'll migrate 0% in the current year, 20% in Year 1, 50% in Year 2, and 100% in Year 3.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
This article describes how to prepare on-premises machines before you migrate th
In this article, you: > [!div class="checklist"] > * Review migration limitations.
-> * Select a method for migrating VMware vSphere VMs
+> * Select a method for migrating VMware vSphere VMs.
> * Check hypervisor and operating system requirements for machines you want to migrate. > * Review URL and port access for machines you want to migrate. > * Review changes you might need to make before you begin migration.
-> * Check Azure VMs requirements for migrated machines
+> * Check Azure VMs requirements for migrated machines.
> * Prepare machines so you can connect to the Azure VMs after migration.
The table summarizes discovery, assessment, and migration limits for Azure Migra
## Select a VMware vSphere migration method
-If you're migrating VMware vSphere VMs to Azure, [compare](server-migrate-overview.md#compare-migration-methods) the agentless and agent-based migration methods, to decide what works for you.
+If you're migrating VMware vSphere VMs to Azure, [compare](server-migrate-overview.md#compare-migration-methods) the agentless and agent-based migration methods to decide what works best for you.
## Verify hypervisor requirements
migrate Replicate Using Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/replicate-using-expressroute.md
To manually create a private DNS zone:
1. On the **Create private DNS zone** page, fill in the required details. Enter the name of the private DNS zone as **_privatelink_.blob.core.windows.net**. 1. On the **Review + create** tab, review and create the DNS zone.
-1. Link the private DNS zone to your virtual network.
-
- The private DNS zone you created must be linked to the virtual network that the private endpoint is attached to.
+1. Link the private DNS zone to your virtual network.
+ The private DNS zone you created must be linked to the virtual network that the private endpoint is attached to.
1. Go to the private DNS zone created in the previous step, and go to virtual network links on the left side of the page. Select **+ Add**. 1. Fill in the required details. The **Subscription** and **Virtual network** fields must be filled with the corresponding details of the virtual network where your private endpoint is attached. The other fields can be left as is.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 04/13/2023 Last updated : 04/24/2023
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (April 2023)
+- Build business case using Azure Migrate for:
+  - Servers and workloads running in your Microsoft Hyper-V and physical/bare-metal environments, as well as IaaS services of other public clouds.
+ - SQL Server Always On Failover Cluster Instances and Always On Availability Groups. [Learn more](how-to-discover-applications.md).
+ ## Update (March 2023) - Support for discovery and assessment of web apps for Azure app service for Hyper-V and Physical servers. [Learn more](how-to-create-azure-app-service-assessment.md).
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for MySQL - Flexible Server
-description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL - Flexible Server.
+description: This article provides guidelines for configuring server parameters in Azure Database for MySQL - Flexible Server.
Previously updated : 05/24/2022 Last updated : 04/26/2023 # Server parameters in Azure Database for MySQL - Flexible Server
Azure Database for MySQL - Flexible Server exposes the ability to change the val
## Configurable server parameters
-You can manage Azure Database for MySQL - Flexible Server configuration using server parameters. The server parameters are configured with the default and recommended value when you create the server. The server parameter blade on Azure portal shows both the modifiable and non-modifiable server parameters. The non-modifiable server parameters are greyed out.
+You can manage Azure Database for MySQL - Flexible Server configuration using server parameters. The server parameters are configured with the default and recommended value when you create the server. The server parameter blade on Azure portal shows both the modifiable and nonmodifiable server parameters. The nonmodifiable server parameters are greyed out.
The list of supported server parameters is constantly growing. Use the server parameters tab in the Azure portal to view the full list and configure server parameters values.
-Refer to the following sections below to learn more about the limits of the several commonly updated server parameters. The limits are determined by the compute tier and size (vCores) of the server.
+Refer to the following sections to learn more about the limits of several commonly updated server parameters. The limits are determined by the compute tier and size (vCores) of the server.
> [!NOTE]
->* If you are looking to modify a server parameter which are static using the portal, it will request you to restart the server for the changes to take effect. In case you are using automation scripts (using tools like ARM templates , Terraform, Azure CLI etc) then your script should have a provision to restart the service for the settings to take effect even if you are changing the configurations as a part of create experience.
->* If you are looking to modify a server parameter which is non-modifiable but you would like to see as a modifiable for your environment, please open a [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0) item or vote if the feedback already exist which can help us prioritize.
+>
+>* If you modify a static server parameter using the portal, you need to restart the server for the changes to take effect. If you're using automation scripts (tools such as ARM templates, Terraform, or the Azure CLI), your script should include a provision to restart the service for the settings to take effect, even if you're changing the configurations as part of the create experience.
+>* If you want to modify a nonmodifiable server parameter for your environment, open a [UserVoice](https://feedback.azure.com/d365community/forum/47b1e71d-ee24-ec11-b6e6-000d3a4f0da0) item, or vote for the feedback if it already exists, which helps us prioritize.
### log_bin_trust_function_creators In Azure Database for MySQL - Flexible Server, binary logs are always enabled (that is, `log_bin` is set to ON). log_bin_trust_function_creators is set to ON by default in flexible servers.
-The binary logging format is always **ROW** and all connections to the server **ALWAYS** use row-based binary logging. With row-based binary logging, security issues do not exist and binary logging cannot break, so you can safely allow [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to remain **ON**.
+The binary logging format is always **ROW** and all connections to the server **ALWAYS** use row-based binary logging. With row-based binary logging, security issues don't exist and binary logging can't break, so you can safely allow [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to remain **ON**.
If `log_bin_trust_function_creators` is set to OFF and you try to create triggers, you may get errors similar to *you do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
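
For example, with the parameter left ON, a trigger like the following can be created without the SUPER privilege. This is a minimal sketch that assumes a table `t1` with an `updated_at` column already exists:

```sql
-- Minimal illustration: assumes a table t1 with an updated_at TIMESTAMP column already exists.
CREATE TRIGGER trg_t1_before_update
BEFORE UPDATE ON t1
FOR EACH ROW
SET NEW.updated_at = NOW();
```
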
Azure Database for MySQL - Flexible Server supports at largest, **4 TB**, in a s
### innodb_log_file_size
-[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files [(innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) cannot exceed a maximum value that is slightly less than 512GB). A bigger log file size is better for performance, but it has a drawback that the recovery time after a crash will be high. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. These can also result in longer restart times. You can configure innodb_log_size to any of these values - 256MB, 512MB, 1GB or 2GB for Azure Database for MySQL - Flexible Server. The parameter is static and requires a restart.
+[innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) is the size in bytes of each [log file](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_file) in a [log group](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_log_group). The combined size of log files ([innodb_log_file_size](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_file_size) * [innodb_log_files_in_group](https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_log_files_in_group)) can't exceed a maximum value that is slightly less than 512 GB. A bigger log file size is better for performance, but the drawback is a longer recovery time after a crash. You need to balance recovery time in the rare event of a crash recovery versus maximizing throughput during peak operations. Larger log files can also result in longer restart times. You can configure innodb_log_file_size to any of these values: 256 MB, 512 MB, 1 GB, or 2 GB for Azure Database for MySQL - Flexible Server. The parameter is static and requires a restart.
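
To confirm the current setting, and to watch dirty pages before a restart as the note that follows suggests, you can run, for example:

```sql
-- Check the current redo log file size (in bytes) and the number of log files.
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';

-- Before a restart, confirm that dirty pages have been flushed.
SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages_dirty';
```
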
> [!NOTE]
> If you have changed the parameter innodb_log_file_size from default, check if the value of "show global status like 'innodb_buffer_pool_pages_dirty'" stays at 0 for 30 seconds to avoid restart delay.

### max_connections
-The value of max_connection is determined by the memory size of the server.
+The value of `max_connection` is determined by the memory size of the server.
|**Pricing Tier**|**vCore(s)**|**Memory Size (GiB)**|**Default value**|**Min value**|**Max value**| |||||||
When connections exceed the limit, you may receive the following error:
> [!IMPORTANT] >For best experience, we recommend that you use a connection pooler like ProxySQL to efficiently manage connections.
-Creating new client connections to MySQL takes time and once established, these connections occupy database resources, even when idle. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn about setting up ProxySQL, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042).
+Creating new client connections to MySQL takes time and once established, these connections occupy database resources, even when idle. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections helps to avoid this. To learn about setting up ProxySQL, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/load-balance-read-replicas-using-proxysql-in-azure-database-for/ba-p/880042).
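
Before tuning `max_connections` or adding a pooler, it can help to compare current connection usage against the configured limit; for example:

```sql
-- Current configured limit and the number of open connections.
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';

-- Highest number of simultaneous connections since the server started.
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
```
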
>[!Note]
>ProxySQL is an open source community tool. It is supported by Microsoft on a best effort basis. In order to get production support with authoritative guidance, you can evaluate and reach out to [ProxySQL Product support](https://proxysql.com/services/support/).

### innodb_strict_mode
-If you receive an error similar to "Row size too large (> 8126)", you may want to turn OFF the parameter **innodb_strict_mode**. The server parameter **innodb_strict_mode** is not allowed to be modified globally at the server level because if row data size is larger than 8k, the data will be truncated without an error, which can lead to potential data loss. We recommend modifying the schema to fit the page size limit.
+If you receive an error similar to "Row size too large (> 8126)", you may want to turn OFF the parameter **innodb_strict_mode**. The server parameter **innodb_strict_mode** isn't allowed to be modified globally at the server level because if row data size is larger than 8k, the data is truncated without an error, which can lead to potential data loss. We recommend modifying the schema to fit the page size limit.
This parameter can be set at a session level using `init_connect`. To set **innodb_strict_mode** at session level, refer to [setting parameter not listed](./how-to-configure-server-parameters-portal.md#setting-non-modifiable-server-parameters).
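
As a sketch of what that looks like, the value you assign to the `init_connect` server parameter (through the portal or CLI, per the linked article) is itself a short SQL statement that runs whenever a new connection opens, for example:

```sql
-- Example value for the init_connect server parameter; it runs for each new connection.
SET innodb_strict_mode = OFF;
```
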
This parameter can be set at a session level using `init_connect`. To set **inno
### time_zone
-Upon initial deployment, an Azure for MySQL Flexible Server includes system tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](./how-to-configure-server-parameters-portal.md#working-with-the-time-zone-parameter) or [Azure CLI](./how-to-configure-server-parameters-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
+Upon initial deployment, an Azure for MySQL Flexible Server includes system tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](./how-to-configure-server-parameters-portal.md#working-with-the-time-zone-parameter) or [Azure CLI](./how-to-configure-server-parameters-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
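
For example, after populating the time zone tables with the stored procedure named above, you can set a session-level time zone ('US/Pacific' is just an example value):

```sql
-- Populate the time zone tables (run once; mysql.az_load_timezone is referenced above).
CALL mysql.az_load_timezone();

-- Set the time zone for the current session and confirm the effect.
SET time_zone = 'US/Pacific';
SELECT NOW();
```
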
-### binlog_expire_logs_seconds
+### binlog_expire_logs_seconds
In Azure Database for MySQL this parameter specifies the number of seconds the service waits before purging the binary log file.
-The binary log contains ΓÇ£eventsΓÇ¥ that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log are used mainly for two purposes , replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup or the replica set. In case of multiple replica, it would wait for the slowest replica to read the changes before it is been purged. If you want to persist binary logs for a more duration of time you can configure the parameter binlog_expire_logs_seconds. If the binlog_expire_logs_seconds is set to 0, which is the default value, it will purge as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, then it would wait until the seconds configured before it purges. For Azure database for MySQL, managed features like backup and read replica purging of binary files are handled internally . When you replicate the data-out from the Azure Database for MySQL service, this parameter needs to be set in primary to avoid purging of binary logs before the replica reads from the changes from the primary. If you set the binlog_expire_logs_seconds to a higher value, then the binary logs will not get purged soon enough and can lead to increase in the storage billing.
+The binary log contains "events" that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes. The binary log is used mainly for two purposes: replication and data recovery operations. Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want to persist binary logs for a longer duration of time, you can configure the parameter binlog_expire_logs_seconds. If binlog_expire_logs_seconds is set to 0, which is the default value, the log is purged as soon as the handle to the binary log is freed. If binlog_expire_logs_seconds > 0, the log is purged only after the configured number of seconds. For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data out from the Azure Database for MySQL service, this parameter needs to be set on the primary to avoid purging of binary logs before the replica reads the changes from the primary. If you set binlog_expire_logs_seconds to a higher value, the binary logs won't get purged soon enough and can lead to an increase in storage billing.
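
To check how the parameter is currently set and how much binary log space the server is holding, you can run, for example:

```sql
-- Current retention setting (0 means logs are purged as soon as the handle is freed).
SHOW GLOBAL VARIABLES LIKE 'binlog_expire_logs_seconds';

-- List the binary log files and their sizes (requires the REPLICATION CLIENT privilege).
SHOW BINARY LOGS;
```
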
+
+### event_scheduler
+
+In Azure Database for MySQL, the `event_scheduler` server parameter manages the creation, scheduling, and execution of events, that is, tasks that run according to a schedule and are executed by a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of SHOW PROCESSLIST. You can create and schedule events using the following SQL syntax:
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE EVERY _ MINUTE / HOUR / DAY
+STARTS TIMESTAMP / CURRENT_TIMESTAMP
+ENDS TIMESTAMP / CURRENT_TIMESTAMP + INTERVAL 1 MINUTE / HOUR / DAY
+COMMENT '<comment>'
+DO
+<your statement>;
+```
+
+> [!NOTE]
+> For more information about creating an event, see the MySQL Event Scheduler documentation here:
+>
+> * [MySQL :: MySQL 5.7 Reference Manual :: 23.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/5.7/en/event-scheduler.html)
+> * [MySQL :: MySQL 8.0 Reference Manual :: 25.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/8.0/en/event-scheduler.html)
+>
+
+#### Configuring the event_scheduler server parameter
+
+The following scenario illustrates one way to use the `event_scheduler` parameter in Azure Database for MySQL. To demonstrate the scenario, consider the following simple table:
+
+```sql
+mysql> describe tab1;
++-----------+-------------+------+-----+---------+----------------+
+| Field     | Type        | Null | Key | Default | Extra          |
++-----------+-------------+------+-----+---------+----------------+
+| id        | int(11)     | NO   | PRI | NULL    | auto_increment |
+| CreatedAt | timestamp   | YES  |     | NULL    |                |
+| CreatedBy | varchar(16) | YES  |     | NULL    |                |
++-----------+-------------+------+-----+---------+----------------+
+3 rows in set (0.23 sec)
+```
+
+To configure the `event_scheduler` server parameter in Azure Database for MySQL, perform the following steps:
+
+1. In the Azure portal, navigate to your server, and then, under **Settings**, select **Server parameters**.
+2. On the **Server parameters** blade, search for `event_scheduler`, in the **VALUE** drop-down list, select **ON**, and then select **Save**.
+
+ > [!NOTE]
+ > The dynamic server parameter configuration change will be deployed without a restart.
+
+3. To create an event, connect to the MySQL server and run the following SQL command:
+
+ ```sql
+ CREATE EVENT test_event_01
+ ON SCHEDULE EVERY 1 MINUTE
+ STARTS CURRENT_TIMESTAMP
+ ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR
+    COMMENT 'Inserting record into the table tab1 with current timestamp'
+ DO
+ INSERT INTO tab1(id,createdAt,createdBy)
+    VALUES(NULL,NOW(),CURRENT_USER());
+ ```
+
+4. To view the event scheduler details, run the following SQL statement:
+
+    ```sql
+ SHOW EVENTS;
+ ```
+
+    The output lists the scheduled event along with its database, definer, schedule, and status.
+
+#### Limitations
+
+For servers with High Availability configured, when failover occurs, it's possible that the `event_scheduler` server parameter is set to 'OFF'. If this occurs, after the failover is complete, set the parameter value back to 'ON'.
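
After a failover, you can quickly confirm the current state before turning the parameter back on from the **Server parameters** blade; for example:

```sql
-- Check whether the event scheduler is currently running.
SHOW GLOBAL VARIABLES LIKE 'event_scheduler';

-- When it's ON, the scheduler also appears as a daemon thread in the process list.
SHOW PROCESSLIST;
```
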
-## Non-modifiable server parameters
+## Nonmodifiable server parameters
-The server parameter blade on the Azure portal shows both the modifiable and non-modifiable server parameters. The non-modifiable server parameters are greyed out. If you want to configure a non-modifiable server parameter at session level, refer to the [Azure portal](./how-to-configure-server-parameters-portal.md#setting-non-modifiable-server-parameters) or [Azure CLI](./how-to-configure-server-parameters-cli.md#setting-non-modifiable-server-parameters) article for setting the parameter at the connection level using `init_connect`.
+The server parameter blade on the Azure portal shows both the modifiable and nonmodifiable server parameters. The nonmodifiable server parameters are greyed out. If you want to configure a nonmodifiable server parameter at session level, refer to the [Azure portal](./how-to-configure-server-parameters-portal.md#setting-non-modifiable-server-parameters) or [Azure CLI](./how-to-configure-server-parameters-cli.md#setting-non-modifiable-server-parameters) article for setting the parameter at the connection level using `init_connect`.
## Next steps -- How to configure [server parameters in Azure portal](./how-to-configure-server-parameters-portal.md)-- How to configure [server parameters in Azure CLI](./how-to-configure-server-parameters-cli.md)
+* How to configure [server parameters in Azure portal](./how-to-configure-server-parameters-portal.md)
+* How to configure [server parameters in Azure CLI](./how-to-configure-server-parameters-cli.md)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
Previously updated : 06/20/2022 Last updated : 04/26/2023 # Server parameters in Azure Database for MySQL
Last updated 06/20/2022
This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL.
-## What are server parameters?
+## What are server parameters?
The MySQL engine provides many different server variables and parameters that you use to configure and tune engine behavior. Some parameters can be set dynamically during runtime, while others are static, and require a server restart in order to apply.
Refer to the following sections to learn more about the limits of several common
### Thread pools
-MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there is a corresponding drop in performance. Many active threads can affect the performance significantly, due to increased context switching, thread contention, and bad locality for CPU caches.
+MySQL traditionally assigns a thread for every client connection. As the number of concurrent users grows, there's a corresponding drop in performance. Many active threads can affect the performance significantly, due to increased context switching, thread contention, and bad locality for CPU caches.
-*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections won't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads.
+*Thread pools*, a server-side feature and distinct from connection pooling, maximize performance by introducing a dynamic pool of worker threads. You use this feature to limit the number of active threads running on the server and minimize thread churn. This helps ensure that a burst of connections doesn't cause the server to run out of resources or memory. Thread pools are most efficient for short queries and CPU intensive workloads, such as OLTP workloads.
For more information, see [Introducing thread pools in Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/introducing-thread-pools-in-azure-database-for-mysql-service/ba-p/1504173). > [!NOTE]
-> Thread pools aren't supported for MySQL 5.6.
+> Thread pools aren't supported for MySQL 5.6.
### Configure the thread pool
To enable a thread pool, update the `thread_handling` server parameter to `pool-
You can also configure the maximum and minimum number of threads in the pool by setting the following server parameters: -- `thread_pool_max_threads`: This value ensures that there won't be more than this number of threads in the pool.-- `thread_pool_min_threads`: This value sets the number of threads that will be reserved even after connections are closed.
+- `thread_pool_max_threads`: This value limits the number of threads in the pool.
+- `thread_pool_min_threads`: This value sets the number of threads that are reserved, even after connections are closed.
To improve the performance of short queries on the thread pool, you can enable *batch execution*. Instead of returning to the thread pool immediately after running a query, threads stay active for a short time to wait for the next query through this connection. The thread then runs the query rapidly and, when this is complete, the thread waits for the next one. This process continues until the overall time spent exceeds a threshold. You determine the behavior of batch execution by using the following server parameters: -- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.
+- `thread_pool_batch_wait_timeout`: This value specifies the time a thread waits for another query to process.
- `thread_pool_batch_max_time`: This value determines the maximum time a thread will repeat the cycle of query execution and waiting for the next query. > [!IMPORTANT]
-> Don't turn on the thread pool in production until you've tested it.
+> Don't turn on the thread pool in production until you've tested it.
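
Before and after testing, you can confirm how the thread pool is configured; a quick check, for example:

```sql
-- Confirm whether the thread pool is enabled and review its tuning parameters.
SHOW GLOBAL VARIABLES LIKE 'thread_handling';
SHOW GLOBAL VARIABLES LIKE 'thread_pool%';
```
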
### log_bin_trust_function_creators
-In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
+In Azure Database for MySQL, binary logs are always enabled (the `log_bin` parameter is set to `ON`). If you want to use triggers, you get an error similar to the following: *You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe `log_bin_trust_function_creators` variable)*.
The binary logging format is always **ROW**, and all connections to the server *always* use row-based binary logging. Row-based binary logging helps maintain security, and binary logging can't break, so you can safely set [`log_bin_trust_function_creators`](https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_log_bin_trust_function_creators) to `TRUE`.
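
For example, you can confirm the binary logging configuration that this guidance describes before relying on it:

```sql
-- Confirm that binary logging is on, row-based, and that the parameter is set as expected.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'log_bin_trust_function_creators');
```
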
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/server-
### innodb_strict_mode
-If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit.
+If you receive an error similar to `Row size too large (> 8126)`, consider turning off the `innodb_strict_mode` parameter. You can't modify `innodb_strict_mode` globally at the server level. If row data size is larger than 8K, the data is truncated, without an error notification, leading to potential data loss. It's a good idea to modify the schema to fit the page size limit.
You can set this parameter at a session level, by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameter not listed](./how-to-server-parameters.md#setting-parameters-not-listed).
After you restart Azure Database for MySQL, the data pages that reside in the di
You can use `InnoDB` buffer pool warmup to shorten the warmup period. This process reloads disk pages that were in the buffer pool *before* the restart, rather than waiting for DML or SELECT operations to access corresponding rows. For more information, see [InnoDB buffer pool server parameters](https://dev.mysql.com/doc/refman/8.0/en/innodb-preload-buffer-pool.html).
-Note that improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart time is expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB).
+However, improved performance comes at the expense of longer start-up time for the server. When you enable this parameter, the server startup and restart times are expected to increase, depending on the IOPS provisioned on the server. It's a good idea to test and monitor the restart time, to ensure that the start-up or restart performance is acceptable, because the server is unavailable during that time. Don't use this parameter when the number of IOPS provisioned is less than 1000 IOPS (in other words, when the storage provisioned is less than 335 GB).
To save the state of the buffer pool at server shutdown, set the server parameter `innodb_buffer_pool_dump_at_shutdown` to `ON`. Similarly, set the server parameter `innodb_buffer_pool_load_at_startup` to `ON` to restore the buffer pool state at server startup. You can control the impact on start-up or restart by lowering and fine-tuning the value of the server parameter `innodb_buffer_pool_dump_pct`. By default, this parameter is set to `25`.
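
For example, you can verify the warmup-related settings and monitor the dump and load progress that the server reports:

```sql
-- Check the buffer pool dump/load settings, including the configured dump percentage.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_dump%';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_load_at_startup';

-- Monitor the status of the most recent dump and load operations.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';
```
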
To save the state of the buffer pool at server shutdown, set the server paramete
Upon initial deployment, a server running Azure Database for MySQL includes systems tables for time zone information, but these tables aren't populated. You can populate the tables by calling the `mysql.az_load_timezone` stored procedure from tools like the MySQL command line or MySQL Workbench. For information about how to call the stored procedures and set the global or session-level time zones, see [Working with the time zone parameter (Azure portal)](how-to-server-parameters.md#working-with-the-time-zone-parameter) or [Working with the time zone parameter (Azure CLI)](how-to-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter).
-### binlog_expire_logs_seconds
+### binlog_expire_logs_seconds
In Azure Database for MySQL, this parameter specifies the number of seconds the service waits before purging the binary log file. The *binary log* contains events that describe database changes, such as table creation operations or changes to table data. It also contains events for statements that can potentially make changes. The binary log is used mainly for two purposes: replication and data recovery operations.
-Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. In case of multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, it purges as soon as the handle to the binary log is freed. If you set `binlog_expire_logs_seconds` to greater than 0, then the binary log only purges after that period of time.
+Usually, the binary logs are purged as soon as the handle is free from service, backup, or the replica set. If there are multiple replicas, the binary logs wait for the slowest replica to read the changes before being purged. If you want binary logs to persist longer, you can configure the parameter `binlog_expire_logs_seconds`. If you set `binlog_expire_logs_seconds` to `0`, which is the default value, the binary log is purged as soon as the handle to it is freed. If you set `binlog_expire_logs_seconds` to a value greater than 0, the binary log is purged only after that period of time.
-For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data out from the Azure Database for MySQL service, you must set this parameter in the primary to avoid purging binary logs before the replica reads from the changes from the primary. If you set the `binlog_expire_logs_seconds` to a higher value, then the binary logs won't get purged soon enough. This can lead to an increase in the storage billing.
+For Azure Database for MySQL, managed features like backup and read replica purging of binary files are handled internally. When you replicate the data out from the Azure Database for MySQL service, you must set this parameter in the primary to avoid purging binary logs before the replica reads the changes from the primary. If you set `binlog_expire_logs_seconds` to a higher value, the binary logs aren't purged soon enough, which can lead to an increase in storage billing.
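+
+As a sanity check after changing the parameter, you can confirm the effective retention value and list the binary log files that are currently retained. A minimal sketch:
+
+```sql
+-- Effective retention period, in seconds (0 means purge as soon as the handle is freed)
+SELECT @@GLOBAL.binlog_expire_logs_seconds;
+
+-- Binary log files currently kept on the server
+SHOW BINARY LOGS;
+```
+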
-## Non-configurable server parameters
+### event_scheduler
+
+In Azure Database for MySQL, the `event_scheduler` server parameter manages creating, scheduling, and running events, that is, tasks that run according to a schedule and are executed by a special event scheduler thread. When the `event_scheduler` parameter is set to ON, the event scheduler thread is listed as a daemon process in the output of `SHOW PROCESSLIST`. You can create and schedule events by using the following SQL syntax:
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE EVERY _ MINUTE / HOUR / DAY
+STARTS TIMESTAMP / CURRENT_TIMESTAMP
+ENDS TIMESTAMP / CURRENT_TIMESTAMP + INTERVAL 1 MINUTE / HOUR / DAY
+COMMENT '<comment>'
+DO
+<your statement>;
+```
+
+> [!NOTE]
+> For more information about creating an event, see the MySQL Event Scheduler documentation here:
+>
+> - [MySQL :: MySQL 5.7 Reference Manual :: 23.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/5.7/en/event-scheduler.html)
+> - [MySQL :: MySQL 8.0 Reference Manual :: 25.4 Using the Event Scheduler](https://dev.mysql.com/doc/refman/8.0/en/event-scheduler.html)
+>
+
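+Before creating events, you can verify that the event scheduler is enabled and that its daemon thread is running. A minimal sketch:
+
+```sql
+-- Check whether the event scheduler is enabled
+SHOW VARIABLES LIKE 'event_scheduler';
+
+-- When enabled, the scheduler shows up as a daemon thread
+SHOW PROCESSLIST;
+```
+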
+#### Configuring the event_scheduler server parameter
+
+The following scenario illustrates one way to use the `event_scheduler` parameter in Azure Database for MySQL. To demonstrate the scenario, consider the following example of a simple table:
+
+```azurecli
+mysql> describe tab1;
++-----------+-------------+------+-----+---------+----------------+
+| Field     | Type        | Null | Key | Default | Extra          |
++-----------+-------------+------+-----+---------+----------------+
+| id        | int(11)     | NO   | PRI | NULL    | auto_increment |
+| CreatedAt | timestamp   | YES  |     | NULL    |                |
+| CreatedBy | varchar(16) | YES  |     | NULL    |                |
++-----------+-------------+------+-----+---------+----------------+
+3 rows in set (0.23 sec)
+```
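+
+If you want to recreate this table to follow along, a statement similar to the following produces the structure shown in the `describe` output (the column definitions are inferred from that output):
+
+```sql
+CREATE TABLE tab1 (
+    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
+    CreatedAt TIMESTAMP NULL,
+    CreatedBy VARCHAR(16) NULL
+);
+```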
+
+To configure the `event_scheduler` server parameter in Azure Database for MySQL, perform the following steps:
+
+1. In the Azure portal, navigate to your server, and then, under **Settings**, select **Server parameters**.
+2. On the **Server parameters** blade, search for `event_scheduler`. In the **VALUE** drop-down list, select **ON**, and then select **Save**.
+
+ > [!NOTE]
+ > The dynamic server parameter configuration change will be deployed without a restart.
+
+3. To create an event, connect to the MySQL server and run the following SQL command:
+
+ ```sql
+ CREATE EVENT test_event_01
+ ON SCHEDULE EVERY 1 MINUTE
+ STARTS CURRENT_TIMESTAMP
+ ENDS CURRENT_TIMESTAMP + INTERVAL 1 HOUR
+   COMMENT 'Inserting record into the table tab1 with current timestamp'
+ DO
+ INSERT INTO tab1(id,createdAt,createdBy)
+   VALUES(NULL, NOW(), CURRENT_USER());
+ ```
+
+4. To view the event scheduler details, run the following SQL statement:
+
+   ```sql
+ SHOW EVENTS;
+ ```
+
+ The following output appears:
+
+ ```azurecli
+ mysql> show events;
+   +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+   | Db  | Name          | Definer     | Time zone | Type      | Execute at | Interval value | Interval field | Starts              | Ends                | Status  | Originator | character_set_client | collation_connection | Database Collation |
+   +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+   | db1 | test_event_01 | azureuser@% | SYSTEM    | RECURRING | NULL       | 1              | MINUTE         | 2023-04-05 14:47:04 | 2023-04-05 15:47:04 | ENABLED | 3221153808 | latin1               | latin1_swedish_ci    | latin1_swedish_ci  |
+   +-----+---------------+-------------+-----------+-----------+------------+----------------+----------------+---------------------+---------------------+---------+------------+----------------------+----------------------+--------------------+
+ 1 row in set (0.23 sec)
+ ```
+
+5. After a few minutes, query the rows from the table to view the rows inserted every minute by the event you created:
+
+ ```azurecli
+ mysql> select * from tab1;
+   +----+---------------------+-------------+
+   | id | CreatedAt           | CreatedBy   |
+   +----+---------------------+-------------+
+   |  1 | 2023-04-05 14:47:04 | azureuser@% |
+   |  2 | 2023-04-05 14:48:04 | azureuser@% |
+   |  3 | 2023-04-05 14:49:04 | azureuser@% |
+   |  4 | 2023-04-05 14:50:04 | azureuser@% |
+   +----+---------------------+-------------+
+ 4 rows in set (0.23 sec)
+ ```
+
+6. After an hour, run a SELECT statement on the table to view the complete result: one row inserted every minute for an hour, as defined by the event.
+
+ ```azurecli
+ mysql> select * from tab1;
+   +----+---------------------+-------------+
+   | id | CreatedAt           | CreatedBy   |
+   +----+---------------------+-------------+
+   |  1 | 2023-04-05 14:47:04 | azureuser@% |
+   |  2 | 2023-04-05 14:48:04 | azureuser@% |
+   |  3 | 2023-04-05 14:49:04 | azureuser@% |
+   |  4 | 2023-04-05 14:50:04 | azureuser@% |
+   |  5 | 2023-04-05 14:51:04 | azureuser@% |
+   |  6 | 2023-04-05 14:52:04 | azureuser@% |
+   ..< 50 lines trimmed to compact output >..
+   | 56 | 2023-04-05 15:42:04 | azureuser@% |
+   | 57 | 2023-04-05 15:43:04 | azureuser@% |
+   | 58 | 2023-04-05 15:44:04 | azureuser@% |
+   | 59 | 2023-04-05 15:45:04 | azureuser@% |
+   | 60 | 2023-04-05 15:46:04 | azureuser@% |
+   | 61 | 2023-04-05 15:47:04 | azureuser@% |
+   +----+---------------------+-------------+
+ 61 rows in set (0.23 sec)
+ ```
+
+#### Other scenarios
+
+You can set up an event based on the requirements of your specific scenario. The following examples show how to schedule SQL statements to run at different time intervals.
+
+**Run a SQL statement now and repeat one time per day with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 DAY
+STARTS (TIMESTAMP(CURRENT_DATE) + INTERVAL 1 DAY + INTERVAL 1 HOUR)
+COMMENT 'Comment'
+DO
+<your statement>;
+```
+
+**Run a SQL statement every hour with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 HOUR
+COMMENT 'Comment'
+DO
+<your statement>;
+```
+
+**Run a SQL statement every day with no end**
+
+```sql
+CREATE EVENT <event name>
+ON SCHEDULE
+EVERY 1 DAY
+STARTS str_to_date( date_format(now(), '%Y%m%d 0200'), '%Y%m%d %H%i' ) + INTERVAL 1 DAY
+COMMENT 'Comment'
+DO
+<your statement>;
+```
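+
+When a scheduled event is no longer needed, you can disable it or remove it entirely. A minimal sketch:
+
+```sql
+-- Temporarily stop an event from running
+ALTER EVENT <event name> DISABLE;
+
+-- Remove the event definition
+DROP EVENT IF EXISTS <event name>;
+```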
+
+## Nonconfigurable server parameters
The following server parameters aren't configurable in the service:
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Get the connection information needed to connect to the Azure Database for MySQL
1. Paste the JavaScript code into new text files, and then save them in a project folder with the file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js). 1. Replace the `host`, `user`, `password`, and `database` config options in the code with the values that you specified when you created the server and database.
-1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive.
+1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and save the certificate file to your local drive.
- **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
+ **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to DigiCertGlobalRootCA.crt.pem.
See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt). 1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file.
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
}; const conn = new mysql.createConnection(config);
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
}; const conn = new mysql.createConnection(config);
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
}; const conn = new mysql.createConnection(config);
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
}; const conn = new mysql.createConnection(config);
network-watcher Network Watcher Network Configuration Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-network-configuration-diagnostics-overview.md
Title: Introduction to NSG Diagnostics in Azure Network Watcher
-description: Learn about Network Security Group (NSG) Diagnostics tool in Azure Network Watcher
+ Title: NSG diagnostics
+
+description: Learn about the NSG diagnostics tool in Azure Network Watcher.
- Previously updated : 01/20/2023 Last updated : 04/24/2023
-# Introduction to NSG Diagnostics in Azure Network Watcher
+# Azure Network Watcher NSG diagnostics
-The Network Security Group (NSG) Diagnostics is an Azure Network Watcher tool that helps you understand which network traffic is allowed or denied in your Azure Virtual Network along with detailed information for debugging. It can help you in understanding if your NSG rules are configured correctly.
+NSG diagnostics is an Azure Network Watcher tool that helps you understand which network traffic is allowed or denied in your Azure virtual network, along with detailed information for debugging. NSG diagnostics can help you verify that your network security group rules are set up properly.
> [!NOTE]
-> To use NSG Diagnostics, Network Watcher must be enabled in your subscription. See [Create an Azure Network Watcher instance](./network-watcher-create.md) to enable.
+> To use NSG diagnostics, Network Watcher must be enabled in your subscription. For more information, see [Network Watcher is automatically enabled](./network-watcher-create.md#network-watcher-is-automatically-enabled).
## Background -- Your resources in Azure are connected via [virtual networks (VNets)](../virtual-network/virtual-networks-overview.md) and subnets. The security of these VNets and subnets can be managed using [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md).-- An NSG contains a list of [security rules](../virtual-network/network-security-groups-overview.md#security-rules) that allow or deny network traffic to resources it's connected to. An NSG can be associated to a virtual network subnet or individual network interface (NIC) attached to a virtual machine (VM). -- All traffic flows in your network are evaluated using the rules in the applicable NSG.
+- Your resources in Azure are connected via [virtual networks (VNets)](../virtual-network/virtual-networks-overview.md) and subnets. The security of these virtual networks and subnets can be managed using [network security groups](../virtual-network/network-security-groups-overview.md).
+- A network security group contains a list of [security rules](../virtual-network/network-security-groups-overview.md#security-rules) that allow or deny network traffic to resources it's connected to. A network security group can be associated to a virtual network subnet or individual network interface (NIC) attached to a virtual machine (VM).
+- All traffic flows in your network are evaluated using the rules in the applicable network security group.
- Rules are evaluated based on priority number from lowest to highest.
-## How does NSG Diagnostics work?
+## How does NSG diagnostics work?
-For a given flow, after you provide details like source and destination, the NSG Diagnostics tool runs a simulation of the flow and returns whether the flow would be allowed or denied with detailed information about the security rule allowing or denying the flow.
+The NSG diagnostics tool can simulate a given flow based on the source and destination you provide. It returns whether the flow is allowed or denied, along with detailed information about the security rule that allows or denies the flow.
## Next steps
-Use NSG Diagnostics using [REST API](/rest/api/network-watcher/networkwatchers/getnetworkconfigurationdiagnostic), [PowerShell](/powershell/module/az.network/invoke-aznetworkwatchernetworkconfigurationdiagnostic), and [Azure CLI](/cli/azure/network/watcher#az-network-watcher-run-configuration-diagnostic).
+Run NSG diagnostics using [PowerShell](/powershell/module/az.network/invoke-aznetworkwatchernetworkconfigurationdiagnostic), [Azure CLI](/cli/azure/network/watcher#az-network-watcher-run-configuration-diagnostic), or [REST API](/rest/api/network-watcher/networkwatchers/getnetworkconfigurationdiagnostic).
postgresql Quickstart Create Server Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md
+
+ Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - Azure libraries (SDK) for Python'
+description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Azure libraries (SDK) for Python.
++++++ Last updated : 04/24/2023++
+# Quickstart: Use the Azure libraries (SDK) for Python to create an Azure Database for PostgreSQL - Flexible Server
++
+In this quickstart, you'll learn how to use the [Azure libraries (SDK) for Python](/azure/developer/python/sdk/azure-sdk-overview?view=azure-python&preserve-view=true)
+to create an Azure Database for PostgreSQL - Flexible Server.
+
+Flexible Server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use the Python SDK to provision a PostgreSQL Flexible Server, multiple servers, or multiple databases on a server.
++
+## Prerequisites
+
+An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Create the server
+
+First, install the required packages.
+
+```bash
+pip install azure-mgmt-resource
+pip install azure-identity
+pip install azure-mgmt-rdbms
+```
+
+Create a `create_postgres_flexible_server.py` file and include the following code.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.rdbms.postgresql_flexibleservers import PostgreSQLManagementClient
+from azure.mgmt.rdbms.postgresql_flexibleservers.models import Server, Sku, Storage
++
+def create_postgres_flexible_server(subscription_id, resource_group, server_name, location):
+ # Authenticate with your Azure account
+ credential = DefaultAzureCredential()
+
+ # Create resource management client and PostgreSQL management client
+ resource_client = ResourceManagementClient(credential, subscription_id)
+ postgres_client = PostgreSQLManagementClient(credential, subscription_id)
+
+ # Create resource group
+ resource_client.resource_groups.create_or_update(
+ resource_group,
+ {
+ 'location': location
+ }
+ )
+
+ # Create PostgreSQL Flexible Server
+ server_params = Server(
+ sku=Sku(name='Standard_D4s_v3', tier='GeneralPurpose'),
+ administrator_login='pgadmin',
+ administrator_login_password='<mySecurePassword>',
+ storage=Storage(storage_size_gb=32),
+ version="14",
+ create_mode="Create"
+ )
+
+ postgres_client.servers.begin_create(
+ resource_group,
+ server_name,
+ server_params
+ ).result()
+
+ print(f"PostgreSQL Flexible Server '{server_name}' created in resource group '{resource_group}'")
++
+if __name__ == '__main__':
+ subscription_id = '<subscription_id>'
+ resource_group = '<resource_group>'
+ server_name = '<servername>'
+ location = 'eastus'
+
+ create_postgres_flexible_server(subscription_id, resource_group, server_name, location)
+
+```
+
+Replace the following parameters with your data:
+
+- **subscription_id**: Your own [subscription ID](../../azure-portal/get-subscription-tenant-id.md#find-your-azure-subscription).
+- **resource_group**: The name of the resource group you want to use. The script will create a new resource group if it doesn't exist.
+- **server_name**: A unique name that identifies your Azure Database for PostgreSQL server. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name must be at least 3 characters and at most 63 characters, and can only contain lowercase letters, numbers, and hyphens.
+- **administrator_login**: The primary administrator username for the server. You can create additional users after the server has been created.
+- **administrator_login_password**: A password for the primary administrator for the server. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
+
+You can also customize other parameters like location, storage size, engine version, etc.
++
+> [!NOTE]
+> The `DefaultAzureCredential` class tries to authenticate using various methods, such as environment variables, managed identities, or the Azure CLI.
+> Make sure you have one of these methods set up. You can find more information on authentication in the [Azure SDK documentation](/python/api/overview/azure/identity-readme?view=azure-python#defaultazurecredential&preserve-view=true).
+
+## Review deployed resources
+
+You can use the Python SDK, Azure portal, Azure CLI, Azure PowerShell, and various other tools to validate the deployment and review the deployed resources. Some examples are provided below.
++
+# [Python SDK](#tab/PythonSDK)
+Add the `check_server_created` function to your existing script to use the servers attribute of the [`PostgreSQLManagementClient`](/python/api/azure-mgmt-rdbms/azure.mgmt.rdbms.postgresql_flexibleservers.postgresqlmanagementclient?view=azure-python&preserve-view=true) instance to check if the PostgreSQL Flexible Server was created:
+
+```python
+def check_server_created(subscription_id, resource_group, server_name):
+ # Authenticate with your Azure account
+ credential = DefaultAzureCredential()
+
+ # Create PostgreSQL management client
+ postgres_client = PostgreSQLManagementClient(credential, subscription_id)
+
+ try:
+ server = postgres_client.servers.get(resource_group, server_name)
+ if server:
+ print(f"Server '{server_name}' exists in resource group '{resource_group}'.")
+ print(f"Server state: {server.state}")
+ else:
+ print(f"Server '{server_name}' not found in resource group '{resource_group}'.")
+ except Exception as e:
+ print(f"Error occurred: {e}")
+ print(f"Server '{server_name}' not found in resource group '{resource_group}'.")
+```
+
+Call it with the appropriate parameters.
+
+```python
+ check_server_created(subscription_id, resource_group, server_name)
+```
+
+> [!NOTE]
+> The `check_server_created` function will return the server state as soon as the server is provisioned. However, it might take a few minutes for the server to become fully available. Ensure that you wait for the server to be in the Ready state before connecting to it.
++
+# [CLI](#tab/CLI)
+
+```azurecli
+az resource list --resource-group <resource_group>
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName <resource_group>
+```
+++
+## Clean up resources
+
+If you no longer need the PostgreSQL Flexible Server, you can delete it and the associated resource group using the following methods.
+
+# [Python SDK](#tab/PythonSDK)
+Add the `delete_resources` function to your existing script to delete your Postgres server and the associated resource group that was created in this quickstart.
+
+```python
+def delete_resources(subscription_id, resource_group, server_name):
+ # Authenticate with your Azure account
+ credential = DefaultAzureCredential()
+
+ # Create resource management client and PostgreSQL management client
+ resource_client = ResourceManagementClient(credential, subscription_id)
+ postgres_client = PostgreSQLManagementClient(credential, subscription_id)
+
+ # Delete PostgreSQL Flexible Server
+ postgres_client.servers.begin_delete(resource_group, server_name).result()
+ print(f"Deleted PostgreSQL Flexible Server '{server_name}' in resource group '{resource_group}'.")
+
+ # Delete resource group
+ resource_client.resource_groups.begin_delete(resource_group).result()
+ print(f"Deleted resource group '{resource_group}'.")
+
+# Call the delete_resources function
+delete_resources(subscription_id, resource_group, server_name)
+```
++
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name <resource_group>
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name <resource_group>
+```
++
resource-mover Support Matrix Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-extension-resource-types.md
Title: Support of Extension resource types in Azure Resource Mover description: Supported Extension resource types -+
# Support for moving extension resource types between Azure regions
-This article summarizes all the [Extension resource types ](/articles/azure-resource-manager/management/extension-resource-types.md)that are currently supported while moving Azure resources across regions using Azure resource mover.
-
+This article summarizes all the [Extension resource types](../azure-resource-manager/management/extension-resource-types.md) that are currently supported while moving Azure resources across regions using Azure Resource Mover.
## Extension resource types supported
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Open PowerShell ISE and copy the following script and update the parameters to m
remove-item .\configureDevOps.ps1 }
- Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/setup_devops.ps1 -OutFile .\configureDevOps.ps1 ; .\configureDevOps.ps1
+ Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsProject.ps1 -OutFile .\New-SDAFDevopsProject.ps1 ; .\New-SDAFDevopsProject.ps1
```
Validate that the project has been created by navigating to the Azure DevOps por
> [!IMPORTANT] > Run the following steps on your local workstation. Also ensure that you have the latest Azure CLI installed by running the `az upgrade` command.
+### Configure Azure DevOps Services artifacts for a new workload zone
+
+You can use the following script to deploy the artifacts needed to support a new workload zone.
++ ### Create a sample Control Plane configuration You can run the 'Create Sample Deployer Configuration' pipeline to create a sample configuration for the Control Plane. When running the pipeline, choose the appropriate Azure region. You can also control whether you want to deploy Azure Firewall and Azure Bastion.
sap Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide.md
Azure Storage has different Storage types available for customers and details fo
### Networking
-SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. So the system is connected to other database servers from where it fetches all the data and provide insight to users. Azure provides a network infrastructure, which allows the mapping of all scenarios that can be realized with SAP BI Platform like connecting to on-premises system, systems in different virtual network and others. For more information check [Microsoft Azure Networking for SAP Workload](planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd).
+SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. The system is connected to other database servers, from which it fetches all the data and provides insights to users. Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized with SAP BI Platform, like connecting to on-premises systems, systems in different virtual networks, and others. For more information, see [Microsoft Azure Networking for SAP Workload](planning-guide.md#azure-networking).
For Database-as-a-Service offering, any newly created database (Azure SQL Database or Azure Database for MySQL) has a firewall that blocks all external connections. To allow access to the DBaaS service from BI Platform virtual machines, you need to specify one or more server-level firewall rules to enable access to your DBaaS server. For more information, see [Firewall rules](../../mysql/concepts-firewall-rules.md) for Azure Database for MySQL and [Network Access Controls](/azure/azure-sql/database/network-access-controls-overview) section for Azure SQL database.
sap Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-guide.md
[dbms-guide-2.1]:dbms-guide-general.md#c7abf1f0-c927-4a7c-9c1d-c7b5b3b7212f (Caching for VMs and VHDs) [dbms-guide-2.2]:dbms-guide-general.md#c8e566f9-21b7-4457-9f7f-126036971a91 (Software RAID) [dbms-guide-2.3]:dbms-guide-general.md#10b041ef-c177-498a-93ed-44b3441ab152 (Microsoft Azure Storage)
-[dbms-guide-2]:dbms-guide-general.md#65fa79d6-a85f-47ee-890b-22e794f51a64 (Structure of a RDBMS deployment)
+[dbms-guide-2]:dbms-guide-general.md#65fa79d6-a85f-47ee-890b-22e794f51a64 (Structure of an RDBMS deployment)
[dbms-guide-3]:dbms-guide-general.md#871dfc27-e509-4222-9370-ab1de77021c3 (High availability and disaster recovery with Azure VMs) [dbms-guide-5.5.1]:dbms-guide-general.md#0fef0e79-d3fe-4ae2-85af-73666a6f7268 (SQL Server 2012 SP1 CU4 and later) [dbms-guide-5.5.2]:dbms-guide-general.md#f9071eff-9d72-4f47-9da4-1852d782087b (SQL Server 2012 SP1 CU3 and earlier releases)
[dbms-guide-5.8]:dbms-guide-general.md#9053f720-6f3b-4483-904d-15dc54141e30 (General SQL Server for SAP on Azure summary) [dbms-guide-5]:dbms-guide-general.md#3264829e-075e-4d25-966e-a49dad878737 (Specifics to SQL Server RDBMS) [dbms-guide-8.4.1]:dbms-guide-general.md#b48cfe3b-48e9-4f5b-a783-1d29155bd573 (Storage configuration)
-[dbms-guide-8.4.2]:dbms-guide-general.md#23c78d3b-ca5a-4e72-8a24-645d141a3f5d (Backup and restore)
[dbms-guide-8.4.3]:dbms-guide-general.md#77cd2fbb-307e-4cbf-a65f-745553f72d2c (Performance considerations for backup and restore) [dbms-guide-8.4.4]:dbms-guide-general.md#f77c1436-9ad8-44fb-a331-8671342de818 (Other) [dbms-guide-900-sap-cache-server-on-premises]:dbms-guide-general.md#642f746c-e4d4-489d-bf63-73e80177a0a8
[deployment-guide-install-vm-agent-windows]:deployment-guide.md#b2db5c9a-a076-42c6-9835-16945868e866 [deployment-guide-troubleshooting-chapter]:deployment-guide.md#564adb4f-5c95-4041-9616-6635e83a810b (Checks and Troubleshooting)
-[deploy-template-cli]:../../../resource-group-template-deploy-cli.md
-[deploy-template-portal]:../../../resource-group-template-deploy-portal.md
-[deploy-template-powershell]:../../../resource-group-template-deploy.md
+[deploy-template-cli]:../../resource-group-template-deploy-cli.md
+[deploy-template-portal]:../../resource-group-template-deploy-portal.md
+[deploy-template-powershell]:../../resource-group-template-deploy.md
[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
[msdn-set-Azvmaemextension]:/powershell/module/az.compute/set-azvmaemextension
-[planning-guide]:planning-guide.md (Azure Virtual Machines planning and implementation for SAP)
-[planning-guide-1.2]:planning-guide.md#e55d1e22-c2c8-460b-9897-64622a34fdff (Resources)
-[planning-guide-11]:planning-guide.md#7cf991a1-badd-40a9-944e-7baae842a058 (High availability and disaster recovery for SAP NetWeaver running on Azure Virtual Machines)
-[planning-guide-11.4.1]:planning-guide.md#5d9d36f9-9058-435d-8367-5ad05f00de77 (High availability for SAP Application Servers)
-[planning-guide-11.5]:planning-guide.md#4e165b58-74ca-474f-a7f4-5e695a93204f (Using Autostart for SAP instances)
-[planning-guide-2.1]:planning-guide.md#1625df66-4cc6-4d60-9202-de8a0b77f803 (Cloud-only - Virtual Machine deployments in Azure without dependencies on the on-premises customer network)
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10 (Cross-premises - Deployment of single or multiple SAP VMs in Azure fully integrated with the on-premises network)
-[planning-guide-3.1]:planning-guide.md#be80d1b9-a463-4845-bd35-f4cebdb5424a (Azure regions)
-[planning-guide-3.2.1]:planning-guide.md#df49dc09-141b-4f34-a4a2-990913b30358 (Fault domains)
-[planning-guide-3.2.2]:planning-guide.md#fc1ac8b2-e54a-487c-8581-d3cc6625e560 (Upgrade domains)
-[planning-guide-3.2.3]:planning-guide.md#18810088-f9be-4c97-958a-27996255c665 (Azure availability sets)
-[planning-guide-3.2]:planning-guide.md#8d8ad4b8-6093-4b91-ac36-ea56d80dbf77 (Microsoft Azure virtual machines concept)
-[planning-guide-5.1.1]:planning-guide.md#4d175f1b-7353-4137-9d2f-817683c26e53 (Moving a VM from on-premises to Azure with a non-generalized disk)
-[planning-guide-5.1.2]:planning-guide.md#e18f7839-c0e2-4385-b1e6-4538453a285c (Deploying a VM with a customer specific image)
-[planning-guide-5.2.1]:planning-guide.md#1b287330-944b-495d-9ea7-94b83aff73ef (Preparation for moving a VM from on-premises to Azure with a non-generalized disk)
-[planning-guide-5.2.2]:planning-guide.md#57f32b1c-0cba-4e57-ab6e-c39fe22b6ec3 (Preparation for deploying a VM with a customer specific image for SAP)
-[planning-guide-5.2]:planning-guide.md#6ffb9f41-a292-40bf-9e70-8204448559e7 (Preparing VMs with SAP for Azure)
-[planning-guide-5.3.1]:planning-guide.md#6e835de8-40b1-4b71-9f18-d45b20959b79 (Difference between an Azure disk and an Azure image)
-[planning-guide-5.3.2]:planning-guide.md#a43e40e6-1acc-4633-9816-8f095d5a7b6a (Uploading a VHD from on-premises to Azure)
-[planning-guide-5.4.2]:planning-guide.md#9789b076-2011-4afa-b2fe-b07a8aba58a1 (Copying disks between Azure Storage accounts)
-[planning-guide-5.5.1]:planning-guide.md#4efec401-91e0-40c0-8e64-f2dceadff646 (VM/VHD structure for SAP deployments)
-[planning-guide-5.5.3]:planning-guide.md#17e0d543-7e8c-4160-a7da-dd7117a1ad9d (Setting automount for attached disks)
-[planning-guide-9.1]:planning-guide.md#6f0a47f3-a289-4090-a053-2521618a28c3 (Azure Monitoring Solution for SAP)
-[planning-guide-figure-100]:media/virtual-machines-shared-sap-planning-guide/100-single-vm-in-azure.png
-[planning-guide-figure-1300]:media/virtual-machines-shared-sap-planning-guide/1300-ref-config-iaas-for-sap.png
-[planning-guide-figure-1400]:media/virtual-machines-shared-sap-planning-guide/1400-attach-detach-disks.png
-[planning-guide-figure-1600]:media/virtual-machines-shared-sap-planning-guide/1600-firewall-port-rule.png
-[planning-guide-figure-1700]:media/virtual-machines-shared-sap-planning-guide/1700-single-vm-demo.png
-[planning-guide-figure-1900]:media/virtual-machines-shared-sap-planning-guide/1900-vm-set-vnet.png
-[planning-guide-figure-200]:media/virtual-machines-shared-sap-planning-guide/200-multiple-vms-in-azure.png
-[planning-guide-figure-2100]:media/virtual-machines-shared-sap-planning-guide/2100-s2s.png
-[planning-guide-figure-2200]:media/virtual-machines-shared-sap-planning-guide/2200-network-printing.png
-[planning-guide-figure-2300]:media/virtual-machines-shared-sap-planning-guide/2300-sapgui-stms.png
-[planning-guide-figure-2400]:media/virtual-machines-shared-sap-planning-guide/2400-vm-extension-overview.png
-[planning-guide-figure-2500]:media/virtual-machines-shared-sap-planning-guide/2500-vm-extension-details.png
-[planning-guide-figure-2600]:media/virtual-machines-shared-sap-planning-guide/2600-sap-router-connection.png
-[planning-guide-figure-2700]:media/virtual-machines-shared-sap-planning-guide/2700-exposed-sap-portal.png
-[planning-guide-figure-2800]:media/virtual-machines-shared-sap-planning-guide/2800-endpoint-config.png
-[planning-guide-figure-2900]:media/virtual-machines-shared-sap-planning-guide/2900-azure-ha-sap-ha.png
-[planning-guide-figure-300]:media/virtual-machines-shared-sap-planning-guide/300-vpn-s2s.png
-[planning-guide-figure-3000]:media/virtual-machines-shared-sap-planning-guide/3000-sap-ha-on-azure.png
-[planning-guide-figure-3200]:media/virtual-machines-shared-sap-planning-guide/3200-sap-ha-with-sql.png
-[planning-guide-figure-400]:media/virtual-machines-shared-sap-planning-guide/400-vm-services.png
-[planning-guide-figure-600]:media/virtual-machines-shared-sap-planning-guide/600-s2s-details.png
-[planning-guide-figure-700]:media/virtual-machines-shared-sap-planning-guide/700-decision-tree-deploy-to-azure.png
-[planning-guide-figure-800]:media/virtual-machines-shared-sap-planning-guide/800-portal-vm-overview.png
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd (Microsoft Azure networking)
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f (Storage: Microsoft Azure Storage and data disks)
-
-[resource-group-authoring-templates]:../../../resource-group-authoring-templates.md
+[planning-guide]:planning-guide.md (Azure Virtual Machines planning and implementation for SAP NetWeaver)
+
+[resource-group-authoring-templates]:../../resource-group-authoring-templates.md
[resource-group-overview]:../../azure-resource-manager/management/overview.md [resource-groups-networking]:../../networking/network-overview.md [sap-pam]:https://support.sap.com/pam (SAP Product Availability Matrix)
[howto-assign-access-powershell]:../../active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md [howto-assign-access-cli]:../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md - Azure Virtual Machines is the solution for organizations that need compute and storage resources, in minimal time, and without lengthy procurement cycles. You can use Azure Virtual Machines to deploy classical applications, like SAP NetWeaver-based applications, in Azure. Extend an application's reliability and availability without additional on-premises resources. Azure Virtual Machines supports cross-premises connectivity, so you can integrate Azure Virtual Machines into your organization's on-premises domains, private clouds, and SAP system landscape.
-In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure, including alternate deployment options and troubleshooting. This article builds on the information in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide]. It also complements SAP installation documentation and SAP Notes, which are the primary resources for installing and deploying SAP software.
+In this article, we cover the steps to deploy SAP applications on virtual machines (VMs) in Azure, including alternate deployment options and troubleshooting. This article builds on the information in [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md). It also complements SAP installation documentation and SAP Notes, which are the primary resources for installing and deploying SAP software.
## Prerequisites - Setting up an Azure virtual machine for SAP software deployment involves multiple steps and resources. Before you start, make sure that you meet the prerequisites for installing SAP software on virtual machines in Azure. ### Local computer
The wizard guides you through setting the required parameters to create the virt
* **Username and password** or **SSH public key**: Enter the username and password of the user that is created during the provisioning. For a Linux virtual machine, you can enter the public Secure Shell (SSH) key that you use to sign in to the machine. * **Subscription**: Select the subscription that you want to use to provision the new virtual machine. * **Resource group**: The name of the resource group for the VM. You can enter either the name of a new resource group or the name of a resource group that already exists.
- * **Location**: Where to deploy the new virtual machine. If you want to connect the virtual machine to your on-premises network, make sure you select the location of the virtual network that connects Azure to your on-premises network. For more information, see [Microsoft Azure networking][planning-guide-microsoft-azure-networking] in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide].
+ * **Location**: Where to deploy the new virtual machine. If you want to connect the virtual machine to your on-premises network, make sure you select the location of the virtual network that connects Azure to your on-premises network. For more information, see [Microsoft Azure networking](planning-guide.md#azure-networking).
1. **Size**:
- For a list of supported VM types, see SAP Note [1928533]. Be sure you select the correct VM type if you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information, see [Storage: Microsoft Azure Storage and data disks][planning-guide-storage-microsoft-azure-storage-and-data-disks] and [Azure storage for SAP workloads](./planning-guide-storage.md) in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide].
+ For a list of supported VM types, see SAP Note [1928533]. Be sure you select the correct VM type if you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information, see [Azure storage for SAP workloads](./planning-guide-storage.md).
1. **Settings**: * **Storage**
The wizard guides you through setting the required parameters to create the virt
* **Public IP address**: Select the public IP address that you want to use, or enter parameters to create a new public IP address. You can use a public IP address to access your virtual machine over the Internet. Make sure that you also create a network security group to help secure access to your virtual machine. * **Network security group**: For more information, see [Control network traffic flow with network security groups][virtual-networks-nsg]. * **Extensions**: You can install virtual machine extensions by adding them to the deployment. You do not need to add extensions in this step. The extensions required for SAP support are installed later. See chapter [Configure the Azure Extension for SAP][deployment-guide-4.5] in this guide.
- * **High Availability**: Select an availability set, or enter the parameters to create a new availability set. For more information, see [Azure availability sets][planning-guide-3.2.3].
+ * **High Availability**: Select an availability set, or enter the parameters to create a new availability set. For more information, see [Azure availability sets](planning-guide.md#availability-sets).
* **Monitoring** * **Boot diagnostics**: You can select **Disable** for boot diagnostics. * **Guest OS diagnostics**: You can select **Disable** for monitoring diagnostics.
The wizard guides you through setting the required parameters to create the virt
* **Username and password** or **SSH public key**: Enter the username and password of the user that is created during the provisioning. For a Linux virtual machine, you can enter the public Secure Shell (SSH) key that you use to sign in to the machine. * **Subscription**: Select the subscription that you want to use to provision the new virtual machine. * **Resource group**: The name of the resource group for the VM. You can enter either the name of a new resource group or the name of a resource group that already exists.
- * **Location**: Where to deploy the new virtual machine. If you want to connect the virtual machine to your on-premises network, make sure you select the location of the virtual network that connects Azure to your on-premises network. For more information, see [Microsoft Azure networking][planning-guide-microsoft-azure-networking] in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide].
+ * **Location**: Where to deploy the new virtual machine. If you want to connect the virtual machine to your on-premises network, make sure you select the location of the virtual network that connects Azure to your on-premises network. For more information, see [Microsoft Azure networking](./planning-guide.md#azure-networking) in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide].
1. **Size**:
- For a list of supported VM types, see SAP Note [1928533]. Be sure you select the correct VM type if you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information, see [Storage: Microsoft Azure Storage and data disks][planning-guide-storage-microsoft-azure-storage-and-data-disks] and [Azure storage for SAP workloads](./planning-guide-storage.md) in [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide].
+ For a list of supported VM types, see SAP Note [1928533]. Be sure you select the correct VM type if you want to use Azure Premium Storage. Not all VM types support Premium Storage. For more information, see [Azure storage for SAP workloads](./planning-guide-storage.md).
1. **Settings**: * **Storage**
The wizard guides you through setting the required parameters to create the virt
* **Public IP address**: Select the public IP address that you want to use, or enter parameters to create a new public IP address. You can use a public IP address to access your virtual machine over the Internet. Make sure that you also create a network security group to help secure access to your virtual machine. * **Network security group**: For more information, see [Control network traffic flow with network security groups][virtual-networks-nsg]. * **Extensions**: You can install virtual machine extensions by adding them to the deployment. You do not need to add extension in this step. The extensions required for SAP support are installed later. See chapter [Configure the Azure Extension for SAP][deployment-guide-4.5] in this guide.
- * **High Availability**: Select an availability set, or enter the parameters to create a new availability set. For more information, see [Azure availability sets][planning-guide-3.2.3].
+ * **High Availability**: Select an availability set, or enter the parameters to create a new availability set. For more information, see [Azure availability sets](./planning-guide.md#availability-sets).
* **Monitoring** * **Boot diagnostics**: You can select **Disable** for boot diagnostics. * **Guest OS diagnostics**: You can select **Disable** for monitoring diagnostics.
sap Disaster Recovery Overview Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-overview-guide.md
Last updated 12/06/2022
Many organizations running critical business applications on Azure set up both High Availability (HA) and Disaster Recovery (DR) strategies. The purpose of high availability is to increase the SLA of business systems by eliminating single points of failure in the underlying system infrastructure. High Availability technologies reduce the effect of unplanned infrastructure failure and help with planned maintenance. Disaster Recovery is defined as policies, tools, and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a geographically widespread natural or human-induced disaster.
-To achieve [high availability for SAP workload on Azure](sap-high-availability-guide-start.md), virtual machines are typically deployed in an [availability set](planning-guide.md#18810088-f9be-4c97-958a-27996255c665) or in [availability zones](planning-guide.md#availability-zones) to protect applications from infrastructure maintenance or failure within region. But the deployment doesn't protect applications from widespread disaster within region. So to protect applications from regional disaster, disaster recovery strategy for the applications should be in place. Disaster recovery is a documented and structured approach that is designed to assist an organization in executing the recovery processes in response to a disaster, and to protect or minimize IT services disruption and promote recovery.
+To achieve [high availability for SAP workload on Azure](sap-high-availability-guide-start.md), virtual machines are typically deployed in an [availability set](planning-guide.md#availability-sets) or in [availability zones](planning-guide.md#availability-zones) to protect applications from infrastructure maintenance or failure within a region. But the deployment doesn't protect applications from a widespread disaster within the region. So to protect applications from regional disaster, a disaster recovery strategy for the applications should be in place. Disaster recovery is a documented and structured approach that is designed to assist an organization in executing the recovery processes in response to a disaster, and to protect or minimize IT services disruption and promote recovery.
This document provides details on protecting SAP workloads from a large-scale catastrophe by implementing a structured DR approach. The details in this document are presented at an abstract level, based on different Azure services and SAP components. The exact DR strategy and the order of recovery for your SAP workload must be tested, documented, and fine-tuned regularly. Also, the document focuses on the Azure-to-Azure DR strategy for SAP workloads.
An SAP workload running on Azure uses different infrastructure components to run
### Network -- [ExpressRoute](../../expressroute/expressroute-introduction.md) extends your on-premises network into the Microsoft cloud over a private connection with the help of a connectivity provider. On designing disaster recovery architecture, one must account for building a robust backend network connectivity using geo-redundant ExpressRoute circuit. It's advised setting up at least one ExpressRoute circuit from on-premises to the primary region. And the other(s) should connect to the disaster recovery region. Refer to the [Designing of Azure ExpressRoute for disaster recovery](../../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md) article, which describe different scenarios to design disaster recovery for ExpressRoute.
+- [ExpressRoute](../../expressroute/expressroute-introduction.md) extends your on-premises network into the Microsoft cloud over a private connection with the help of a connectivity provider. When designing a disaster recovery architecture, account for building robust backend network connectivity by using geo-redundant ExpressRoute circuits. It's advisable to set up at least one ExpressRoute circuit from on-premises to the primary region, and the other(s) should connect to the disaster recovery region. Refer to the [Designing of Azure ExpressRoute for disaster recovery](../../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md) article, which describes different scenarios to design disaster recovery for ExpressRoute.
>[!Note] > Consider setting up a site-to-site (S2S) VPN as a backup of Azure ExpressRoute. For more information, see [Using S2S VPN as a backup for Azure ExpressRoute Private Peering](../../expressroute/use-s2s-vpn-as-backup-for-expressroute-privatepeering.md).
sap Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide.md
Title: 'SAP on Azure: Planning and Implementation Guide' description: Azure Virtual Machines planning and implementation for SAP NetWeaver-+ tags: azure-resource-manager
vm-linux Previously updated : 04/08/2021- Last updated : 04/17/2023+
-# Azure Virtual Machines planning and implementation for SAP NetWeaver
+# SAP on Azure: Planning and Implementation Guide
+[106267]:https://launchpad.support.sap.com/#/notes/106267
[767598]:https://launchpad.support.sap.com/#/notes/767598 [773830]:https://launchpad.support.sap.com/#/notes/773830 [826037]:https://launchpad.support.sap.com/#/notes/826037
+[974876]:https://launchpad.support.sap.com/#/notes/974876
[965908]:https://launchpad.support.sap.com/#/notes/965908 [1031096]:https://launchpad.support.sap.com/#/notes/1031096 [1139904]:https://launchpad.support.sap.com/#/notes/1139904 [1173395]:https://launchpad.support.sap.com/#/notes/1173395 [1245200]:https://launchpad.support.sap.com/#/notes/1245200
+[1380493]:https://launchpad.support.sap.com/#/notes/1380493
[1409604]:https://launchpad.support.sap.com/#/notes/1409604 [1558958]:https://launchpad.support.sap.com/#/notes/1558958
+[1555903]:https://launchpad.support.sap.com/#/notes/1555903
[1585981]:https://launchpad.support.sap.com/#/notes/1585981 [1588316]:https://launchpad.support.sap.com/#/notes/1588316 [1590719]:https://launchpad.support.sap.com/#/notes/1590719
[1928533]:https://launchpad.support.sap.com/#/notes/1928533 [1941500]:https://launchpad.support.sap.com/#/notes/1941500 [1956005]:https://launchpad.support.sap.com/#/notes/1956005
+[1972360]:https://launchpad.support.sap.com/#/notes/1972360
[1973241]:https://launchpad.support.sap.com/#/notes/1973241 [1984787]:https://launchpad.support.sap.com/#/notes/1984787 [1999351]:https://launchpad.support.sap.com/#/notes/1999351
[2191498]:https://launchpad.support.sap.com/#/notes/2191498 [2233094]:https://launchpad.support.sap.com/#/notes/2233094 [2243692]:https://launchpad.support.sap.com/#/notes/2243692
+[2731110]:https://launchpad.support.sap.com/#/notes/2731110
+[2808515]:https://launchpad.support.sap.com/#/notes/2808515
+[3048191]:https://launchpad.support.sap.com/#/notes/3048191
-[azure-cli-inst]:../../../cli/azure/install-classic-cli
-[azure-cli]:/cli/azure/install-cli-version-1.0
-[azure-portal]:https://portal.azure.com
-[azure-ps]:/powershell/azure/
-[azure-quickstart-templates-github]:https://github.com/Azure/azure-quickstart-templates
-[azure-script-ps]:https://go.microsoft.com/fwlink/p/?LinkID=395017
-[azure-resource-manager/management/azure-subscription-service-limits]:../../azure-resource-manager/management/azure-subscription-service-limits.md
-[azure-resource-manager/management/azure-subscription-service-limits-subscription]:../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits
-
-[dbms-guide]:dbms-guide-general.md
-[dbms-guide-2.1]:dbms-guide-general.md#c7abf1f0-c927-4a7c-9c1d-c7b5b3b7212f
-[dbms-guide-2.2]:dbms-guide-general.md#c8e566f9-21b7-4457-9f7f-126036971a91
-[dbms-guide-2.3]:dbms-guide-general.md#10b041ef-c177-498a-93ed-44b3441ab152
-[dbms-guide-2]:dbms-guide-general.md#65fa79d6-a85f-47ee-890b-22e794f51a64
-[dbms-guide-3]:dbms-guide-general.md#871dfc27-e509-4222-9370-ab1de77021c3
-[dbms-guide-5.5.1]:dbms-guide-general.md#0fef0e79-d3fe-4ae2-85af-73666a6f7268
-[dbms-guide-5.5.2]:dbms-guide-general.md#f9071eff-9d72-4f47-9da4-1852d782087b
-[dbms-guide-5.6]:dbms-guide-general.md#1b353e38-21b3-4310-aeb6-a77e7c8e81c8
-[dbms-guide-5.8]:dbms-guide-general.md#9053f720-6f3b-4483-904d-15dc54141e30
-[dbms-guide-5]:dbms-guide-general.md#3264829e-075e-4d25-966e-a49dad878737
-[dbms-guide-8.4.1]:dbms-guide-general.md#b48cfe3b-48e9-4f5b-a783-1d29155bd573
-[dbms-guide-8.4.2]:dbms-guide-general.md#23c78d3b-ca5a-4e72-8a24-645d141a3f5d
-[dbms-guide-8.4.3]:dbms-guide-general.md#77cd2fbb-307e-4cbf-a65f-745553f72d2c
-[dbms-guide-8.4.4]:dbms-guide-general.md#f77c1436-9ad8-44fb-a331-8671342de818
-[dbms-guide-900-sap-cache-server-on-premises]:dbms-guide-general.md#642f746c-e4d4-489d-bf63-73e80177a0a8
-
-[dbms-guide-figure-100]:media/virtual-machines-shared-sap-dbms-guide/100_storage_account_types.png
-[dbms-guide-figure-200]:media/virtual-machines-shared-sap-dbms-guide/200-ha-set-for-dbms-ha.png
-[dbms-guide-figure-300]:media/virtual-machines-shared-sap-dbms-guide/300-reference-config-iaas.png
-[dbms-guide-figure-400]:media/virtual-machines-shared-sap-dbms-guide/400-sql-2012-backup-to-blob-storage.png
-[dbms-guide-figure-500]:media/virtual-machines-shared-sap-dbms-guide/500-sql-2012-backup-to-blob-storage-different-containers.png
-[dbms-guide-figure-600]:media/virtual-machines-shared-sap-dbms-guide/600-iaas-maxdb.png
-[dbms-guide-figure-700]:media/virtual-machines-shared-sap-dbms-guide/700-livecach-prod.png
-[dbms-guide-figure-800]:media/virtual-machines-shared-sap-dbms-guide/800-azure-vm-sap-content-server.png
-[dbms-guide-figure-900]:media/virtual-machines-shared-sap-dbms-guide/900-sap-cache-server-on-premises.png
-
-[deployment-guide]:deployment-guide.md
-[deployment-guide-2.2]:deployment-guide.md#42ee2bdb-1efc-4ec7-ab31-fe4c22769b94
-[deployment-guide-3.1.2]:deployment-guide.md#3688666f-281f-425b-a312-a77e7db2dfab
-[deployment-guide-3.2]:deployment-guide.md#db477013-9060-4602-9ad4-b0316f8bb281
-[deployment-guide-3.3]:deployment-guide.md#54a1fc6d-24fd-4feb-9c57-ac588a55dff2
-[deployment-guide-3.4]:deployment-guide.md#a9a60133-a763-4de8-8986-ac0fa33aa8c1
-[deployment-guide-3]:deployment-guide.md#b3253ee3-d63b-4d74-a49b-185e76c4088e
-[deployment-guide-4.1]:deployment-guide.md#604bcec2-8b6e-48d2-a944-61b0f5dee2f7
-[deployment-guide-4.2]:deployment-guide.md#7ccf6c3e-97ae-4a7a-9c75-e82c37beb18e
-[deployment-guide-4.3]:deployment-guide.md#31d9ecd6-b136-4c73-b61e-da4a29bbc9cc
-[deployment-guide-4.4.2]:deployment-guide.md#6889ff12-eaaf-4f3c-97e1-7c9edc7f7542
-[deployment-guide-4.4]:deployment-guide.md#c7cbb0dc-52a4-49db-8e03-83e7edc2927d
-[deployment-guide-4.5.1]:deployment-guide.md#987cf279-d713-4b4c-8143-6b11589bb9d4
-[deployment-guide-4.5.2]:deployment-guide.md#408f3779-f422-4413-82f8-c57a23b4fc2f
-[deployment-guide-4.5]:deployment-guide.md#d98edcd3-f2a1-49f7-b26a-07448ceb60ca
-[deployment-guide-5.1]:deployment-guide.md#bb61ce92-8c5c-461f-8c53-39f5e5ed91f2
-[deployment-guide-5.2]:deployment-guide.md#e2d592ff-b4ea-4a53-a91a-e5521edb6cd1
-[deployment-guide-5.3]:deployment-guide.md#fe25a7da-4e4e-4388-8907-8abc2d33cfd8
-
-[deployment-guide-configure-monitoring-scenario-1]:deployment-guide.md#ec323ac3-1de9-4c3a-b770-4ff701def65b
-[deployment-guide-configure-proxy]:deployment-guide.md#baccae00-6f79-4307-ade4-40292ce4e02d
-[deployment-guide-figure-100]:media/virtual-machines-shared-sap-deployment-guide/100-deploy-vm-image.png
-[deployment-guide-figure-1000]:media/virtual-machines-shared-sap-deployment-guide/1000-service-properties.png
-[deployment-guide-figure-11]:deployment-guide.md#figure-11
-[deployment-guide-figure-1100]:media/virtual-machines-shared-sap-deployment-guide/1100-azperflib.png
-[deployment-guide-figure-1200]:medi-test-login.png
-[deployment-guide-figure-1300]:medi-test-executed.png
-[deployment-guide-figure-14]:deployment-guide.md#figure-14
-[deployment-guide-figure-1400]:media/virtual-machines-shared-sap-deployment-guide/1400-azperflib-error-servicenotstarted.png
-[deployment-guide-figure-300]:media/virtual-machines-shared-sap-deployment-guide/300-deploy-private-image.png
-[deployment-guide-figure-400]:media/virtual-machines-shared-sap-deployment-guide/400-deploy-using-disk.png
-[deployment-guide-figure-5]:deployment-guide.md#figure-5
-[deployment-guide-figure-50]:media/virtual-machines-shared-sap-deployment-guide/50-forced-tunneling-suse.png
-[deployment-guide-figure-500]:media/virtual-machines-shared-sap-deployment-guide/500-install-powershell.png
-[deployment-guide-figure-6]:deployment-guide.md#figure-6
-[deployment-guide-figure-600]:media/virtual-machines-shared-sap-deployment-guide/600-powershell-version.png
-[deployment-guide-figure-7]:deployment-guide.md#figure-7
-[deployment-guide-figure-700]:media/virtual-machines-shared-sap-deployment-guide/700-install-powershell-installed.png
-[deployment-guide-figure-760]:media/virtual-machines-shared-sap-deployment-guide/760-azure-cli-version.png
-[deployment-guide-figure-900]:medi-update-executed.png
-[deployment-guide-figure-azure-cli-installed]:deployment-guide.md#402488e5-f9bb-4b29-8063-1c5f52a892d0
-[deployment-guide-figure-azure-cli-version]:deployment-guide.md#0ad010e6-f9b5-4c21-9c09-bb2e5efb3fda
-[deployment-guide-install-vm-agent-windows]:deployment-guide.md#b2db5c9a-a076-42c6-9835-16945868e866
-[deployment-guide-troubleshooting-chapter]:deployment-guide.md#564adb4f-5c95-4041-9616-6635e83a810b
-
-[deploy-template-cli]:../../../resource-group-template-deploy-cli.md
-[deploy-template-portal]:../../../resource-group-template-deploy-portal.md
-[deploy-template-powershell]:../../../resource-group-template-deploy.md
-
-[dr-guide-classic]:https://go.microsoft.com/fwlink/?LinkID=521971
-
-[getting-started]:get-started.md
-[getting-started-dbms]:get-started.md#1343ffe1-8021-4ce6-a08d-3a1553a4db82
-[getting-started-deployment]:get-started.md#6aadadd2-76b5-46d8-8713-e8d63630e955
-[getting-started-planning]:get-started.md#3da0389e-708b-4e82-b2a2-e92f132df89c
-
-[getting-started-windows-classic]:../../virtual-machines-windows-classic-sap-get-started.md
-[getting-started-windows-classic-dbms]:../../virtual-machines-windows-classic-sap-get-started.md#c5b77a14-f6b4-44e9-acab-4d28ff72a930
-[getting-started-windows-classic-deployment]:../../virtual-machines-windows-classic-sap-get-started.md#f84ea6ce-bbb4-41f7-9965-34d31b0098ea
-[getting-started-windows-classic-dr]:../../virtual-machines-windows-classic-sap-get-started.md#cff10b4a-01a5-4dc3-94b6-afb8e55757d3
-[getting-started-windows-classic-ha-sios]:../../virtual-machines-windows-classic-sap-get-started.md#4bb7512c-0fa0-4227-9853-4004281b1037
-[getting-started-windows-classic-planning]:../../virtual-machines-windows-classic-sap-get-started.md#f2a5e9d8-49e4-419e-9900-af783173481c
-
-[ha-guide-classic]:https://go.microsoft.com/fwlink/?LinkId=613056
-
-[install-extension-cli]:virtual-machines-linux-enable-aem.md
-[azure-cli-install]:/cli/azure/install-azure-cli
-
-[Logo_Linux]:media/virtual-machines-shared-sap-shared/Linux.png
-[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png
-
-[msdn-set-Azvmaemextension]:https://msdn.microsoft.com/library/azure/mt670598.aspx
-
-[planning-guide]:planning-guide.md
-[planning-guide-1.2]:planning-guide.md#e55d1e22-c2c8-460b-9897-64622a34fdff
-[planning-guide-11]:planning-guide.md#7cf991a1-badd-40a9-944e-7baae842a058
-[planning-guide-11.4.1]:planning-guide.md#5d9d36f9-9058-435d-8367-5ad05f00de77
-[planning-guide-11.5]:planning-guide.md#4e165b58-74ca-474f-a7f4-5e695a93204f
-[planning-guide-2.2]:planning-guide.md#f5b3b18c-302c-4bd8-9ab2-c388f1ab3d10
-[planning-guide-3.1]:planning-guide.md#be80d1b9-a463-4845-bd35-f4cebdb5424a
-[planning-guide-3.2.1]:planning-guide.md#df49dc09-141b-4f34-a4a2-990913b30358
-[planning-guide-3.2.2]:planning-guide.md#fc1ac8b2-e54a-487c-8581-d3cc6625e560
-[planning-guide-3.2.3]:planning-guide.md#18810088-f9be-4c97-958a-27996255c665
-[planning-guide-3.2]:planning-guide.md#8d8ad4b8-6093-4b91-ac36-ea56d80dbf77
-[planning-guide-3.3.2]:planning-guide.md#ff5ad0f9-f7f4-4022-9102-af07aef3bc92
-[planning-guide-5.1.1]:planning-guide.md#4d175f1b-7353-4137-9d2f-817683c26e53
-[planning-guide-5.1.2]:planning-guide.md#e18f7839-c0e2-4385-b1e6-4538453a285c
-[planning-guide-5.2.1]:planning-guide.md#1b287330-944b-495d-9ea7-94b83aff73ef
-[planning-guide-5.2.2]:planning-guide.md#57f32b1c-0cba-4e57-ab6e-c39fe22b6ec3
-[planning-guide-5.2]:planning-guide.md#6ffb9f41-a292-40bf-9e70-8204448559e7
-[planning-guide-5.3.1]:planning-guide.md#6e835de8-40b1-4b71-9f18-d45b20959b79
-[planning-guide-5.3.2]:planning-guide.md#a43e40e6-1acc-4633-9816-8f095d5a7b6a
-[planning-guide-5.4.2]:planning-guide.md#9789b076-2011-4afa-b2fe-b07a8aba58a1
-[planning-guide-5.5.1]:planning-guide.md#4efec401-91e0-40c0-8e64-f2dceadff646
-[planning-guide-5.5.3]:planning-guide.md#17e0d543-7e8c-4160-a7da-dd7117a1ad9d
-[planning-guide-7.1]:planning-guide.md#3e9c3690-da67-421a-bc3f-12c520d99a30
-[planning-guide-7]:planning-guide.md#96a77628-a05e-475d-9df3-fb82217e8f14
-[planning-guide-9.1]:planning-guide.md#6f0a47f3-a289-4090-a053-2521618a28c3
-[planning-guide-azure-premium-storage]:planning-guide.md#ff5ad0f9-f7f4-4022-9102-af07aef3bc92
-
-[planning-guide-figure-100]:media/virtual-machines-shared-sap-planning-guide/100-single-vm-in-azure.png
-[planning-guide-figure-1300]:media/virtual-machines-shared-sap-planning-guide/1300-ref-config-iaas-for-sap.png
-[planning-guide-figure-1400]:media/virtual-machines-shared-sap-planning-guide/1400-attach-detach-disks.png
-[planning-guide-figure-1600]:media/virtual-machines-shared-sap-planning-guide/1600-firewall-port-rule.png
-[planning-guide-figure-1700]:media/virtual-machines-shared-sap-planning-guide/1700-single-vm-demo.png
-[planning-guide-figure-1900]:media/virtual-machines-shared-sap-planning-guide/1900-vm-set-vnet.png
-[planning-guide-figure-200]:media/virtual-machines-shared-sap-planning-guide/200-multiple-vms-in-azure.png
-[planning-guide-figure-2100]:media/virtual-machines-shared-sap-planning-guide/2100-s2s.png
-[planning-guide-figure-2200]:media/virtual-machines-shared-sap-planning-guide/2200-network-printing.png
-[planning-guide-figure-2300]:media/virtual-machines-shared-sap-planning-guide/2300-sapgui-stms.png
-[planning-guide-figure-2400]:media/virtual-machines-shared-sap-planning-guide/2400-vm-extension-overview.png
-[planning-guide-figure-2500]:media/virtual-machines-shared-sap-planning-guide/planning-monitoring-overview-2502.png
-[planning-guide-figure-2600]:media/virtual-machines-shared-sap-planning-guide/2600-sap-router-connection.png
-[planning-guide-figure-2700]:media/virtual-machines-shared-sap-planning-guide/2700-exposed-sap-portal.png
-[planning-guide-figure-2800]:media/virtual-machines-shared-sap-planning-guide/2800-endpoint-config.png
-[planning-guide-figure-2900]:media/virtual-machines-shared-sap-planning-guide/2900-azure-ha-sap-ha.png
-[planning-guide-figure-2901]:medi.png
-[planning-guide-figure-300]:media/virtual-machines-shared-sap-planning-guide/300-vpn-s2s.png
-[planning-guide-figure-3000]:media/virtual-machines-shared-sap-planning-guide/3000-sap-ha-on-azure.png
-[planning-guide-figure-3200]:media/virtual-machines-shared-sap-planning-guide/3200-sap-ha-with-sql.png
-[planning-guide-figure-3201]:medi.png
-[planning-guide-figure-400]:media/virtual-machines-shared-sap-planning-guide/400-vm-services.png
-[planning-guide-figure-600]:media/virtual-machines-shared-sap-planning-guide/600-s2s-details.png
-[planning-guide-figure-700]:media/virtual-machines-shared-sap-planning-guide/700-decision-tree-deploy-to-azure.png
-[planning-guide-figure-800]:media/virtual-machines-shared-sap-planning-guide/800-portal-vm-overview.png
-[planning-guide-microsoft-azure-networking]:planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd
-[planning-guide-storage-microsoft-azure-storage-and-data-disks]:planning-guide.md#a72afa26-4bf4-4a25-8cf7-855d6032157f
-
-[powershell-install-configure]:/powershell/azure/install-az-ps
-[resource-group-authoring-templates]:../../../resource-group-authoring-templates.md
-[resource-group-overview]:../../azure-resource-manager/management/overview.md
-[resource-groups-networking]:../../networking/networking-overview.md
-[sap-pam]:https://support.sap.com/pam
-[sap-templates-2-tier-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-2-tier-marketplace-image%2Fazuredeploy.json
-[sap-templates-2-tier-os-disk]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-2-tier-user-disk%2Fazuredeploy.json
-[sap-templates-2-tier-user-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-2-tier-user-image%2Fazuredeploy.json
-[sap-templates-3-tier-marketplace-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-marketplace-image%2Fazuredeploy.json
-[sap-templates-3-tier-user-image]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-3-tier-user-image%2Fazuredeploy.json
-[storage-azure-cli]:../../storage/common/storage-azure-cli.md
-[storage-azure-cli-copy-blobs]:../../storage/common/storage-azure-cli.md#copy-blobs
-[storage-introduction]:../../storage/common/storage-introduction.md
-[storage-powershell-guide-full-copy-vhd]:../../storage/common/storage-powershell-guide-full.md
-[storage-premium-storage-preview-portal]:../../virtual-machines/disks-types.md
-[storage-redundancy]:../../storage/common/storage-redundancy.md
-[storage-scalability-targets]:../../storage/common/scalability-targets-standard-accounts.md
-[storage-use-azcopy]:../../storage/common/storage-use-azcopy.md
-[template-201-vm-from-specialized-vhd]:https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-from-specialized-vhd
-[templates-101-simple-windows-vm]:https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows
-[templates-101-vm-from-user-image]:https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image
-[virtual-machines-linux-attach-disk-portal]:../../virtual-machines/linux/attach-disk-portal.md
-[virtual-machines-azure-resource-manager-architecture]:/azure/azure-resource-manager/management/deployment-models
-[virtual-machines-Az-versus-azuresm]:virtual-machines-linux-compare-deployment-models.md
-[virtual-machines-windows-classic-configure-oracle-data-guard]:../../virtual-machines-windows-classic-configure-oracle-data-guard.md
-[virtual-machines-linux-cli-deploy-templates]:../../virtual-machines/linux/cli-deploy-templates.md
-[virtual-machines-deploy-rmtemplates-powershell]:../../virtual-machines/windows/index.yml
-[virtual-machines-linux-agent-user-guide]:../../virtual-machines/extensions/agent-linux.md
-[virtual-machines-linux-agent-user-guide-command-line-options]:../../virtual-machines/extensions/agent-windows.md#command-line-options
-[virtual-machines-linux-capture-image]:../../virtual-machines/linux/capture-image.md
-[virtual-machines-linux-capture-image-resource-manager]:../../virtual-machines/linux/capture-image.md
-[virtual-machines-linux-capture-image-resource-manager-capture]:../../virtual-machines/linux/capture-image.md#step-2-capture-the-vm
-[virtual-machines-windows-capture-image-resource-manager]:../../virtual-machines/windows/capture-image.md
-[virtual-machines-windows-capture-image]:../../virtual-machines-windows-generalize-vhd.md
-[virtual-machines-windows-capture-image-prepare-the-vm-for-image-capture]:../../virtual-machines-windows-generalize-vhd.md
-[virtual-machines-linux-configure-raid]:../../virtual-machines/linux/configure-raid.md
-[virtual-machines-linux-configure-lvm]:../../virtual-machines/linux/configure-lvm.md
-[virtual-machines-linux-classic-create-upload-vhd-step-1]:../../virtual-machines-linux-classic-create-upload-vhd.md#step-1-prepare-the-image-to-be-uploaded
-[virtual-machines-linux-create-upload-vhd-suse]:../../virtual-machines/linux/suse-create-upload-vhd.md
-[virtual-machines-linux-create-upload-vhd-oracle]:../../virtual-machines/linux/oracle-create-upload-vhd.md
-[virtual-machines-linux-redhat-create-upload-vhd]:../../virtual-machines/linux/redhat-create-upload-vhd.md
-[virtual-machines-linux-how-to-attach-disk]:../../virtual-machines/linux/add-disk.md
-[virtual-machines-linux-how-to-attach-disk-how-to-initialize-a-new-data-disk-in-linux]:../../virtual-machines/linux/add-disk.md#format-and-mount-the-disk
-[virtual-machines-linux-tutorial]:../../virtual-machines/linux/quick-create-cli.md
-[virtual-machines-linux-update-agent]:../../virtual-machines/linux/update-agent.md
-[virtual-machines-manage-availability]:../../virtual-machines/linux/availability.md
-[virtual-machines-ps-create-preconfigure-windows-resource-manager-vms]:virtual-machines-windows-create-powershell.md
-[virtual-machines-sizes-linux]:../../virtual-machines/linux/sizes.md
-[virtual-machines-sizes-windows]:../../virtual-machines/windows/sizes.md
-[virtual-machines-windows-classic-ps-sql-alwayson-availability-groups]:./../../virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-ps-sql-alwayson-availability-groups.md
-[virtual-machines-windows-classic-ps-sql-int-listener]:./../../virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-ps-sql-int-listener.md
-[virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions]:./../../virtual-machines/windows/sql/virtual-machines-windows-sql-high-availability-dr.md
-[virtual-machines-sql-server-infrastructure-services]:./../../virtual-machines/windows/sql/virtual-machines-windows-sql-server-iaas-overview.md
-[virtual-machines-sql-server-performance-best-practices]:/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist
-[virtual-machines-upload-image-windows-resource-manager]:../../virtual-machines-windows-upload-image.md
-[virtual-machines-windows-tutorial]:../../virtual-machines-windows-hero-tutorial.md
-[virtual-machines-workload-template-sql-alwayson]:https://azure.microsoft.com/documentation/templates/sql-server-2014-alwayson-dsc/
-[virtual-network-deploy-multinic-arm-cli]:../../virtual-machines/linux/multiple-nics.md
-[virtual-network-deploy-multinic-arm-ps]:../../virtual-machines/windows/multiple-nics.md
-[virtual-network-deploy-multinic-arm-template]:../../virtual-network/template-samples.md
-[virtual-networks-configure-vnet-to-vnet-connection]:../../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
-[virtual-networks-create-vnet-arm-pportal]:../../virtual-network/manage-virtual-network.md#create-a-virtual-network
-[virtual-networks-manage-dns-in-vnet]:../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
-[virtual-networks-multiple-nics-windows]:../../virtual-machines/windows/multiple-nics.md
-[virtual-networks-multiple-nics-linux]:../../virtual-machines/linux/multiple-nics.md
-[virtual-networks-nsg]:../../virtual-network/security-overview.md
-[virtual-networks-reserved-private-ip]:../../virtual-network/virtual-networks-static-private-ip-arm-ps.md
-[virtual-networks-static-private-ip-arm-pportal]:../../virtual-network/virtual-networks-static-private-ip-arm-pportal.md
-[virtual-networks-udr-overview]:../../virtual-network/virtual-networks-udr-overview.md
-[vpn-gateway-about-vpn-devices]:../../vpn-gateway/vpn-gateway-about-vpn-devices.md
-[vpn-gateway-create-site-to-site-rm-powershell]:../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
-[vpn-gateway-cross-premises-options]:../../vpn-gateway/vpn-gateway-about-vpngateways.md
-[vpn-gateway-site-to-site-create]:../../vpn-gateway/vpn-gateway-site-to-site-create.md
-[vpn-gateway-vpn-faq]:../../vpn-gateway/vpn-gateway-vpn-faq.md
-[xplat-cli]:/cli/azure/install-cli-version-1.0
-[xplat-cli-azure-resource-manager]:/azure/azure-resource-manager/management/manage-resources-cli
-[capture-image-linux-step-2-create-vm-image]:../../virtual-machines/linux/capture-image.md#step-2-create-vm-image
---
-Microsoft Azure enables companies to acquire compute and storage resources in minimal time without lengthy procurement cycles. Azure Virtual Machine service allows companies to deploy classical applications, like SAP NetWeaver based applications into Azure and extend their reliability and availability without having further resources available on-premises. Azure Virtual Machine Services also supports cross-premises connectivity, which enables companies to actively integrate Azure Virtual Machines into their on-premises domains, their Private Clouds and their SAP System Landscape.
-This white paper describes the fundamentals of Microsoft Azure Virtual Machine and provides a walk-through of planning and implementation considerations for SAP NetWeaver installations in Azure and as such should be the document to read before starting actual deployments of SAP NetWeaver on Azure.
-The paper complements the SAP Installation Documentation and SAP Notes, which represent the primary resources for installations and deployments of SAP software on given platforms.
-
+Azure enables companies to acquire resources and services in minimal time without lengthy procurement cycles. Running your SAP landscape in Azure requires planning, knowledge of the available options, and choosing the right architecture. This documentation complements SAP's installation documentation and SAP notes, which represent the primary resources for installations and deployments of SAP software on given platforms.
## Summary
-Cloud Computing is a widely used term, which is gaining more and more importance within the IT industry, from small companies up to large and multinational corporations.
-Microsoft Azure is the Cloud Services Platform from Microsoft, which offers a wide spectrum of new possibilities. Now customers are able to rapidly provision and de-provision applications as a service in the cloud, so they are not limited to technical or budgeting restrictions. Instead of investing time and budget into hardware infrastructure, companies can focus on the application, business processes, and its benefits for customers and users.
+Azure offers a comprehensive platform for running SAP applications. The Azure infrastructure as a service (IaaS) and platform as a service (PaaS) offerings combine to give you optimal choices for a successful deployment of the entire SAP landscape of your enterprise.
-With Microsoft Azure Virtual Machine Services, Microsoft offers a comprehensive Infrastructure as a Service (IaaS) platform. SAP NetWeaver based applications are supported on Azure Virtual Machines (IaaS). This whitepaper describes how to plan and implement SAP NetWeaver based applications within Microsoft Azure as the platform of choice.
-
-The paper itself focuses on two main aspects:
-
-* The first part describes two supported deployment patterns for SAP NetWeaver based applications on Azure. It also describes general handling of Azure with SAP deployments in mind.
-* The second part details implementing the different scenarios described in the first part.
-
-For additional resources, see chapter [Resources][planning-guide-1.2] in this document.
### Definitions upfront
+
Throughout the document, we use the following terms:

* IaaS: Infrastructure as a Service
* PaaS: Platform as a Service
* SaaS: Software as a Service
-* SAP Component: an individual SAP application such as ECC, BW, Solution Manager, or S/4HANA. SAP components can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as Business Objects.
+* SAP Component: an individual SAP application such as S/4HANA, ECC, BW or Solution Manager. SAP components can be based on traditional ABAP or Java technologies or a non-NetWeaver based application such as Business Objects.
* SAP Environment: one or more SAP components logically grouped to perform a business function such as Development, QAS, Training, DR, or Production.
* SAP Landscape: This term refers to the entire SAP assets in a customer's IT landscape. The SAP landscape includes all production and non-production environments.
-* SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP development system, SAP BW test system, SAP CRM production system, etc. In Azure deployments, it is not supported to divide these two layers between on-premises and Azure. Means an SAP system is either deployed on-premises or it is deployed in Azure. However, you can deploy the different systems of an SAP landscape into either Azure or on-premises. For example, you could deploy the SAP CRM development and test systems in Azure but the SAP CRM production system on-premises.
-* Cross-premises or hybrid: Describes a scenario where VMs are deployed to an Azure subscription that has site-to-site, multi-site, or ExpressRoute connectivity between the on-premises datacenter(s) and Azure. In common Azure documentation, these kinds of deployments are also described as cross-premises or hybrid scenarios. The reason for the connection is to extend on-premises domains, on-premises Active Directory/OpenLDAP, and on-premises DNS into Azure. The on-premises landscape is extended to the Azure assets of the subscription. Having this extension, the VMs can be part of the on-premises domain. Domain users of the on-premises domain can access the servers and can run services on those VMs (like DBMS services). Communication and name resolution between VMs deployed on-premises and Azure deployed VMs is possible. This is the most common and nearly exclusive case deploying SAP assets into Azure. For more information, see [this][vpn-gateway-cross-premises-options] article and [this][vpn-gateway-site-to-site-create].
-* Azure Monitoring Extension, Enhanced Monitoring, and Azure Extension for SAP: Describe one and the same item. It describes a VM extension that needs to be deployed by you to provide some basic data about the Azure infrastructure to the SAP Host Agent. SAP in SAP notes might refer to it as Monitoring Extension or Enhanced monitoring. In Azure, we are referring to it as **Azure Extension for SAP**.
-
-> [!NOTE]
-> Cross-premises or hybrid deployments of SAP systems where Azure Virtual Machines running SAP systems are members of an on-premises domain are supported for production SAP systems. Cross-premises or hybrid configurations are supported for deploying parts or complete SAP landscapes into Azure. Even running the complete SAP landscape in Azure requires having those VMs being part of on-premises domain and ADS/OpenLDAP.
->
->
+* SAP System: The combination of DBMS layer and application layer of, for example, an SAP ERP development system, SAP BW test system, etc. In Azure deployments, it isn't supported to divide these two layers between on-premises and Azure. This means an SAP system is either deployed on-premises or it's deployed in Azure. However, you can operate different systems of an SAP landscape in either Azure or on-premises.
+### Resources
+The entry point for SAP workload on Azure documentation is [Get started with SAP on Azure VMs](get-started.md). Starting from this entry point, you find many articles that cover the following topics:
-### <a name="e55d1e22-c2c8-460b-9897-64622a34fdff"></a>Resources
-The entry point for SAP workload on Azure documentation is found at [Get started with SAP on Azure VMs](./get-started.md). Starting with this entry point you find many articles that cover the topics of:
-
-- SAP NetWeaver and Business One on Azure
+- SAP workload specifics for storage, networking and supported options
- SAP DBMS guides for various DBMS systems in Azure
-- High availability and disaster recovery for SAP workload on Azure
-- Specific guidance for running SAP HANA on Azure
-- Guidance specific to Azure HANA Large Instances for the SAP HANA DBMS
-
+- SAP deployment guides, both for manual deployment and deployment through automation
+- High availability and disaster recovery details for SAP workload on Azure
+- Integration of SAP on Azure with other services and third-party applications
> [!IMPORTANT]
-> Wherever possible a link to the referring SAP Installation Guides or other SAP documentation is used (Reference InstGuide-01 via the [SAP Help Portal](http://service.sap.com/instguides)). When it comes to the prerequisites, installation process, or details of specific SAP functionality the SAP documentation and guides should always be read carefully, as the Microsoft documents only covers specific tasks for SAP software installed and operated in an Azure virtual machine.
+> When it comes to the prerequisites, installation process, or details of specific SAP functionality, the SAP documentation and guides should always be read carefully. The Microsoft documentation only covers specific tasks for SAP software installed and operated in an Azure virtual machine.
-The following SAP Notes are related to the topic of SAP on Azure:
+The following SAP Notes form the foundation of the SAP on Azure topic:
| Note number | Title |
| --- | --- |
| [1928533] |SAP Applications on Azure: Supported Products and Sizing |
-| [2015553] |SAP on Microsoft Azure: Support Prerequisites |
+| [2015553] |SAP on Azure: Support Prerequisites |
+| [2039619] |SAP Applications on Azure using the Oracle Database |
+| [2233094] |DB6: SAP Applications on Azure Using IBM DB2 for Linux, UNIX, and Windows |
| [1999351] |Troubleshooting Enhanced Azure Monitoring for SAP |
-| [2178632] |Key Monitoring Metrics for SAP on Microsoft Azure |
| [1409604] |Virtualization on Windows: Enhanced Monitoring |
| [2191498] |SAP on Linux with Azure: Enhanced Monitoring |
-| [2243692] |Linux on Microsoft Azure (IaaS) VM: SAP license issues |
-| [1984787] |SUSE LINUX Enterprise Server 12: Installation notes |
-| [2002167] |Red Hat Enterprise Linux 7.x: Installation and Upgrade |
-| [2069760] |Oracle Linux 7.x SAP Installation and Upgrade |
-| [1597355] |Swap-space recommendation for Linux |
+| [2731110] |Support of Network Virtual Appliances (NVA) for SAP on Azure |
-Also read the [SCN Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) that contains all SAP Notes for Linux.
-
-General default limitations and maximum limitations of Azure subscriptions can be found in [this article][azure-resource-manager/management/azure-subscription-service-limits-subscription].
+General default limitations and maximum limitations of Azure subscriptions and resources can be found in [this article](/azure/azure-resource-manager/management/azure-subscription-service-limits).
## Possible Scenarios
-SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and operations of these applications is mostly complex and ensuring that you meet requirements on availability and performance is important.
-Thus enterprises have to think carefully about which cloud provider to choose for running such business critical business processes on. Azure is the ideal public cloud platform for business critical SAP applications and business processes. Given the wide variety of Azure infrastructure, nearly all existing SAP NetWeaver, and S/4HANA systems can be hosted in Azure today. Azure provides VMs with many Terabytes of memory and more than 200 CPUs. Beyond that Azure offers [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md), which allow scale-up HANA deployments of up to 24 TB and SAP HANA scale-out deployments of up to 120 TB. One can state today that nearly all on-premises SAP scenarios can be run in Azure as well.
-
-For a rough description of the scenarios and some non-supported scenarios, see the document [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md).
-
-Check these scenarios and some of the conditions that were named as not supported in the referenced documentation throughout the planning and the development of your architecture that you want to deploy into Azure.
-
-Overall the most common deployment pattern is a cross-premises scenario like displayed
+SAP is often seen as one of the most mission-critical applications within enterprises. The architecture and operations of these applications are mostly complex, and ensuring that you meet your requirements on availability and performance is important.
-![VPN with Site-To-Site Connectivity (cross-premises)][planning-guide-figure-300]
+Enterprises therefore have to think carefully about which cloud provider they choose for running such business-critical processes. Azure is the ideal public cloud platform for business-critical SAP applications and business processes. Given the wide variety of Azure infrastructure, most of the current SAP software, including SAP NetWeaver and SAP S/4HANA systems, can be hosted in Azure today. Azure provides VMs with many terabytes of memory and more than 800 CPUs.
-Reason for many customers to apply a cross-premises deployment pattern is that fact that it is most transparent for all applications to extend on-premises into Azure using Azure ExpressRoute and treat Azure as virtual datacenter. As more and more assets are getting moved into Azure, the Azure deployed infrastructure and network infrastructure will grow and the on-premises assets will reduce accordingly. Everything transparent to users and applications.
+For a description of the supported scenarios and some non-supported scenarios, see the document [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md). Check these scenarios, and the conditions named as not supported, in the referenced documentation throughout the planning of the architecture that you want to deploy into Azure.
-In order to successfully deploy SAP systems into either Azure IaaS or IaaS in general, it is important to understand the significant differences between the offerings of traditional outsourcers or hosters and IaaS offerings. Whereas the traditional hoster or outsourcer adapts infrastructure (network, storage and server type) to the workload a customer wants to host, it is instead the customer's or partner's responsibility to characterize the workload and choose the correct Azure components of VMs, storage, and network for IaaS deployments.
+In order to successfully deploy SAP systems into Azure IaaS or IaaS in general, it's important to understand the significant differences between the offerings of traditional private clouds and IaaS offerings. Whereas the traditional hoster or outsourcer adapts infrastructure (network, storage and server type) to the workload a customer wants to host, it's instead the customer's or partner's responsibility to characterize the workload and choose the correct Azure components of VMs, storage, and network for IaaS deployments.
-In order to gather data for the planning of your deployment into Azure, it is important to:
+In order to gather data for the planning of your deployment into Azure, it's important to:
-- Evaluate what SAP products are supported running in Azure VMs
-- Evaluate what specific Operating System releases are supported with specific Azure VMs for those SAP products
+- Evaluate what SAP products and versions are supported running in Azure
+- Evaluate whether the operating system releases you use are supported with the chosen Azure VMs for those SAP products
- Evaluate what DBMS releases are supported for your SAP products with specific Azure VMs
-- Evaluate whether some of the required OS/DBMS releases require you to perform SAP release, Support Package upgrade, and kernel upgrades to get to a supported configuration
+- Evaluate whether some of the required OS/DBMS releases require you to modernize your SAP landscape, such as performing SAP release upgrades, to get to a supported configuration
- Evaluate whether you need to move to different operating systems in order to deploy on Azure.
-Details on supported SAP components on Azure, supported Azure infrastructure units and related operating system releases and DBMS releases are explained in the article [What SAP software is supported for Azure deployments](./supported-product-on-azure.md). Results gained out of the evaluation of valid SAP releases, operating system, and DBMS releases have a large impact on the efforts moving SAP systems to Azure. Results out of this evaluation are going to define whether there could be significant preparation efforts in cases where SAP release upgrades or changes of operating systems are needed.
+Details on supported SAP components on Azure, Azure infrastructure units, and related operating system releases and DBMS releases are explained in the article [What SAP software is supported for Azure deployments](./supported-product-on-azure.md). The results of evaluating valid SAP releases, operating system releases, and DBMS releases have a large impact on the effort of moving SAP systems to Azure. They define whether there could be significant preparation efforts in cases where SAP release upgrades or changes of operating systems are needed.
+## First steps planning a deployment
-## <a name="be80d1b9-a463-4845-bd35-f4cebdb5424a"></a>Azure Regions
-Microsoft's Azure services are collected in Azure regions. An Azure region is a one or a collection out of datacenters that contain the hardware and infrastructure that runs and hosts the different Azure services. This infrastructure includes a large number of nodes that function as compute nodes or storage nodes, or run network functionality.
-
-For a list of the different Azure regions, check the article [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/). Not all the Azure regions offer the same services. Dependent on the SAP product you want to run, and the operating system and DBMS related to it, you can end up in a situation that a certain region does not offer the VM types you require. This is especially true for running SAP HANA, where you usually need VMs of the M/Mv2 VM-series. These VM families are deployed only in a subset of the regions. You can find out what exact VM, types, Azure storage types or, other Azure Services are available in which of the regions with the help of the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). As you start your planning and have certain regions in mind as primary region and eventually secondary region, you need to investigate first whether the necessary services are available in those regions.
-
-### Availability Zones
-Several of the Azure regions implemented a concept called Availability Zones. Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking. For example, deploying two VMs across two Availability Zones of Azure, and implementing a high-availability framework for your SAP DBMS system or the SAP Central Services gives you the best SLA in Azure. For this particular virtual machine SLA in Azure, check the latest version of [Virtual Machine SLAs](https://azure.microsoft.com/support/legal/sla/virtual-machines/). Since Azure regions developed and extended rapidly over the last years, the topology of the Azure regions, the number of physical datacenters, the distance among those datacenters, and the distance between Azure Availability Zones can be different. And with that the network latency.
-
-The principle of Availability Zones does not apply to the HANA specific service of [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md). Service Level agreements for HANA Large Instances can be found in the article [SLA for SAP HANA on Azure Large Instances](https://azure.microsoft.com/support/legal/sla/sap-hana-large/)
-
+The first step in deployment planning is NOT to check which VMs are available to run SAP. This first step can be time consuming, but it's the most important one: work with the compliance and security teams in your company on the boundary conditions for deploying each type of SAP workload or business process into the public cloud. If your company has deployed other software into Azure before, the process can be easy. If your company is at the beginning of the journey, larger discussions might be necessary to figure out the boundary conditions, security conditions, and enterprise architecture that allow certain SAP data and SAP business processes to be hosted in the public cloud.
-### <a name="df49dc09-141b-4f34-a4a2-990913b30358"></a>Fault Domains
-Fault Domains represent a physical unit of failure, closely related to the physical infrastructure contained in data centers, and while a physical blade or rack can be considered a Fault Domain, there is no direct one-to-one mapping between the two.
+As a useful starting point, see [Microsoft compliance offerings](/microsoft-365/compliance/offering-home) for a list of the compliance offerings Microsoft can provide.
-When you deploy multiple Virtual Machines as part of one SAP system in Microsoft Azure Virtual Machine Services, you can influence the Azure Fabric Controller to deploy your application into different Fault Domains, thereby meeting higher requirements of availability SLAs. However, the distribution of Fault Domains over an Azure Scale Unit (collection of hundreds of Compute nodes or Storage nodes and networking) or the assignment of VMs to a specific Fault Domain is something over which you do not have direct control. In order to direct the Azure fabric controller to deploy a set of VMs over different Fault Domains, you need to assign an Azure availability set to the VMs at deployment time. For more information on Azure availability sets, see chapter [Azure availability sets][planning-guide-3.2.3] in this document.
+Other areas of concern, like encryption for data at rest or other encryption in Azure services, are documented in [Azure encryption overview](../../security/fundamentals/encryption-overview.md) and, for SAP-specific topics, in sections at the end of this article.
+Don't underestimate this phase of the project in your planning. Only when you have agreements and rules around this topic should you go to the next steps: planning the geographical placement and the network architecture that you deploy in Azure.
-### <a name="fc1ac8b2-e54a-487c-8581-d3cc6625e560"></a>Upgrade Domains
-Upgrade Domains represent a logical unit that helps to determine how a VM within an SAP system, that consists of SAP instances running in multiple VMs, is updated. When an upgrade occurs, Microsoft Azure goes through the process of updating these Upgrade Domains one by one. By spreading VMs at deployment time over different Upgrade Domains, you can protect your SAP system partly from potential downtime. In order to force Azure to deploy the VMs of an SAP system spread over different Upgrade Domains, you need to set a specific attribute at deployment time of each VM. Similar to Fault Domains, an Azure Scale Unit is divided into multiple Upgrade Domains. In order to direct the Azure fabric controller to deploy a set of VMs over different Upgrade Domains, you need to assign an Azure Availability Set to the VMs at deployment time. For more information on Azure availability sets, see chapter [Azure availability sets][planning-guide-3.2.3] below.
+### Azure resource organization
+Together with the security and compliance review, a design for Azure resource naming and placement is required, if one doesn't exist yet. This design includes decisions on:
-### <a name="18810088-f9be-4c97-958a-27996255c665"></a>Azure availability sets
-Azure Virtual Machines within one Azure availability set are distributed by the Azure Fabric Controller over different Fault and Upgrade Domains. The purpose of the distribution over different Fault and Upgrade Domains is to prevent all VMs of an SAP system from being shut down in the case of infrastructure maintenance or a failure within one Fault Domain. By default, VMs are not part of an availability set. The participation of a VM in an availability set is defined at deployment time or later on by a reconfiguration and redeployment of a VM.
+- Naming convention used for every Azure resource, such as VMs or resource groups
+- Subscription and management group design for SAP workload, including whether multiple subscriptions should be created per workload, per deployment tier, or per business unit
+- Enterprise-wide usage of Azure Policy on subscriptions and management groups
-To understand the concept of Azure availability sets and the way availability sets relate to Fault and Upgrade Domains, read [this article][virtual-machines-manage-availability].
+Many details of this enterprise architecture, and guidance to make the right decisions, are described in the [Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/ready/landing-zone/design-area/resource-org).
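As an illustration only, and not a prescribed convention (all names and tags below are hypothetical examples), a resource group and a management group for an SAP production landscape could be sketched with the Azure CLI like this:

```azurecli
# Hypothetical naming and tagging example; align it with your own enterprise conventions.
# One resource group per SAP environment and region.
az group create \
    --name rg-sap-prod-weu \
    --location westeurope \
    --tags workload=SAP environment=production costcenter=1234

# Optional: a management group that SAP subscriptions can be assigned to,
# so that Azure Policy assignments apply consistently across them.
az account management-group create --name mg-sap-workloads
```

Whether you use one subscription per deployment tier, per business unit, or a single subscription remains a design decision to settle with your cloud governance team.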
-As you define availability sets and try to mix various VMs of different VM families within one availability set, you may encounter problems that prevent you to include a certain VM type into such an availability set. The reason is that the availability set is bound to scale unit that contains a certain type of compute hosts. And a certain type of compute host can only run certain types of VM families. For example, if you create an availability set and deploy the first VM into the availability set and you choose a VM type of the Esv3 family and then you try to deploy as second VM a VM of the M family, you will be rejected in the second allocation. Reason is that the Esv3 family VMs are not running on the same host hardware as the virtual machines of the M family do. The same problem can occur, when you try to resize VMs and try to move a VM out of the Esv3 family to a VM type of the M family. In the case of resizing to a VM family that can't be hosted on the same host hardware, you need to shut down all VMs in your availability set and resize them to be able to run on the other host machine type. For SLAs of VMs that are deployed within availability set, check the article [Virtual Machine SLAs](https://azure.microsoft.com/support/legal/sla/virtual-machines/).
+## Azure geographies and regions
-The principle of availability set and related update and fault domain does not apply to the HANA specific service of [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md). Service Level agreements for HANA Large Instances can be found in the article [SLA for SAP HANA on Azure Large Instances](https://azure.microsoft.com/support/legal/sla/sap-hana-large/).
+Azure services are collected in Azure regions. An Azure region is a collection of datacenters that contain the hardware and infrastructure that runs and hosts the different Azure services. This infrastructure includes a large number of nodes that function as compute nodes or storage nodes, or run network functionality.
-> [!IMPORTANT]
-> The concepts of Azure Availability Zones and Azure availability sets are mutually exclusive. That means, you can either deploy a pair or multiple VMs into a specific Availability Zone or an Azure availability set. But not both.
+For a list of the different Azure regions, check the article [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/) and an interactive map at [Azure global infrastructure](https://infrastructuremap.microsoft.com/explore). Not all Azure regions offer the same services. Depending on the SAP product you want to run, your sizing requirements, and the operating system and DBMS related to it, you can end up in a situation where a certain region doesn't offer the VM types you require. This is especially true for running SAP HANA, where you usually need VMs of the various M-series VM families. These VM families are deployed only in a subset of the regions. You can find out what exact VM types, Azure storage types, or other Azure services are available in each region with the help of [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). As you start your planning and have certain regions in mind as primary region and possibly secondary region, you need to investigate first whether the necessary services are available in those regions.
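During planning, the Azure CLI can give a quick first view of this. The following sketch (the region name and the size filter are example values) lists the regions available to your subscription and shows whether M-series VM sizes are offered in a candidate region:

```azurecli
# List the Azure regions available to the subscription.
az account list-locations --output table

# Check whether M-series VM sizes (commonly required for SAP HANA) are offered
# in a candidate region; --size performs a partial match on the size name.
az vm list-skus \
    --location westeurope \
    --resource-type virtualMachines \
    --size Standard_M \
    --output table
```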
### Azure paired regions
-Azure is offering Azure Region pairs where replication of certain data is enabled between these fixed region pairs. The region pairing is documented in the article [Cross-region replication in Azure: Business continuity and disaster recovery](../../availability-zones/cross-region-replication-azure.md). As the article describes, the replication of data is tied Azure storage types that can be configured by you to replicate into the paired region. See also the article [Storage redundancy in a secondary region](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region). The storage types that allow such a replication are storage types, which are not suitable for DBMS workload. As such the usability of the Azure storage replication would be limited to Azure blob storage (like for backup purposes) or other high latency storage scenarios. As you check for paired regions and the services you want to use as your primary or secondary region, you may encounter situations where Azure services and/or VM types you intend to use in your primary region are not available in the paired region. Or you might encounter a situation where the Azure paired region is not acceptable out of data compliance reasons. For those cases, you need to use a non-paired region as secondary/disaster recovery region. In such a case, you need to take care on replication of some of the part of the data that Azure would have replicated yourself. An example on how to replicate your Active Directory and DNS to your disaster recovery region is described in the article [Set up disaster recovery for Active Directory and DNS](../../site-recovery/site-recovery-active-directory.md)
--
-## Azure virtual machine services
-Azure offers a large variety of virtual machines that you can select to deploy. There is no need for up-front technology and infrastructure purchases. The Azure VM service offering simplifies maintaining and operating applications by providing on-demand compute and storage to host, scale, and manage web application and connected applications. Infrastructure management is automated with a platform that is designed for high availability and dynamic scaling to match usage needs with the option of several different pricing models.
-
-![Positioning of Microsoft Azure Virtual Machine Services][planning-guide-figure-400]
-
-With Azure virtual machines, Microsoft is enabling you to deploy custom server images to Azure as IaaS instances. Or you are able to choose from a rich selection of consumable operating system images from the Azure Marketplace.
-
-From an operational perspective, the Azure Virtual Machine Service offers similar experiences as virtual machines deployed on premises. You are responsible for the administration, operations and also the patching of the particular operating system, running in an Azure VM and its applications in that VM. Microsoft is not providing any more services beyond hosting that VM on its Azure infrastructure (Infrastructure as a Service - IaaS). For SAP workload that you as a customer deploy, Microsoft has no offers beyond the IaaS offerings.
-
-The Microsoft Azure platform is a multi-tenant platform. As a result storage, network, and compute resources that host Azure VMs are, with a few exceptions, shared between tenants. Intelligent throttling and quota logic is used to prevent one tenant from impacting the performance of another tenant (noisy neighbor) in a drastic way. Especially for certifying the Azure platform for SAP HANA, Microsoft needs to prove the resource isolation for cases where multiple VMs can run on the same host on a regular basis to SAP. Though logic in Azure tries to keep variances in bandwidth experienced small, highly shared platforms tend to introduce larger variances in resource/bandwidth availability than customers might experience in their on-premises deployments. The probability that an SAP system on Azure could experience larger variances than in an on-premises system needs to be taken into account.
-
-### Azure virtual machines for SAP workload
-
-For SAP workload, we narrowed down the selection to different VM families that are suitable for SAP workload and SAP HANA workload more specifically. The way how you find the correct VM type and its capability to work through SAP workload is described in the document [What SAP software is supported for Azure deployments](./supported-product-on-azure.md).
-> [!NOTE]
-> The VM types that are certified for SAP workload, there is no over-provisioning of CPU and memory resources.
-
-Beyond the selection of purely supported VM types, you also need to check whether those VM types are available in a specific region based on the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). But more important, you need to evaluate whether:
-- CPU and memory resources of different VM types
-- IOPS bandwidth of different VM types
-- Network capabilities of different VM types
-- Number of disks that can be attached
-- Ability to leverage certain Azure storage types
-
-fit your need. Most of that data can be found [here (Linux)][virtual-machines-sizes-linux] and [here (Windows)][virtual-machines-sizes-windows] for a particular VM type.
-
-As pricing model you have several different pricing options that list like:
-- Pay as you go
-- One year reserved
-- Three years reserved
-- Spot pricing
-
-The pricing of each of the different offers with different service offers around operating systems and different regions is available on the site [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/). For details and flexibility of one year and three year reserved instances, check these articles:
-- [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
-- [Virtual machine size flexibility with Reserved VM Instances](../../virtual-machines/reserved-vm-instance-size-flexibility.md)
-- [How the Azure reservation discount is applied to virtual machines](../../cost-management-billing/manage/understand-vm-reservation-charges.md)
-
-For more information on spot pricing, read the article [Azure Spot Virtual Machines](https://azure.microsoft.com/pricing/spot/). Pricing of the same VM type can also be different between different Azure regions. For some customers, it was worth to deploy into a less expensive Azure region.
-
-Additionally, Azure offers the concepts of a dedicated host. The dedicated host concept gives you more control on patching cycles that are done by Azure. You can time the patching according to your own schedules. This offer is specifically targeting customers with workload that might not follow the normal cycle of workload. To read up on the concepts of Azure dedicated host offers, read the article [Azure Dedicated Host](../../virtual-machines/dedicated-hosts.md). Using this offer is supported for SAP workload and is used by several SAP customers who want to have more control on patching of infrastructure and eventual maintenance plans of Microsoft. For more information on how Microsoft maintains and patches the Azure infrastructure that hosts virtual machines, read the article [Maintenance for virtual machines in Azure](../../virtual-machines/maintenance-and-updates.md).
-
-#### Generation 1 and Generation 2 virtual machines
-Microsoft's hypervisor is able to handle two different generations of virtual machines. Those formats are called **Generation 1** and **Generation 2**. **Generation 2** was introduced in the year 2012 with Windows Server 2012 hypervisor. Azure started out using Generation 1 virtual machines. As you deploy Azure virtual machines, the default is still to use the Generation 1 format. Meanwhile you can deploy Generation 2 VM formats as well. The article [Support for generation 2 VMs on Azure](../../virtual-machines/generation-2.md) lists the Azure VM families that can be deployed as Generation 2 VM. This article also lists the important functional differences of Generation 2 virtual machines as they can run on Hyper-V private cloud and Azure. More important this article also lists functional differences between Generation 1 virtual machines and Generation 2 VMs, as those run in Azure.
-
-> [!NOTE]
-> There are functional differences of Generation 1 and Generation 2 VMs running in Azure. Read the article [Support for generation 2 VMs on Azure](../../virtual-machines/generation-2.md) to see a list of those differences.
+Azure offers Azure region pairs where replication of certain data is enabled between these fixed region pairs. The region pairing is documented in the article [Cross-region replication in Azure: Business continuity and disaster recovery](../../availability-zones/cross-region-replication-azure.md). As the article describes, the replication of data is tied to Azure storage types that can be configured by you to replicate into the paired region. See also the article [Storage redundancy in a secondary region](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region). The storage types that allow such a replication are **not suitable** for SAP components and DBMS workload. As such, the usability of the Azure storage replication would be limited to Azure blob storage (for backup purposes), file shares and volumes, or other high latency storage scenarios. As you check for paired regions and the services you want to use in your primary or secondary region, you may encounter situations where Azure services and/or VM types you intend to use in your primary region aren't available in the paired region. Or you might encounter a situation where the Azure paired region isn't acceptable for data compliance reasons. For those cases, you need to use a non-paired region as secondary/disaster recovery region. In such a case, you need to take care of replicating, yourself, some parts of the data that Azure would otherwise have replicated for you.
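As a small illustration of that limited use case (the account name, resource group, and region are placeholder values), a geo-redundant storage account intended only for backup blobs could be created like this:

```azurecli
# Geo-redundant (GRS) storage account intended for backup blobs only;
# GRS asynchronously replicates the data into the paired region.
# Not intended for DBMS data or log disks.
az storage account create \
    --name sapbackupstore001 \
    --resource-group rg-sap-prod-weu \
    --location westeurope \
    --sku Standard_GRS \
    --kind StorageV2
```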
-Moving an existing VM from one generation to the other generation is not possible. To change the virtual machine generation, you need to deploy a new VM of the generation you desire and re-install the software that you are running in the virtual machine of the generation. This change only affects the base VHD image of the VM and has no impact on the data disks or attached NFS or SMB shares. Data disks, NFS, or SMB shares that originally were assigned to, for example, on a Generation 1 VM.
-
-> [!NOTE]
-> Deploying Mv1 VM family VMs as Generation 2 VMs is possible as of beginning of May 2020. With that a seeming less up and downsizing between Mv1 and Mv2 family VMs is possible.
--
-### <a name="a72afa26-4bf4-4a25-8cf7-855d6032157f"></a>Storage: Microsoft Azure Storage and Data Disks
-Microsoft Azure Virtual Machines utilize different storage types. When implementing SAP on Azure Virtual Machine Services, it is important to understand the differences between these two main types of storage:
+### Availability Zones
-* Non-Persistent, volatile storage.
-* Persistent storage.
+Many Azure regions implement a concept called [availability zones](/azure/availability-zones/az-overview). Availability zones are physically separate locations within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. For example, deploying two VMs across two availability zones of Azure, and implementing a high-availability framework for your SAP DBMS system or the (A)SCS, gives you the best SLA in Azure. For more information on virtual machine SLAs in Azure, check the latest version of [virtual machine SLAs](https://azure.microsoft.com/support/legal/sla/virtual-machines/). Since Azure regions developed and extended rapidly over the last years, the topology of the Azure regions, the number of physical datacenters, the distance among those datacenters, and the distance between Azure availability zones can differ, and with that, the network latency between zones can differ as well.
-Azure VMs offer non-persistent disks after a VM is deployed. In case of a VM reboot, all content on those drives will be wiped out. Hence, it is a given that data files and log/redo files of databases should under no circumstances be located on those non-persisted drives. There might be exceptions for some of the databases, where these non-persisted drives could be suitable for tempdb and temp tablespaces. However, avoid using those drives for A-Series VMs since those non-persisted drives are limited in throughput with that VM family. For further details, read the article [Understanding the temporary drive on Windows VMs in Azure](/archive/blogs/mast/understanding-the-temporary-drive-on-windows-azure-virtual-machines)
+Follow the guidance in [SAP workload configurations with Azure availability zones](./high-availability-zones.md) when choosing a region with availability zones. Also determine which zonal deployment model is best suited for your requirements, chosen region and workload.
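For illustration, a minimal Azure CLI sketch of a zonal deployment might look like the following. The VM names, OS image URN, VM size, and network names are placeholders and need to match your chosen region and sizing.

```azurecli
# Deploy the two DBMS VMs of a high-availability pair into two different
# availability zones of the same region.
az vm create \
    --resource-group <resource-group> \
    --name sapdb-vm1 \
    --image <os-image-urn> \
    --size Standard_E32ds_v5 \
    --zone 1 \
    --vnet-name <vnet-name> \
    --subnet <db-subnet-name>

az vm create \
    --resource-group <resource-group> \
    --name sapdb-vm2 \
    --image <os-image-urn> \
    --size Standard_E32ds_v5 \
    --zone 2 \
    --vnet-name <vnet-name> \
    --subnet <db-subnet-name>
```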
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> Drive D:\ in an Azure VM is a non-persisted drive, which is backed by some local disks on the Azure compute node. Because it's non-persisted, any changes made to the content on the D:\ drive, like files stored, directories created, or applications installed, are lost when the VM is rebooted.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> Linux Azure VMs automatically mount a drive at /mnt/resource that is a non-persisted drive backed by local disks on the Azure compute node. Because it's non-persisted, any changes made to content in /mnt/resource, like files stored, directories created, or applications installed, are lost when the VM is rebooted.
->
->
+### Fault domains
-#### Azure Storage accounts
+Fault domains represent a physical unit of failure, closely related to the physical infrastructure contained in data centers. While a physical blade or rack can be considered a Fault Domain, there's no direct one-to-one mapping between the two.
-When deploying services or VMs in Azure, deployment of VHDs and VM images is organized in units called Azure storage accounts. [Azure storage accounts](../../storage/common/storage-account-overview.md) have limitations in IOPS, throughput, or the sizes they can contain. In the past these limitations, which are documented in:
+When you deploy multiple virtual machines as part of one SAP system, you can influence the Azure fabric controller to deploy your VMs into different fault domains, thereby meeting higher requirements of availability SLAs. However, the distribution of fault domains over an Azure scale unit (collection of hundreds of compute nodes or storage nodes and networking) or the assignment of VMs to a specific fault domain is something over which you don't have direct control. In order to direct the Azure fabric controller to deploy a set of VMs over different fault domains, you need to assign an Azure availability set to the VMs at deployment time. For more information on Azure availability sets, see chapter [Azure availability sets](#availability-sets) in this document.
-- [Scalability targets for standard storage accounts](../../storage/common/scalability-targets-standard-account.md)
-- [Scalability targets for premium page blob storage accounts](../../storage/blobs/scalability-targets-premium-page-blobs.md)
+### Update domains
-played an important role in planning an SAP deployment in Azure. It was on you to manage the number of persisted disks within a storage account. You needed to manage the storage accounts and eventually create new storage accounts to create more persisted disks.
+Update domains represent a logical unit that helps to determine how a VM, within an SAP system that consists of SAP instances running on multiple VMs, is updated. When a platform update occurs, Azure goes through the process of updating these update domains one by one. By spreading VMs at deployment time over different update domains, you can protect your SAP system partly from potential downtime. Similar to fault domains, an Azure scale unit is divided into multiple update domains. In order to direct the Azure fabric controller to deploy a set of VMs over different update domains, you need to assign an Azure availability set to the VMs at deployment time. For more information on Azure availability sets, see chapter [Azure availability sets](#availability-sets) below.
-In recent years, the introduction of [Azure managed disks](../../virtual-machines/managed-disks-overview.md) relieved you from those tasks. The recommendation for SAP deployments is to use Azure managed disks instead of managing Azure storage accounts yourself. Azure managed disks distribute disks across different storage accounts, so that the limits of the individual storage accounts aren't exceeded.
+[ ![Diagram of update and failure domains.](./media/virtual-machines-shared-sap-planning-guide/3000-sap-ha-on-azure.png) ](./media/virtual-machines-shared-sap-planning-guide/3000-sap-ha-on-azure.png#lightbox)
-Within a storage account, you have a type of a folder concept called 'containers' that can be used to group certain disks into specific containers.
+### Availability sets
-Within Azure, a disk/VHD name follows the following naming convention, which needs to provide a unique name for the VHD within Azure:
+Azure virtual machines within one Azure availability set are distributed by the Azure fabric controller over different fault domains. The purpose of the distribution over different fault domains is to prevent all VMs of an SAP system from being shut down by infrastructure maintenance or a failure within one fault domain. By default, VMs aren't part of an availability set. The participation of a VM in an availability set is defined at deployment time only, or during redeployment of a VM.
-`http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>`
+To understand the concept of Azure availability sets and the way availability sets relate to fault domains, see the documentation on [Azure availability sets](/azure/virtual-machines/availability-set-overview).
-The string above needs to uniquely identify the disk/VHD that is stored on Azure Storage.
+As you define availability sets and try to mix various VMs of different VM families within one availability set, you may encounter problems that prevent you from including a certain VM type in such an availability set. The reason is that the availability set is bound to a scale unit that contains a certain type of compute hosts, and a certain type of compute host can only run certain types of VM families. For example, if you create an availability set, deploy the first VM of the Edsv5 family into it, and then try to deploy a second VM of the M family, this second deployment will fail. The reason is that Edsv5 family VMs don't run on the same host hardware as the virtual machines of the M family. The same problem can occur when you try to resize VMs and move a VM from the Edsv5 family to a VM type of the M family. If you resize to a VM family that can't be hosted on the same host hardware, you need to shut down all VMs in your availability set and resize them all to be able to run on the other host machine type. For SLAs of VMs that are deployed within an availability set, check the article [Virtual Machine SLAs](https://azure.microsoft.com/support/legal/sla/virtual-machines/).
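As a sketch of how an availability set is assigned at deployment time, the following Azure CLI commands create an availability set and place an SAP application server VM into it. The names, VM size, and fault domain count (which varies by region) are assumptions.

```azurecli
# Create an availability set; VMs deployed into it are spread over
# fault domains and update domains by the Azure fabric controller.
az vm availability-set create \
    --resource-group <resource-group> \
    --name sap-app-avset \
    --platform-fault-domain-count 2 \
    --platform-update-domain-count 5

# Deploy an SAP application server VM into the availability set.
az vm create \
    --resource-group <resource-group> \
    --name sapapp-vm1 \
    --image <os-image-urn> \
    --size Standard_E16ds_v5 \
    --availability-set sap-app-avset \
    --vnet-name <vnet-name> \
    --subnet <app-subnet-name>
```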
+> [!IMPORTANT]
+> The concepts of Azure availability zones and Azure availability sets are mutually exclusive. That means you can deploy a pair or multiple VMs either into a specific availability zone or into an Azure availability set, but not both can be assigned to a VM.
+> Combination of availability sets and availability zones is possible with proximity placement groups, see chapter [proximity placement groups](#proximity-placement-groups) for more details.
-#### Azure persisted storage types
-Azure offers a variety of persisted storage option that can be used for SAP workload and specific SAP stack components. For more details, read the document [Azure storage for SAP workloads](./planning-guide-storage.md).
+> [!TIP]
+> It isn't possible to switch between availability sets and availability zones for deployed VMs directly. The VM and its disks need to be recreated from the existing resources, with a zone constraint placed on them. This [open-source project](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/Move-VM-from-AvSet-to-AvZone/Move-Regional-SAP-HA-To-Zonal-SAP-HA-WhitePaper) with PowerShell functions can be used as a sample to change a VM from an availability set to an availability zone. A [blog post](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/how-to-migrate-a-highly-available-sap-system-in-azure-from/ba-p/3216917) shows the modification of a highly available SAP system from availability set to zones.
+### Proximity placement groups
-### <a name="61678387-8868-435d-9f8c-450b2424f5bd"></a>Microsoft Azure Networking
+Network latency between individual SAP VMs can have large implications on performance. Especially the network roundtrip time between SAP application servers and the DBMS can have a significant impact on business applications. Optimally, all compute elements running your SAP VMs are located as close to each other as possible. This isn't always possible in every combination, and it isn't possible without Azure knowing which VMs to keep together. In most situations and regions, the default placement fulfills network roundtrip latency requirements.
-Microsoft Azure provides a network infrastructure, which allows the mapping of all scenarios, which we want to realize with SAP software. The capabilities are:
+When default placement isn't sufficient for network roundtrip requirements within an SAP system, [proximity placement groups (PPGs)](proximity-placement-scenarios.md) exist to address this need. They can be used for SAP deployments, together with other location constraints of Azure region, availability zone, and availability set. With a proximity placement group, combining both an availability zone and an availability set, while setting different update and failure domains, is possible. A proximity placement group should only contain a single SAP system.
-* Access from the outside, directly to the VMs via Windows Terminal Services or ssh/VNC
-* Access to services and specific ports used by applications within the VMs
-* Internal Communication and Name Resolution between a group of VMs deployed as Azure VMs
-* Cross-premises Connectivity between a customer's on-premises network and the Azure network
-* Cross Azure Region or data center connectivity between Azure sites
+While a deployment in a PPG can result in the most latency-optimized placement, deploying with a PPG also brings drawbacks. Some VM families can't be combined in one PPG, or you run into problems when resizing between VM families, because the constraints on the VM families used, regions, and optionally zones don't allow such a co-location. See the [linked documentation](proximity-placement-scenarios.md) for further details on the topic, its advantages, and potential challenges.
-For more information, see the [Virtual Network documentation](../../virtual-network/index.yml).
+VMs without PPGs should be the default deployment method in most situations for SAP systems. This is especially true with a zonal (single availability zone) or cross-zonal (VMs spread between two zones) deployment of an SAP system, where no proximity placement group is needed. Use of proximity placement groups should be limited to SAP systems and Azure regions where it's required for performance reasons.
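A hedged Azure CLI sketch of using a PPG for a single SAP system could look like the following. The PPG name, SID, VM names, and sizes are placeholders.

```azurecli
# Create one proximity placement group per SAP system.
az ppg create \
    --resource-group <resource-group> \
    --name sap-<SID>-ppg \
    --location westeurope

# Reference the PPG when deploying the DBMS VM of that SAP system.
# Application server VMs of the same system would reference the same PPG.
az vm create \
    --resource-group <resource-group> \
    --name sapdb-vm1 \
    --image <os-image-urn> \
    --size Standard_M64s \
    --ppg sap-<SID>-ppg \
    --zone 1 \
    --vnet-name <vnet-name> \
    --subnet <db-subnet-name>
```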
-There are many different possibilities to configure name and IP resolution in Azure. There is also an Azure DNS service, which can be used instead of setting up your own
-DNS server. More information can be found in [this article][virtual-networks-manage-dns-in-vnet] and on [this page](https://azure.microsoft.com/services/dns/).
+## Azure networking
-For cross-premises or hybrid scenarios, we are relying on the fact that the on-premises AD/OpenLDAP/DNS has been extended via VPN or private connection to Azure. For certain scenarios as documented here, it might be necessary to have an AD/OpenLDAP replica installed in Azure.
+Azure provides a network infrastructure that allows the mapping of all scenarios that you might want to realize with SAP software. The capabilities are:
-Because networking and name resolution is a vital part of the database deployment for an SAP system, this concept is discussed in more detail in the [DBMS Deployment Guide][dbms-guide].
+* Access to Azure services and specific ports used by applications within VMs
+* Access to VMs for management and administration, directly to the VMs via ssh or Windows Remote Desktop (RDP)
+* Internal communication and name resolution between VMs and by Azure services
+* On-premises connectivity between a customer's on-premises network and the Azure networks
+* Communication between services deployed in different Azure regions
-##### Azure Virtual Networks
+For more detailed information on networking, see the [virtual network documentation](/azure/virtual-network/).
-By building up an Azure Virtual Network, you can define the address range of the private IP addresses allocated by Azure DHCP functionality. In cross-premises scenarios, the IP address range defined is still allocated using DHCP by Azure. However, Domain Name resolution is done on-premises (assuming that the VMs are a part of an on-premises domain) and hence can resolve addresses beyond different Azure Cloud Services.
+Networking is typically the first technical activity when planning and deploying in Azure. It often follows a central enterprise architecture, with SAP as one part of the overall networking requirements. In the planning stage, you should complete the networking architecture in as much detail as possible. Changes at a later point might require a complete move or deletion of deployed resources, for example when subnet network addresses change.
-Every Virtual Machine in Azure needs to be connected to a Virtual Network.
+### Azure virtual networks
-More details can be found in [this article][resource-groups-networking] and on [this page](../../virtual-network/index.yml).
+A virtual network is a fundamental building block for your private network in Azure. You can define the address range of the network and separate it into network subnets. Network subnets can be used by SAP VMs, or they can be dedicated subnets, as required by Azure for some services like a network or application gateway.
+The definition of the virtual network(s), subnets, and private network address ranges is part of the design required during planning. The network design should address several requirements for SAP deployment:
-> [!NOTE]
-> By default, once a VM is deployed you cannot change the Virtual Network configuration. The TCP/IP settings must be left to the Azure DHCP server. Default behavior is Dynamic IP assignment.
->
->
+* No [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/), such as firewalls, are placed in the communication path between SAP application and DBMS layer of SAP products using the SAP kernel, such as S/4HANA or SAP NetWeaver.
+* Network routing restrictions are enforced by [network security groups (NSGs)](/azure/virtual-network/network-security-groups-overview) on the subnet level. Group IPs of VMs into [application security groups (ASGs)](/azure/virtual-network/application-security-groups) which are maintained in the NSG rules and provide per-role, tier and SID grouping of permissions.
+* SAP application and database VMs run in the same virtual network, within the same or different subnets of a single virtual network. Use different subnets for application and database VMs, or alternatively dedicated application and DBMS ASGs, to group rules applicable to each workload type within the same subnet.
+* Accelerated networking is enabled on all network cards of all VMs for SAP workload, where technically possible.
+* Dependency on central services - name resolution (DNS), identity management (AD domain/Azure AD) and administrative access.
+* Access to and by public endpoints, as required. For example, Azure management for Pacemaker operations in high-availability setups, or Azure services such as Backup.
+* Use of multiple NICs, only if required for designated subnets with their own routes and NSG rules.
-The MAC address of the virtual network card may change, for example after a resize. In this case, the Windows or Linux guest OS picks up the new network card and automatically uses DHCP to assign the IP and DNS addresses.
+A virtual network acts as a network boundary. As such, resources like network interface cards (NICs) for VMs, once deployed, can't change their virtual network assignment. Changes to a virtual network or [subnet address range](/azure/virtual-network/virtual-network-manage-subnet#change-subnet-settings) might require you to move all deployed resources to a different subnet to execute such a change.
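To illustrate the requirements listed above, here's a simplified Azure CLI sketch that creates a virtual network with separate application and database subnets, an ASG for the DBMS VMs, and an NSG rule that only allows the application subnet to reach members of that ASG on a DBMS port. All names, address ranges, and the example port are assumptions.

```azurecli
# Virtual network with separate subnets for the SAP application and DBMS layers.
az network vnet create \
    --resource-group <resource-group> \
    --name sap-vnet \
    --address-prefix 10.10.0.0/16 \
    --subnet-name sap-app-subnet \
    --subnet-prefix 10.10.1.0/24

az network vnet subnet create \
    --resource-group <resource-group> \
    --vnet-name sap-vnet \
    --name sap-db-subnet \
    --address-prefix 10.10.2.0/24

# ASG to group the DBMS VMs, and an NSG rule that allows the application subnet
# to reach members of that ASG on an example DBMS port (1433 here).
az network asg create --resource-group <resource-group> --name sap-db-asg
az network nsg create --resource-group <resource-group> --name sap-db-nsg
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name sap-db-nsg \
    --name allow-app-to-db \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes 10.10.1.0/24 \
    --destination-asgs sap-db-asg \
    --destination-port-ranges 1433

# Associate the NSG with the database subnet.
az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name sap-vnet \
    --name sap-db-subnet \
    --network-security-group sap-db-nsg
```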
-##### Static IP Assignment
-It is possible to assign fixed or reserved IP addresses to VMs within an Azure Virtual Network. Running the VMs in an Azure Virtual Network opens a great possibility to leverage this functionality if needed or required for some scenarios. The IP assignment remains valid throughout the existence of the VM, independent of whether the VM is running or shutdown. As a result, you need to take the overall number of VMs (running and stopped VMs) into account when defining the range of IP addresses for the Virtual Network. The IP address remains assigned either until the VM and its Network Interface is deleted or until the IP address gets de-assigned again. For more information, read [this article][virtual-networks-static-private-ip-arm-pportal].
+Example architectures for SAP can be found in these articles:
+* [SAP S/4HANA on Linux in Azure](/azure/architecture/guide/sap/sap-s4hana)
+* [SAP NetWeaver on Windows in Azure](/azure/architecture/guide/sap/sap-netweaver)
+* [In- and Outbound internet communication for SAP on Azure](/azure/architecture/guide/sap/sap-internet-inbound-outbound)
-> [!NOTE]
-> You should assign static IP addresses through Azure means to individual vNICs. You should not assign static IP addresses within the guest OS to a vNIC. Some Azure services like Azure Backup Service rely on the fact that at least the primary vNIC is set to DHCP and not to static IP addresses. See also the document [Troubleshoot Azure virtual machine backup](../../backup/backup-azure-vms-troubleshoot.md#networking).
+> [!WARNING]
+> Configuring [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of SAP products using the SAP kernel, such as S/4HANA or SAP NetWeaver, isn't supported. This restriction is for functionality and performance reasons. The communication path between the SAP application layer and the DBMS layer must be a direct one. The restriction doesn't include [application security group (ASG) and NSG rules](../../virtual-network/network-security-groups-overview.md) if those ASG and NSG rules allow a direct communication path.
>--
-##### Secondary IP addresses for SAP hostname virtualization
-Each Azure Virtual Machine's network interface card can have multiple IP addresses assigned to it. This secondary IP can be used for SAP virtual hostnames, which are mapped to a DNS record. Azure load balancer's floating IP isn't supported on secondary IP configurations. For setups such as Pacemaker clusters, the IP of the load balancer enables the SAP virtual hostname(s). See also SAP's note [#962955](https://launchpad.support.sap.com/#/notes/962955) on general guidance using virtual host names.
--
-##### Multiple NICs per VM
-
-You can define multiple virtual network interface cards (vNIC) for an Azure Virtual Machine. With the ability to have multiple vNICs you can start to set up network traffic separation where, for example, client traffic is routed through one vNIC and backend traffic is routed through a second vNIC. Dependent on the type of VM there are different limitations for the number of vNICs a VM can have assigned. Exact details, functionality, and restrictions can be found in these articles:
-
-* [Create a Windows VM with multiple NICs][virtual-networks-multiple-nics-windows]
-* [Create a Linux VM with multiple NICs][virtual-networks-multiple-nics-linux]
-* [Deploy multi NIC VMs using a template][virtual-network-deploy-multinic-arm-template]
-* [Deploy multi NIC VMs using PowerShell][virtual-network-deploy-multinic-arm-ps]
-* [Deploy multi NIC VMs using the Azure CLI][virtual-network-deploy-multinic-arm-cli]
-
-#### Site-to-Site Connectivity
-
-Cross-premises means Azure VMs and on-premises systems are linked with a transparent and permanent VPN connection. It's expected to become the most common SAP deployment pattern in Azure. The assumption is that operational procedures and processes with SAP instances in Azure should work transparently. This means you should be able to print out of these systems as well as use the SAP Transport Management System (TMS) to transport changes from a development system in Azure to a test system that's deployed on-premises. More documentation around site-to-site can be found in [this article][vpn-gateway-create-site-to-site-rm-powershell]
-
-##### VPN Tunnel Device
-
-In order to create a site-to-site connection (on-premises data center to Azure data center), you need to either obtain and configure a VPN device, or use Routing and Remote Access Service (RRAS) which was introduced as a software component with Windows Server 2012.
-
-* [Create a virtual network with a site-to-site VPN connection using PowerShell][vpn-gateway-create-site-to-site-rm-powershell]
-* [About VPN devices for Site-to-Site VPN Gateway connections][vpn-gateway-about-vpn-devices]
-* [VPN Gateway FAQ][vpn-gateway-vpn-faq]
-
-![Site-to-site connection between on-premises and Azure][planning-guide-figure-600]
-
-The figure above shows that two Azure subscriptions have IP address subranges reserved for usage in virtual networks in Azure. The connectivity from the on-premises network to Azure is established via VPN.
-
-#### Point-to-Site VPN
-
-Point-to-site VPN requires every client machine to connect with its own VPN into Azure. For the SAP scenarios, we are looking at, point-to-site connectivity is not practical. Therefore, no further references are given to point-to-site VPN connectivity.
-
-More information can be found here
-* [Configure a Point-to-Site connection to a VNet using the Azure portal](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)
-* [Configure a Point-to-Site connection to a VNet using PowerShell](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
-
-#### Multi-Site VPN
-
-Azure also nowadays offers the possibility to create Multi-Site VPN connectivity for one Azure subscription. Previously a single subscription was limited to one site-to-site VPN connection. This limitation went away with Multi-Site VPN connections for a single subscription. This makes it possible to leverage more than one Azure Region for a specific subscription through cross-premises configurations.
-
-For more documentation, see [this article][vpn-gateway-create-site-to-site-rm-powershell]
-
-#### VNet to VNet Connection
-
-Using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However often you have the requirement that the software components in the different regions should communicate with each other. Ideally this communication should not be routed from one Azure Region to on-premises and from there to the other Azure Region. To shortcut, Azure offers the possibility to configure a connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another region. This functionality is called VNet-to-VNet connection. More details on this functionality can be found here:
-[Configure a VNet-to-VNet VPN gateway connection by using the Azure portal](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
-
-#### Private Connection to Azure ExpressRoute
-
-Microsoft Azure ExpressRoute allows the creation of private connections between Azure data centers and either the customer's on-premises infrastructure or in a co-location environment. ExpressRoute is offered by various MPLS (packet switched) VPN providers or other Network Service Providers. ExpressRoute connections do not go over the public Internet. ExpressRoute connections offer higher security, more reliability through multiple parallel circuits, faster speeds, and lower latencies than typical connections over the Internet.
-
-Find more details on Azure ExpressRoute and offerings here:
-
-* [ExpressRoute documentation](../../expressroute/index.yml)
-* [Azure ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/)
-* [ExpressRoute FAQ](../../expressroute/expressroute-faqs.md)
-
-Express Route enables multiple Azure subscriptions through one ExpressRoute circuit as documented here
-
-* [Tutorial: Connect a virtual network to an ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-arm.md)
-* [Quickstart: Create and modify an ExpressRoute circuit using Azure PowerShell](../../expressroute/expressroute-howto-circuit-arm.md)
-
-#### Forced tunneling in case of cross-premises
-For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to make sure that the internet proxy settings are deployed for all the users in those VMs as well. By default, software running in those VMs, or users using a browser to access the internet, won't go through the company proxy but will connect straight through Azure to the internet. But even the proxy setting isn't a 100% solution to direct the traffic through the company proxy, since it's the responsibility of software and services to check for the proxy. If software running in the VM isn't doing that, or an administrator manipulates the settings, traffic to the internet can be detoured again directly through Azure to the internet.
-
-In order to avoid such a direct internet connectivity, you can configure Forced Tunneling with site-to-site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling feature is published here:
-[Configure forced tunneling using the classic deployment model](../../vpn-gateway/vpn-gateway-about-forced-tunneling.md)
-
-Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the ExpressRoute BGP peering sessions.
-
-#### Summary of Azure networking
-
-This chapter contained many important points about Azure Networking. Here is a summary of the main points:
-
-* Azure Virtual Networks allow you to put a network structure into your Azure deployment. VNets can be isolated against each other or with the help of Network Security Groups traffic between VNets can be controlled.
-* Azure Virtual Networks can be leveraged to assign IP address ranges to VMs or assign fixed IP addresses to VMs
-* To set up a Site-To-Site or Point-To-Site connection you need to create an Azure Virtual Network first
-* Once a virtual machine has been deployed, it is no longer possible to change the Virtual Network assigned to the VM
-
-### Quotas in Azure virtual machine services
-We need to be clear about the fact that the storage and network infrastructure is shared between VMs running a variety of services in the Azure infrastructure. As in the customer's own data centers, over-provisioning of some of the infrastructure resources does take place to a degree. The Microsoft Azure Platform uses disk, CPU, network, and other quotas to limit the resource consumption and to preserve consistent and deterministic performance. The different VM types (A5, A6, etc.) have different quotas for the number of disks, CPU, RAM, and Network.
-
-> [!NOTE]
-> CPU and memory resources of the VM types supported by SAP are pre-allocated on the host nodes. This means that once the VM is deployed, the resources on the host are available as defined by the VM type.
+> Other scenarios where network virtual appliances aren't supported are:
>
+> * Communication paths between Azure VMs that represent Linux Pacemaker cluster nodes and SBD devices as described in [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP Applications](high-availability-guide-suse.md).
+> * Communication paths between Azure VMs and Windows Server Scale-Out File Server (SOFS) set up as described in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure](sap-high-availability-guide-wsfc-file-share.md).
>
+> Network virtual appliances in communication paths can easily double the network latency between two communication partners. They also can restrict throughput in critical paths between the SAP application layer and the DBMS layer. In some customer scenarios, network virtual appliances can cause Pacemaker Linux clusters to fail.
-When planning and sizing SAP on Azure solutions, the quotas for each virtual machine size must be considered. The VM quotas are described [here (Linux)][virtual-machines-sizes-linux] and [here (Windows)][virtual-machines-sizes-windows].
-
-The quotas described represent the theoretical maximum values. The limit of IOPS per disk may be achieved with small I/Os (8 KB) but possibly may not be achieved with large I/Os (1 MB). The IOPS limit is enforced on the granularity of single disk.
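To check the per-size limits (vCPUs, memory, maximum data disks, uncached disk IOPS and throughput) in a target region, you could, for example, query the VM SKU capabilities with the Azure CLI. The region and VM size below are placeholders.

```azurecli
# List the capabilities of a VM size in the target region, including
# MaxDataDiskCount and uncached disk IOPS/throughput limits.
az vm list-skus \
    --location westeurope \
    --size Standard_M64s \
    --query "[].{name:name, capabilities:capabilities}" \
    --output json
```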
-
-As a rough decision tree to decide whether an SAP system fits into Azure Virtual Machine Services and its capabilities or whether an existing system needs to be configured differently in order to deploy the system on Azure, the decision tree below can be used:
-
-![Decision tree to decide ability to deploy SAP on Azure][planning-guide-figure-700]
-
-1. The most important information to start with is the SAPS requirement for a given SAP system. The SAPS requirements need to be separated out into the DBMS part and the SAP application part, even if the SAP system is already deployed on-premises in a 2-tier configuration. For existing systems, the SAPS related to the hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be found on the [About SAP Standard Application Benchmarks](https://sap.com/about/benchmark.html) page. For newly deployed SAP systems, you should have gone through a sizing exercise, which should determine the SAPS requirements of the system.
-1. For existing systems, the I/O volume and I/O operations per second on the DBMS server should be measured. For newly planned systems, the sizing exercise for the new system also should give rough ideas of the I/O requirements on the DBMS side. If unsure, you eventually need to conduct a Proof of Concept.
-1. Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure can provide. The information on SAPS of the different Azure VM types is documented in SAP Note [1928533]. The focus should be on the DBMS VM first since the database layer is the layer in an SAP NetWeaver system that does not scale out in the majority of deployments. In contrast, the SAP application layer can be scaled out. If none of the SAP supported Azure VM types can deliver the required SAPS, the workload of the planned SAP system can't be run on Azure. You either need to deploy the system on-premises or you need to change the workload volume for the system.
1. As documented [here (Linux)][virtual-machines-sizes-linux] and [here (Windows)][virtual-machines-sizes-windows], Azure enforces an IOPS quota per disk, independent of whether you use Standard Storage or Premium Storage. Dependent on the VM type, the number of data disks that can be mounted varies. As a result, you can calculate a maximum IOPS number that can be achieved with each of the different VM types. Dependent on the database file layout, you can stripe disks to become one volume in the guest OS. However, if the current IOPS volume of a deployed SAP system exceeds the calculated limits of the largest VM type of Azure, and if there's no chance to compensate with more memory, the workload of the SAP system can be impacted severely. In such cases, you can hit a point where you shouldn't deploy the system on Azure.
-1. Especially in SAP systems, which are deployed on-premises in 2-Tier configurations, the chances are that the system might need to be configured on Azure in a 3-Tier configuration. In this step, you need to check whether there is a component in the SAP application layer, which can't be scaled out and which would not fit into the CPU and memory resources the different Azure VM types offer. If there indeed is such a component, the SAP system and its workload can't be deployed into Azure. But if you can scale out the SAP application components into multiple Azure VMs, the system can be deployed into Azure.
-
-If the DBMS and SAP application layer components can be run in Azure VMs, the configuration needs to be defined with regard to:
-
-* Number of Azure VMs
-* VM types for the individual components
-* Number of VHDs in DBMS VM to provide enough IOPS
-
-## Managing Azure assets
-
-### Azure portal
-
-The Azure portal is one of three interfaces to manage Azure VM deployments. The basic management tasks, like deploying VMs from images, can be done through the Azure portal. In addition, the creation of Storage Accounts, Virtual Networks, and other Azure components are also tasks the Azure portal can handle well. However, functionality like uploading VHDs from on-premises to Azure or copying a VHD within Azure are tasks, which require either third-party tools or administration through PowerShell or CLI.
-
-![Microsoft Azure portal - Virtual Machine overview][planning-guide-figure-800]
--
-Administration and configuration tasks for the Virtual Machine instance are possible from within the Azure portal.
-
-Besides restarting and shutting down a Virtual Machine you can also attach, detach, and create data disks for the Virtual Machine instance, to capture the instance for image preparation, and configure the size of the Virtual Machine instance.
-
-The Azure portal provides basic functionality to deploy and configure VMs and many other Azure services. However not all available functionality is covered by the Azure portal. In the Azure portal, it's not possible to perform tasks like:
-
-* Uploading VHDs to Azure
-* Copying VMs
--
-### Management via Microsoft Azure PowerShell cmdlets
-
-Windows PowerShell is a powerful and extensible framework that has been widely adopted by customers deploying larger numbers of systems in Azure. After the installation of PowerShell cmdlets on a desktop, laptop or dedicated management station, the PowerShell cmdlets can be run remotely.
-
-The process to enable a local desktop/laptop for the usage of Azure PowerShell cmdlets and how to configure those for the usage with the Azure subscription(s) is described in [this article][powershell-install-configure].
-
-More detailed steps on how to install, update, and configure the Azure PowerShell cmdlets can also be found in [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
-Customer experience so far has been that PowerShell is certainly the more powerful tool to deploy VMs and to create custom steps in the deployment of VMs. All of the customers running SAP instances in Azure are using PowerShell cmdlets to supplement management tasks they do in the Azure portal or are even using PowerShell cmdlets exclusively to manage their deployments in Azure. Since the Azure-specific cmdlets share the same naming convention as the more than 2000 Windows-related cmdlets, it is an easy task for Windows administrators to leverage those cmdlets.
-
-See example here:
-<https://blogs.technet.com/b/keithmayer/archive/2015/07/07/18-steps-for-end-to-end-iaas-provisioning-in-the-cloud-with-azure-resource-manager-arm-powershell-and-desired-state-configuration-dsc.aspx>
--
-Deployment of the Azure Extension for SAP (see chapter [Azure Extension for SAP][planning-guide-9.1] in this document) is only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell or CLI when deploying or administering an SAP NetWeaver system in Azure.
-
-As Azure provides more functionality, new PowerShell cmdlets are going to be added that requires an update of the cmdlets. Therefore, you should check the [Azure Downloads site](https://azure.microsoft.com/downloads/) at least monthly for a new version of the cmdlets. The new version is installed on top of the older version.
-
-For a general list of Azure-related PowerShell commands check here: [Azure PowerShell documentation][azure-ps].
-
-### Management via Microsoft Azure CLI commands
-
-For customers who use Linux and want to manage Azure resources PowerShell might not be an option. Microsoft offers Azure CLI as an alternative.
-The Azure CLI provides a set of open source, cross-platform commands for working with the Azure Platform. The Azure CLI provides much of
-the same functionality found in the Azure portal.
-
-For information about installation, configuration and how to use CLI commands to accomplish Azure tasks see
-
-* [Install the Azure classic CLI][xplat-cli]
-* [Install the Azure CLI 2.0][azure-cli-install]
-* [Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI](../../virtual-machines/linux/create-ssh-secured-vm-from-template.md)
-* [Use the Azure classic CLI for Mac, Linux, and Windows with Azure Resource Manager][xplat-cli-azure-resource-manager]
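As a minimal sketch of day-to-day use (assuming a current Azure CLI installation, as covered by the articles above), sign-in, subscription selection, and a VM listing could look like this:

```azurecli
# Sign in, select the subscription to work with, and list the VMs of a
# resource group including their power state.
az login
az account set --subscription <subscription-id>
az vm list --resource-group <resource-group> --show-details --output table
```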
--
-## First steps planning a deployment
-The first step in deployment planning is NOT to check for VMs available to run SAP. The first step can be time consuming but, most important, is to work with compliance and security teams in your company on what the boundary conditions are for deploying which type of SAP workload or business process into public cloud. If your company deployed other software into Azure before, the process can be easy. If your company is more at the beginning of the journey, there might be larger discussions necessary in order to figure out the boundary conditions and security conditions that allow certain SAP data and SAP business processes to be hosted in public cloud.
-
-As useful help, you can point to [Microsoft compliance offerings](/microsoft-365/compliance/offering-home) for a list of compliance offers Microsoft can provide.
-
-Other areas of concerns like data encryption for data at rest or other encryption in Azure service is documented in [Azure encryption overview](../../security/fundamentals/encryption-overview.md).
-
-Don't underestimate this phase of the project in your planning. Only when you have agreement and rules around this topic, you need to go to the next step, which is the planning of the network architecture that you deploy in Azure.
--
-## Different ways to deploy VMs for SAP in Azure
-
-In this chapter, you learn the different ways to deploy a VM in Azure. Additional preparation procedures, as well as handling of VHDs and VMs in Azure are covered in this chapter.
-
-### Deployment of VMs for SAP
-
-Microsoft Azure offers multiple ways to deploy VMs and associated disks. Thus it is important to understand the differences since preparations of the VMs might differ depending on the method of deployment. In general, we take a look at the following scenarios:
-
-#### <a name="4d175f1b-7353-4137-9d2f-817683c26e53"></a>Moving a VM from on-premises to Azure with a non-generalized disk
-
-You plan to move a specific SAP system from on-premises to Azure. This can be done by uploading the VHD, which contains the OS, the SAP Binaries, and DBMS binaries plus the VHDs with the data and log files of the DBMS to Azure. In contrast to [scenario #2 below][planning-guide-5.1.2], you keep the hostname, SAP SID, and SAP user accounts in the Azure VM as they were configured in the on-premises environment. Therefore, generalizing the image is not necessary. See chapters [Preparation for moving a VM from on-premises to Azure with a non-generalized disk][planning-guide-5.2.1] of this document for on-premises preparation steps and upload of non-generalized VMs or VHDs to Azure. Read chapter [Scenario 3: Moving a VM from on-premises using a non-generalized Azure VHD with SAP][deployment-guide-3.4] in the [Deployment Guide][deployment-guide] for detailed steps of deploying such an image in Azure.
-
-Another option which we will not discuss in detail in this guide is using Azure Site Recovery to replicate SAP NetWeaver Application Servers and SAP NetWeaver Central Services to Azure. We do not recommend to use Azure Site Recovery for the database layer and rather use database specific replication mechanisms, like HANA System Replication. For more information, see chapter [Protect SAP](../../site-recovery/site-recovery-workload.md#protect-sap) of the [About disaster recovery for on-premises apps](../../site-recovery/site-recovery-workload.md) guide.
-
-#### <a name="e18f7839-c0e2-4385-b1e6-4538453a285c"></a>Deploying a VM with a customer-specific image
-
-Due to specific patch requirements of your OS or DBMS version, the provided images in the Azure Marketplace might not fit your needs. Therefore, you might need to create a VM using your own private OS/DBMS VM image, which can be deployed several times afterwards. To prepare such a private image for duplication, the following items have to be considered:
--
-> ![Windows logo.][Logo_Windows] Windows
->
-> For more details, read [Upload a generalized Windows VHD and use it to create new VMs in Azure](../../virtual-machines/windows/upload-generalized-managed.md)
-> The Windows settings (like Windows SID and hostname) must be abstracted/generalized on the on-premises VM via the sysprep command.
->
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> Follow the steps described in these articles for [SUSE][virtual-machines-linux-create-upload-vhd-suse], [Red Hat][virtual-machines-linux-redhat-create-upload-vhd], or [Oracle Linux][virtual-machines-linux-create-upload-vhd-oracle], to prepare a VHD to be uploaded to Azure.
->
+> [!IMPORTANT]
+> Another design that is *not* supported is the segregation of the SAP application layer and the DBMS layer into different Azure virtual networks that aren't [peered](../../virtual-network/virtual-network-peering-overview.md) with each other. We recommend that you segregate the SAP application layer and DBMS layer by using subnets within the same Azure virtual network instead of using different Azure virtual networks.
>
+> If you decide not to follow the recommendation and instead segregate the two layers into different virtual networks, the two virtual networks *must be* [peered](../../virtual-network/virtual-network-peering-overview.md). Be aware that network traffic between two [peered](../../virtual-network/virtual-network-peering-overview.md) Azure virtual networks is subject to transfer costs. Huge data volume that consists of many terabytes is exchanged between the SAP application layer and the DBMS layer each day. You can accumulate substantial costs if the SAP application layer and DBMS layer are segregated between two peered Azure virtual networks.
-
-If you have already installed SAP content in your on-premises VM (especially for 2-Tier systems), you can adapt the SAP system settings after the deployment of the Azure VM through the instance rename procedure supported by the SAP Software Provisioning Manager (SAP Note [1619720]). See chapters [Preparation for deploying a VM with a customer-specific image for SAP][planning-guide-5.2.2] and [Uploading a VHD from on-premises to Azure][planning-guide-5.3.2] of this document for on-premises preparation steps and upload of a generalized VM to Azure. Read chapter [Scenario 2: Deploying a VM with a custom image for SAP][deployment-guide-3.3] in the [Deployment Guide][deployment-guide] for detailed steps of deploying such an image in Azure.
-
-#### Deploying a VM out of the Azure Marketplace
+#### Name resolution and domain services
-You would like to use a Microsoft or third-party provided VM image from the Azure Marketplace to deploy your VM. After you deployed your VM in Azure, you follow the same guidelines and tools to install the SAP software and/or DBMS inside your VM as you would do in an on-premises environment. For more detailed deployment description, see chapter [Scenario 1: Deploying a VM out of the Azure Marketplace for SAP][deployment-guide-3.2] in the [Deployment Guide][deployment-guide].
+Hostname to IP name resolution through DNS is often a crucial element for SAP networking. There are many different possibilities to configure name and IP resolution in Azure. Often an enterprise central DNS solution exists and is part of the overall architecture. Several options for name resolution in Azure natively, instead of setting up your own DNS server(s), are described in [name resolution for resources in Azure virtual networks](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances).
-### <a name="6ffb9f41-a292-40bf-9e70-8204448559e7"></a>Preparing VMs with SAP for Azure
+Similarly to DNS services, there might be a requirement for Windows Active Directory to be accessible by the SAP VMs or services.
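If you don't use an existing central DNS solution, a minimal sketch of Azure-native private name resolution might look like this; the zone name and link name are hypothetical.

```azurecli
# Create an Azure private DNS zone and link it to the SAP virtual network, so
# VM records are registered automatically and resolvable inside the network.
az network private-dns zone create \
    --resource-group <resource-group> \
    --name sap.contoso.internal

az network private-dns link vnet create \
    --resource-group <resource-group> \
    --zone-name sap.contoso.internal \
    --name sap-vnet-link \
    --virtual-network sap-vnet \
    --registration-enabled true
```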
-Before uploading VMs into Azure, you need to make sure the VMs and VHDs fulfill certain requirements. There are small differences depending on the deployment method that is used.
+#### IP address assignment
-#### <a name="1b287330-944b-495d-9ea7-94b83aff73ef"></a>Preparation for moving a VM from on-premises to Azure with a non-generalized disk
+An IP address of a NIC remains claimed and used throughout the existence of the VM's NIC, independent of whether the VM is running or shut down. This applies to [both dynamic and static IP assignment](/azure/virtual-network/ip-services/private-ip-addresses). A dynamic IP assignment is released only if the NIC is deleted, the subnet changes, or the allocation method is changed to static.
-A common deployment method is to move an existing VM, which runs an SAP system from on-premises to Azure. That VM and the SAP system in the VM just should run in Azure using the same hostname and likely the same SAP SID. In this case, the guest OS of VM should not be generalized for multiple deployments. If the on-premises network got extended into Azure, then even the same domain accounts can be used within the VM as those were used before on-premises.
+It's possible to assign fixed IP addresses to VMs within an Azure virtual network. This is often done for SAP systems that depend on external DNS servers and static entries. The IP address remains assigned either until the VM and its network interface are deleted or until the IP address gets deassigned again. For more information, read [this article](/azure/virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal). As a result, you need to take the overall number of VMs (running and stopped) into account when defining the range of IP addresses for the virtual network.
-Requirements when preparing your own Azure VM Disk are:
-
-* Originally the VHD containing the operating system could have a maximum size of 127 GB only. This limitation got eliminated at the end of March 2015. Now the VHD containing the operating system can be up to 1 TB in size as any other Azure Storage hosted VHD as well.
-* It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDx format are not yet supported on Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell commandlets or CLI
-* VHDs, which are mounted to the VM and should be mounted again in Azure to the VM need to be in a fixed VHD format as well. Read [this article](../../virtual-machines/managed-disks-overview.md) for size limits of data disks. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell commandlets or CLI
-* Add another local account with administrator privileges, which can be used by Microsoft support or which can be assigned as context for services and applications to run in until the VM is deployed and more appropriate users can be used.
-* Add other local accounts as those might be needed for the specific deployment scenario.
--
-> ![Windows logo.][Logo_Windows] Windows
->
-> In this scenario no generalization (sysprep) of the VM is required to upload and deploy the VM on Azure.
-> Make sure that drive D:\ is not used.
-> Set disk automount for attached disks as described in chapter [Setting automount for attached disks][planning-guide-5.5.3] in this document.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> In this scenario no generalization (waagent -deprovision) of the VM is required to upload and deploy the VM on Azure.
-> Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS disk, make sure that the bootloader entry also reflects the uuid-based mount.
->
->
+> [!NOTE]
+> You should decide between static and dynamic IP address allocation for Azure VMs and their NIC(s). The guest OS of the VM will obtain the IP assigned to the NIC during boot. You shouldn't assign static IP addresses within the guest OS to a NIC. Some Azure services like Azure Backup Service rely on the fact that at least the primary NIC is set to DHCP inside the OS and not to static IP addresses. See also the document [Troubleshoot Azure virtual machine backup](../../backup/backup-azure-vms-troubleshoot.md#networking).
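As an example of assigning a static IP through Azure means rather than inside the guest OS, the following Azure CLI sketch updates a NIC's primary IP configuration to a static private IP. The NIC name, IP configuration name, and address are placeholders.

```azurecli
# Setting an explicit private IP address switches the allocation method of this
# IP configuration from dynamic to static.
az network nic ip-config update \
    --resource-group <resource-group> \
    --nic-name sapdb-vm1-nic \
    --name ipconfig1 \
    --private-ip-address 10.10.2.10
```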
-
-#### <a name="57f32b1c-0cba-4e57-ab6e-c39fe22b6ec3"></a>Preparation for deploying a VM with a customer-specific image for SAP
+#### Secondary IP addresses for SAP hostname virtualization
-VHD files that contain a generalized OS are stored in containers on Azure Storage Accounts or as Managed Disk images. You can deploy a new VM from such an image by referencing the VHD or Managed Disk image as a source in your deployment template files as described in chapter [Scenario 2: Deploying a VM with a custom image for SAP][deployment-guide-3.3] of the [Deployment Guide][deployment-guide].
+Each Azure Virtual Machine's network interface card can have multiple IP addresses assigned to it. Such a secondary IP can be used for SAP virtual hostname(s), which is mapped to a DNS record. The secondary IP also must be configured statically within the OS, as secondary IPs are often not assigned through DHCP. Each secondary IP must be from the same subnet the NIC is bound to. Secondary IPs can be added to and removed from Azure NICs without stopping or deallocating the VM, unlike the primary IP of a NIC, where deallocating the VM is required.
-Requirements when preparing your own Azure VM Image are:
+> [!NOTE]
+> Azure load balancer's floating IP is [not supported](../../load-balancer/load-balancer-multivip-overview.md#limitations) on secondary IP configurations. Azure load balancer is used by SAP high-availability architectures with Pacemaker clusters. In such cases, the load balancer enables the SAP virtual hostname(s). See also SAP's note [#962955](https://launchpad.support.sap.com/#/notes/962955) for general guidance on using virtual hostnames.
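For illustration, a secondary IP configuration for an SAP virtual hostname could be added to an existing NIC as sketched below. The NIC and configuration names and the address are assumptions, and the same address still has to be configured statically inside the OS.

```azurecli
# Add a secondary IP configuration to an existing NIC; the VM doesn't need to be
# stopped or deallocated for this operation.
az network nic ip-config create \
    --resource-group <resource-group> \
    --nic-name sapapp-vm1-nic \
    --name ipconfig-sapvhost \
    --private-ip-address 10.10.1.50
```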
-* Originally the VHD containing the operating system could have a maximum size of 127 GB only. This limitation got eliminated at the end of March 2015. Now the VHD containing the operating system can be up to 1 TB in size as any other Azure Storage hosted VHD as well.
-* It needs to be in the fixed VHD format. Dynamic VHDs or VHDs in VHDx format are not yet supported on Azure. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell commandlets or CLI
-* VHDs, which are mounted to the VM and should be mounted again in Azure to the VM need to be in a fixed VHD format as well. Read [this article](../../virtual-machines/managed-disks-overview.md) for size limits of data disks. Dynamic VHDs will be converted to static VHDs when you upload the VHD with PowerShell commandlets or CLI
-* Add other local accounts as those might be needed for the specific deployment scenario.
-* If the image contains an installation of SAP NetWeaver and renaming of the host name from the original name at the point of the Azure deployment is likely, it is recommended to copy the latest versions of the SAP Software Provisioning Manager DVD into the template. This will enable you to easily use the SAP provided rename functionality to adapt the changed hostname and/or change the SID of the SAP system within the deployed VM image as soon as a new copy is started.
+#### Azure load balancer with VMs running SAP
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> Make sure that drive D:\ is not used
-> Set disk automount for attached disks as described in chapter [Setting automount for attached disks][planning-guide-5.5.3] in this document.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> Make sure that /mnt/resource is not used and that ALL disks are mounted via uuid. For the OS disk, make sure the bootloader entry also reflects the uuid-based mount.
->
->
+Typically used in high-availability architectures to provide floating IPs between active and passive cluster nodes, load balancers can also be used for single VMs to hold a virtual IP address for SAP virtual hostname(s). Using a load balancer for single VMs this way is an alternative to secondary IPs on a NIC or to using multiple NICs in the same subnet.
-
-* SAP GUI (for administrative and setup purposes) can be pre-installed in such a template.
-* Other software necessary to run the VMs successfully in cross-premises scenarios can be installed as long as this software can work with the rename of the VM.
+Standard load balancer modifies the [default outbound access](/azure/virtual-network/ip-services/default-outbound-access) path due to its secure-by-default architecture. VMs behind a standard load balancer might not be able to reach the same public endpoints anymore - for example, OS update repositories or public endpoints of Azure services. Follow the guidance in the article [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer](high-availability-guide-standard-load-balancer-outbound-connections.md) for available options to provide outbound connectivity.
-If the VM is prepared sufficiently to be generic and eventually independent of accounts/users not available in the targeted Azure deployment scenario, the last preparation step of generalizing such an image is conducted.
+> [!TIP]
+> Basic load balancer should NOT be used with any SAP architecture in Azure, and it's announced to be [retired](/azure/load-balancer/skus) in the future.
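As a rough sketch of holding a virtual IP for an SAP virtual hostname on a standard internal load balancer, the following hypothetical Azure CLI commands create the load balancer, a health probe, and an HA-ports rule with floating IP enabled. All names, the frontend IP, and the probe port are assumptions; high-availability guides for your cluster setup remain the authoritative reference.

```azurecli
# Internal standard load balancer with a frontend IP that acts as the virtual IP.
az network lb create \
    --resource-group <resource-group> \
    --name sap-<SID>-ilb \
    --sku Standard \
    --vnet-name sap-vnet \
    --subnet sap-app-subnet \
    --frontend-ip-name sap-vhost-fe \
    --private-ip-address 10.10.1.60 \
    --backend-pool-name sap-vhost-bepool

# Health probe; the port has to be answered by the VM (or cluster resource).
az network lb probe create \
    --resource-group <resource-group> \
    --lb-name sap-<SID>-ilb \
    --name sap-vhost-probe \
    --protocol Tcp \
    --port 62500

# HA-ports rule with floating IP, so the frontend IP is passed through to the VM.
az network lb rule create \
    --resource-group <resource-group> \
    --lb-name sap-<SID>-ilb \
    --name sap-vhost-rule \
    --protocol All \
    --frontend-port 0 \
    --backend-port 0 \
    --frontend-ip-name sap-vhost-fe \
    --backend-pool-name sap-vhost-bepool \
    --probe-name sap-vhost-probe \
    --floating-ip true \
    --idle-timeout 30
```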
-##### Generalizing a VM
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> The last step is to sign in to a VM with an Administrator account. Open a Windows command window as *administrator*. Go to %windir%\windows\system32\sysprep and execute sysprep.exe.
-> A small window will appear. It is important to check the **Generalize** option (the default is unchecked) and change the Shutdown Option from its default of 'Reboot' to 'shutdown'. This procedure assumes that the sysprep process is executed on-premises in the Guest OS of a VM.
-> If you want to perform the procedure with a VM already running in Azure, follow the steps described in [this article](../../virtual-machines/windows/capture-image-resource.md).
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> [How to capture a Linux virtual machine to use as a Resource Manager template][capture-image-linux-step-2-create-vm-image]
->
->
+#### Multiple vNICs per VM
-
-### Transferring VMs and VHDs between on-premises to Azure
-Since uploading VM images and disks to Azure is not possible via the Azure portal, you need to use Azure PowerShell cmdlets or CLI. Another possibility is the use of the tool 'AzCopy'. The tool can copy VHDs between on-premises and Azure (in both directions). It also can copy VHDs between Azure Regions. Consult [this documentation][storage-use-azcopy] for download and usage of AzCopy.
+You can define multiple virtual network interface cards (vNICs) for an Azure VM, each assigned to any subnet within the same virtual network as the primary vNIC. With multiple vNICs you can start to set up network traffic separation, if necessary. For example, client traffic is routed through the primary vNIC and some admin or backend traffic is routed through a second vNIC. Depending on the operating system (OS) and the image used, traffic routes for the NICs inside the OS might need to be set up for correct routing.
-A third alternative would be to use various third-party GUI-oriented tools. However, make sure that these tools are supporting Azure Page Blobs. For our purposes, we need to use Azure Page Blob store (the differences are described in [Understanding block blobs, append blobs, and page blobs](/rest/api/storageservices/Understanding-Block-Blobs--Append-Blobs--and-Page-Blobs). Also the tools provided by Azure are efficient in compressing the VMs and VHDs, which need to be uploaded. This is important because this efficiency in compression reduces the upload time (which varies anyway depending on the upload link to the internet from the on-premises facility and the Azure deployment region targeted). It is a fair assumption that uploading a VM or VHD from European location to the U.S.-based Azure data centers will take longer than uploading the same VMs/VHDs to the European Azure data centers.
+The type and size of the VM restricts how many vNICs it can have assigned. Exact details, functionality, and restrictions are described in the article [Assign multiple IP addresses to virtual machines using the Azure portal](/azure/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal).
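+
+As an illustration of the multi-vNIC pattern, the following Azure CLI sketch creates a primary and a secondary NIC in different subnets and attaches both at VM creation. All names, the VM size, and the image URN are placeholders.
+
+```azurecli
+# Assumption: vnet-sap with subnets snet-sap-app and snet-sap-admin already exists.
+az network nic create \
+  --resource-group rg-sap-prod \
+  --name nic-sapapp01-primary \
+  --vnet-name vnet-sap \
+  --subnet snet-sap-app
+
+az network nic create \
+  --resource-group rg-sap-prod \
+  --name nic-sapapp01-admin \
+  --vnet-name vnet-sap \
+  --subnet snet-sap-admin
+
+# The first NIC listed becomes the primary NIC of the VM.
+# The image URN is an example - verify current offers with 'az vm image list'.
+az vm create \
+  --resource-group rg-sap-prod \
+  --name vm-sapapp01 \
+  --size Standard_E16ds_v5 \
+  --image SUSE:sles-sap-15-sp4:gen2:latest \
+  --nics nic-sapapp01-primary nic-sapapp01-admin
+```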
-#### <a name="a43e40e6-1acc-4633-9816-8f095d5a7b6a"></a>Uploading a VHD from on-premises to Azure
-To upload an existing VM or VHD from the on-premises network such a VM or VHD needs to meet the requirements as listed in chapter [Preparation for moving a VM from on-premises to Azure with a non-generalized disk][planning-guide-5.2.1] of this document.
+> [!NOTE]
+> Adding vNICs to a VM doesn't increase the available network bandwidth. All network interfaces share the same bandwidth. Using multiple NICs is only recommended if VMs need to access private subnets. The recommended design pattern is to rely on NSG functionality and to simplify the network and subnet requirements with as few network interfaces as possible - typically just one. The exception is HANA scale-out, where a secondary vNIC is required for the HANA internal network.
-Such a VM does NOT need to be generalized and can be uploaded in the state and shape it has after shutdown on the on-premises side. The same is true for additional VHDs, which don't contain any operating system.
+> [!WARNING]
+> If using multiple vNICs on a VM, it's recommended that the primary network card's subnet handles user network traffic.
-##### Uploading a VHD and making it an Azure Disk
-In this case we want to upload a VHD, either with or without an OS in it, and mount it to a VM as a data disk or use it as OS disk. This is a multi-step process
+#### Accelerated networking
-**PowerShell**
+To further reduce network latency between Azure VMs, we recommend that you confirm [Azure accelerated networking](/azure/virtual-network/accelerated-networking-overview) is enabled on every VM running SAP workload. While it's enabled by default for new VMs, the [deployment checklist](deployment-checklist.md) should verify the state. The benefits are greatly improved networking performance and latency. Use it when you deploy Azure VMs for SAP workload on all supported VMs, especially for the SAP application layer and the SAP DBMS layer. The linked documentation contains support dependencies on OS versions and VM instances.
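+
+For example, the current state can be checked and - after deallocating the VM - changed on an existing NIC with the Azure CLI; resource names are placeholders:
+
+```azurecli
+# Check whether accelerated networking is enabled on an existing NIC.
+az network nic show \
+  --resource-group rg-sap-prod \
+  --name nic-sapapp01-primary \
+  --query enableAcceleratedNetworking
+
+# Enable it - the attached VM must be deallocated while the NIC is updated.
+az vm deallocate --resource-group rg-sap-prod --name vm-sapapp01
+az network nic update \
+  --resource-group rg-sap-prod \
+  --name nic-sapapp01-primary \
+  --accelerated-networking true
+az vm start --resource-group rg-sap-prod --name vm-sapapp01
+```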
-* Sign in to your subscription with *Connect-AzAccount*
-* Set the subscription of your context with *Set-AzContext* and parameter SubscriptionId or SubscriptionName - see [Set-AzContext](/powershell/module/az.accounts/set-azcontext)
-* Upload the VHD with *Add-AzVhd* to an Azure Storage Account - see [Add-AzVhd](/powershell/module/az.compute/add-azvhd)
-* (Optional) Create a Managed Disk from the VHD with *New-AzDisk* - see [New-AzDisk](/powershell/module/az.compute/new-azdisk)
-* Set the OS disk of a new VM config to the VHD or Managed Disk with *Set-AzVMOSDisk* - see [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk)
-* Create a new VM from the VM config with *New-AzVM* - see [New-AzVM](/powershell/module/az.compute/new-azvm)
-* Add a data disk to a new VM with *Add-AzVMDataDisk* - see [Add-AzVMDataDisk](/powershell/module/az.compute/add-azvmdatadisk)
+### On-premises connectivity
-**Azure CLI**
+SAP deployment in Azure assumes that a central, enterprise-wide network architecture and communication hub is in place to enable on-premises connectivity. Such on-premises network connectivity is essential to allow users and applications to access the SAP landscape in Azure, and to access other central company services such as central DNS, domain, security, and patch management infrastructure.
-* Sign in to your subscription with *az login*
-* Select your subscription with *az account set --subscription `<subscription name or id`>*
-* Upload the VHD with *az storage blob upload* - see [Using the Azure CLI with Azure Storage][storage-azure-cli].
-* (Optional) Create a Managed Disk from the VHD with *az disk create* - see [az disk](/cli/azure/disk).
-* Create a new VM specifying the uploaded VHD or Managed Disk as OS disk with *az vm create* and parameter *--attach-os-disk*
-* Add a data disk to a new VM with *az vm disk attach* and parameter *--new*
+Many options exist to provide such on-premises connectivity, and deployments most often use a [hub-spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke?tabs=cli) or an extension of it, a global [virtual WAN](/azure/virtual-wan/virtual-wan-global-transit-network-architecture).
-**Template**
+For SAP deployments, a private on-premises connection over [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) is recommended. For smaller SAP workloads, remote regions, or smaller offices, [VPN on-premises connectivity](/azure/vpn-gateway/design) is available. Using an [ExpressRoute with VPN](/azure/expressroute/how-to-configure-coexisting-gateway-portal) site-to-site connection as a failover path is a possible combination of both services.
-* Upload the VHD with PowerShell or Azure CLI
-* (Optional) Create a Managed Disk from the VHD with PowerShell, Azure CLI, or the Azure portal
-* Deploy the VM with a JSON template referencing the VHD as shown in [this example JSON template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-specialized-vhd-new-or-existing-vnet/azuredeploy.json) or using Managed Disks as shown in [this example JSON template](https://github.com/Azure/azure-quickstart-templates/blob/master/application-workloads/sap/sap-2-tier-user-image-md/azuredeploy.json).
+### Out- and inbound connectivity to/from the Internet
-#### Deployment of a VM Image
-To upload an existing VM or VHD from the on-premises network, in order to use it as an Azure VM image such a VM or VHD need to meet the requirements listed in chapter [Preparation for deploying a VM with a customer-specific image for SAP][planning-guide-5.2.2] of this document.
+Your SAP landscape requires connectivity to the Internet, be it for OS repository updates, establishing a connection to SAP's SaaS applications on their public endpoints, or accessing Azure services via their public endpoints. Similarly, you might need to provide access for your clients to SAP Fiori applications, with Internet users accessing services provided by your SAP landscape. Your SAP network architecture requires you to plan for the path towards the Internet and for any incoming requests.
-* Use *sysprep* on Windows or *waagent -deprovision* on Linux to generalize your VM - see [Sysprep Technical Reference](/previous-versions/windows/it-pro/windows-vista/cc766049(v=ws.10)) for Windows or [How to capture a Linux virtual machine to use as a Resource Manager template][capture-image-linux-step-2-create-vm-image] for Linux
-* Sign in to your subscription with *Connect-AzAccount*
-* Set the subscription of your context with *Set-AzContext* and parameter SubscriptionId or SubscriptionName - see [Set-AzContext](/powershell/module/az.accounts/set-azcontext)
-* Upload the VHD with *Add-AzVhd* to an Azure Storage Account - see [Add-AzVhd](/powershell/module/az.compute/add-azvhd)
-* (Optional) Create a Managed Disk Image from the VHD with *New-AzImage* - see [New-AzImage](/powershell/module/az.compute/new-azimage)
-* Set the OS disk of a new VM config to the
- * VHD with *Set-AzVMOSDisk -SourceImageUri -CreateOption fromImage* - see [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk)
- * Managed Disk Image *Set-AzVMSourceImage* - see [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
-* Create a new VM from the VM config with *New-AzVM* - see [New-AzVM](/powershell/module/az.compute/new-azvm)
+Securing your virtual network with [NSG rules](/azure/virtual-network/network-security-groups-overview), utilizing network [service tags](/azure/virtual-network/service-tags-overview) for known services, and establishing routing and IP addressing to your firewall or other network virtual appliance are all part of the architecture. Resources in private networks need to be protected by network layer 4 and 7 firewalls.
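+
+For example (names and the firewall IP are illustrative), a user-defined route can send Internet-bound traffic from an SAP subnet to a central firewall or NVA, while an NSG rule with a service tag still allows direct access to a known Azure service:
+
+```azurecli
+# Route all Internet-bound traffic of the SAP application subnet through a firewall/NVA.
+az network route-table create --resource-group rg-sap-prod --name rt-sap-app
+az network route-table route create \
+  --resource-group rg-sap-prod \
+  --route-table-name rt-sap-app \
+  --name default-via-firewall \
+  --address-prefix 0.0.0.0/0 \
+  --next-hop-type VirtualAppliance \
+  --next-hop-ip-address 10.10.0.4
+az network vnet subnet update \
+  --resource-group rg-sap-prod \
+  --vnet-name vnet-sap \
+  --name snet-sap-app \
+  --route-table rt-sap-app
+
+# Allow outbound access to regional Azure Storage via its service tag.
+az network nsg rule create \
+  --resource-group rg-sap-prod \
+  --nsg-name nsg-sap-app \
+  --name allow-storage-outbound \
+  --priority 200 \
+  --direction Outbound \
+  --access Allow \
+  --protocol '*' \
+  --destination-address-prefixes Storage.WestEurope \
+  --destination-port-ranges '*'
+```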
-**Azure CLI**
+A [best practice architecture](/azure/architecture/guide/sap/sap-internet-inbound-outbound) focusing on communication paths with the Internet is available in the Azure Architecture Center.
-* Use *sysprep* on Windows or *waagent -deprovision* on Linux to generalize your VM - see [Sysprep Technical Reference](/previous-versions/windows/it-pro/windows-vista/cc766049(v=ws.10)) for Windows or [How to capture a Linux virtual machine to use as a Resource Manager template][capture-image-linux-step-2-create-vm-image] for Linux
-* Sign in to your subscription with *az login*
-* Select your subscription with *az account set --subscription `<subscription name or id`>*
-* Upload the VHD with *az storage blob upload* - see [Using the Azure CLI with Azure Storage][storage-azure-cli].
-* (Optional) Create a Managed Disk Image from the VHD with *az image create* - see [az image](/cli/azure/image).
-* Create a new VM specifying the uploaded VHD or Managed Disk Image as OS disk with *az vm create* and parameter *--image*
+## Azure virtual machines for SAP workload
-**Template**
+For SAP workload, we narrowed down the selection to different VM families that are suitable for SAP workload and, more specifically, SAP HANA workload. How to find the correct VM type and its capability to handle SAP workload is described in the document [What SAP software is supported for Azure deployments](supported-product-on-azure.md). Additionally, SAP note [1928533] lists all certified Azure VMs, their performance capability as measured by the SAPS benchmark, and limitations where applicable. The VM types that are certified for SAP workload don't use over-provisioning of CPU and memory resources.
-* Use *sysprep* on Windows or *waagent -deprovision* on Linux to generalize your VM - see [Sysprep Technical Reference](/previous-versions/windows/it-pro/windows-vista/cc766049(v=ws.10)) for Windows or [How to capture a Linux virtual machine to use as a Resource Manager template][capture-image-linux-step-2-create-vm-image] for Linux
-* Upload the VHD with PowerShell or Azure CLI
-* (Optional) Create a Managed Disk Image from the VHD with PowerShell, Azure CLI, or the Azure portal
-* Deploy the VM with a JSON template referencing the image VHD as shown in [this example JSON template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-specialized-vhd-new-or-existing-vnet/azuredeploy.json) or using the Managed Disk Image as shown in [this example JSON template](https://github.com/Azure/azure-quickstart-templates/blob/master/application-workloads/sap/sap-2-tier-user-image-md/azuredeploy.json).
+Beyond the selection of purely supported VM types, you also need to check whether those VM types are available in a specific region based on the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). More importantly, you need to evaluate whether:
-#### Downloading VHDs or Managed Disks to on-premises
-Azure Infrastructure as a Service is not a one-way street of only being able to upload VHDs and SAP systems. You can move SAP systems from Azure back into the on-premises world as well.
+- CPU and memory resources of different VM types
+- IOPS bandwidth of different VM types
+- Network capabilities of different VM types
+- Number of disks that can be attached
+- Ability to use certain Azure storage types
-During the time of the download the VHDs or Managed Disks can't be active. Even when downloading disks, which are mounted to VMs, the VM needs to be shut down and deallocated. If you only want to download the database content, which, then should be used to set up a new system on-premises and if it is acceptable that during the time of the download and the setup of the new system that the system in Azure can still be operational, you could avoid a long downtime by performing a compressed database backup into a disk and just download that disk instead of also downloading the OS base VM.
+fit your needs. Most of that data can be found [here](/azure/virtual-machines/sizes) for a particular VM family and type.
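+
+One way to check availability and basic capabilities of a VM family in a target region is the Azure CLI; the region and size names below are examples only:
+
+```azurecli
+# List sizes whose name starts with Standard_E that are offered in West Europe.
+az vm list-skus \
+  --location westeurope \
+  --size Standard_E \
+  --output table
+
+# Show vCPUs, memory, and max data disks for one candidate size.
+az vm list-sizes \
+  --location westeurope \
+  --query "[?name=='Standard_E32ds_v5']" \
+  --output table
+```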
-#### PowerShell
+### Pricing models for Azure VMs
-* Downloading a Managed Disk
- You first need to get access to the underlying blob of the Managed Disk. Then you can copy the underlying blob to a new storage account and download the blob from this storage account.
+For Azure VMs, several different pricing options are available:
- ```powershell
- $access = Grant-AzDiskAccess -ResourceGroupName <resource group> -DiskName <disk name> -Access Read -DurationInSecond 3600
- $key = (Get-AzStorageAccountKey -ResourceGroupName <resource group> -Name <storage account name>)[0].Value
- $destContext = (New-AzStorageContext -StorageAccountName <storage account name -StorageAccountKey $key)
- Start-AzStorageBlobCopy -AbsoluteUri $access.AccessSAS -DestContainer <container name> -DestBlob <blob name> -DestContext $destContext
- # Wait for blob copy to finish
- Get-AzStorageBlobCopyState -Container <container name> -Blob <blob name> -Context $destContext
- Save-AzVhd -SourceUri <blob in new storage account> -LocalFilePath <local file path> -StorageKey $key
- # Wait for download to finish
- Revoke-AzDiskAccess -ResourceGroupName <resource group> -DiskName <disk name>
- ```
+- Pay as you go
+- One year reserved or savings plan
+- Three years reserved or savings plan
+- Spot pricing
-* Downloading a VHD
- Once the SAP system is stopped and the VM is shut down, you can use the PowerShell cmdlet `Save-AzVhd` on the on-premises target to download the VHD disks back to the on-premises world. In order to do that, you need the URL of the VHD, which you can find in the 'storage Section' of the Azure portal (need to navigate to the Storage Account and the storage container where the VHD was created) and you need to know where the VHD should be copied to.
+The pricing of each of the different offerings, with different options around operating systems and regions, is available on the site [Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). For details on the flexibility of one-year and three-year savings plans and reserved instances, check these articles:
- Then you can leverage the command by defining the parameter SourceUri as the URL of the VHD to download and the LocalFilePath as the physical location of the VHD (including its name). The command could look like:
+- [What is Azure savings plans for compute?](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md)
+- [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+- [Virtual machine size flexibility with Reserved VM Instances](../../virtual-machines/reserved-vm-instance-size-flexibility.md)
+- [How the Azure reservation discount is applied to virtual machines](../../cost-management-billing/manage/understand-vm-reservation-charges.md)
- ```powershell
- Save-AzVhd -ResourceGroupName <resource group name of storage account> -SourceUri http://<storage account name>.blob.core.windows.net/<container name>/sapidedata.vhd -LocalFilePath E:\Azure_downloads\sapidesdata.vhd
- ```
+For more information on spot pricing, read the article [Azure Spot Virtual Machines](https://azure.microsoft.com/pricing/spot/). Pricing of the same VM type can also differ between Azure regions. For some customers, it was worthwhile to deploy into a less expensive Azure region.
- For more details of the Save-AzVhd cmdlet, see [Save-AzVhd](/powershell/module/az.compute/save-azvhd).
+Additionally, Azure offers the concept of a dedicated host. The dedicated host concept gives you more control over patching cycles that are done by Azure. You can time the patching according to your own schedule. This offer specifically targets customers with workloads that might not follow the normal maintenance cycle. To read up on the concepts of Azure dedicated host offers, read the article [Azure Dedicated Host](../../virtual-machines/dedicated-hosts.md). Using this offer is supported for SAP workload and is used by several SAP customers who want more control over patching of infrastructure and eventual maintenance plans of Microsoft. For more information on how Microsoft maintains and patches the Azure infrastructure that hosts virtual machines, read the article [Maintenance for virtual machines in Azure](../../virtual-machines/maintenance-and-updates.md).
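+
+A rough Azure CLI sketch of the dedicated host concept follows; the host SKU, VM size, and image URN are illustrative and need to be verified for your region:
+
+```azurecli
+# Create a host group and a dedicated host, then place a VM on that host.
+az vm host group create \
+  --resource-group rg-sap-prod \
+  --name hg-sap \
+  --location westeurope \
+  --zone 1 \
+  --platform-fault-domain-count 1
+
+az vm host create \
+  --resource-group rg-sap-prod \
+  --host-group hg-sap \
+  --name host-sap-01 \
+  --sku Esv3-Type1 \
+  --platform-fault-domain 0
+
+# --host expects the resource ID of the dedicated host.
+az vm create \
+  --resource-group rg-sap-prod \
+  --name vm-sapdb01 \
+  --size Standard_E64s_v3 \
+  --image SUSE:sles-sap-15-sp4:gen2:latest \
+  --host $(az vm host show --resource-group rg-sap-prod --host-group hg-sap --name host-sap-01 --query id --output tsv)
+```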
-#### Azure CLI
-* Downloading a Managed Disk
- You first need to get access to the underlying blob of the Managed Disk. Then you can copy the underlying blob to a new storage account and download the blob from this storage account.
+### Operating system for VMs
- ```azurecli
- az disk grant-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" --duration-in-seconds 3600
- az storage blob download --sas-token "<sas token>" --account-name <account name> --container-name <container name> --name <blob name> --file <local file>
- az disk revoke-access --ids "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>"
- ```
+When deploying new VMs for SAP landscapes in Azure, either for installation or migration of SAP systems, it's important to choose the right operating system. Azure provides a large variety of operating system images for Linux and Windows, with many suitable options for SAP usage. Additionally, you can create or upload custom images from on-premises, or consume and generalize from image galleries. See the following documentation on the details and options available, and a short CLI example after the list:
-* Downloading a VHD
- Once the SAP system is stopped and the VM is shut down, you can use the Azure CLI command `_azure storage blob download_` on the on-premises target to download the VHD disks back to the on-premises world. In order to do that, you need the name and the container of the VHD, which you can find in the 'Storage Section' of the Azure portal (need to navigate to the Storage Account and the storage container where the VHD was created) and you need to know where the VHD should be copied to.
+- Find Azure Marketplace image information - [using CLI](/azure/virtual-machines/linux/cli-ps-findimage) / [using PowerShell](/azure/virtual-machines/windows/cli-ps-findimage)
+- Create custom images - [for Linux](/azure/virtual-machines/linux/imaging) / [for Windows](/azure/virtual-machines/windows/prepare-for-upload-vhd-image)
+- [Using VM Image Builder](/azure/virtual-machines/image-builder-overview)
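+
+As a quick illustration, Azure Marketplace images can be listed with the Azure CLI; the publisher, offer, and URN below are examples only and should be verified against current offers:
+
+```azurecli
+# List SAP-relevant SUSE images; the offer name is an example, check current offers.
+az vm image list \
+  --publisher SUSE \
+  --offer sles-sap-15-sp4 \
+  --all \
+  --output table
+
+# Show the details of one specific image by URN.
+az vm image show --urn SUSE:sles-sap-15-sp4:gen2:latest
+```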
- Then you can leverage the command by defining the parameters blob and container of the VHD to download and the destination as the physical target location of the VHD (including its name). The command could look like:
+Plan for an OS update infrastructure and its dependencies for SAP workload, as required. Consider a repository staging environment to keep all tiers of an SAP landscape - sandbox, development, pre-production, and production - in sync with the same version of patches and updates over your update time period.
- ```azurecli
- az storage blob download --name <name of the VHD to download> --container-name <container of the VHD to download> --account-name <storage account name of the VHD to download> --account-key <storage account key> --file <destination of the VHD to download>
- ```
+### Generation 1 and Generation 2 virtual machines
-### Transferring VMs and disks within Azure
+Azure allows you to deploy VMs as either generation 1 or generation 2 VMs. The article [Support for generation 2 VMs on Azure](../../virtual-machines/generation-2.md) lists the Azure VM families that can be deployed as generation 2 VMs. More importantly, the article also lists the functional differences between generation 1 and generation 2 virtual machines in Azure.
-#### Copying SAP systems within Azure
+At deployment of a virtual machine, the selected OS image decides whether the VM is a generation 1 or generation 2 VM. The latest versions of all OS images for SAP usage available in Azure - RedHat Enterprise Linux, SuSE Enterprise Linux, Windows, or Oracle Enterprise Linux - are available for both generations. Careful selection based on the image description is needed to deploy the correct VM generation. Similarly, custom OS images can be created as generation 1 or 2 and determine the VM generation at deployment of the virtual machine.
-An SAP system or even a dedicated DBMS server supporting an SAP application layer will likely consist of several disks, which contain either the OS with the binaries or the data and log file(s) of the SAP database. Neither the Azure functionality of copying disks nor the Azure functionality of saving disks to a local disk has a synchronization mechanism, which snapshots multiple disks in a consistent manner. Therefore, the state of the copied or saved disks even if those are mounted against the same VM would be different. This means that in the concrete case of having different data and logfile(s) contained in the different disks, the database in the end would be inconsistent.
+> [!NOTE]
+> It's recommended to use generation 2 VMs in *all* your SAP on Azure deployments, regardless of VM size. All of the latest Azure VMs for SAP are generation 2 capable or are limited to generation 2 only. Some VM families allow generation 2 only today, and some upcoming VM families could support generation 2 only.
+> Whether a VM is generation 1 or 2 is determined purely by the selected OS image. Changing an existing VM from one generation to the other isn't possible.
-**Conclusion: In order to copy or save disks, which are part of an SAP system configuration you need to stop the SAP system and also need to shut down the deployed VM. Only then you can copy or download the set of disks to either create a copy of the SAP system in Azure or on-premises.**
+A change from generation 1 to generation 2 isn't possible in Azure. To change the virtual machine generation, you need to deploy a new VM of the desired generation and reinstall the software in the new generation 2 VM. This change only affects the base VHD image of the VM and has no impact on the data disks or attached NFS or SMB shares. Data disks, NFS, or SMB shares that were originally assigned to a generation 1 VM can be reattached to the new generation 2 VM.
-Data disks can be stored as VHD files in an Azure Storage Account and can be directly attached to a virtual machine or be used as an image. In this case, the VHD is copied to another location before being attached to the virtual machine. The full name of the VHD file in Azure must be unique within Azure. As mentioned earlier already, the name is kind of a three-part name that looks like:
+Some VM families, like the [Mv2-series](../../virtual-machines/mv2-series.md), support generation 2 only. The same requirement might be true for some future, new VM families. An existing generation 1 VM then couldn't be resized to such a new VM family. Beyond the Azure platform's generation 2 requirement, SAP requirements might exist too. See SAP note [1928533] for any such generation 2 requirements for the chosen VM family.
-`http(s)://<storage account name>.blob.core.windows.net/<container name>/<vhd name>`
+### Performance limits for Azure VMs
-Data disks can also be Managed Disks. In this case, the Managed Disk is used to create a new Managed Disk before being attached to the virtual machine. The name of the Managed Disk must be unique within a resource group.
+Azure as a public cloud depends on sharing infrastructure in a secured manner throughout its customer base. Performance limits are defined for each resource and service to enable scaling and capacity. On the compute side of the Azure infrastructure, the limits for each virtual machine size must be considered. The VM quotas are described in [this document](/azure/virtual-machines/sizes).
-##### PowerShell
+Each VM size has different quotas for disk and network throughput, the number of disks that can be attached, whether it contains temporary, VM-local storage with its own throughput and IOPS limits, the size of memory, and the number of available vCPUs.
-You can use Azure PowerShell cmdlets to copy a VHD as shown in [this article][storage-powershell-guide-full-copy-vhd]. To create a new Managed Disk, use New-AzDiskConfig and New-AzDisk as shown in the following example.
+> [!NOTE]
+> When planning and sizing SAP on Azure solutions, the performance limits for each virtual machine size must be considered.
+> The quotas described represent the theoretical maximum values attainable. The limit of IOPS per disk may be achieved with small I/Os (8 KB) but possibly may not be achieved with large I/Os (1 MB).
-```powershell
-$config = New-AzDiskConfig -CreateOption Copy -SourceUri "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" -Location <location>
-New-AzDisk -ResourceGroupName <resource group name> -DiskName <disk name> -Disk $config
-```
+Similar to virtual machines, performance limits exist for [each storage type for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) and for any other Azure service as well.
-##### Azure CLI
+When planning and selecting suitable VMs for SAP deployment, consider these factors:
-You can use Azure CLI to copy a VHD. To create a new Managed Disk, use *az disk create* as shown in the following example.
+- Start with the memory and CPU requirement. The SAPS requirements for CPU power need to be separated out into the DBMS part and the SAP application part(s). For existing systems, the SAPS related to the hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be found on the [About SAP Standard Application Benchmarks](https://sap.com/about/benchmark.html) page. For newly deployed SAP systems, you should have gone through a sizing exercise, which should determine the SAPS requirements of the system.
+- For existing systems, the I/O throughput and I/O operations per second on the DBMS server should be measured. For new systems, the sizing exercise for the new system also should give rough ideas of the I/O requirements on the DBMS side. If unsure, you eventually need to conduct a Proof of Concept.
+- Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure can provide. The information on SAPS of the different Azure VM types is documented in SAP Note [1928533]. The focus should be on the DBMS VM first since the database layer is the layer in an SAP NetWeaver system that doesn't scale out in most deployments. In contrast, the SAP application layer can be scaled out. Individual DBMS guides in this documentation provide recommended storage configuration to use.
+- Summarize your findings for
+ - number of Azure VMs
+ - Individual VM family and VM SKUs for each SAP layer - DBMS, (A)SCS, application server
+ - IO throughput measures or the calculated storage capacity requirements
-```azurecli
-az disk create --source "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Compute/disks/<disk name>" --name <disk name> --resource-group <resource group name> --location <location>
-```
+### HANA Large Instance service
-##### Azure Storage tools
+Azure provides another compute capability for running large HANA databases, in both scale-up and scale-out manner, on a dedicated offering called HANA Large Instances. Details of this solution are described in a separate documentation section, starting with [SAP HANA on Azure Large Instances](/azure/virtual-machines/workloads/sap/hana-overview-architecture). This offering extends the VMs available in Azure.
-* [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md)
+> [!NOTE]
+> HANA Large Instance service is in sunset mode and doesn't accept new customers anymore. Providing units for existing HANA Large Instance customers is still possible.
-Professional editions of Azure Storage Explorers can be found here:
+## Storage for SAP on Azure
-* [Cerebrata](https://www.cerebrata.com/)
-* [Cloud Xplorer](https://clumsyleaf.com/products/cloudxplorer)
+Azure virtual machines use different storage options for persistence. In simple terms, they can be divided into persisted and temporary, or non-persisted storage types.
-The copy of a VHD itself within a storage account is a process, which takes only a few seconds (similar to SAN hardware creating snapshots with lazy copy and
-copy on write). After you have a copy of the VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to virtual machines.
+There are multiple storage options that can be used for SAP workloads and specific SAP components. For more information, read the document [Azure storage for SAP workloads](planning-guide-storage.md). The article covers the storage architecture for everything SAP: operating system, application binaries, configuration files, database data, logs and traces, and file interfaces with other applications, stored on disk or accessed on file shares.
-##### PowerShell
+### Temporary disk on VMs
-```powershell
-# attach a vhd to a vm
-$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
-$vm = Add-AzVMDataDisk -VM $vm -Name newdatadisk -VhdUri <path to vhd> -Caching <caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
-$vm | Update-AzVM
+Most Azure VMs for SAP offer a temporary disk, which isn't a managed disk. Such a temporary disk should be used for expendable data **only**, as the data may be lost during unforeseen maintenance events or during VM redeployment. The performance characteristics of the temporary disk make it ideal for swap/page files of the operating system. No application data or non-expendable operating system data should be stored on such a temporary disk. In Windows environments, the temporary drive is typically accessed as the D:\ drive; on Linux systems, it's often the /dev/sdb device, with /mnt or /mnt/resource as the mount point.
-# attach a managed disk to a vm
-$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
-$vm = Add-AzVMDataDisk -VM $vm -Name newdatadisk -ManagedDiskId <managed disk id> -Caching <caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption attach
-$vm | Update-AzVM
+Some VMs [don't offer a temporary drive](/azure/virtual-machines/azure-vms-no-temp-disk), and planning to utilize these virtual machine sizes for SAP might require increasing the size of the operating system disk. Refer to SAP Note [1928533] for details. For VMs with a temporary disk present, see the article [Azure documentation for virtual machine families and sizes](/azure/virtual-machines/sizes) for more information on the temporary disk size and IOPS/throughput limits available for each VM family.
-# attach a copy of the vhd to a vm
-$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
-$vm = Add-AzVMDataDisk -VM $vm -Name <disk name> -VhdUri <new path of vhd> -SourceImageUri <path to image vhd> -Caching <caching option> -DiskSizeInGB $null -Lun <lun, for example 0> -CreateOption fromImage
-$vm | Update-AzVM
+It's important to understand that a resize between VM families with a temporary disk and VM families without one isn't directly possible; such a resize currently fails. The workaround is to re-create the VM in the new size without a temp disk from an OS disk snapshot, keeping all other data disks and the network interface. See the article [Can I resize a VM size that has a local temp disk to a VM size with no local temp disk?](/azure/virtual-machines/azure-vms-no-temp-disk#can-i-resize-a-vm-size-that-has-a-local-temp-disk-to-a-vm-size-with-no-local-temp-disk) for details.
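+
+A minimal Azure CLI sketch of that workaround, assuming the source VM is stopped and deallocated; names and the target size are placeholders, and reattaching data disks and the network interface is omitted for brevity:
+
+```azurecli
+# Snapshot the OS disk of the stopped source VM and create a new managed disk from it.
+az snapshot create \
+  --resource-group rg-sap-prod \
+  --name snap-sapapp01-os \
+  --source osdisk-sapapp01
+
+az disk create \
+  --resource-group rg-sap-prod \
+  --name osdisk-sapapp01-new \
+  --source snap-sapapp01-os
+
+# Create the replacement VM in the new (no temp disk) size, attaching the copied OS disk.
+az vm create \
+  --resource-group rg-sap-prod \
+  --name vm-sapapp01-new \
+  --size Standard_E16as_v5 \
+  --os-type linux \
+  --attach-os-disk osdisk-sapapp01-new
+```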
-# attach a copy of the managed disk to a vm
-$vm = Get-AzVM -ResourceGroupName <resource group name> -Name <vm name>
-$diskConfig = New-AzDiskConfig -Location $vm.Location -CreateOption Copy -SourceUri <source managed disk id>
-$disk = New-AzDisk -DiskName <disk name> -Disk $diskConfig -ResourceGroupName <resource group name>
-$vm = Add-AzVMDataDisk -VM $vm -Caching <caching option> -Lun <lun, for example 0> -CreateOption attach -ManagedDiskId $disk.Id
-$vm | Update-AzVM
-```
+### Network shares and volumes for SAP
-##### Azure CLI
+SAP systems usually require one or more network file shares. These are typically:
-```azurecli
+- SAP transport directory (/usr/sap/trans, TRANSDIR)
+- SAP volumes/shared sapmnt or saploc, when deploying multiple application servers
+- High-availability architecture volumes for (A)SCS, ERS or database (/hana/shared)
+- File interfaces with third party applications for file import/export
-# attach a vhd to a vm
-az vm unmanaged-disk attach --resource-group <resource group name> --vm-name <vm name> --vhd-uri <path to vhd>
+Azure services such as [Azure Files](/azure/storage/files/storage-files-introduction) and [Azure NetApp Files](/azure/azure-netapp-files/) should be used for these shares. When these services aren't available in the chosen region(s), or aren't suitable for the chosen architecture, alternatives are to provide NFS/SMB file shares from self-managed, VM-based applications or from third-party services. See SAP Note [2015553] about support limitations when using third-party services for storage layers of an SAP system in Azure.
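+
+As an example, an NFS share for the SAP transport directory could be provisioned on Azure Files roughly like this; a premium FileStorage account is required for NFS 4.1, and all names and the quota are placeholders:
+
+```azurecli
+# Premium FileStorage account is required for NFS 4.1 shares; NFS needs secure transfer disabled.
+az storage account create \
+  --resource-group rg-sap-prod \
+  --name stsapfilesprod01 \
+  --sku Premium_LRS \
+  --kind FileStorage \
+  --https-only false
+
+# Create the NFS share for the SAP transport directory (quota in GiB).
+az storage share-rm create \
+  --resource-group rg-sap-prod \
+  --storage-account stsapfilesprod01 \
+  --name saptrans \
+  --quota 256 \
+  --enabled-protocols NFS
+```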
-# attach a managed disk to a vm
-az vm disk attach --resource-group <resource group name> --vm-name <vm name> --disk <managed disk id>
+Due to the often critical nature of network shares, which are frequently a single point of failure in a design (high availability) or process (file interface), it's recommended to rely on Azure native services with their own availability, SLA, and resiliency. In the planning phase, consider the following:
-# attach a copy of the vhd to a vm
-# this scenario is currently not possible with Azure CLI. A workaround is to manually copy the vhd to the destination.
+* NFS/SMB share design - which shares per SID, per landscape, region
+* Subnet sizing - IP requirement for private endpoints or dedicated subnets for services like Azure NetApp Files
+* Network routing to SAP systems and connected applications
+* Use of public or [private endpoint](/azure/private-link/private-endpoint-overview) for Azure Files
-# attach a copy of a managed disk to a vm
-az disk create --name <new disk name> --resource-group <resource group name> --location <location of target virtual machine> --source <source managed disk id>
-az vm disk attach --disk <new disk name or managed disk id> --resource-group <resource group name> --vm-name <vm name> --caching <caching option> --lun <lun, for example 0>
-```
+Usage and requirements for NFS/SMB shares in high-availability scenarios are described in chapter [high-availability](#high-availability).
-#### <a name="9789b076-2011-4afa-b2fe-b07a8aba58a1"></a>Copying disks between Azure Storage Accounts
-This task cannot be performed on the Azure portal. You can use Azure PowerShell cmdlets, Azure CLI, or a third-party storage browser. The PowerShell cmdlets or CLI commands can create and manage blobs, which include the ability to asynchronously copy blobs across Storage Accounts and across regions within the Azure subscription.
+> [!NOTE]
+> If using Azure Files for your network share(s), it's recommended to use a private endpoint. In the unlikely event of a zonal failure, your NFS client is automatically redirected to a healthy zone. You don't have to remount the NFS or SMB shares on your VMs.
-##### PowerShell
-You can also copy VHDs between subscriptions. For more information, read [this article][storage-powershell-guide-full-copy-vhd].
+## Securing your SAP landscape
-The basic flow of the PowerShell cmdlet logic looks like this:
+Protecting your SAP on Azure workload needs to be approached from different angles. These include:
-* Create a storage account context for the **source** storage account with *New-AzStorageContext* - see [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext)
-* Create a storage account context for the **target** storage account with *New-AzStorageContext* - see [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext)
-* Start the copy with
+> [!div class="checklist"]
+> * Network segmentation and security of each subnet and network interface
+> * Encryption on each layer within the SAP landscape
+> * Identity solution for end-user and administrative access, single sign-on services
+> * Threat and operation monitoring
-```powershell
-Start-AzStorageBlobCopy -SrcBlob <source blob name> -SrcContainer <source container name> -SrcContext <variable containing context of source storage account> -DestBlob <target blob name> -DestContainer <target container name> -DestContext <variable containing context of target storage account>
-```
+The topics contained in this chapter aren't an exhaustive list of all available services, options, and alternatives. They do list several best practices, which should be considered for all SAP deployments in Azure. There are other aspects to cover depending on your enterprise or workload requirements. For further information on security design, consider the following resources for general Azure guidance:
-* Check the status of the copy in a loop with
+- [Azure Well Architected Framework - security pillar](/azure/architecture/framework/security/overview)
+- [Azure Cloud Adoption Framework - Security](/azure/cloud-adoption-framework/secure/)
-```powershell
-Get-AzStorageBlobCopyState -Blob <target blob name> -Container <target container name> -Context <variable containing context of target storage account>
-```
+### Securing virtual networks with security groups
-* Attach the new VHD to a virtual machine as described above.
+Planning your SAP landscape in Azure should include some degree of network segmentation, with virtual networks and subnets dedicated to SAP workloads only. Best practices for subnet definition have been shared in the [networking](#azure-networking) chapter in this article and in architecture guides. Using [network security groups (NSGs)](/azure/virtual-network/network-security-groups-overview) together with [application security groups (ASGs)](/azure/virtual-network/application-security-groups) within NSG rules to permit inbound and outbound connectivity is recommended. When you design ASGs, each NIC on a VM can be associated with multiple ASGs, allowing you to create different groups - for example, an ASG for DBMS VMs that contains all DB servers across your landscape, and another ASG for all VMs - application and DBMS - of a single SAP SID. This way you can define one NSG rule for the overall DB ASG and another, more specific rule for the SID-specific ASG only.
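+
+A small Azure CLI sketch of this ASG pattern; names, ports, and priorities are illustrative only:
+
+```azurecli
+# Application security groups: one for all DBMS VMs, one for all VMs of a single SAP SID.
+az network asg create --resource-group rg-sap-prod --name asg-sap-dbms
+az network asg create --resource-group rg-sap-prod --name asg-sap-prd-sid
+
+# Associate a DB server NIC with both ASGs (repeat per VM/NIC).
+az network nic ip-config update \
+  --resource-group rg-sap-prod \
+  --nic-name nic-sapdb01-primary \
+  --name ipconfig1 \
+  --application-security-groups asg-sap-dbms asg-sap-prd-sid
+
+# Allow VMs of the SID to reach example HANA SQL ports on the DBMS ASG only.
+az network nsg rule create \
+  --resource-group rg-sap-prod \
+  --nsg-name nsg-sap-db \
+  --name allow-sid-to-hana \
+  --priority 300 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-asgs asg-sap-prd-sid \
+  --destination-asgs asg-sap-dbms \
+  --destination-port-ranges 30013 30015
+```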
-For examples see [this article][storage-powershell-guide-full-copy-vhd].
+NSGs don't restrict performance with the rules defined. To monitor traffic flow, you can optionally activate [NSG flow logging](/azure/network-watcher/network-watcher-nsg-flow-logging-overview), with logs evaluated by a SIEM or IDS of your choice to monitor and act on suspicious network activity.
-##### Azure CLI
-* Start the copy with
+> [!TIP]
+> Activate NSGs on subnet level only. While NSGs can be activated on both subnet and NIC level, activation on both is very often a hindrance in troubleshooting situations when analyzing network traffic restrictions. Use NSGs on NIC level only in exceptional situations and when required.
-```azurecli
-az storage blob copy start --source-blob <source blob name> --source-container <source container name> --source-account-name <source storage account name> --source-account-key <source storage account key> --destination-container <target container name> --destination-blob <target blob name> --account-name <target storage account name> --account-key <target storage account name>
-```
+### Private endpoints for services
-* Check the status if the copy is still in a loop with
+Many Azure PaaS services are accessed by default through a public endpoint. While the communication endpoint is located on the Azure backend network, it's exposed to the public Internet. [Private endpoints](/azure/private-link/private-endpoint-overview) are network interfaces inside your own private virtual network. Through [Azure Private Link](/azure/private-link/), the private endpoint projects the service into your virtual network. Selected PaaS services are then privately accessed through the IP address inside your network and, depending on the configuration, the service can potentially be set to communicate through the private endpoint only.
-```azurecli
-az storage blob show --name <target blob name> --container <target container name> --account-name <target storage account name> --account-key <target storage account name>
-```
+Use of private endpoints increases protection against data leakage and often simplifies access from on-premises and peered networks. In many situations, the network routing and the process to open firewall ports - often needed for public endpoints - are also simplified, since with private endpoints the resources are already inside your chosen network.
-* Attach the new VHD to a virtual machine as described above.
+See [available services](/azure/private-link/availability) to find which Azure services offer the use of private endpoints. For NFS or SMB with Azure Files, the use of private endpoints is always recommended for SAP workloads. See [private endpoint pricing](https://azure.microsoft.com/pricing/details/private-link/) for the charges incurred with the use of the service. Some Azure services might include the cost with the service; such cases are identified in a service's pricing information.
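+
+For illustration, a private endpoint for an Azure Files storage account could be created like this; names and the subnet are placeholders, and the private DNS zone configuration is omitted:
+
+```azurecli
+# Create a private endpoint for the 'file' sub-resource of a storage account.
+az network private-endpoint create \
+  --resource-group rg-sap-prod \
+  --name pe-stsapfilesprod01 \
+  --vnet-name vnet-sap \
+  --subnet snet-sap-endpoints \
+  --private-connection-resource-id $(az storage account show --resource-group rg-sap-prod --name stsapfilesprod01 --query id --output tsv) \
+  --group-id file \
+  --connection-name pe-conn-stsapfilesprod01
+```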
-### Disk Handling
+### Encryption
-#### <a name="4efec401-91e0-40c0-8e64-f2dceadff646"></a>VM/disk structure for SAP deployments
+Depending on your corporate policies, encryption [beyond the default options](/azure/security/fundamentals/encryption-overview) in Azure might be required for your SAP workloads.
-Ideally the handling of the structure of a VM and the associated disks should be simple. In on-premises installations, customers developed many ways of structuring a server installation.
-* One base disk, which contains the OS and all the binaries of the DBMS and/or SAP. Since March 2015, this disk can be up to 1 TB in size instead of earlier restrictions that limited it to 127 GB.
-* One or multiple disks, which contain the DBMS log file of the SAP database and the log file of the DBMS temp storage area (if the DBMS supports this). If the database log IOPS requirements are high, you need to stripe multiple disks in order to reach the IOPS volume required.
-* A number of disks containing one or two database files of the SAP database and the DBMS temp data files as well (if the DBMS supports this).
+#### Encryption for infrastructure resources
-![Reference Configuration of Azure IaaS VM for SAP][planning-guide-figure-1300]
+By default, Azure storage - managed disks and blobs - is [encrypted with a platform managed key (PMK)](/azure/security/fundamentals/encryption-overview). In addition, bring-your-own-key (BYOK) encryption for managed disks and blob storage is supported for SAP workloads in Azure. For [managed disk encryption](/azure/virtual-machines/disk-encryption-overview), different options are available, including:
+- platform managed key (SSE-PMK)
+- customer managed key (SSE-CMK)
+- double encryption at rest
+- host-based encryption
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> With many customers we saw configurations where, for example, SAP and DBMS binaries were not installed on the c:\ drive where the OS was installed. There were various reasons
-> for this, but when we went back to the root, it usually was that the drives were small and OS upgrades needed additional space 10-15 years ago. Both conditions do not apply these
-> days too often anymore. Today the c:\ drive can be mapped on large volume disks or VMs. In order to keep deployments simple in their structure, it is recommended to follow the
-> following deployment pattern for SAP NetWeaver systems in Azure
->
-> The Windows operating system pagefile should be on the D: drive (non-persistent disk)
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> Place the Linux swapfile under /mnt /mnt/resource on Linux as described in [this article][virtual-machines-linux-agent-user-guide]. The swap file can be configured in the configuration file of the Linux Agent /etc/waagent.conf. Add or change the following settings:
->
->
+Choose among these options as per your corporate security requirements. A [comparison of the encryption options](/azure/virtual-machines/disk-encryption-overview#comparison), together with Azure Disk Encryption, is available.
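+
+As an illustration of the SSE-CMK option, a disk encryption set referencing a Key Vault key can be created and assigned to a managed disk roughly as follows; the key vault, key, and all names are placeholders:
+
+```azurecli
+# Create a disk encryption set referencing a customer managed key in Key Vault.
+# The encryption set's managed identity also needs access to the key (not shown).
+az disk-encryption-set create \
+  --resource-group rg-sap-prod \
+  --name des-sap-cmk \
+  --source-vault kv-sap-prod \
+  --key-url https://kv-sap-prod.vault.azure.net/keys/sap-disk-key/<key version>
+
+# Re-encrypt an existing managed disk with the customer managed key.
+az disk update \
+  --resource-group rg-sap-prod \
+  --name datadisk-sapdb01-01 \
+  --disk-encryption-set des-sap-cmk \
+  --encryption-type EncryptionAtRestWithCustomerKey
+```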
-```console
-ResourceDisk.EnableSwap=y
-ResourceDisk.SwapSizeMB=30720
-```
+> [!NOTE]
+> Currently, don't use host-based encryption on the M-series VM family when running Linux, due to a potential performance limitation. The use of SSE-CMK encryption for managed disks is unaffected by this limitation.
-To activate the changes, you need to restart the Linux Agent with
+> [!IMPORTANT]
+> The importance of a careful plan to store and protect the encryption keys, if using customer managed encryption, can't be overstated. Without the encryption keys, encrypted resources such as disks are inaccessible, which leads to data loss. Carefully consider protecting the keys and restricting access to them to privileged users or services only.
-```console
-sudo service waagent restart
-```
+Azure Disk Encryption (ADE), with encryption running inside the SAP VMs using customer managed keys from Azure Key Vault, shouldn't be used for SAP deployments with Linux systems. For Linux, Azure Disk Encryption doesn't support the [OS images](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) used for SAP workloads. Azure Disk Encryption can be used on Windows systems with SAP workloads; however, don't combine Azure Disk Encryption with database native encryption. The use of database native encryption is recommended over ADE. For more information, see the next section.
-Read SAP Note [1597355] for more details on the recommended swap file size
+Similarly to managed disk encryption, [Azure Files](/azure/storage/common/customer-managed-keys-overview) encryption at rest (SMB and NFS) is available with platform or customer managed keys.
-
-The number of disks used for the DBMS data files and the type of Azure Storage these disks are hosted on should be determined by the IOPS requirements and the latency required. Exact quotas are described in [this article (Linux)][virtual-machines-sizes-linux] and [this article (Windows)][virtual-machines-sizes-windows].
+For SMB network shares, the [Azure Files service](/azure/storage/files/files-smb-protocol?tabs=azure-portal) and the [OS dependencies](/windows-server/storage/file-server/smb-security) on SMB versions, and thus the encryption in-transit support, need to be reviewed.
-Experience of SAP deployments over the last two years taught us some lessons, which can be summarized as:
+#### Encryption for SAP components
-* IOPS traffic to different data files is not always the same since existing customer systems might have differently sized data files representing their SAP database(s). As a result it turned out to be better using a RAID configuration over multiple disks to place the data files LUNs carved out of those. There were situations, especially with Azure Standard Storage where an IOPS rate hit the quota of a single disk against the DBMS transaction log. In such scenarios, the use of Premium Storage is recommended or alternatively aggregating multiple Standard Storage disks with a software stripe.
+Encryption on the SAP level can be broken down into two layers:
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> * [Performance best practices for SQL Server in Azure Virtual Machines][virtual-machines-sql-server-performance-best-practices]
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> * [Configure Software RAID on Linux][virtual-machines-linux-configure-raid]
-> * [Configure LVM on a Linux VM in Azure][virtual-machines-linux-configure-lvm]
->
->
+- DBMS encryption
+- Transport encryption
-
-* Premium Storage is showing significant better performance, especially for critical transaction log writes. For SAP scenarios that are expected to deliver production like performance, it is highly recommended to use VM-Series that can leverage Azure Premium Storage.
+For DBMS encryption, each database supported for SAP NetWeaver or S/4HANA deployment supports native encryption. Transparent database encryption is entirely independent of any infrastructure encryption in place in Azure. Both database encryption and [storage side encryption](/azure/virtual-machines/disk-encryption) (SSE) can be used at the same time. Of utmost importance when using encryption is the location, storage, and safekeeping of the encryption keys. Any loss of encryption keys leads to data loss, because it's impossible to start or recover the database without them.
-Keep in mind that the disk, which contains the OS, and as we recommend, the binaries of SAP and the database (base VM) as well, is not anymore limited to 127 GB. It now can have
-up to 1 TB in size. This should be enough space to keep all the necessary file including, for example, SAP batch job logs.
+Some databases might not have a database encryption method or might require a dedicated setting to enable it. For other databases, DBMS backups might be encrypted implicitly when database encryption is activated. See the SAP notes of the respective database on how to enable and use transparent database encryption.
-For more suggestions and more details, specifically for DBMS VMs, consult the [DBMS Deployment Guide][dbms-guide]
+* [SAP HANA data and log volume encryption](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.02/en-US/dc01f36fbb5710148b668201a6e95cf2.html)
+* SQL Server - SAP note [1380493]
+* Oracle - SAP note [974876]
+* DB2 - SAP note [1555903]
+* SAP ASE - SAP note [1972360]
-#### Disk Handling
+> [!NOTE]
+> Contact SAP or the DBMS vendor for support on how to enable, use or troubleshoot software encryption.
-In most scenarios, you need to create additional disks in order to deploy the SAP database into the VM. We talked about the considerations on number of disks in chapter [VM/disk structure for SAP deployments][planning-guide-5.5.1] of this document. The Azure portal allows to attach and detach disks once a base VM is deployed. The disks can be attached/detached when the VM is up and running as well as when it is stopped. When attaching a disk, the Azure portal offers to attach an empty disk or an existing disk, which at this point in time is not attached to another VM.
+> [!IMPORTANT]
+> The importance of a careful plan to store and protect the encryption keys can't be overstated. Without the encryption keys, the database or SAP software might be inaccessible, which leads to data loss. Carefully consider protecting the keys and restricting access to them to privileged users or services only.
-**Note**: Disks can only be attached to one VM at any given time.
+Transport, or communication, encryption can be applied for SQL connections between SAP engines and the DBMS. Similarly, connections from the SAP presentation layer - SAPGui secure network connections (SNC) or HTTPS connections to web front ends - can be encrypted. See the application vendor's documentation to enable and manage encryption in transit.
-![Attach / detach disks with Azure Standard Storage][planning-guide-figure-1400]
+### Threat monitoring and alerting
-During the deployment of a new virtual machine, you can decide whether you want to use Managed Disks or place your disks on Azure Storage Accounts. If you want to use Premium Storage, we recommend using Managed Disks.
+Follow your corporate architecture to deploy and use threat monitoring and alerting solutions. Available Azure services provide threat protection and a security view, and should be considered for the overall SAP deployment plan. [Microsoft Defender for Cloud](/azure/security-center/security-center-introduction) addresses this requirement and is typically part of an overall governance model for entire Azure deployments, not just for SAP components.
-Next, you need to decide whether you want to create a new and empty disk or whether you want to select an existing disk that was uploaded earlier and should be attached to the VM now.
+For more information on security information event management (SIEM) and security orchestration automated response (SOAR) solutions, read [Microsoft Sentinel provides SAP integration](/azure/sentinel/sap/deployment-overview).
-**IMPORTANT**: You **DO NOT** want to use Host Caching with Azure Standard Storage. You should leave the Host Cache preference at the default of NONE. With Azure Premium Storage, you should enable Read Caching if the I/O characteristic is mostly read like typical I/O traffic against database data files. In case of database transaction log file, no caching is recommended.
+### Security software inside SAP VMs
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> [How to attach a data disk in the Azure portal][virtual-machines-linux-attach-disk-portal]
->
-> If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If automount is not enabled as recommended in chapter [Setting automount for attached disks][planning-guide-5.5.3], the newly attached volume needs to be taken online and initialized.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> If disks are attached, you need to sign in to the VM and initialize the disks as described in [this article][virtual-machines-linux-how-to-attach-disk-how-to-initialize-a-new-data-disk-in-linux]
->
->
+SAP notes [2808515] for Linux and [106267] for Windows describe requirements and best practices when using virus scanners or security software on SAP servers. The SAP recommendations should be followed when deploying SAP components in Azure.
-
-If the new disk is an empty disk, you need to format the disk as well. For formatting, especially for DBMS data and log files the same recommendations as for bare-metal deployments of the DBMS apply.
+## High availability
-An Azure Storage account does not provide infinite resources in terms of I/O volume, IOPS, and data volume. Usually DBMS VMs are most affected by this. It might be best to use a separate Storage Account for each VM if you have few high I/O volume VMs to deploy in order to stay within the limit of the Azure Storage Account volume. Otherwise, you need to see how you can balance these VMs between different Storage accounts without hitting the limit of each single Storage Account. More details are discussed in the [DBMS Deployment Guide][dbms-guide]. You should also keep these limitations in mind for pure SAP application server VMs or other VMs, which eventually might require additional VHDs. These restrictions do not apply if you use Managed Disk. If you plan to use Premium Storage, we recommend using Managed Disk.
+We can separate the discussion about SAP high availability in Azure into two parts:
-Another topic, which is relevant for Storage Accounts is whether the VHDs in a Storage Account are getting Geo-replicated. Geo-replication is enabled or disabled on the Storage Account level and not on the VM level. If geo-replication is enabled, the VHDs within the Storage Account would be replicated into another Azure data center within the same region. Before deciding on this, you should think about the following restriction:
+* **Azure infrastructure high availability**, for example HA of compute (VMs), network, and storage, and its benefits for increasing SAP application availability.
+* **SAP application high availability**, for example HA of SAP software components:
+ * SAP (A)SCS and ERS instance
+ * DB server
-Azure Geo-replication works locally on each VHD in a VM and does not replicate the I/Os in chronological order across multiple VHDs in a VM. Therefore, the VHD that represents the base VM as well as any additional VHDs attached to the VM are replicated independent of each other. This means there is no synchronization between the changes in the different VHDs. The fact that the I/Os are replicated independently of the order in which they are written means that geo-replication is not of value for database servers that have their databases distributed over multiple VHDs. In addition to the DBMS, there also might be other applications where processes write or manipulate data in different VHDs and where it is important to keep the order of changes. If that is a requirement, geo-replication in Azure should not be enabled. Dependent on whether you need or want geo-replication for a set of VMs, but not for another set, you can already categorize VMs and their related VHDs into different Storage Accounts that have geo-replication enabled or disabled.
+and how it can be combined with Azure infrastructure HA and service healing.
-#### <a name="17e0d543-7e8c-4160-a7da-dd7117a1ad9d"></a>Setting automount for attached disks
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> For VMs, which are created from own Images or Disks, it is necessary to check and possibly set the automount parameter. Setting this parameter will allow the VM after a restart or redeployment in Azure to mount the attached/mounted drives again automatically.
-> The parameter is set for the images provided by Microsoft in the Azure Marketplace.
->
-> In order to set the automount, check the documentation of the command-line executable diskpart.exe here:
->
-> * [DiskPart Command-Line Options](/previous-versions/windows/it-pro/windows-xp/bb490893(v=technet.10))
-> * [Automount](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc753703(v=ws.11))
->
-> The Windows command-line window should be opened as administrator.
->
-> If disks are attached, you need to sign in to the VM to open the Windows Disk Manager. If automount is not enabled as recommended in chapter [Setting automount for attached disks][planning-guide-5.5.3], the newly attached volume needs to be taken online and initialized.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> You need to initialize a newly attached empty disk as described in [this article][virtual-machines-linux-how-to-attach-disk-how-to-initialize-a-new-data-disk-in-linux].
-> You also need to add new disks to the /etc/fstab.
->
->
+To obtain more details on high availability for SAP in Azure, see the following documentation:
-
-### Final Deployment
+* [Supported scenarios - High Availability protection for the SAP DBMS layer](planning-supported-configurations.md#high-availability-protection-for-the-sap-dbms-layer)
+* [Supported scenarios - High Availability for SAP Central Services](planning-supported-configurations.md#high-availability-for-sap-central-service)
+* [Supported scenarios - Supported storage with the SAP Central Services scenarios](planning-supported-configurations.md#supported-storage-with-the-sap-central-services-scenarios-listed-above)
+* [Supported scenarios - Multi-SID SAP Central Services failover clusters](planning-supported-configurations.md#multi-sid-sap-central-services-failover-clusters)
+* [Azure Virtual Machines high availability for SAP NetWeaver](sap-high-availability-guide-start.md)
+* [High-availability architecture and scenarios for SAP NetWeaver](sap-high-availability-architecture-scenarios.md)
+* [Utilize Azure infrastructure VM restart to achieve "higher availability" of an SAP system without clustering](sap-higher-availability-architecture-scenarios.md)
+* [SAP workload configurations with Azure Availability Zones](high-availability-zones.md)
+* [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md)
-For the final deployment and exact steps, especially the deployment of the Azure Extension for SAP, refer to the [Deployment Guide][deployment-guide].
+Pacemaker on Linux and Windows Server Failover Cluster are the only high availability frameworks for SAP workloads directly supported by Microsoft on Azure. Any other high availability framework isn't supported by Microsoft and requires design, implementation details, and operations support from the vendor. For more information, see the document on [supported scenarios for SAP in Azure](planning-supported-configurations.md).
-## Accessing SAP systems running within Azure VMs
+## Disaster recovery
-For scenarios where you want to connect to those SAP systems across the public internet using SAP GUI, the following procedures need to be applied.
+SAP applications are often among the most business-critical applications within an enterprise. Based on their importance and the time required to become operational again after an unforeseen event, business continuity and disaster recovery (BCDR) scenarios should be planned.
-Later in the document we will discuss the other major scenario, connecting to SAP systems in cross-premises deployments, which have a site-to-site connection (VPN tunnel) or Azure ExpressRoute connection between the on-premises systems and Azure systems.
+The article [Disaster recovery overview and infrastructure guidelines for SAP workload](disaster-recovery-overview-guide.md) contains the details needed to address this requirement.
-### Remote Access to SAP systems
+## Backup
-With Azure Resource Manager, there are no default endpoints anymore like in the former classic model. All ports of an Azure Resource Manager VM are open as long as:
+As part of the business continuity and disaster recovery (BCDR) strategy, backup for the SAP workload must be an integral part of any planned deployment. As with high availability and DR, the backup solution must cover all layers of the SAP solution stack: VM, OS, SAP application layer, DBMS layer, and any shared storage solution. Additionally, backup for the Azure services used by your SAP workload, and for other crucial resources like encryption and access keys, must be part of the backup and BCDR design.
-1. No Network Security Group is defined for the subnet or the network interface. Network traffic to Azure VMs can be secured via so-called "Network Security Groups". For more information, see [What is a Network Security Group (NSG)?][virtual-networks-nsg]
-2. No Azure Load Balancer is defined for the network interface
+Azure Backup offers a PaaS solution for the backup of:
-See the architecture difference between classic model and ARM as described in [this article][virtual-machines-azure-resource-manager-architecture].
+- VM configuration, OS, and SAP application layer (data residing on managed disks) through Azure Backup for VMs. Review the [support matrix](/azure/backup/backup-support-matrix-iaas) to verify that your design can use this solution.
+- [SQL Server](/azure/backup/sql-support-matrix) and [SAP HANA](/azure/backup/sap-hana-backup-support-matrix) database data and log backup, including support for database replication technologies such as HANA system replication or SQL Server Always On, and cross-region support for paired regions.
+- File share backup through Azure Files. [Verify support](/azure/backup/azure-file-share-support-matrix) for NFS/SMB and other configuration details.
-#### Configuration of the SAP System and SAP GUI connectivity over the internet
+Alternatively, if you deploy Azure NetApp Files, [backup options are available](/azure/azure-netapp-files/backup-introduction) at the volume level, including [SAP HANA and Oracle DBMS](/azure/azure-netapp-files/azacsnap-introduction) integration with scheduled backups.
-See this article, which describes details to this topic: [SAP GUI connection closed when connecting to SAP system in Azure](/archive/blogs/saponsqlserver/sap-gui-connection-closed-when-connecting-to-sap-system-in-azure)
+Backup solutions with Azure Backup offer a [soft-delete option](/azure/backup/backup-azure-security-feature-cloud) to protect against malicious or accidental deletion and thus prevent data loss. Soft delete is also available for file shares with Azure Files.
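+As a minimal sketch only, assuming the Az.RecoveryServices module and an existing Recovery Services vault, the following PowerShell shows how backup protection for an SAP application server VM might be enabled with the built-in DefaultPolicy. The vault, resource group, and VM names are placeholders.
+
+```powershell
+# Hedged sketch: vault 'sap-rsv', resource group 'sap-prod-rg', and VM 'sap-app-vm1' are placeholders.
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "sap-prod-rg" -Name "sap-rsv"
+Set-AzRecoveryServicesVaultContext -Vault $vault
+
+# Use the built-in default policy; replace it with a policy that matches your RPO
+$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
+
+# Enable Azure VM backup for the SAP application server VM
+Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "sap-prod-rg" -Name "sap-app-vm1" -Policy $policy
+```
+
+Database-consistent backup for SAP HANA or SQL Server inside the VM requires additional workload registration steps, as described in the linked support matrices.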
-#### Changing Firewall Settings within VM
+Further backup options are possible with a self-created and self-managed solution, or by using third-party software. These options use Azure Storage in its different forms, including the option to use [immutable storage for blob data](/azure/storage/blobs/immutable-storage-overview). Such a self-managed approach is currently required for DBMS backup of some SAP databases like SAP ASE or IBM Db2.
-It might be necessary to configure the firewall on your virtual machines to allow inbound traffic to your SAP system.
+Follow the recommendations to [protect and validate against ransomware](/azure/security/fundamentals/backup-plan-to-protect-against-ransomware) attacks with Azure best practices.
-
-> ![Windows logo.][Logo_Windows] Windows
->
-> By default, the Windows Firewall within an Azure deployed VM is turned on. You now need to allow the SAP Port to be opened, otherwise the SAP GUI will not be able to connect.
-> To do this:
->
-> * Open Control Panel\System and Security\Windows Firewall to **Advanced Settings**.
-> * Now right-click on Inbound Rules and chose **New Rule**.
-> * In the following Wizard chose to create a new **Port** rule.
-> * In the next step of the wizard, leave the setting at TCP and type in the port number you want to open. Since our SAP instance ID is 00, we took 3200. If your instance has a different instance number, the port you defined earlier based on the instance number should be opened.
-> * In the next part of the wizard, you need to leave the item **Allow Connection** checked.
-> * In the next step of the wizard you need to define whether the rule applies for Domain, Private and Public network. Adjust it if necessary to your needs. However, connecting with SAP GUI from the outside through the public network, you need to have the rule applied to the public network.
-> * In the last step of the wizard, name the rule and save by pressing **Finish**.
->
-> The rule becomes effective immediately.
->
-> ![Port rule definition][planning-guide-figure-1600]
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> The Linux images in the Azure Marketplace do not enable the iptables firewall by default and the connection to your SAP system should work. If you enabled iptables or another firewall, refer to the documentation of iptables or the used firewall to allow inbound tcp traffic to port 32xx (where xx is the system number of your SAP system).
->
->
+> [!TIP]
+> Ensure that your backup strategy covers protecting your deployment automation and encryption keys, both for Azure resources and for transparent database encryption, if used.
-
-#### Security recommendations
+> [!WARNING]
+> For any cross-region backup requirement, determine the RTO and RPO offered by the solution, and confirm that they match your BCDR design and needs.
-The SAP GUI does not connect immediately to any of the SAP instances (port 32xx) which are running, but first connects via the port opened to the SAP message server
-process (port 36xx). In the past, the same port was used by the message server for the internal communication to the application instances. To prevent on-premises
-application servers from inadvertently communicating with a message server in Azure, the internal communication ports can be changed. It is highly recommended to change
-the internal communication between the SAP message server and its application instances to a different port number on systems that have been cloned from on-premises
-systems, such as a clone of development for project testing etc. This can be done with the default profile parameter:
+## Migration approach to Azure
-> rdisp/msserv_internal
->
->
+With the large variety of SAP products, version dependencies, and native OS and DBMS technologies, it isn't possible to capture all available approaches and options. The executing project team on the customer and/or service provider side should consider several techniques for a successful and performant SAP migration to Azure.
-as documented in [Security Settings for the SAP Message Server](https://help.sap.com/saphelp_nwpi71/helpdata/en/47/c56a6938fb2d65e10000000a42189c/content.htm)
+- **Performance testing during migration**
+  An important part of SAP migration planning is technical performance testing. The migration team needs to allow sufficient time and key user personnel to execute application and technical testing of the migrated SAP system, including connected interfaces and applications. Comparing the runtime and correctness of key business processes and optimizing them before the production migration is critical for a successful SAP migration.
+- **Using Azure services for SAP migration**
+  Some VM-based workloads are migrated to Azure without change by using services such as [Azure Migrate](/azure/migrate/), [Azure Site Recovery](/azure/site-recovery/physical-azure-disaster-recovery), or third-party tools. Diligently confirm that the OS version and the running workload are supported by the service. Database workloads are often intentionally not supported, because the service can't guarantee database consistency. Even if the DBMS type is supported by the migration service, the database change (churn) rate is often too high, and most busy SAP systems won't stay within the change rate the migration tools allow, with issues noticed only during the production migration. In many situations, these Azure services aren't suitable for migrating SAP systems. No validation of Azure Site Recovery or Azure Migrate for large-scale SAP migration has been performed, and the proven SAP migration methodology is to rely on DBMS replication or SAP migration tools.
-### <a name="3e9c3690-da67-421a-bc3f-12c520d99a30"></a>Single VM with SAP NetWeaver demo/training scenario
+  A fresh deployment in Azure, instead of a plain VM migration, is preferable and easier to accomplish than it would be on-premises. Automated deployment frameworks such as [Azure Center for SAP solutions](../center-sap-solutions/overview.md) and the [Azure deployment automation framework](../automation/deployment-framework.md) allow quick execution of automated tasks. Migrating SAP landscapes onto the newly deployed infrastructure by using DBMS-native replication technologies such as HANA system replication, DBMS backup and restore, or SAP migration tools uses established SAP know-how.
-![Running single VM SAP demo systems with the same VM names, isolated in Azure Cloud Services][planning-guide-figure-1700]
+- **Infrastructure up-sizing**
+  During an SAP migration, more infrastructure capacity can lead to quicker execution. The project team should consider up-sizing the [VM size](/azure/virtual-machines/sizes) to provide more CPU and memory, as well as more aggregate VM storage and network throughput. Similarly, at the VM level, consider individual disks to increase throughput with [on-demand bursting](/azure/virtual-machines/disks-enable-bursting) and [performance tiers](/azure/virtual-machines/disks-performance-tiers-portal) for Premium SSD v1, or increase IOPS and throughput above the configured values when using [Premium SSD v2](/azure/virtual-machines/disks-deploy-premium-v2?tabs=azure-cli#adjust-disk-performance). Enlarge NFS/SMB file shares to increase performance limits. Keep in mind that Azure managed disks can't be reduced in size, and that reductions in size, performance tiers, and throughput KPIs can have various cooldown times. A sketch of adjusting disk performance with PowerShell follows this list.
-In this scenario we are implementing a typical training/demo system scenario where the complete training/demo scenario is contained in a single VM. We assume that the deployment is done through VM image templates. We also assume that multiple of these demo/trainings VMs need to be deployed with the VMs having the same name. The whole training systems don't have connectivity to your on-premises assets and are an opposite to a hybrid deployment.
+- **Network and data copy optimization**
+  Migration of an SAP system always involves moving large amounts of data to Azure. This data could be database and file backups or replication, application-to-application data transfers, or an SAP migration export. Depending on the chosen migration process, the right network path to move this data needs to be selected. For many data move operations, using the Internet to copy data securely to Azure storage is the quickest path, as opposed to private networks.
-The assumption is that you created a VM Image as described in some sections of chapter [Preparing VMs with SAP for Azure][planning-guide-5.2] in this document.
+  Using ExpressRoute or VPN can often lead to bottlenecks, such as:
+  - Migration data uses too much bandwidth and interferes with user access to workloads running in Azure.
+  - Network bottlenecks on-premises are often identified only during migration, for example a throughput-limiting route or firewall.
+
+  Regardless of the network connection used, single-stream network performance for data copy is often low. Use tools capable of multiple TCP streams to increase data transfer speed. Follow the optimization techniques described by SAP and in the many blog posts on this topic.
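+The following PowerShell sketch, assuming the Az.Compute module, illustrates the kind of temporary disk up-sizing discussed in the list above. The resource group and disk names are placeholders, and the chosen tier, IOPS, and throughput values are examples only, not recommendations.
+
+```powershell
+# Hedged sketch: names and values are placeholders.
+# Premium SSD v1: raise the performance tier and enable on-demand bursting for the migration window.
+$updateV1 = New-AzDiskUpdateConfig -Tier "P40" -BurstingEnabled $true
+Update-AzDisk -ResourceGroupName "sap-prod-rg" -DiskName "sapdb-data-disk0" -DiskUpdate $updateV1
+
+# Premium SSD v2: IOPS and throughput are set directly instead of through tiers.
+$updateV2 = New-AzDiskUpdateConfig -DiskIOPSReadWrite 12000 -DiskMBpsReadWrite 600
+Update-AzDisk -ResourceGroupName "sap-prod-rg" -DiskName "sapdb-log-disk0" -DiskUpdate $updateV2
+```
+
+After the migration, the same cmdlets can be used to scale the values back down, subject to the cooldown times mentioned above.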
-The sequence of events to implement the scenario looks like this:
+> [!TIP]
+> Consider dedicated migration networks for large data transfers to Azure, such as backups or database replication, or using a public endpoint for data transfers to Azure storage, in your planning. The migration should avoid impacting the network paths of end users and applications to on-premises systems. Network planning should consider all phases and a partially productive workload in Azure during the migration.
-##### PowerShell
+## Support and operation aspects for SAP
-* Create a new resource group for every training/demo landscape
+To close the SAP planning guide, here are a few other areas that are important to consider before and during deployment in Azure.
-```powershell
-$rgName = "SAPERPDemo1"
-New-AzResourceGroup -Name $rgName -Location "North Europe"
-```
+### Azure VM extension for SAP
-* Create a new storage account if you don't want to use Managed Disks
+Azure Monitoring Extension, Enhanced Monitoring, and Azure Extension for SAP all describe one and the same item: a VM extension that you need to deploy to provide some basic data about the Azure infrastructure to the SAP Host Agent. SAP notes might refer to it as Monitoring Extension or Enhanced Monitoring. In Azure, we refer to it as Azure Extension for SAP. For support purposes, the extension must be installed on all Azure VMs running SAP workloads. See the [available article](vm-extension-for-sap.md) to implement the Azure VM extension for SAP.
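+As a minimal sketch, assuming the Az.Compute module and placeholder resource group and VM names, the extension can be configured and checked with the following cmdlets; the linked article remains the authoritative procedure.
+
+```powershell
+# Hedged sketch: 'sap-prod-rg' and 'sap-app-vm1' are placeholders.
+# Configure the Azure Extension for SAP so the SAP Host Agent can read Azure infrastructure data.
+Set-AzVMAEMExtension -ResourceGroupName "sap-prod-rg" -VMName "sap-app-vm1"
+
+# Verify the extension configuration afterwards.
+Test-AzVMAEMExtension -ResourceGroupName "sap-prod-rg" -VMName "sap-app-vm1"
+```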
-```powershell
-$suffix = Get-Random -Minimum 100000 -Maximum 999999
-$account = New-AzStorageAccount -ResourceGroupName $rgName -Name "saperpdemo$suffix" -SkuName Standard_LRS -Kind "Storage" -Location "North Europe"
-```
+### SAProuter for SAP support
-* Create a new virtual network for every training/demo landscape to enable the usage of the same hostname and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.
+Operating an SAP landscape in Azure requires connectivity to and from SAP for support purposes. Typically this is an SAProuter connection, either through an encrypted network channel over the Internet or through a private VPN connection to SAP. Consult the available architectures for best practices or an example implementation of SAProuter in Azure.
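+The exact SAProuter design depends on your architecture. As an illustration only, the following PowerShell sketch (Az.Network module assumed; all names and the source address prefix are placeholders) shows a network security group rule that restricts the default SAProuter port 3299 to a designated support subnet.
+
+```powershell
+# Hedged sketch: NSG name, resource group, and source prefix are placeholders.
+$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "sap-prod-rg" -Name "sap-app-nsg"
+Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-SAProuter-3299" `
+    -Protocol Tcp -Direction Inbound -Priority 310 -Access Allow `
+    -SourceAddressPrefix "10.10.5.0/24" -SourcePortRange * `
+    -DestinationAddressPrefix * -DestinationPortRange 3299
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
+```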
-```powershell
-# Create a new Virtual Network
-$rdpRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGRDP -Protocol * -SourcePortRange * -DestinationPortRange 3389 -Access Allow -Direction Inbound -SourceAddressPrefix * -DestinationAddressPrefix * -Priority 100
-$sshRule = New-AzNetworkSecurityRuleConfig -Name SAPERPDemoNSGSSH -Protocol * -SourcePortRange * -DestinationPortRange 22 -Access Allow -Direction Inbound -SourceAddressPrefix * -DestinationAddressPrefix * -Priority 101
-$nsg = New-AzNetworkSecurityGroup -Name SAPERPDemoNSG -ResourceGroupName $rgName -Location "North Europe" -SecurityRules $rdpRule,$sshRule
-
-$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name Subnet1 -AddressPrefix 10.0.1.0/24 -NetworkSecurityGroup $nsg
-$vnet = New-AzVirtualNetwork -Name SAPERPDemoVNet -ResourceGroupName $rgName -Location "North Europe" -AddressPrefix 10.0.1.0/24 -Subnet $subnetConfig
-```
-
-* Create a new public IP address that can be used to access the virtual machine from the internet
-
-```powershell
-# Create a public IP address with a DNS name
-$pip = New-AzPublicIpAddress -Name SAPERPDemoPIP -ResourceGroupName $rgName -Location "North Europe" -DomainNameLabel $rgName.ToLower() -AllocationMethod Dynamic
-```
-
-* Create a new network interface for the virtual machine
-
-```powershell
-# Create a new Network Interface
-$nic = New-AzNetworkInterface -Name SAPERPDemoNIC -ResourceGroupName $rgName -Location "North Europe" -Subnet $vnet.Subnets[0] -PublicIpAddress $pip
-```
-
-* Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new administrator user name needs to be defined together with a password. The size of the VM also needs to be defined.
-
-```powershell
-#####
-# Create a new virtual machine with an official image from the Azure Marketplace
-#####
-$cred=Get-Credential -Message "Type the name and password of the local administrator account."
-$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11
-
-# select image
-$vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
-$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
-# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "SUSE" -Offer "SLES-SAP" -Skus "12-SP1" -Version "latest"
-# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "RedHat" -Offer "RHEL" -Skus "7.2" -Version "latest"
-# $vmconfig = Set-AzVMSourceImage -VM $vmconfig -PublisherName "Oracle" -Offer "Oracle-Linux" -Skus "7.2" -Version "latest"
-# $vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred
-
-$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id
-
-$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig
-$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig
-```
-
-```powershell
-#####
-# Create a new virtual machine with a VHD that contains the private image that you want to use
-#####
-$cred=Get-Credential -Message "Type the name and password of the local administrator account."
-$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11
-
-$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id
-
-$diskName="osfromimage"
-$osDiskUri=$account.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName + ".vhd"
-
-$vmconfig = Set-AzVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -SourceImageUri <path to VHD that contains the OS image> -Windows
-$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential $cred
-#$vmconfig = Set-AzVMOSDisk -VM $vmconfig -Name $diskName -VhdUri $osDiskUri -CreateOption fromImage -SourceImageUri <path to VHD that contains the OS image> -Linux
-#$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred
-
-$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig
-$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig
-```
-
-```powershell
-#####
-# Create a new virtual machine with a Managed Disk Image
-#####
-$cred=Get-Credential -Message "Type the name and password of the local administrator account."
-$vmconfig = New-AzVMConfig -VMName SAPERPDemo -VMSize Standard_D11
-
-$vmconfig = Add-AzVMNetworkInterface -VM $vmconfig -Id $nic.Id
-
-$vmconfig = Set-AzVMSourceImage -VM $vmconfig -Id <Id of Managed Disk Image>
-$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Windows -ComputerName "SAPERPDemo" -Credential $cred
-#$vmconfig = Set-AzVMOperatingSystem -VM $vmconfig -Linux -ComputerName "SAPERPDemo" -Credential $cred
-
-$vmconfig = Set-AzVMBootDiagnostics -Disable -VM $vmconfig
-$vm = New-AzVM -ResourceGroupName $rgName -Location "North Europe" -VM $vmconfig
-```
-
-* Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs) must be unique within Azure.
-
-```powershell
-# Optional: Attach additional VHD data disks
-$vm = Get-AzVM -ResourceGroupName $rgName -Name SAPERPDemo
-$dataDiskUri = $account.PrimaryEndpoints.Blob.ToString() + "vhds/datadisk.vhd"
-Add-AzVMDataDisk -VM $vm -Name datadisk -VhdUri $dataDiskUri -DiskSizeInGB 1023 -CreateOption empty | Update-AzVM
-
-# Optional: Attach additional Managed Disks
-$vm = Get-AzVM -ResourceGroupName $rgName -Name SAPERPDemo
-Add-AzVMDataDisk -VM $vm -Name datadisk -DiskSizeInGB 1023 -CreateOption empty -Lun 0 | Update-AzVM
-```
-
-##### CLI
-
-The following example code can be used on Linux. For Windows, either use PowerShell as described above or adapt the example to use %rgName% instead of $rgName and set the environment variable using the Windows command *set*.
-
-* Create a new resource group for every training/demo landscape
-
-```azurecli
-rgName=SAPERPDemo1
-rgNameLower=saperpdemo1
-az group create --name $rgName --location "North Europe"
-```
-
-* Create a new storage account
-
-```azurecli
-az storage account create --resource-group $rgName --location "North Europe" --kind Storage --sku Standard_LRS --name $rgNameLower
-```
-
-* Create a new virtual network for every training/demo landscape to enable the usage of the same hostname and IP addresses. The virtual network is protected by a Network Security Group that only allows traffic to port 3389 to enable Remote Desktop access and port 22 for SSH.
-
-```azurecli
-az network nsg create --resource-group $rgName --location "North Europe" --name SAPERPDemoNSG
-az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGRDP --protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --destination-port-range 3389 --access Allow --priority 100 --direction Inbound
-az network nsg rule create --resource-group $rgName --nsg-name SAPERPDemoNSG --name SAPERPDemoNSGSSH --protocol \* --source-address-prefix \* --source-port-range \* --destination-address-prefix \* --destination-port-range 22 --access Allow --priority 101 --direction Inbound
-
-az network vnet create --resource-group $rgName --name SAPERPDemoVNet --location "North Europe" --address-prefixes 10.0.1.0/24
-az network vnet subnet create --resource-group $rgName --vnet-name SAPERPDemoVNet --name Subnet1 --address-prefix 10.0.1.0/24 --network-security-group SAPERPDemoNSG
-```
-
-* Create a new public IP address that can be used to access the virtual machine from the internet
-
-```azurecli
-az network public-ip create --resource-group $rgName --name SAPERPDemoPIP --location "North Europe" --dns-name $rgNameLower --allocation-method Dynamic
-```
-
-* Create a new network interface for the virtual machine
-
-```azurecli
-az network nic create --resource-group $rgName --location "North Europe" --name SAPERPDemoNIC --public-ip-address SAPERPDemoPIP --subnet Subnet1 --vnet-name SAPERPDemoVNet
-```
-
-* Create a virtual machine. For this scenario, every VM will have the same name. The SAP SID of the SAP NetWeaver instances in those VMs will be the same as well. Within the Azure Resource Group, the name of the VM needs to be unique, but in different Azure Resource Groups you can run VMs with the same name. The default 'Administrator' account of Windows or 'root' for Linux are not valid. Therefore, a new administrator user name needs to be defined together with a password. The size of the VM also needs to be defined.
-
-```azurecli
-#####
-# Create virtual machines using storage accounts
-#####
-az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-password <password> --size Standard_D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size Standard-D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size Standard-D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size Standard-D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --authentication-type password
-
-#####
-# Create virtual machines using Managed Disks
-#####
-az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image SUSE:SLES-SAP:12-SP1:latest --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os --authentication-type password
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image RedHat:RHEL:7.2:latest --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os --authentication-type password
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --image "Oracle:Oracle-Linux:7.2:latest" --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os --authentication-type password
-```
-
-```azurecli
-#####
-# Create a new virtual machine with a VHD that contains the private image that you want to use
-#####
-az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --os-type Windows --admin-username <username> --admin-password <password> --size Standard-D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path to image vhd>
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --os-type Linux --admin-username <username> --admin-password <password> --size Standard-D11 --use-unmanaged-disk --storage-account $rgNameLower --storage-container-name vhds --os-disk-name os --image <path to image vhd> --authentication-type password
-
-#####
-# Create a new virtual machine with a Managed Disk Image
-#####
-az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os --image <managed disk image id>
-#az vm create --resource-group $rgName --location "North Europe" --name SAPERPDemo --nics SAPERPDemoNIC --admin-username <username> --admin-password <password> --size Standard-DS11-v2 --os-disk-name os --image <managed disk image id> --authentication-type password
-```
-
-* Optionally add additional disks and restore necessary content. All blob names (URLs to the blobs) must be unique within Azure.
-
-```azurecli
-# Optional: Attach additional VHD data disks
-az vm unmanaged-disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --vhd-uri https://$rgNameLower.blob.core.windows.net/vhds/data.vhd --new
-
-# Optional: Attach additional Managed Disks
-az vm disk attach --resource-group $rgName --vm-name SAPERPDemo --size-gb 1023 --disk datadisk --new
-```
-
-##### Template
-
-You can use the sample templates on the Azure-quickstart-templates repository on GitHub.
-
-* [Simple Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-linux)
-* [Simple Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows)
-* [VM from image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-from-user-image)
-
-### Implement a set of VMs that communicate within Azure
-
-This non-hybrid scenario is a typical scenario for training and demo purposes where the software representing the demo/training scenario is spread over multiple VMs. The different components installed in the different VMs need to communicate with each other. Again, in this scenario no on-premises network communication or cross-premises scenario is needed.
-
-This scenario is an extension of the installation described in chapter [Single VM with SAP NetWeaver demo/training scenario][planning-guide-7.1] of this document. In this case, more virtual machines will be added to an existing resource group. In the following example, the training landscape consists of an SAP ASCS/SCS VM, a VM running a DBMS, and an SAP Application Server instance VM.
-
-Before you build this scenario, you need to think about basic settings as already exercised in the scenario before.
-
-#### Resource Group and Virtual Machine naming
-
-All resource group names must be unique. Develop your own naming scheme of your resources, such as `<rg-name`>-suffix.
-
-The virtual machine name has to be unique within the resource group.
-
-#### Set up Network for communication between the different VMs
-
-![Set of VMs within an Azure Virtual Network][planning-guide-figure-1900]
-
-To prevent naming collisions with clones of the same training/demo landscapes, you need to create an Azure Virtual Network for every landscape. DNS name resolution will be provided by Azure or you can configure your own DNS server outside Azure (not to be further discussed here). In this scenario, we do not configure our own DNS. For all virtual machines inside one Azure Virtual Network, communication via hostnames will be enabled.
-
-The reasons to separate training or demo landscapes by virtual networks and not only resource groups could be:
-
-* The SAP landscape as set up needs its own AD/OpenLDAP and a Domain Server needs to be part of each of the landscapes.
-* The SAP landscape as set up has components that need to work with fixed IP addresses.
-
-More details about Azure Virtual Networks and how to define them can be found in [this article][virtual-networks-create-vnet-arm-pportal].
-
-## Deploying SAP VMs with corporate network connectivity (Cross-Premises)
-
-You run an SAP landscape and want to divide the deployment between bare-metal for high-end DBMS servers, on-premises virtualized environments for application layers, and smaller 2-Tier configured SAP systems and Azure IaaS. The base assumption is that SAP systems within one SAP landscape need to communicate with each other and with many other software components deployed in the company, independent of their deployment form. There also should be no differences introduced by the deployment form for the end user connecting with SAP GUI or other interfaces. These conditions can only be met when we have the on-premises Active Directory/OpenLDAP and DNS services extended to the Azure systems through site-to-site/multi-site connectivity or private connections like Azure ExpressRoute.
---
-### Scenario of an SAP landscape
-
-The cross-premises or hybrid scenario can be roughly described like in the graphics below:
-
-![Site-to-Site connectivity between on-premises and Azure assets][planning-guide-figure-2100]
-
-The minimum requirement is the use of secure communication protocols such as SSL/TLS for browser access or VPN-based connections for system access to the Azure services. The assumption is that companies handle the VPN connection between their corporate network and Azure differently. Some companies might blankly open all the ports. Some other companies might want to be precise in which ports they need to open, etc.
-
-In the table below typical SAP communication ports are listed. Basically it is sufficient to open the SAP gateway port.
-
-<!-- sapms is prefix of a SAP service name and not a spelling error -->
-
-| Service | Port Name | Example `<nn`> = 01 | Default Range (min-max) | Comment |
-| | | | | |
-| Dispatcher |sapdp`<nn>` see * |3201 |3200 - 3299 |SAP Dispatcher, used by SAP GUI for Windows and Java |
-| Message server |sapms`<sid`> see ** |3600 |free sapms`<anySID`> |sid = SAP-System-ID |
-| Gateway |sapgw`<nn`> see * |3301 |free |SAP gateway, used for CPIC and RFC communication |
-| SAP router |sapdp99 |3299 |free |Only CI (central instance) Service names can be reassigned in /etc/services to an arbitrary value after installation. |
-
-*) nn = SAP Instance Number
-
-**) sid = SAP-System-ID
-
-For more information, see [TCP/IP Ports Used by SAP Applications](https://help.sap.com/docs/Security/575a9f0e56f34c6e8138439eefc32b16/616a3c0b1cc748238de9c0341b15c63c.html). Using this document, you can open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
-
-Other security measures when deploying VMs in such a scenario could be to create a [Network Security Group][virtual-networks-nsg] to define access rules.
---
-#### Printing on a local network printer from SAP instance in Azure
-
-##### Printing over TCP/IP in Cross-Premises scenario
-
-Setting up your on-premises TCP/IP based network printers in an Azure VM is overall the same as in your corporate network, assuming you do have a VPN Site-To-Site tunnel or ExpressRoute connection established.
--
-> ![Windows logo.][Logo_Windows] Windows
->
-> To do this:
->
-> * Some network printers come with a configuration wizard which makes it easy to set up your printer in an Azure VM. If no wizard software has been distributed with the printer, the manual way to set up the printer is to create a new TCP/IP printer port.
-> * Open Control Panel -> Devices and Printers -> Add a printer
-> * Choose Add a printer using a TCP/IP address or hostname
-> * Type in the IP address of the printer
-> * Printer Port standard 9100
-> * If necessary install the appropriate printer driver manually.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> * like for Windows just follow the standard procedure to install a network printer
-> * just follow the public Linux guides for [SUSE](https://www.suse.com/documentation/sles-12/book_sle_deployment/data/sec_y2_hw_print.html) or [Red Hat and Oracle Linux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-printer_configuration) on how to add a printer.
->
->
--
-![Network printing][planning-guide-figure-2200]
-
-##### Host-based printer over SMB (shared printer) in Cross-Premises scenario
-
-Host-based printers are not network-compatible by design. But a host-based printer can be shared among computers on a network as long as the printer is connected to a powered-on computer. Connect your corporate network either Site-To-Site or ExpressRoute and share your local printer. The SMB protocol uses NetBIOS instead of DNS as name service. The NetBIOS host name can be different from the DNS host name. The standard case is that the NetBIOS host name and the DNS host name are identical. The DNS domain does not make sense in the NetBIOS name space. Accordingly, the fully qualified DNS host name consisting of the DNS host name and DNS domain must not be used in the NetBIOS name space.
-
-The printer share is identified by a unique name in the network:
-
-* Host name of the SMB host (always needed).
-* Name of the share (always needed).
-* Name of the domain if printer share is not in the same domain as SAP system.
-* Additionally, a user name and a password may be required to access the printer share.
-
-How to:
--
-> ![Windows logo.][Logo_Windows] Windows
->
-> Share your local printer.
-> In the Azure VM, open the Windows Explorer and type in the share name of the printer.
-> A printer installation wizard will guide you through the installation process.
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> Here are some examples of documentation about configuring network printers in Linux or that include a chapter about printing in Linux. It works the same way in an Azure Linux VM as long as the VM is part of a VPN:
->
-> * [SLES - Printing via SMB (Samba) Share or Windows Share](https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share)
-> * [RHEL or Oracle Linux - Starting the Print Settings Configuration Tool](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/system_administrators_guide/index#sec-Starting_Print_Settings_Config)
---
-##### USB Printer (printer forwarding)
-
-In Azure the ability of the Remote Desktop Services to provide users the access to their local printer devices in a remote session is not available.
--
-> ![Windows logo.][Logo_Windows] Windows
->
> For more information, see [Printer sharing technical details](https://technet.microsoft.com/library/jj590748.aspx)
--
-#### Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
-
-The SAP Change and Transport System (TMS) needs to be configured to export and import transport request across systems in the landscape. We assume that the development instances of an SAP system (DEV) are located in Azure whereas the quality assurance (QA) and productive systems (PRD) are on-premises. Furthermore, we assume that there is a central transport directory.
-
-##### Configuring the Transport Domain
-
-Configure your Transport Domain on the system you designated as the Transport Domain Controller as described in [Configuring the Transport Domain Controller](https://help.sap.com/viewer/4a368c163b08418890a406d413933ba7/202009.001/en-US/44b4a0b47acc11d1899e0000e829fbbd.html?q=Configuring%20the%20Transport%20Domain%20Controller). A system user TMSADM will be created and the required RFC destination will be generated. You may check these RFC connections using the transaction SM59. Hostname resolution must be enabled across your transport domain.
-
-How to:
-
-* In our scenario, we decided the on-premises QAS system will be the CTS domain controller. Call transaction STMS. The TMS dialog box appears. A Configure Transport Domain dialog box is displayed. (This dialog box only appears if you have not yet configured a transport domain.)
-* Make sure that the automatically created user TMSADM is authorized (SM59 -> ABAP Connection -> TMSADM@E61.DOMAIN_E61 -> Details -> Utilities(M) -> Authorization Test). The initial screen of transaction STMS should show that this SAP System is now functioning as the controller of the transport domain as shown here:
-
-![Initial screen of transaction STMS on the domain controller][planning-guide-figure-2300]
-
-#### Including SAP Systems in the Transport Domain
-
-The sequence of including an SAP system in a transport domain looks as follows:
-
-* On the DEV system in Azure, go to the transport system (Client 000) and call transaction STMS. Choose Other Configuration from the dialog box and continue with Include System in Domain. Specify the Domain Controller as target host ([Including SAP Systems in the Transport Domain](https://help.sap.com/viewer/4a368c163b08418890a406d413933ba7/202009.001/en-US/44b4a0c17acc11d1899e0000e829fbbd.html?q=Including%20SAP%20Systems%20in%20the%20Transport%20Domain)). The system is now waiting to be included in the transport domain.
-* For security reasons, you then have to go back to the domain controller to confirm your request. Choose System Overview and Approve of the waiting system. Then confirm the prompt and the configuration will be distributed.
-
-This SAP system now contains the necessary information about all the other SAP systems in the transport domain. At the same time, the address data of the new SAP system is sent to all the other SAP systems, and the SAP system is entered in the transport profile of the transport control program. Check whether RFCs and access to the transport directory of the domain work.
-
-Continue with the configuration of your transport system as usual as described in the documentation [Change and Transport System](https://help.sap.com/viewer/4a368c163b08418890a406d413933ba7/202009.001/en-US/3bdfba3692dc635ce10000009b38f839.html).
-
-How to:
-
-* Make sure your STMS on premises is configured correctly.
-* Make sure the hostname of the Transport Domain Controller can be resolved by your virtual machine on Azure and vice versa.
-* Call transaction STMS -> Other Configuration -> Include System in Domain.
-* Confirm the connection in the on premises TMS system.
-* Configure transport routes, groups, and layers as usual.
-
-In site-to-site connected cross-premises scenarios, the latency between on-premises and Azure still can be substantial. If we follow the sequence of transporting objects through development and test systems to production or think about applying transports or support packages to the different systems, you realize that, dependent on the location of the central transport directory, some of the systems will encounter high latency reading or writing data in the central transport directory. The situation is similar to SAP landscape configurations where the different systems are spread through different data centers with substantial distance between the data centers.
-
-In order to work around such latency and have the systems work fast in reading or writing to or from the transport directory, you can set up two STMS transport domains (one for on-premises and one for the systems in Azure) and link the transport domains. Check this [documentation](https://help.sap.com/saphelp_me60/helpdata/en/c4/6045377b52253de10000009b38f889/content.htm?frameset=/en/57/38dd924eb711d182bf0000e829fbfe/frameset.htm), which explains the principles behind this concept in the SAP TMS.
--
-How to:
-
-* [Set up a transport domain](https://help.sap.com/viewer/4a368c163b08418890a406d413933ba7/202009.001/en-US/44b4a0b47acc11d1899e0000e829fbbd.html?q=Set%20up%20a%20transport%20domain) in each location (on-premises and Azure) using transaction STMS
-* [Link the domains with a domain link](https://help.sap.com/viewer/4a368c163b08418890a406d413933ba7/202009.001/en-US/14c795388d62e450e10000009b38f889.html?q=Link%20the%20domains%20with%20a%20domain%20link) and confirm the link between the two domains.
-* Distribute the configuration to the linked system.
-
-#### RFC traffic between SAP instances located in Azure and on-premises (Cross-Premises)
-
-RFC traffic between systems, which are on-premises and in Azure needs to work. To set up a connection call transaction SM59 in a source system where you need to define an RFC connection towards the target system. The configuration is similar to the standard setup of an RFC Connection.
-
-We assume that in the cross-premises scenario, the VMs, which run SAP systems that need to communicate with each other are in the same domain. Therefore the setup of an RFC connection between SAP systems does not differ from the setup steps and inputs in on-premises scenarios.
-
-#### Accessing local fileshares from SAP instances located in Azure or vice versa
-
-SAP instances located in Azure need to access file shares, which are within the corporate premises. In addition, on-premises SAP instances need to access file shares, which are located in Azure. To enable the file shares, you must configure the permissions and sharing options on the local system. Make sure to open the ports on the VPN or ExpressRoute connection between Azure and your datacenter.
-
-## Supportability
-
-### <a name="6f0a47f3-a289-4090-a053-2521618a28c3"></a>Azure Extension for SAP
-
-In order to feed some portion of Azure infrastructure information of mission critical SAP systems to the SAP Host Agent instances, installed in VMs, an Azure (VM) Extension for SAP needs to get installed for the deployed VMs. Since the demands by SAP were specific to SAP applications, Microsoft decided not to generically implement the required functionality into Azure, but leave it for customers to deploy the necessary VM extension and configurations to their Virtual Machines running in Azure. However, deployment and lifecycle management of the Azure VM Extension for SAP will be mostly automated by Azure.
-
-#### Solution design
-
-The solution developed to enable SAP Host Agent getting the required information is based on the architecture of Azure VM Agent and Extension framework. The idea of the Azure VM Agent and Extension framework is to allow installation of software application(s) available in the Azure VM Extension gallery within a VM. The principle idea behind this concept is to allow (in cases like the Azure Extension for SAP), the deployment of special functionality into a VM and the configuration of such software at deployment time.
-
-The 'Azure VM Agent' that enables handling of specific Azure VM Extensions within the VM is injected into Windows VMs by default on VM creation in the Azure portal. In case of SUSE, Red Hat or Oracle Linux, the VM agent is already part of the
-Azure Marketplace image. In case, one would upload a Linux VM from on-premises to Azure the VM agent has to be installed manually.
-
-The basic building blocks of the solution to provide Azure infrastructure information to SAP Host agent in Azure looks like this:
-
-![Microsoft Azure Extension components][planning-guide-figure-2400]
-
-As shown in the block diagram above, one part of the solution is hosted in the Azure VM Image and Azure Extension Gallery, which is a globally replicated repository that is managed by Azure Operations. It is the responsibility of the joint SAP/MS team working on the Azure implementation of SAP to work with Azure Operations to publish new versions of the Azure Extension for SAP.
-
-When you deploy a new Windows VM, the Azure VM Agent is automatically added into the VM. The function of this agent is to coordinate the loading and configuration of the Azure Extensions of the VMs. For Linux VMs, the Azure VM Agent is already part of the Azure Marketplace OS image.
-
-However, there is a step that still needs to be executed by the customer. This is the enablement and configuration of the performance collection. The process related to the configuration is automated by a PowerShell script or CLI command. The PowerShell script can be downloaded in the Microsoft Azure Script Center as described in the [Deployment Guide][deployment-guide].
-
-The overall Architecture of the Azure extension for SAP looks like:
-
-![Azure extension for SAP ][planning-guide-figure-2500]
-
-**For the exact how-to and for detailed steps of using these PowerShell cmdlets or CLI command during deployments, follow the instructions given in the [Deployment Guide][deployment-guide].**
-
-### Integration of Azure located SAP instance into SAProuter
-
-SAP instances running in Azure need to be accessible from SAProuter as well.
-
-![SAP-Router Network Connection][planning-guide-figure-2600]
-
-A SAProuter enables the TCP/IP communication between participating systems if there is no direct IP connection. This provides the advantage that no end-to-end connection between the communication partners is necessary on network level. The SAProuter is listening on port 3299 by default.
-To connect SAP instances through a SAProuter, you need to give the SAProuter string and host name with any attempt to connect.
-
-## SAP NetWeaver AS Java
-
-So far the focus of the document has been SAP NetWeaver in general or the SAP NetWeaver ABAP stack. In this small section, specific considerations for the SAP Java stack are listed. One of the most important SAP NetWeaver Java exclusively based applications is the SAP Enterprise Portal. Other SAP NetWeaver based applications like SAP PI and SAP Solution Manager use both the SAP NetWeaver ABAP and Java stacks. Therefore, there certainly is a need to consider specific aspects related to the SAP NetWeaver Java stack as well.
-
-### SAP Enterprise Portal
-
-The setup of an SAP Portal in an Azure Virtual Machine does not differ from an on premises installation if you are deploying in cross-premises scenarios. Since the DNS is done by on-premises, the port settings of the individual instances can be done as configured on-premises. The recommendations and restrictions described in this document so far apply for an application like SAP Enterprise Portal or the SAP NetWeaver Java stack in general.
-
-![Exposed SAP Portal][planning-guide-figure-2700]
-
-A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal to the Internet while the virtual machine host is connected to the company network via site-to-site VPN tunnel or ExpressRoute. For such a scenario, you have to make sure that specific ports are open and not blocked by firewall or network security group.
-
-The initial portal URI is http(s):`<Portalserver`>:5XX00/irj where the port is formed as documented by SAP in [AS Java Ports](https://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm).
-
-![Endpoint configuration][planning-guide-figure-2800]
-
-If you want to customize the URL and/or ports of your SAP Enterprise Portal, check this documentation:
-
-* [Change Portal URL](https://wiki.scn.sap.com/wiki/display/EP/Change+Portal+URL)
-* [Change Default port numbers, Portal port numbers](https://wiki.scn.sap.com/wiki/display/NWTech/Change+Default++port+numbers%2C+Portal+port+numbers)
-
-## High Availability (HA) and Disaster Recovery (DR) for SAP NetWeaver running on Azure Virtual Machines
-
-### Definition of terminologies
-
-The term **high availability (HA)** is generally related to a set of technologies that minimizes IT disruptions by providing business continuity of IT services through redundant, fault-tolerant, or failover protected components inside the **same** data center. In our case, within one Azure Region.
-
-**Disaster recovery (DR)** is also targeting minimizing IT services disruption, and their recovery but across **different** data centers, that are usually located hundreds of kilometers away. In our case usually between different Azure Regions within the same geopolitical region or as established by you as a customer.
-
-### Overview of High Availability
-
-We can separate the discussion about SAP high availability in Azure into two parts:
-
-* **Azure infrastructure high availability**, for example HA of compute (VMs), network, storage etc. and its benefits for increasing SAP application availability.
-* **SAP application high availability**, for example HA of SAP software components:
- * SAP application servers
- * SAP ASCS/SCS instance
- * DB server
-
-and how it can be combined with Azure infrastructure HA.
-
-SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises physical or virtual environment. The following paper from SAP describes [standard SAP High Availability configurations in virtualized environments on Windows](https://help.sap.com/docs/SAP_NETWEAVER_703/a2cf03bc73a44b2a87d535cdb35e529e/45237d7e9f9b4002e10000000a155369.html). There is no sapinst-integrated SAP-HA configuration for Linux. For more information about SAP HA on-premises for Linux, see [SAP High Availability Partner Information](https://scn.sap.com/docs/DOC-8541).
-
-### Azure Infrastructure High Availability
-
-There is currently a single-VM SLA of 99.9%. To get an idea of what the availability of a single VM might look like, you can build the product of the different available [Azure SLAs](https://azure.microsoft.com/support/legal/sla/).
-
-The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime corresponds to 21.6 minutes. As usual, the availability of the different services will multiply in the following way:
-
-(Availability Service #1/100) * (Availability Service #2/100) * (Availability Service #3/100)
-
-Like:
-
-(99.95/100) * (99.9/100) * (99.9/100) = 0.9975 or an overall availability of 99.75%.
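
As a quick sanity check, the same calculation can be scripted. This is a minimal sketch that only reuses the example SLA values above; it is not a statement about any specific Azure service:

```bash
# Compound availability of three dependent services (example SLA values from above)
overall=$(echo "scale=4; 99.95/100 * 99.9/100 * 99.9/100 * 100" | bc)
echo "Overall availability: ${overall}%"                        # ~99.75%

# Expected downtime per 30-day month (43,200 minutes)
downtime=$(echo "scale=1; 43200 * (100 - ${overall}) / 100" | bc)
echo "Expected downtime per month: ${downtime} minutes"         # ~108 minutes
```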
-
-#### Virtual Machine (VM) High Availability
-
-There are two types of Azure platform events that can affect the availability of your virtual machines: planned maintenance and unplanned maintenance.
-
-* Planned maintenance events are periodic updates made by Microsoft to the underlying Azure platform to improve overall reliability, performance, and security of the platform infrastructure that your virtual machines run on.
-* Unplanned maintenance events occur when the hardware or physical infrastructure underlying your virtual machine has faulted in some way. This may include local network failures, local disk failures, or other rack level failures. When such a failure is detected, the Azure platform will automatically migrate your virtual machine from the unhealthy physical server hosting your virtual machine to a healthy physical server. Such events are rare, but may also cause your virtual machine to reboot.
-
-For more details, see [Availability of Windows virtual machines in Azure](../../virtual-machines/availability.md) and [Availability of Linux virtual machines in Azure](../../virtual-machines/availability.md).
-
-#### Azure Storage Redundancy
-
-The data in your Microsoft Azure Storage Account is always replicated to ensure durability and high availability, meeting the Azure Storage SLA even in the face of transient hardware failures.
-
-Since Azure Storage keeps three copies of the data by default, RAID 5 or RAID 1 across multiple Azure disks isn't necessary.
-
-For more details, see [Azure Storage redundancy](../../storage/common/storage-redundancy.md).
-
-#### Utilizing Azure Infrastructure VM Restart to Achieve Higher Availability of SAP Applications
-
-If you decide not to use functionalities like Windows Server Failover Clustering (WSFC) or Pacemaker on Linux (currently only supported for SLES 12 and higher), Azure VM Restart is utilized to protect an SAP System against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
-
-> [!NOTE]
-> It is important to mention that Azure VM Restart primarily protects VMs and NOT applications. VM Restart does not offer high availability for SAP applications, but it does offer a certain level of infrastructure availability and therefore indirectly higher availability of SAP systems. There is also no SLA for the time it will take to restart a VM after a planned or unplanned host outage. Therefore, this method of high availability is not suitable for critical components of an SAP system like (A)SCS or DBMS.
->
->
-
-Another important infrastructure element for high availability is storage. For example, the Azure Storage SLA is 99.9% availability. If you deploy all VMs with their disks into a single Azure Storage Account, potential Azure Storage unavailability causes unavailability of all VMs placed in that Azure Storage Account, and also of all SAP components running inside those VMs.
-
-Instead of putting all VMs into one single Azure Storage Account, you can also use dedicated storage accounts for each VM, and in this way increase overall VM and SAP application availability by using multiple independent Azure Storage Accounts.
-
-Azure managed disks are automatically placed in the Fault Domain of the virtual machine they are attached to. If you place two virtual machines in an availability set and use Managed Disks, the platform will take care of distributing the Managed Disks into different Fault Domains as well. If you plan to use Premium Storage, we highly recommend using Managed Disks as well.
-
-A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and storage accounts could look like this:
-
-![Diagram that shows an SAP NetWeaver system that uses Azure infrastructure HA and storage accounts.][planning-guide-figure-2900]
-
-A sample architecture of an SAP NetWeaver system that uses Azure infrastructure HA and Managed Disks could look like this:
-
-![Utilizing Azure infrastructure HA to achieve SAP application higher availability][planning-guide-figure-2901]
-
-For critical SAP components, we achieved the following so far:
-
-* High Availability of SAP Application Servers (AS)
-
- SAP application server instances are redundant components. Each SAP AS instance is deployed on its own VM, which runs in a different Azure Fault and Upgrade Domain (see chapters [Fault Domains][planning-guide-3.2.1] and [Upgrade Domains][planning-guide-3.2.2]). This is ensured by using Azure availability sets (see chapter [Azure Availability Sets][planning-guide-3.2.3]). Potential planned or unplanned unavailability of an Azure Fault or Upgrade Domain will cause unavailability of a restricted number of VMs with their SAP AS instances.
-
- Each SAP AS instance is placed in its own Azure Storage account. Potential unavailability of one Azure Storage Account will cause unavailability of only one VM with its SAP AS instance. However, be aware that there is a limit on the number of Azure Storage Accounts within one Azure subscription. To ensure automatic start of the (A)SCS instance after a VM reboot, make sure to set the Autostart parameter in the (A)SCS instance start profile, as described in chapter [Using Autostart for SAP instances][planning-guide-11.5].
- Also read chapter [High Availability for SAP Application Servers][planning-guide-11.4.1] for more details.
-
- Even if you use Managed Disks, those disks are also stored in an Azure Storage Account and can be unavailable in the event of a storage outage.
-
-* *Higher* Availability of SAP (A)SCS instance
-
- Here we utilize Azure VM Restart to protect the VM with the installed SAP (A)SCS instance. In the case of planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As mentioned earlier, Azure VM Restart primarily protects VMs and NOT applications, in this case the (A)SCS instance. Through the VM Restart, we indirectly reach higher availability of the SAP (A)SCS instance. To ensure automatic start of the (A)SCS instance after the VM reboot, make sure to set the Autostart parameter in the (A)SCS instance start profile, as described in chapter [Using Autostart for SAP instances][planning-guide-11.5]. This means the (A)SCS instance, as a Single Point of Failure (SPOF) running in a single VM, is the determining factor for the availability of the whole SAP landscape.
-
-* *Higher* Availability of DBMS Server
-
- Here, similar to the SAP (A)SCS instance use case, we utilize Azure VM Restart to protect the VM with installed DBMS software, and we achieve higher availability of DBMS software through VM Restart.
- DBMS running in a single VM is also a SPOF, and it is the determinative factor for the availability of the whole SAP landscape.
-
-### SAP Application High Availability on Azure IaaS
-
-To achieve full SAP system high availability, we need to protect all critical SAP system components: redundant SAP application servers, and unique components (that is, single points of failure) like the SAP (A)SCS instance and the DBMS.
-
-#### <a name="5d9d36f9-9058-435d-8367-5ad05f00de77"></a>High Availability for SAP Application Servers
-
-For the SAP application servers/dialog instances, it's not necessary to think about a specific high availability solution. High availability is achieved by redundancy, that is, by having enough of them in different virtual machines. They should all be placed in the same Azure availability set so that the VMs aren't updated at the same time during planned maintenance downtime. The basic functionality, which builds on different Upgrade and Fault Domains within an Azure Scale Unit, was already introduced in chapter [Upgrade Domains][planning-guide-3.2.2]. Azure availability sets were presented in chapter [Azure Availability Sets][planning-guide-3.2.3] of this document.
-
-The number of Fault and Upgrade Domains that can be used by an Azure availability set within an Azure Scale Unit is finite. This means that if you put enough VMs into one availability set, sooner or later more than one VM ends up in the same Fault or Upgrade Domain.
-
-Deploying a few SAP application server instances in their dedicated VMs, and assuming that we have five Upgrade Domains, the following picture emerges. The actual maximum number of fault and update domains within an availability set might change in the future:
-
-![HA of SAP Application Servers in Azure][planning-guide-figure-3000]
--
-#### High Availability for SAP Central Services on Azure
-
-For High availability architecture of SAP Central Services on Azure, check the article [High-availability architecture and scenarios for SAP NetWeaver](./sap-high-availability-architecture-scenarios.md) as entry information. The article points to more detailed descriptions for the particular operating systems.
-
-#### High Availability for the SAP database instance
-
-The typical SAP DBMS HA setup is based on two DBMS VMs where DBMS high-availability functionality is used to replicate data from the active DBMS instance to the second VM into a passive DBMS instance.
-
-High Availability and Disaster recovery functionality for DBMS in general as well as specific DBMS are described in the [DBMS Deployment Guide][dbms-guide].
-
-#### End-to-End High Availability for the Complete SAP System
-
-Here are two examples of a complete SAP NetWeaver HA architecture in Azure - one for Windows and one for Linux.
-
-Unmanaged disks only: The concepts as explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed exceeds the maximum limit of Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account, keeping in mind the IOPS limits of Azure Storage Accounts. See [Scalability and performance targets for standard storage accounts](../../storage/common/scalability-targets-standard-account.md).
--
-##### ![Windows logo.][Logo_Windows] HA on Windows
-
-![Diagram that shows the SAP NetWeaver Application HA Architecture with SQL Server in Azure IaaS.][planning-guide-figure-3200]
-
-The following Azure constructs are used for the SAP NetWeaver system, to minimize impact by infrastructure issues and host patching:
-
-* The complete system is deployed on Azure (required - DBMS layer, (A)SCS instance, and complete application layer need to run in the same location).
-* The complete system runs within one Azure subscription (required).
-* The complete system runs within one Azure Virtual Network (required).
-* The separation of the VMs of one SAP system into three availability sets is possible even with all the VMs belonging to the same Virtual Network.
-* Each layer (for example DBMS, ASCS, Application Servers) must use a dedicated availability set.
-* All VMs running DBMS instances of one SAP system are in one availability set. We assume that there is more than one VM running DBMS instances per system since native DBMS high availability features are used, like SQL Server Always On or Oracle Data Guard.
-* All VMs running DBMS instances use their own storage account. DBMS data and log files are replicated from one storage account to another storage account using DBMS high availability functions that synchronize the data. Unavailability of one storage account will cause unavailability of one SQL Windows cluster node, but not the whole SQL Server service.
-* All VMs running (A)SCS instance of one SAP system are in one availability set. A Windows Server Failover Cluster (WSFC) is configured inside of those VMs to protect the (A)SCS instance.
-* All VMs running (A)SCS instances use their own storage account. (A)SCS instance files and SAP global folder are replicated from one storage account to another storage account using SIOS DataKeeper replication. Unavailability of one storage account will cause unavailability of one (A)SCS Windows cluster node, but not the whole (A)SCS service.
-* ALL the VMs representing the SAP application server layer are in a third availability set.
-* ALL the VMs running SAP application servers use their own storage account. Unavailability of one storage account will cause unavailability of one SAP application server, where other SAP application servers continue to run.
-
-The following figure illustrates the same landscape using Managed Disks.
-
-![SAP NetWeaver Application HA Architecture with SQL Server in Azure IaaS][planning-guide-figure-3201]
-
-##### ![Linux logo.][Logo_Linux] HA on Linux
-
-The architecture for SAP HA on Linux on Azure is basically the same as for Windows as described above. Refer to SAP Note [1928533] for a list of supported high availability solutions.
-
-### <a name="4e165b58-74ca-474f-a7f4-5e695a93204f"></a>Using Autostart for SAP instances
-
-SAP offers the functionality to start SAP instances immediately after the start of the OS within the VM. The exact steps are documented in SAP Knowledge Base Article [1909114]. However, SAP no longer recommends using the setting because there is no control over the order of instance restarts when more than one VM is affected or multiple instances run per VM. Assuming a typical Azure scenario of one SAP application server instance in a VM, and the case of a single VM getting restarted, Autostart is not critical and can be enabled by adding this parameter:
-
-`Autostart = 1`
-
-Into the start profile of the SAP ABAP and/or Java instance.
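
As a minimal sketch of that change, the profile path below uses hypothetical placeholder values for the SID, instance, and host name; adjust them to your own system:

```bash
# Hypothetical example: append Autostart = 1 to an SAP instance start profile
PROFILE=/sapmnt/AZ1/profile/AZ1_D00_azsapapp01   # <SID>, <instance>, and <hostname> are placeholders
sudo grep -q '^Autostart' "$PROFILE" || echo 'Autostart = 1' | sudo tee -a "$PROFILE"
```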
-
-> [!NOTE]
-> The Autostart parameter can have some drawbacks as well. In more detail, the parameter triggers the start of an SAP ABAP or Java instance when the related Windows/Linux service of the instance is started. That certainly is the case when the operating system boots up. However, restarts of SAP services are also common for SAP Software Lifecycle Management functionality like SUM or other updates or upgrades. These functionalities don't expect an instance to be restarted automatically at all. Therefore, the Autostart parameter should be disabled before running such tasks. The Autostart parameter also should not be used for SAP instances that are clustered, like ASCS/SCS/CI.
->
->
-
-See additional information regarding autostart for SAP instances here:
-
-* [Start/Stop SAP along with your Unix Server Start/Stop](https://scn.sap.com/community/unix/blog/2012/08/07/startstop-sap-along-with-your-unix-server-startstop)
-* [Starting and Stopping SAP NetWeaver Management Agents](https://help.sap.com/saphelp_nwpi711/helpdata/en/49/9a15525b20423ee10000000a421938/content.htm)
-* [How to enable auto Start of HANA Database](http://sapbasisinfo.com/blog/2016/08/15/enabling-autostart-of-sap-hana-database-on-server-boot-situation/)
-
-### Larger 3-Tier SAP systems
-High-availability aspects of 3-Tier SAP configurations were already discussed in earlier sections. But what about systems where the DBMS server requirements are too large to locate it in Azure, while the SAP application layer could be deployed into Azure?
-
-#### Location of 3-Tier SAP configurations
-It is not supported to split the application tier itself, or the application and DBMS tier, between on-premises and Azure. An SAP system is either completely deployed on-premises OR in Azure. It is also not supported to have some of the application servers run on-premises and others in Azure. That is the starting point of the discussion. We also don't support having the DBMS components of an SAP system and the SAP application server layer deployed in two different Azure Regions. For example, DBMS in West US and the SAP application layer in Central US. The reason for not supporting such configurations is the latency sensitivity of the SAP NetWeaver architecture.
-
-However, over the course of the last year, data center partners developed co-locations to Azure Regions. These co-locations often are in close proximity to the physical Azure data centers within an Azure Region. The short distance and connection of assets in the co-location through ExpressRoute into Azure can result in a latency that is less than 2 milliseconds. In such cases, it is possible to locate the DBMS layer (including storage SAN/NAS) in such a co-location and the SAP application layer in Azure. An example is [HANA Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md).
-
-### Offline Backup of SAP systems
-Depending on the SAP configuration chosen (2-Tier or 3-Tier), there could be a need to back up the content of the VM itself and to have a backup of the database. The DBMS-related backups are expected to be done with database methods. A detailed description for the different databases can be found in the [DBMS Guide][dbms-guide]. On the other hand, the SAP data can be backed up in an offline manner (including the database content) as described in this section, or online as described in the next section.
-
-The offline backup would basically require a shutdown of the VM through the Azure portal and a copy of the base VM disk plus all attached disks to the VM. This would preserve a point in time image of the VM and its associated disk. It is recommended to copy the backups into a different Azure Storage Account. Hence the procedure described in chapter [Copying disks between Azure Storage Accounts][planning-guide-5.4.2] of this document would apply.
-
-A restore of that state would consist of deleting the base VM as well as the original disks of the base VM and mounted disks, copying back the saved disks to the original Storage Account or resource group for managed disks and then redeploying the system.
-For more information, see [how to script this process in PowerShell](https://www.westerndevs.com/_/azure-snapshots/).
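
For managed disks, one possible way to capture such a point-in-time copy is with managed-disk snapshots. The following is only a sketch; the resource group and VM names are hypothetical placeholders, and DBMS data should still be backed up with database methods as described above:

```bash
# Hypothetical names: adjust the resource group and VM name to your environment
RG=sap-rg
VM=sap-app-vm01

# Stop and deallocate the VM so that the disks are in a consistent, offline state
az vm deallocate --resource-group "$RG" --name "$VM"

# Snapshot the OS disk
OS_DISK_ID=$(az vm show -g "$RG" -n "$VM" --query "storageProfile.osDisk.managedDisk.id" -o tsv)
az snapshot create -g "$RG" -n "${VM}-osdisk-$(date +%Y%m%d)" --source "$OS_DISK_ID"

# Snapshot each attached data disk
for DISK_ID in $(az vm show -g "$RG" -n "$VM" --query "storageProfile.dataDisks[].managedDisk.id" -o tsv); do
  az snapshot create -g "$RG" -n "$(basename "$DISK_ID")-$(date +%Y%m%d)" --source "$DISK_ID"
done
```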
-
-Make sure to install a new SAP license since restoring a VM backup as described above creates a new hardware key.
-
-### Online backup of an SAP system
-
-Backup of the DBMS is performed with DBMS-specific methods as described in the [DBMS Guide][dbms-guide].
-
-Other VMs within the SAP system can be backed up using Azure Virtual Machine Backup functionality. Azure Virtual Machine Backup is a standard method to back up a complete VM in Azure. Azure Backup stores the backups in Azure and allows a restore of a VM again.
-
-> [!NOTE]
-> As of Dec 2015 using VM Backup does NOT keep the unique VM ID which is used for SAP licensing. This means that a restore from a VM
-> backup requires installation of a new SAP license key as the restored VM is considered to be a new VM and not a replacement of the
-> former one which was saved.
->
-> ![Windows logo.][Logo_Windows] Windows
->
-> Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system supports the [Windows Volume Shadow Copy Service](/windows/win32/vss/volume-shadow-copy-service-portal) (VSS) as, for example, SQL Server does. However, be aware that based on Azure VM backups point-in-time restores of databases are not possible. Therefore, the recommendation is to perform backups of databases with DBMS functionality instead of relying on Azure VM Backup.
->
-> To get familiar with Azure Virtual Machine Backup start here:
-> [Back up an Azure VM from the VM settings](/../../../azure/backup/backup-azure-vms).
->
-> Other possibilities are to use a combination of Microsoft Data Protection Manager installed in an Azure VM and Azure Backup to
-> backup/restore databases. More information can be found here:
-> [Prepare to back up workloads to Azure with System Center DPM](/../../../azure/backup/backup-azure-dpm-introduction).
->
-> ![Linux logo.][Logo_Linux] Linux
->
-> There is no equivalent to Windows VSS in Linux. Therefore only file-consistent backups are possible but not application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file system that includes the SAP-related data can be saved, for example, using tar as described in [Backing Up and Restoring your SAP System on UNIX](https://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm).
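
For example, a minimal file-level sketch of such a tar-based backup could look like the following; the SID `AZ1` and the target path are hypothetical placeholders:

```bash
# Archive the SAP global and profile directories of a hypothetical SID "AZ1"
sudo tar -czvf /backup/sapmnt-AZ1-$(date +%Y%m%d).tar.gz /sapmnt/AZ1/global /sapmnt/AZ1/profile
```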
-
-### Azure as DR site for production SAP landscapes
-
-Since Mid 2014, extensions to various components around Hyper-V, System Center, and Azure enable the usage of Azure as DR site for VMs running on-premises based on Hyper-V.
-
-A blog detailing how to deploy this solution is documented here: [Protecting SAP Solutions with Azure Site Recovery](/archive/blogs/saponsqlserver/protecting-sap-solutions-with-azure-site-recovery).
-
-## Summary for High Availability for SAP systems
-
-The key points of High Availability for SAP systems in Azure are:
-
-* At this point in time, the SAP single point of failure cannot be secured exactly the same way as it can be done in on-premises deployments. The reason is that Shared Disk clusters can't yet be built in Azure without the use of 3rd party software.
-* For the DBMS layer, you need to use DBMS functionality that does not rely on shared disk cluster technology. Details are documented in the [DBMS Guide][dbms-guide].
-* To minimize the impact of problems within Fault Domains in the Azure infrastructure or host maintenance, you should use Azure availability sets:
- * It is recommended to have one availability set for the SAP application layer.
- * It is recommended to have a separate availability set for the SAP DBMS layer.
- * It is NOT recommended to apply the same availability set for VMs of different SAP systems.
- * It is recommended to use Premium Managed Disks.
-* For Backup purposes of the SAP DBMS layer, check the [DBMS Guide][dbms-guide].
-* Backing up SAP Dialog instances makes little sense since it is usually faster to redeploy simple dialog instances.
-* Backing up the VM, which contains the global directory of the SAP system and with it all the profiles of the different instances, does make sense and should be performed with Windows Backup or, for example, tar on Linux. Since there are differences between Windows Server 2008 (R2) and Windows Server 2012 (R2), which make it easier to back up using the more recent Windows Server releases, we recommend running Windows Server 2012 (R2) as Windows guest operating system.
+- [Azure Architecture Center | In- and outbound internet connections for SAP on Azure](/azure/architecture/guide/sap/sap-internet-inbound-outbound)
## Next steps
-Read the articles:
--- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)-- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-general.md)-- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
+- [Azure Virtual Machines deployment for SAP NetWeaver](deployment-guide.md)
+- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md)
+- [SAP workloads on Azure: planning and deployment checklist](deployment-checklist.md)
sap Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-architecture-scenarios.md
An availability set is used for achieving high availability of:
### Azure Availability Zones
-Azure is in process of rolling out a concepts of [Azure Availability Zones](../../availability-zones/az-overview.md) throughout different [Azure Regions](https://azure.microsoft.com/global-infrastructure/regions/). In Azure regions where Availability Zones are offered, the Azure regions have multiple data centers, which are independent in supply of power source, cooling, and network. Reason for offering different zones within a single Azure region is to enable you to deploy applications across two or three Availability Zones offered. Assuming that issues in power sources and/or network would affect one Availability Zone infrastructure only, your application deployment within an Azure region is still fully functional. Eventually with some reduced capacity since some VMs in one zone might be lost. But VMs in the other two zones are still up and running. The Azure regions that offer zones are listed in [Azure Availability Zones](../../availability-zones/az-overview.md).
+Azure is in the process of rolling out a concept of [Azure Availability Zones](../../availability-zones/az-overview.md) throughout different [Azure Regions](https://azure.microsoft.com/global-infrastructure/regions/). In Azure regions where Availability Zones are offered, the Azure regions have multiple data centers, which are independent in supply of power source, cooling, and network. The reason for offering different zones within a single Azure region is to enable you to deploy applications across two or three of the Availability Zones offered. Assuming that issues in power sources and/or network would affect one Availability Zone infrastructure only, your application deployment within an Azure region is still fully functional, possibly with some reduced capacity, since some VMs in one zone might be lost. But VMs in the other two zones are still up and running. The Azure regions that offer zones are listed in [Azure Availability Zones](../../availability-zones/az-overview.md).
-Using Availability Zones, there are some things to consider. The considerations list like:
+When using Availability Zones, there are some things to consider. The considerations include:
-- You can't deploy Azure Availability Sets within an Availability Zone. Only possibility to combine Availability sets and Availability Zones is with [proximity placement groups](../../virtual-machines/co-location.md). For more information see , [combine availability sets and availability zones with proximity placement groups](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups)
+- You can't deploy Azure Availability Sets within an Availability Zone. The only way to combine availability sets and Availability Zones is with [proximity placement groups](../../virtual-machines/co-location.md). For more information, see [Combine availability sets and availability zones with proximity placement groups](./proximity-placement-scenarios.md#combine-availability-sets-and-availability-zones-with-proximity-placement-groups).
- You can't use the [Basic Load Balancer](../../load-balancer/load-balancer-overview.md) to create failover cluster solutions based on Windows Failover Cluster Services or Linux Pacemaker. Instead, you need to use the [Azure Standard Load Balancer SKU](../../load-balancer/load-balancer-standard-availability-zones.md).
- Azure Availability Zones don't give any guarantees of a certain distance between the different zones within one region.
- The network latency between different Azure Availability Zones within the different Azure regions might differ from Azure region to Azure region. There will be cases where you as a customer can reasonably run the SAP application layer deployed across different zones, because the network latency from one zone to the active DBMS VM is still acceptable from a business process impact point of view. However, there will also be customer scenarios where the latency between the active DBMS VM in one zone and an SAP application instance in a VM in another zone is too intrusive and not acceptable for the SAP business processes. As a result, the deployment architectures need to be different, with an active/active architecture for the application, or an active/passive architecture if latency is too high.
We recommend that you use managed disks because they simplify the deployment and
## Utilizing Azure infrastructure high availability to achieve *higher availability* of SAP applications
-If you decide not to use functionalities such as WSFC or Pacemaker on Linux (supported for SUSE Linux Enterprise Server [SLES] 12 and later and Red Hat Enterprise Linux [RHEL] 7 and later ), Azure VM restart is utilized. It protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
+If you decide not to use functionalities such as WSFC or Pacemaker on Linux (supported for SUSE Linux Enterprise Server [SLES] 12 and later and Red Hat Enterprise Linux [RHEL] 7 and later), Azure VM restart is utilized. It protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
For more information about this approach, see [Utilize Azure infrastructure VM restart to achieve higher availability of the SAP system][sap-higher-availability].
You must place all virtual machines that host SAP application server instances i
* All virtual machines are not part of the same update domain. An update domain ensures that the virtual machines aren't updated at the same time during planned maintenance downtime.
- The basic functionality, which builds on different update and fault domains within an Azure scale unit, was already introduced in the [update domains][planning-guide-3.2.2] section.
+ The basic functionality, which builds on different update and fault domains within an Azure scale unit, was already introduced in the [update domains](./planning-guide.md#update-domains) section.
* All virtual machines are not part of the same fault domain. A fault domain ensures that virtual machines are deployed so that no single point of failure affects the availability of all virtual machines.
_**Figure 2:** High availability of SAP application servers in an Azure availabi
For more information, see [Manage the availability of Windows virtual machines in Azure][azure-virtual-machines-manage-availability].
-For more information, see the [Azure availability sets][planning-guide-3.2.3] section of the Azure virtual machines planning and implementation for SAP NetWeaver document.
+For more information, see the [Azure availability sets](./planning-guide.md#availability-sets) section of the Azure virtual machines planning and implementation for SAP NetWeaver document.
**Unmanaged disks only:** Because the Azure storage account is a potential single point of failure, it's important to have at least two Azure storage accounts, in which at least two virtual machines are distributed. In an ideal setup, the disks of each virtual machine that is running an SAP dialog instance would be deployed in a different storage account.
sap Sap Higher Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-higher-availability-architecture-scenarios.md
For critical SAP components, you have achieved the following so far:
* High availability of SAP application servers
- SAP application server instances are redundant components. Each SAP application server instance is deployed on its own VM, which is running in a different Azure fault and upgrade domain. For more information, see the [Fault domains][planning-guide-3.2.1] and [Upgrade domains][planning-guide-3.2.2] sections.
+ SAP application server instances are redundant components. Each SAP application server instance is deployed on its own VM, which is running in a different Azure fault and upgrade domain. For more information, see the [Fault domains](./planning-guide.md#fault-domains) and [Update domains](./planning-guide.md#update-domains) sections.
- You can ensure this configuration by using Azure availability sets. For more information, see the [Azure availability sets][planning-guide-3.2.3] section.
+ You can ensure this configuration by using Azure availability sets. For more information, see the [Azure availability sets](./planning-guide.md#availability-sets) section.
Potential planned or unplanned unavailability of an Azure fault or upgrade domain will cause unavailability of a restricted number of VMs with their SAP application server instances.
- Each SAP application server instance is placed in its own Azure storage account. The potential unavailability of one Azure storage account will cause the unavailability of only one VM with its SAP application server instance. However, be aware that there is a limit on the number of Azure storage accounts within one Azure subscription. To ensure automatic start of an ASCS/SCS instance after the VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile that is described in the [Using Autostart for SAP instances][planning-guide-11.5] section.
+ Each SAP application server instance is placed in its own Azure storage account. The potential unavailability of one Azure storage account will cause the unavailability of only one VM with its SAP application server instance. However, be aware that there is a limit on the number of Azure storage accounts within one Azure subscription. To ensure automatic start of an ASCS/SCS instance after the VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile.
- For more information, see [High availability for SAP application servers][planning-guide-11.4.1].
+ For more information, see [High availability for SAP application servers](./planning-guide.md#high-availability).
Even if you use managed disks, the disks are stored in an Azure storage account and might be unavailable in the event of a storage outage.
For critical SAP components, you have achieved the following so far:
In this scenario, utilize Azure VM restart to protect the VM with the installed SAP ASCS/SCS instance. In the case of planned or unplanned downtime of Azure servers, VMs are restarted on another available server. As mentioned earlier, Azure VM restart primarily protects VMs and *not* applications, in this case the ASCS/SCS instance. Through the VM restart, you indirectly reach "higher availability" of the SAP ASCS/SCS instance.
- To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile, as described in the [Using Autostart for SAP instances][planning-guide-11.5] section. This setting means that the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM will determine the availability of the whole SAP landscape.
+ To ensure an automatic start of ASCS/SCS instance after the VM reboot, set the Autostart parameter in the ASCS/SCS instance start profile. This setting means that the ASCS/SCS instance as a single point of failure (SPOF) running in a single VM will determine the availability of the whole SAP landscape.
* *Higher availability* of the DBMS server
sap Vm Extension For Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap.md
> General Support Statement: > Support for the Azure Extension for SAP is provided through SAP support channels. If you need assistance with the Azure VM extension for SAP solutions, please open a support case with SAP Support.
-When you've prepared the VM as described in [Deployment scenarios of VMs for SAP on Azure][deployment-guide-3], the Azure VM Agent is installed on the virtual machine. The next step is to deploy the Azure Extension for SAP, which is available in the Azure Extension Repository in the global Azure datacenters. For more information, see [Azure Virtual Machines planning and implementation for SAP NetWeaver][planning-guide-9.1].
+When you've prepared the VM as described in [Deployment scenarios of VMs for SAP on Azure][deployment-guide-3], the Azure VM Agent is installed on the virtual machine. The next step is to deploy the Azure Extension for SAP, which is available in the Azure Extension Repository in the global Azure datacenters.
To be sure SAP supports your environment, enable the Azure VM extension for SAP solutions as described in Configure the Azure Extension for SAP.
When you are setting up your SAP software deployment, you need the following SAP
## Differences between the two versions of the Azure VM extension for SAP solutions
-There are two Version of the VM Extension for SAP. Check the prerequisites for SAP and required minimum versions of SAP Kernel and SAP Host Agent in the resources listed in [SAP resources][sap-resources].
+There are two versions of the VM Extension for SAP. Check the prerequisites for SAP and required minimum versions of SAP Kernel and SAP Host Agent in the resources listed in [SAP resources][sap-resources].
### Standard Version of VM Extension for SAP
-This version is the current standard VM Extension for SAP. There are some exceptions where Microsoft recommends to install the new VM Extension for SAP. In addition, when opening a support case, SAP Support is able to request to install the new VM Extension. For more details on when to use the new version of the VM Extension for SAP, see chapter [New Version of VM Extension for SAP][new-monitoring]
+This version is the current standard VM Extension for SAP. There are some exceptions where Microsoft recommends installing the new VM Extension for SAP. In addition, when opening a support case, SAP Support is able to request to install the new VM Extension. For more details on when to use the new version of the VM Extension for SAP, see chapter [New Version of VM Extension for SAP][new-monitoring]
### <a name="38d9f33f-d0af-4b8f-8134-f1f97d656fb6"></a>New Version of VM Extension for SAP
This version is the new Azure VM extension for SAP solutions. With further impro
## Recommendation
-We currently recommend to use the standard version of the extension for each installation where none of the use cases for the new version of the extension applies. We are currently working on improving the new version of the VM extension to be able to make it the default and deprecate the standard version of the extension. During this time, you can use the new version. However, you need to make sure the VM Extension can access management.azure.com.
+We currently recommend using the standard version of the extension for each installation where none of the use cases for the new version of the extension applies. We are currently working on improving the new version of the VM extension to be able to make it the default and deprecate the standard version of the extension. During this time, you can use the new version. However, you need to make sure the VM Extension can access management.azure.com.
> [!NOTE] > Make sure to uninstall the VM Extension before switching between the two versions.
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Title: Reprotect Azure VMs to the primary region with Azure Site Recovery | Microsoft Docs
+ Title: Reprotect Azure VMs to the primary region with Azure Site Recovery
description: Describes how to reprotect Azure VMs after failover, the secondary to primary region, using Azure Site Recovery. -+ Previously updated : 12/07/2022- Last updated : 04/20/2023+ # Reprotect failed over Azure VMs to the primary region
When you [fail over](site-recovery-failover.md) Azure VMs from one region to ano
1. In **Vault** > **Replicated items**, right-click the failed over VM, and select **Re-Protect**. The reprotection direction should show from secondary to primary.
- ![Screenshot shows a virtual machine with a contextual menu with Re-protect selected.](./media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png)
+ :::image type="content" source="./media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png" alt-text="Screenshot shows a virtual machine with a contextual menu with Re-protect selected." lightbox="./media/site-recovery-how-to-reprotect-azure-to-azure/reprotect.png":::
1. Review the resource group, network, storage, and availability sets. Then select **OK**. If there are any resources marked as new, they're created as part of the reprotection process.
1. The reprotection job seeds the target site with the latest data. After the job finishes, delta replication takes place. Then, you can fail over back to the primary site. You can select the storage account or the network you want to use during reprotect, using the customize option.
- ![Customize option](./media/site-recovery-how-to-reprotect-azure-to-azure/customize.png)
+ :::image type="content" source="./media/site-recovery-how-to-reprotect-azure-to-azure/customize.png" alt-text="Screenshot displays Customize option on the Azure portal." lightbox="./media/site-recovery-how-to-reprotect-azure-to-azure/customize.png":::
### Customize reprotect settings

You can customize the following properties of the target VM during reprotection.
-![Customize](./media/site-recovery-how-to-reprotect-azure-to-azure/customizeblade.png)
|Property |Notes |
|---|---|
When you trigger a reprotect job, and the target VM and disks don't exist, the f
#### Estimated time to do the reprotection
-In most cases, Azure Site Recovery doesn't replicate the complete data to the source region.
-The following conditions determine how much data is replicated:
+In most cases, Azure Site Recovery doesn't replicate the complete data to the source region. The amount of data replicated depends on the following conditions:
-1. If the source VM data is deleted, corrupted, or inaccessible for some reason, such as a resource group change/delete, then during reprotection a complete initial replication will happen because there's no data available on the source region to use. Data transfer happens at ~23% of the disk throughput.
-1. If the source VM data is accessible, then only differentials are computed by comparing both the disks and then transferred. Check the table below to get the estimated time. The differential calculation and comparison happens at ~46% of the disk throughput, and then the transfer of differential data happens at ~23% of the disk throughput.
+1. If the source VM data is deleted, corrupted, or inaccessible for some reason, such as a resource group change or delete, a complete initial replication will happen during reprotection because there's no data available on the source region to use. In this case, the reprotection time taken will be at least as long as the initial replication time taken from the primary to the disaster recovery location.
+2. If the source VM data is accessible, then differentials are computed by comparing both the disks and only the differences are transferred.
+ In this case, the **reprotection time** is greater than or equal to the `checksum calculation time + checksum differentials transfer time + time taken to process the recovery points from Azure Site Recovery agent + auto scale time`.
-|Example situation | Time taken to reprotect |
-|||
-|Source region has 1 VM with 1-TB standard disk.<br/>Only 127-GB data is used, and the rest of the disk is empty.<br/>Disk type is standard with 60-MBps throughput.<br/>No data change after failover.| Approximate time: 75-105 minutes.<br/> During reprotection, Site Recovery will populate the checksum of all data, which operates at 46% of the disk throughput - 28 MBps. So the total time that it will take is 127 GB/28 MBps, approximately 76 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. |
-|Source region has 1 VM with 1-TB standard disk.<br/>Only 127-GB data is used and rest of the disk is empty.<br/>Disk type is standard with 60-MBps throughput.<br/>45-GB data changes after failover.| Approximate time: 2-2.5 hours.<br/> During reprotection, Site Recovery will populate the checksum of all data, which operates at 46% of the disk throughput - 28 MBps. The total time that it will take is 127 GB/28 MBps, approximately 76 minutes.<br/>Transfer speed is approximately 23% of throughput, or 14 MBps. Therefore, transfer time to apply changes of 45 GB that is 45 GB/14 MBps, approximately 55 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. |
-|Source region has 1 VM with 1-TB standard disk.<br/>Only 20-GB data is used and rest of the disk is empty.<br/>Disk type is standard with 60-MBps throughput.<br/>The initial data on the disk immediately after failover was 15 GB. There was 5-GB data change after failover. Total populated data is therefore 20 GB.| Approximate time: 30-60 mins.<br/>Since the data populated in the disk is less than 10% of the size of the disk, we perform a complete initial replication.<br/> Transfer speed is approximately 23% of throughput, or 14 MBps. Therefore, transfer time to apply changes of 20 GB that is 20 GB/14 MBps, approximately 24 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. |
-|Source region has 1 VM with 1-TB premium disk.<br/>Only 127-GB data is used, and the rest of the disk is empty.<br/>Disk type is premium with 200-MBps throughput.<br/>No data change after failover.| Approximate time: 30-60 mins.<br/>During reprotection, Site Recovery will populate the checksum of all data, which operates at 46% of disk throughput - 92 MBps. The total time that it will take is 127 GB/92 MBps, approximately 25 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. |
-|Source region has 1 VM with 1-TB premium disk.<br/>Only 127-GB data is used and rest of the disk is empty.<br/>Disk type is premium with 200-MBps throughput.<br/>45-GB data changes after failover.| Approximate time: 45-75 mins.<br/>During reprotection, Site Recovery will populate the checksum of all data, which operates at 46% of disk throughput - 92 MBps. The total time that it will take is 127 GB/92 MBps, approximately 25 minutes. </br>Transfer speed is approximately 23% of throughput, or 46 MBps. Therefore, transfer time to apply changes of 45 GB that is 45 GB/46 MBps, approximately 17 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. |
-|Source region has 1 VM with 1-TB premium disk.<br/>Only 20-GB data is used and rest of the disk is empty.<br/>Disk type is premium with 200-MBps throughput.<br/>The initial data on the disk immediately after failover was 15 GB. There was 5-GB data change after failover. Total populated data is therefore 20 GB| Approximate time: 10-40 minutes.<br/>Since the data populated in the disk is less than 10% of the size of the disk, we perform a complete initial replication.<br/>Transfer speed is approximately 23% of throughput, or 46-MBps. Therefore, transfer time to apply changes of 20 GB that is 20 GB/46 MBps, approximately 8 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes |
+**Factors governing reprotection time in scenario 2**
-When the VM is re-protected from DR region to primary region (that is, after failing over from the primary region to DR region), the target VM (original source VM), and associated NIC(s) are deleted.
+The following factors affect the reprotection time when the source VM is accessible in scenario 2:
-When the VM is re-protected again from the primary region to DR region after failback, we do not delete the VM and associated NIC(s) in the DR region that were created during the earlier failover.
+1. **Checksum calculation time** - The time taken to complete the enable replication process from the primary to the disaster recovery location is used as a benchmark for the checksum differential calculation. Navigate to **Recovery Services vaults** > **Monitoring** > **Site Recovery jobs** to see the time taken to complete the enable replication process. This will be the minimum time required to complete the checksum calculation.
+ :::image type="content" source="./media/site-recovery-how-to-reprotect-azure-to-azure/estimated-reprotection.png" alt-text="Screenshot displays duration of reprotection of a VM on the Azure portal." lightbox="./media/site-recovery-how-to-reprotect-azure-to-azure/estimated-reprotection.png":::
+
+1. **Checksum differential data transfer** happens at approximately 23% of disk throughput.
+1. **The time taken to process the recovery points sent from Azure Site Recovery agent** - Azure Site Recovery agent continues to send recovery points during the checksum calculation and transfer phase, as well. However, Azure Site Recovery processes them only once the checksum diff transfer is complete.
+ The time taken to process recovery points will be around one-fifth (1/5th) of the time taken for checksum differentials calculation and checksum differentials transfer time (time for checksum diff calculation + time for checksum diff transfer). For example, if the time taken for checksum differential calculation and checksum differential transfer is 15 hours, the time taken to process the recovery points from the agent will be three hours.
+1. The **auto scale time** is approximately 20-30 minutes.
++
+**Example scenario:**
+
+Let's take the example from the following screenshot, where Enable Replication from the primary to the disaster recovery location took an hour and 12 minutes. The checksum calculation time would be at least an hour and 12 minutes. Assuming that the amount of data change post failover is 45 GB, and the disk has a throughput of 60 MBps, the differential transfer will occur at 14 MBps, and the time taken for differential transfer will be 45 GB / 14 MBps, that is, approximately 55 minutes. The time taken to process the recovery points is approximately one-fifth of the total of the time taken for the checksum calculation (72 minutes) and the time taken for data transfer (55 minutes), which is approximately 25 minutes. Additionally, it takes 20-30 minutes for auto-scaling. So, the total time for reprotection should be at least three hours.
+
+ :::image type="content" source="./media/site-recovery-how-to-reprotect-azure-to-azure/estimated-reprotection.png" alt-text="Screenshot displays example duration of reprotection of a VM on the Azure portal." lightbox="./media/site-recovery-how-to-reprotect-azure-to-azure/estimated-reprotection.png":::
+
+The above is a simple illustration of how to estimate the reprotection time.
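
As a rough sanity check, that same arithmetic can be scripted. This is only a sketch using the assumed figures from the example above, not measured values:

```bash
# All inputs are the assumed example values from the illustration above
CHECKSUM_MIN=72        # enable-replication duration from the portal, in minutes
CHANGED_GB=45          # data changed after failover
DISK_MBPS=60           # disk throughput

TRANSFER_MBPS=$(( DISK_MBPS * 23 / 100 ))                    # ~23% of disk throughput
TRANSFER_MIN=$(( CHANGED_GB * 1024 / TRANSFER_MBPS / 60 ))   # ~55-60 minutes
RECOVERY_POINT_MIN=$(( (CHECKSUM_MIN + TRANSFER_MIN) / 5 ))  # ~1/5 of the two phases above
AUTOSCALE_MIN=30

echo "Estimated reprotection time: $(( CHECKSUM_MIN + TRANSFER_MIN + RECOVERY_POINT_MIN + AUTOSCALE_MIN )) minutes"
```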
+
+When the VM is re-protected from the disaster recovery region to the primary region (that is, after failing over from the primary region to the disaster recovery region), the target VM (the original source VM) and associated NIC(s) are deleted.
+
+However, when the VM is re-protected again from the primary region to the disaster recovery region after failback, we do not delete the VM and associated NIC(s) in the disaster recovery region that were created during the earlier failover.
## Next steps
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agen
1. Run this command: **ps -e** to ensure that the Azure agent is running on the Linux VM. 2. If the process isn't running, restart it by using the following commands:
- - For Ubuntu: **service walinuxagent start**
- - For other distributions: **service waagent start**
+ - For Ubuntu/Debian:
+
+ ```bash
+ sudo systemctl enable --now walinuxagent.service
+ ```
+ - For other distributions:
+
+ ```bash
+ sudo systemctl enable --now waagent.service
+ ```
## Uninstall the Mobility service
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
This article provides history of all versions of Azure Site Recovery Deployment
- Added VMs with up to 20 Mbps of data change rate (churn) to the compatibility checklist. - Improved error messages - Added support for vCenter 6.7.-- Added support for Windows Server 2019 and Red Hat Enterprise Linux (RHEL) workstation.
+- Added support for Windows Server 2019 and Red Hat Enterprise Linux (`RHEL`) workstation.
> [!Note] >- It is not recommended to run the deployment planner on the ESXi version 6.7.0 Update 2 Build 13006603, as it does not work as expected.
site-recovery Site Recovery Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-extension-troubleshoot.md
Most agent-related or extension-related failures for Linux VMs are caused by iss
If the process isn't running, restart it by using the following commands:
- - For Ubuntu: `service walinuxagent start`
- - For other distributions: `service waagent start`
+ - For Ubuntu/Debian:
+
+ ```bash
+ sudo systemctl enable --now walinuxagent.service
+ ```
+ - For other distributions:
+
+ ```bash
+ sudo systemctl enable --now waagent.service
+ ```
1. [Configure the automatic restart agent](https://github.com/Azure/WALinuxAgent/wiki/Known-Issues#mitigate_agent_crash).
1. Enable protection of the virtual machine.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
-**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7 and Cent OS 8.7 Linux distro.
-**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7 and Cent OS 8.7 Linux distro.
+**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro.
+**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro.
## Updates (November 2022)
Features added this month are summarized in the table.
## Next steps
-Keep up-to-date with our updates on the [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page.
+Keep up-to-date with our updates on the [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page.
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Create the master target in accordance with the following sizing guidelines:
### Install Ubuntu 16.04.2 Minimal
+>[!IMPORTANT]
+>Ubuntu 16.04 (Xenial Xerus) has reached its end of life and is no longer supported by Canonical or the Ubuntu community. This means that no security updates or bug fixes will be provided for this version of Ubuntu. Continuing to use Ubuntu 16.04 may expose your system to potential security vulnerabilities or software compatibility issues. We strongly recommend upgrading to a supported version of Ubuntu, such as Ubuntu 18.04 or Ubuntu 20.04.
+ Take the following steps to install the Ubuntu 16.04.2 64-bit operating system.
Azure Site Recovery master target server requires a specific version of the Ubun
> [!NOTE] > Make sure that you have Internet connectivity to download and install additional packages. If you don't have Internet connectivity, you need to manually find these Deb packages and install them.
- `apt-get install -y multipath-tools lsscsi python-pyasn1 lvm2 kpartx`
-
+ ```bash
+ sudo apt-get install -y multipath-tools lsscsi python-pyasn1 lvm2 kpartx
+ ```
+
>[!NOTE] > From version [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), the Ubuntu 20.04 operating system is supported for the Linux master target server. > If you wish to use the latest OS, upgrade the operating system to Ubuntu 20.04 before proceeding. To upgrade the operating system later, you can follow the instructions listed [here](#upgrade-os-of-master-target-server-from-ubuntu-1604-to-ubuntu-2004).
If your master target has Internet connectivity, you can use the following steps
To download it using Linux, type:
-`wget https://aka.ms/latestlinuxmobsvc -O latestlinuxmobsvc.tar.gz`
+```bash
+ sudo wget https://aka.ms/latestlinuxmobsvc -O latestlinuxmobsvc.tar.gz
+```
> [!WARNING] > Make sure that you download and unzip the installer in your home directory. If you unzip to **/usr/Local**, then the installation fails.
To apply custom configuration changes, use the following steps as a ROOT user:
1. Run the following command to untar the binary.
- `tar -xvf latestlinuxmobsvc.tar.gz`
-
+ ```bash
+ sudo tar -xvf latestlinuxmobsvc.tar.gz
+ ```
![Screenshot of the command to run](./media/vmware-azure-install-linux-master-target/image16.png) 2. Run the following command to give permission.
- `chmod 755 ./ApplyCustomChanges.sh`
-
+ ```bash
+ sudo chmod 755 ./ApplyCustomChanges.sh
+ ```
3. Run the following command to run the script.
- `./ApplyCustomChanges.sh`
+ ```bash
+ sudo ./ApplyCustomChanges.sh
+ ```
> [!NOTE] > Run the script only once on the server. Then shut down the server. Restart the server after you add a disk, as described in the next section.
Use the following steps to create a retention disk:
4. After you create the file system, mount the retention disk.
- ```
- mkdir /mnt/retention
- mount /dev/mapper/<Retention disk's multipath id> /mnt/retention
+ ```bash
+ sudo mkdir /mnt/retention
+ sudo mount /dev/mapper/<Retention disk's multipath id> /mnt/retention
``` 5. Create the **fstab** entry to mount the retention drive every time the system starts.
- `vi /etc/fstab`
+ ```bash
+ sudo vi /etc/fstab
+ ```
Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command.
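The exact line to insert is shown in the full article. As a purely hypothetical illustration, assuming the retention disk was formatted with ext4 in the previous step, an fstab entry might look like this:

```bash
# Hypothetical /etc/fstab entry - replace the multipath ID and file system type with your own values
/dev/mapper/<Retention disk's multipath id> /mnt/retention ext4 rw 0 0
```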
Use the following steps to create a retention disk:
1. Run the following command to install the master target.
- ```
- ./install -q -d /usr/local/ASR -r MT -v VmWare
+ ```bash
+ sudo ./install -q -d /usr/local/ASR -r MT -v VmWare
``` 2. Copy the passphrase from **C:\ProgramData\Microsoft Azure Site Recovery\private\connection.passphrase** on the configuration server. Then save it as **passphrase.txt** in the same local directory by running the following command:
- `echo <passphrase> >passphrase.txt`
+ ```bash
+ sudo echo <passphrase> >passphrase.txt
+ ```
Example:
- `echo itUx70I47uxDuUVY >passphrase.txt`
-
+ ```bash
+ sudo echo itUx70I47uxDuUVY >passphrase.txt
+ ```
3. Note down the configuration server's IP address. Run the following command to register the server with the configuration server.
- ```
- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt
+ ```bash
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt
``` Example:
- ```
- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt
+ ```bash
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt
``` Wait until the script finishes. If the master target registers successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.
Wait until the script finishes. If the master target registers successfully, the
1. Run the following command to install the master target. For the agent role, choose **master target**.
- ```
- ./install
+ ```bash
+ sudo ./install
``` 2. Choose the default location for installation, and then select **Enter** to continue.
After the installation has finished, register the configuration server by using
2. Run the following command to register the server with the configuration server.
- ```
- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh
+ ```bash
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh
``` Wait until the script finishes. If the master target is registered successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.
Running the installer will automatically detect that the agent is installed on t
After the setup has been completed, check the version of the master target installed by using the following command:
-`cat /usr/local/.vx_version`
-
+```bash
+ sudo cat /usr/local/.vx_version
+```
You will see that the **Version** field gives the version number of the master target.
From 9.42 version, ASR supports Linux master target server on Ubuntu 20.04. To u
Restart the networking service using the following command: <br>
-`sudo systemctl restart networking`
-
+```bash
+ sudo systemctl restart networking
+```
## Next steps After the installation and registration of the master target has finished, you can see the master target appear on the **master target** section in **Site Recovery Infrastructure**, under the configuration server overview.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
1. From a terminal session, copy the installer to a local folder such as _/tmp_ on the server that you want to protect. Replace the installer's file name with your Linux distribution's actual file name, then run the commands.
- ```shell
+ ```bash
cd /tmp ; tar -xvf Microsoft-ASR_UA_version_LinuxVersion_GA_date_release.tar.gz ``` 2. Install as follows (root account is not required, but root permissions are required):
- ```shell
+ ```bash
sudo ./install -r MS -v VmWare -d <Install Location> -q ``` 3. After the installation is finished, the Mobility service must be registered to the configuration server. Run the following command to register the Mobility service with the configuration server.
- ```shell
+ ```bash
/usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <CSIP> -P /var/passphrase.txt ```
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
1. From a terminal session, copy the installer to a local folder such as **/tmp** on the server that you want to protect. Then run the below command:
- ```shell
+ ```bash
cd /tmp ; tar -xvf Microsoft-ASR_UA_version_LinuxVersion_GA_date_release.tar.gz ``` 2. To install, use the below command:
- ```shell
- ./install -q -r MS -v VmWare -c CSPrime
+ ```bash
+ sudo ./install -q -r MS -v VmWare -c CSPrime
``` Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file). 3. After successfully installing, register the source machine with the above appliance using the following command:
- ```shell
+ ```bash
<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q ``` #### Installation settings
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps.md
Use the following steps to create and deploy apps on Azure Spring Apps using th
az configure --defaults group=<resource-group-name> spring=<service-name> ```
-1. Create the two core Spring applications for PetClinic: API gateway and customers-service.
+1. Create the two core Spring applications for PetClinic: `api-gateway` and `customers-service`.
```azurecli az spring app create \
Use the following steps to create and deploy apps on Azure Spring Apps using th
## Verify the services
-Access the app gateway and customers service from browser with the **Public Url** shown previously, in the format of `https://<service name>-api-gateway.azuremicroservices.io`.
+Access `api-gateway` and `customers-service` from a browser with the **Public Url** shown previously, in the format of `https://<service name>-api-gateway.azuremicroservices.io`.
:::image type="content" source="media/quickstart-deploy-apps/access-customers-service.png" alt-text="Screenshot of the PetClinic customers service." lightbox="media/quickstart-deploy-apps/access-customers-service.png"::: > [!TIP]
-> To troubleshot deployments, you can use the following command to get logs streaming in real time whenever the app is running `az spring app logs --name <app name> -f`.
+> To troubleshoot deployments, you can use the following command to stream logs in real time while the app is running: `az spring app logs --name <app name> --follow`.
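For example, assuming the resource group and service defaults were set earlier with `az configure`, streaming logs for the `api-gateway` app might look like this:

```bash
# Stream real-time logs for the api-gateway app
# (resource group and service defaults set earlier with az configure)
az spring app logs --name api-gateway --follow
```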
## Deploy extra apps
To deploy to Azure, you must sign in with your Azure account with Azure Toolkit
:::image type="content" source="media/quickstart-deploy-apps/memory-jvm-options.png" alt-text="Screenshot of memory and JVM options." lightbox="media/quickstart-deploy-apps/memory-jvm-options.png"::: 1. In the **Before launch** section of the dialog, double-click **Run Maven Goal**.
-1. In the **Working directory** textbox, navigate to the *spring-petclinic-microservices/gateway* folder.
+1. In the **Working directory** textbox, navigate to the *spring-petclinic-microservices/spring-petclinic-api-gateway* folder.
1. In the **Command line** textbox, enter *package -DskipTests*. Select **OK**. :::image type="content" source="media/quickstart-deploy-apps/deploy-to-azure-spring-apps-2-pet-clinic.png" alt-text="Screenshot of the spring-petclinic-microservices/gateway page and command line textbox." lightbox="media/quickstart-deploy-apps/deploy-to-azure-spring-apps-2-pet-clinic.png":::
spring-apps Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-logs-metrics-tracing.md
zone_pivot_groups: programming-languages-spring-apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ❌ Enterprise
::: zone pivot="programming-language-csharp"
There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-t
You can use log streaming in the Azure CLI with the following command. ```azurecli
-az spring app logs -n solar-system-weather -f
+az spring app logs --name solar-system-weather --follow
```
-You'll see output similar to the following example:
+You see output similar to the following example:
```output => ConnectionId:0HM2HOMHT82UK => RequestPath:/weatherforecast RequestId:0HM2HOMHT82UK:00000003, SpanId:|e8c1682e-46518cc0202c5fd9., TraceId:e8c1682e-46518cc0202c5fd9, ParentId: => Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.Controllers.WeatherForecastController.Get (Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather)
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
1. Edit the query to remove the Where clauses that limit the display to warning and error logs.
-1. Then select **Run**, and you'll see logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
+1. Select **Run** to view the logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png" alt-text="Screenshot of a Logs Analytics query." lightbox="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png":::
There are two ways to see logs on Azure Spring Apps: **Log Streaming** of real-t
You can use log streaming in the Azure CLI with the following command. ```azurecli
-az spring app logs -s <service instance name> -g <resource group name> -n gateway -f
+az spring app logs \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name api-gateway \
+ --follow
```
-You'll see logs like this:
+You see logs like this:
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-streaming-cli.png" alt-text="Screenshot of CLI log output." lightbox="media/quickstart-logs-metrics-tracing/logs-streaming-cli.png":::
To get the logs using Azure Toolkit for IntelliJ:
![Select instance](media/quickstart-logs-metrics-tracing/select-instance.png)
-1. The streaming log will be visible in the output window.
+1. The streaming log is visible in the output window.
![Streaming log output](media/quickstart-logs-metrics-tracing/streaming-log-output.png)
To get the logs using Azure Toolkit for IntelliJ:
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-entry.png" alt-text="Screenshot of the Logs opening page." lightbox="media/quickstart-logs-metrics-tracing/logs-entry.png":::
-1. Then you'll see filtered logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
+1. Then you see the filtered logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query.png" alt-text="Screenshot of filtered logs." lightbox="media/quickstart-logs-metrics-tracing/logs-query.png":::
The following chart shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_c
:::image type="content" source="media/quickstart-logs-metrics-tracing/petclinic-microservices-metrics.jpg" alt-text="Screenshot of gateway requests." lightbox="media/quickstart-logs-metrics-tracing/petclinic-microservices-metrics.jpg":::
-Spring Boot registers several core metrics, including JVM, CPU, Tomcat, and Logback. The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC. All three REST controllers (`OwnerResource`, `PetResource`, and `VisitResource`) have been instrumented by the `@Timed` Micrometer annotation at the class level.
+Spring Boot registers several core metrics, including JVM, CPU, Tomcat, and Logback. The Spring Boot autoconfiguration enables the instrumentation of requests handled by Spring MVC. All three REST controllers (`OwnerResource`, `PetResource`, and `VisitResource`) are instrumented by the `@Timed` Micrometer annotation at the class level.
The `customers-service` application has the following custom metrics enabled:
- - @Timed: `petclinic.owner`
- - @Timed: `petclinic.pet`
+- @Timed: `petclinic.owner`
+- @Timed: `petclinic.pet`
The `visits-service` application has the following custom metrics enabled:
- - @Timed: `petclinic.visit`
+- @Timed: `petclinic.visit`
You can see these custom metrics in the `Metrics` blade:
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/container-create.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/container-create.ts)
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/containers-delete.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/containers-delete.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
The `getProperties` method retrieves container properties and metadata by callin
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/containers-list.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/containers-list.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
To work with the code examples in this article, make sure you have:
- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization) - Packages installed to your project directory. These examples use **Azure.Storage.Blobs**. If you're using `DefaultAzureCredential` for authorization, you also need **Azure.Identity**. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project). To see the necessary `using` directives, see [Code samples](#code-samples).
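As a reference sketch, the packages named above can be added to an existing .NET project with the .NET CLI (the project itself is assumed to already exist):

```bash
# Add the Blob Storage client library and, if using DefaultAzureCredential, the identity library
dotnet add package Azure.Storage.Blobs
dotnet add package Azure.Identity
```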
-## About copying blobs with asynchronous scheduling
-
-The `Copy Blob` operation can finish asynchronously and is performed on a best-effort basis, which means that the operation isn't guaranteed to start immediately or complete within a specified time frame. The copy operation is scheduled in the background and performed as the server has available resources. The operation can complete synchronously if the copy occurs within the same storage account.
-
-A `Copy Blob` operation can perform any of the following actions:
--- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or it can be a new blob created by the copy operation.-- Copy a source blob to a destination blob with the same name, which replaces the destination blob. This type of copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.-
-The source blob for a copy operation may be one of the following types: block blob, append blob, page blob, blob snapshot, or blob version. The copy operation always copies the entire source blob or file. Copying a range of bytes or set of blocks isn't supported.
-
-If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob is overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation.
-
-To learn more about the `Copy Blob` operation, including information about properties, index tags, metadata, and billing, see [Copy Blob remarks](/rest/api/storageservices/copy-blob#remarks).
## Copy a blob with asynchronous scheduling
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
To work with the code examples in this article, make sure you have:
- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization) - Packages installed to your project directory. These examples use **azure-storage-blob**. If you're using `DefaultAzureCredential` for authorization, you also need **azure-identity**. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-dotnet-get-started.md#set-up-your-project). To see the necessary `import` directives, see [Code samples](#code-samples).
-## About copying a blob with asynchronous scheduling
-
-The `Copy Blob` operation can finish asynchronously and is performed on a best-effort basis, which means that the operation isn't guaranteed to start immediately or complete within a specified time frame. The copy operation is scheduled in the background and performed as the server has available resources. The operation can complete synchronously if the copy occurs within the same storage account.
-
-A `Copy Blob` operation can perform any of the following actions:
--- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or it can be a new blob created by the copy operation.-- Copy a source blob to a destination blob with the same name, which replaces the destination blob. This type of copy operation removes any uncommitted blocks and overwrites the destination blob's metadata.-- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported.-- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob.-- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot.-
-The source blob for a copy operation may be one of the following types: block blob, append blob, page blob, blob snapshot, or blob version. The copy operation always copies the entire source blob or file. Copying a range of bytes or set of blocks isn't supported.
-
-If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob is overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation.
-
-To learn more about the `Copy Blob` operation, including information about properties, index tags, metadata, and billing, see [Copy Blob remarks](/rest/api/storageservices/copy-blob#remarks).
## Copy a blob with asynchronous scheduling
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-copy.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-copy.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
To work with the code examples in this article, make sure you have:
- [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization) - Packages installed to your project directory. These examples use **Azure.Storage.Blobs**. If you're using `DefaultAzureCredential` for authorization, you also need **Azure.Identity**. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project). To see the necessary `using` directives, see [Code samples](#code-samples).
-## About copying blobs from a source object URL
-
-The `Put Blob From URL` operation creates a new block blob where the contents of the blob are read from a given URL. The operation completes synchronously.
-
-The source can be any object retrievable via a standard HTTP GET request on the given URL. This includes block blobs, append blobs, page blobs, blob snapshots, blob versions, or any accessible object inside or outside Azure.
-
-When the source object is a block blob, all committed blob content is copied. The content of the destination blob is identical to the content of the source, but the committed block list isn't preserved and uncommitted blocks aren't copied.
-
-The destination is always a block blob, either an existing block blob, or a new block blob created by the operation. The contents of an existing blob are overwritten with the contents of the new blob.
-
-The `Put Blob From URL` operation always copies the entire source blob. Copying a range of bytes or set of blocks isn't supported. To perform partial updates to a block blob's contents by using a source URL, use the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API along with [Put Block List](/rest/api/storageservices/put-block-list).
-
-To learn more about the `Put Blob From URL` operation, including blob size limitations and billing considerations, see [Put Blob From URL remarks](/rest/api/storageservices/put-blob-from-url#remarks).
## Copy a blob from a source object URL
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
To work with the code examples in this article, make sure you have:
- [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization) - Packages installed to your project directory. These examples use **azure-storage-blob**. If you're using `DefaultAzureCredential` for authorization, you also need **azure-identity**. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md#set-up-your-project). To see the necessary `using` directives, see [Code samples](#code-samples).
-## About copying a blob from a source object URL
-
-The `Put Blob From URL` operation creates a new block blob where the contents of the blob are read from a given URL. The operation completes synchronously.
-
-The source can be any object retrievable via a standard HTTP GET request on the given URL. This includes block blobs, append blobs, page blobs, blob snapshots, blob versions, or any accessible object inside or outside Azure.
-
-When the source object is a block blob, all committed blob content is copied. The content of the destination blob is identical to the content of the source, but the committed block list isn't preserved and uncommitted blocks aren't copied.
-
-The destination is always a block blob, either an existing block blob, or a new block blob created by the operation. The contents of an existing blob are overwritten with the contents of the new blob.
-
-The `Put Blob From URL` operation always copies the entire source blob. Copying a range of bytes or set of blocks isn't supported. To perform partial updates to a block blob's contents by using a source URL, use the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API along with [Put Block List](/rest/api/storageservices/put-block-list).
-
-To learn more about the `Put Blob From URL` operation, including blob size limitations and billing considerations, see [Put Blob From URL remarks](/rest/api/storageservices/put-blob-from-url#remarks).
## Copy a blob from a source object URL
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-delete.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-delete.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples View code samples from this article (GitHub):-- [Download to file](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-download-to-file.js)-- [Download to stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-download-to-stream.js)-- [Download to string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-download-to-string.js)
+- [Download to file](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-file.js)
+- [Download to stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-stream.js)
+- [Download to string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-string.js)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-set-properties-and-metadata.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-set-properties-and-metadata.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
In the Azure portal, open the static website configuration page of your account
- [Map a custom domain to an Azure Blob Storage endpoint](storage-custom-domain-name.md) - [Azure Functions](../../azure-functions/functions-overview.md) - [Azure App Service](../../app-service/overview.md)-- [Build your first serverless web app](/azure/functions/tutorial-static-website-serverless-api-with-database)
+- [Build your first serverless web app](/labs/build2018/serverlesswebapp/)
- [Tutorial: Host your domain in Azure DNS](../../dns/dns-delegate-domain-azure-dns.md)
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blob-set-and-retrieve-tags.js)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/set-and-retrieve-blob-tags.js)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)]
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient).
## Code samples
-* [Set blob's access tier during upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-string-with-access-tier.js)
+* [Set blob's access tier during upload](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js)
* [Change blob's access tier after upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-change-access-tier.ts) * [Copy blob into different access tier](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-copy-to-different-access-tier.ts) * [Use a batch to change access tier for many blobs](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-batch-set-access-tier-for-container.ts)
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/blobs-list.ts)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blobs-list.ts)
[!INCLUDE [storage-dev-guide-resources-typescript](../../../includes/storage-dev-guides/storage-dev-guide-resources-typescript.md)] ### See also - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
BlobUploadOptions options = new BlobUploadOptions
InitialTransferSize = 8 * 1024 * 1024, // Set the maximum length of a transfer to 4 MiB
- MaximumTransferSize = 4 * 1024 * 1024;
+ MaximumTransferSize = 4 * 1024 * 1024
} };
During a download, the Storage client libraries make one download range request
## Next steps - To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md).-- To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md).
+- To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md).
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-manage-find-blobs.md
For any blobs with at least one blob index tag, the `x-ms-tag-count` is returned
## Finding data using blob index tags
-The indexing engine exposes your key-value attributes into a multi-dimensional index. After you set your index tags, they exist on the blob and can be retrieved immediately. It may take some time before the blob index updates. After the blob index updates, you can use the native query and discovery capabilities offered by Blob Storage.
+The indexing engine exposes your key-value attributes into a multi-dimensional index. After you set your index tags, they exist on the blob and can be retrieved immediately.
+
+It might take some time before the blob index updates. This is true for both adding tags and editing existing ones. The amount of time required depends on the workload. For example, if a [Set Blob Tags](/rest/api/storageservices/set-blob-tags) operation takes 30 minutes to complete at a rate of 15000 to 20000 transactions per second, then it can take up to 10 minutes to index all of those blobs. At a lower rate, the indexing delay can be under a second. The distribution of traffic also affects indexing delays. For example, if a client application sets tags on blobs in sequential order under the same container, the delay could be higher than it would be if tags are applied to blobs that aren't located together.
+
+After the blob index updates, you can use the native query and discovery capabilities offered by Blob Storage.
The [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) operation enables you to get a filtered set of blobs whose index tags match a given query expression. `Find Blobs by Tags` supports filtering across all containers within your storage account or you can scope the filtering to just a single container. Since all the index tag keys and values are strings, relational operators use a lexicographic sorting.
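As an illustration only, a sketch of such a filter query using the Azure CLI, assuming a CLI version that includes `az storage blob filter` (the account name and tag values are placeholders):

```bash
# Find blobs across the account whose index tag "Project" equals "Contoso"
az storage blob filter \
    --account-name <storage-account> \
    --tag-filter "\"Project\"='Contoso'" \
    --auth-mode login
```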
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
If files recalled to the server aren't needed locally, then the unnecessary reca
Enabling proactive recalling might also result in increased bandwidth usage on the server and could cause other relatively new content on the local server to be aggressively tiered due to the increase in files being recalled. In turn, tiering too soon might lead to more recalls if the files being tiered are considered hot by servers. - For more information on proactive recall, see [Deploy Azure File Sync](file-sync-deployment-guide.md#optional-proactively-recall-new-and-changed-files-from-an-azure-file-share). ## Tiered vs. locally cached file behavior
storage Files Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-performance.md
Lack of support for directory leases.
### Workaround - If possible, avoid using an excessive opening/closing handle on the same directory within a short period of time.-- For Linux VMs, increase the directory entry cache timeout by specifying `actimeo=<sec>` as a mount option. By default, the timeout is 1 second, so a larger value, such as 3 or 5 seconds, might help.
+- For Linux VMs, increase the directory entry cache timeout by specifying `actimeo=<sec>` as a mount option. By default, the timeout is 1 second, so a larger value, such as 30 seconds, might help. A sample mount command is shown after this list.
- For CentOS Linux or Red Hat Enterprise Linux (RHEL) VMs, upgrade the system to CentOS Linux 8.2 or RHEL 8.2. For other Linux distros, upgrade the kernel to 5.0 or later.
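As referenced in the first workaround, a hypothetical SMB mount command that sets `actimeo` to 30 seconds (the storage account, share name, and credentials file path are placeholders):

```bash
# Mount an Azure file share with a 30-second attribute/entry cache timeout
sudo mkdir -p /mnt/<share-name>
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/<share-name> \
    -o vers=3.1.1,credentials=/etc/smbcredentials/<storage-account>.cred,serverino,actimeo=30
```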
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
Azure AD Kerberos authentication only supports using AES-256 encryption.
## Regional availability
-Azure Files authentication with Azure AD Kerberos is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/) except China and Government clouds.
+Azure Files authentication with Azure AD Kerberos is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/).
## Enable Azure AD Kerberos authentication for hybrid user accounts
storage Storage Snapshots Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md
description: A share snapshot is a read-only version of an Azure file share that
Previously updated : 12/06/2022 Last updated : 04/25/2023 # Overview of share snapshots for Azure Files
-Azure Files provides the capability to take share snapshots of SMB file shares. Share snapshots capture the share state at that point in time. This article describes the capabilities that file share snapshots provide and how you can take advantage of them in your custom use case.
+Azure Files provides the capability to take snapshots of file shares. Share snapshots capture the share state at that point in time. This article describes the capabilities that file share snapshots provide and how you can take advantage of them in your custom use case.
## Applies to | File share type | SMB | NFS |
Snapshots don't count towards the maximum share size limit, which is 100 TiB for
The maximum number of share snapshots that Azure Files allows today is 200 per share. After 200 share snapshots, you have to delete older share snapshots in order to create new ones.
-There is no limit to the simultaneous calls for creating share snapshots. There is no limit to amount of space that share snapshots of a particular file share can consume.
+There's no limit to the number of simultaneous calls for creating share snapshots. There's no limit to the amount of space that share snapshots of a particular file share can consume.
Taking snapshots of NFS Azure file shares isn't currently supported.
Before you deploy the share snapshot scheduler, carefully consider your share sn
Share snapshots provide only file-level protection. Share snapshots don't prevent fat-finger deletions on a file share or storage account. To help protect a storage account from accidental deletions, you can either [enable soft delete](storage-files-prevent-file-share-deletion.md), or lock the storage account and/or the resource group.
-## Delete multiple snapshots
-
-Use the following PowerShell script to delete multiple file share snapshots. Be sure to replace **storageaccount_name**, **resource-GROUP**, and **sharename** with your own values.
-
-```powerShell
-$storageAccount = "storageaccount_name" 
-$RG = "resource-GROUP" $sharename = "sharename"
-$sa = get-azstorageaccount -Name $storageAccount -ResourceGroupName $RG $items = "","","" 
-ForEach ($item in $items)
-{
-    $snapshotTime = "$item"
-    $snap = Get-AzStorageShare -Name $sharename -SnapshotTime "$snapshotTime" -Context $sa.Context
-    $lease = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($snap.ShareClient)
-    $l
-}
-```
-## Next steps
+## See also
- Working with share snapshots in: - [Azure file share backup](../../backup/azure-file-share-backup-overview.md) - [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare)
storage Storage Quickstart Queues Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-nodejs.md
Use the following JavaScript classes to interact with these resources:
- [`QueueServiceClient`](/javascript/api/@azure/storage-queue/queueserviceclient): The `QueueServiceClient` allows you to manage all the queues in your storage account. - [`QueueClient`](/javascript/api/@azure/storage-queue/queueclient): The `QueueClient` class allows you to manage and manipulate an individual queue and its messages.-- [`QueueMessage`](/javascript/api/@azure/storage-queue/queuemessage): The `QueueMessage` class represents the individual objects returned when calling [`ReceiveMessages`](/javascript/api/@azure/storage-queue/queueclient#receivemessages-queuereceivemessageoptions-) on a queue.
+- [`QueueMessage`](/javascript/api/preview-docs/@azure/storage-queue/queuemessage): The `QueueMessage` class represents the individual objects returned when calling [`ReceiveMessages`](/javascript/api/@azure/storage-queue/queueclient#receivemessages-queuereceivemessageoptions-) on a queue.
## Code examples
console.log("Queue created, requestId:", createQueueResponse.requestId);
### Add messages to a queue
-The following code snippet adds messages to queue by calling the [`sendMessage`](/javascript/api/@azure/storage-queue/queueclient#sendmessage-string--queuesendmessageoptions-) method. It also saves the [`QueueMessage`](/javascript/api/@azure/storage-queue/queuemessage) returned from the third `sendMessage` call. The returned `sendMessageResponse` is used to update the message content later in the program.
+The following code snippet adds messages to the queue by calling the [`sendMessage`](/javascript/api/@azure/storage-queue/queueclient#sendmessage-string--queuesendmessageoptions-) method. It also saves the [`QueueMessage`](/javascript/api/preview-docs/@azure/storage-queue/queuemessage) returned from the third `sendMessage` call. The returned `sendMessageResponse` is used to update the message content later in the program.
Add this code to the end of the `main` function:
traffic-manager Traffic Manager Manage Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-endpoints.md
Previously updated : 05/08/2017 Last updated : 04/24/2023
You can also disable individual endpoints that are part of a Traffic Manager pro
4. Click the endpoint that you want to delete. 5. In the **Endpoint** blade, click **Delete** - ## Next steps * [Manage Traffic Manager profiles](traffic-manager-manage-profiles.md) * [Configure routing methods](./traffic-manager-configure-priority-routing-method.md)
+* [Traffic Manager endpoint monitoring](traffic-manager-monitoring.md)
* [Troubleshooting Traffic Manager degraded state](traffic-manager-troubleshooting-degraded.md) * [Traffic Manager performance considerations](traffic-manager-performance-considerations.md) * [Operations on Traffic Manager (REST API Reference)](/previous-versions/azure/reference/hh758255(v=azure.100))
traffic-manager Traffic Manager Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-monitoring.md
Previously updated : 03/31/2023 Last updated : 04/25/2023 # Traffic Manager endpoint monitoring
-Azure Traffic Manager includes built-in endpoint monitoring and automatic endpoint failover. This feature helps you deliver high-availability applications that are resilient to endpoint failure, including Azure region failures.
+Azure Traffic Manager includes built-in endpoint monitoring and automatic endpoint failover. This feature helps you deliver high-availability applications that are resilient to endpoint failure, including Azure region failures. Endpoint monitoring is enabled by default. To disable monitoring, see [Enable or disable health checks](#enable-or-disable-health-checks-preview).
## Configure endpoint monitoring
Endpoint monitor status is a Traffic Manager-generated value that shows the stat
| Enabled |Enabled |Online |The endpoint is monitored and is healthy. It's included in DNS responses and can receive traffic. | | Enabled |Enabled |Degraded |Endpoint monitoring health checks are failing. The endpoint isn't included in DNS responses and doesn't receive traffic. <br>An exception is if all endpoints are degraded. In which case all of them are considered to be returned in the query response. | | Enabled |Enabled |CheckingEndpoint |The endpoint is monitored, but the results of the first probe haven't been received yet. CheckingEndpoint is a temporary state that usually occurs immediately after adding or enabling an endpoint in the profile. An endpoint in this state is included in DNS responses and can receive traffic. |
-| Enabled |Enabled |Stopped |The web app that the endpoint points to isn't running. Check the web app settings. This status can also happen if the endpoint is of type nested endpoint and the child profile get disabled or is inactive. <br>An endpoint with a Stopped status isn't monitored. It isn't included in DNS responses and doesn't receive traffic. An exception is if all endpoints are degraded. In which case all of them will be considered to be returned in the query response. |
+| Enabled |Enabled |Stopped |The web app that the endpoint points to isn't running. Check the web app settings. This status can also happen if the endpoint is a nested endpoint and the child profile is disabled or inactive. <br>An endpoint with a Stopped status isn't monitored. It isn't included in DNS responses and doesn't receive traffic. An exception is if all endpoints are degraded, in which case all of them are considered to be returned in the query response. |
+| Enabled |Enabled |Not monitored |The endpoint is configured to always serve traffic. Health checks are not enabled. |
For details about how endpoint monitor status is calculated for nested endpoints, see [Nested Traffic Manager profiles](traffic-manager-nested-profiles.md).
The profile monitor status is a combination of the configured profile status and
| Enabled |The status of at least one endpoint is CheckingEndpoint. No endpoints are in Online or Degraded status. |CheckingEndpoints |This transition state occurs when a profile is created or enabled. The endpoint health is being checked for the first time. | | Enabled |The statuses of all endpoints in the profile are either Disabled or Stopped, or the profile has no defined endpoints. |Inactive |No endpoints are active, but the profile is still Enabled. | + ## Endpoint failover and recovery Traffic Manager periodically checks the health of every endpoint, including unhealthy endpoints. Traffic Manager detects when an endpoint becomes healthy and brings it back into rotation.
The timeline in the following figure is a detailed description of the monitoring
## Traffic-routing methods
-When an endpoint has a Degraded status, it's no longer returned in response to DNS queries. Instead, an alternative endpoint is chosen and returned. The traffic-routing method configured in the profile determines how the alternative endpoint is chosen.
+When an endpoint has a **Degraded** status, it's no longer returned in response to DNS queries. Instead, an alternative endpoint is chosen and returned. The traffic-routing method configured in the profile determines how the alternative endpoint is chosen.
* **Priority**. Endpoints form a prioritized list. The first available endpoint on the list is always returned. If an endpoint status is Degraded, then the next available endpoint is returned. * **Weighted**. Any available endpoints get chosen at random based on their assigned weights and the weights of the other available endpoints. * **Performance**. The endpoint closest to the end user is returned. If that endpoint is unavailable, Traffic Manager moves traffic to the endpoints in the next closest Azure region. You can configure alternative failover plans for performance traffic-routing by using [nested Traffic Manager profiles](traffic-manager-nested-profiles.md#example-4-controlling-performance-traffic-routing-between-multiple-endpoints-in-the-same-region).
-* **Geographic**. The endpoint mapped to serve the geographic location based on the query request IPs is returned. If that endpoint is unavailable, another endpoint won't be selected to fail over to, since a geographic location can be mapped only to one endpoint in a profile. (More details are in the [FAQ](traffic-manager-FAQs.md#traffic-manager-geographic-traffic-routing-method)). As a best practice, when using geographic routing, we recommend customers to use nested Traffic Manager profiles with more than one endpoint as the endpoints of the profile.
+* **Geographic**. The endpoint mapped to serve the geographic location (based on the query request IP addresses) is returned. If that endpoint is unavailable, another endpoint won't be selected to fail over to, since a geographic location can be mapped only to one endpoint in a profile. (More details are in the [FAQ](traffic-manager-FAQs.md#traffic-manager-geographic-traffic-routing-method)). As a best practice, when using geographic routing, we recommend customers to use nested Traffic Manager profiles with more than one endpoint as the endpoints of the profile.
* **MultiValue** Multiple endpoints mapped to IPv4/IPv6 addresses are returned. When a query is received for this profile, healthy endpoints are returned based on the **Maximum record count in response** value that you've specified. The default number of responses is two endpoints. * **Subnet** The endpoint mapped to a set of IP address ranges is returned. When a request is received from that IP address, the endpoint returned is the one mapped for that IP address. 
For more information, see [Traffic Manager traffic-routing methods](traffic-mana
For more information about troubleshooting failed health checks, see [Troubleshooting degraded status on Azure Traffic Manager](traffic-manager-troubleshooting-degraded.md).
-## FAQ
+## Enable or disable health checks (Preview)
-* [Is Traffic Manager resilient to Azure region failures?](./traffic-manager-faqs.md#is-traffic-manager-resilient-to-azure-region-failures)
+Azure Traffic Manager also lets you enable or disable endpoint **Health Checks**. To disable monitoring, choose the **Always serve traffic** option.
-* [How does the choice of resource group location affect Traffic Manager?](./traffic-manager-faqs.md#how-does-the-choice-of-resource-group-location-affect-traffic-manager)
+> [!IMPORTANT]
+> The Always Serve feature is in public preview. To access this preview, use the [API version 2022-04-01-preview](https://ms.portal.azure.com/?feature.canmodifystamps=true&feature.trafficmanageralwaysserve=true) link.
-* [How do I determine the current health of each endpoint?](./traffic-manager-faqs.md#how-do-i-determine-the-current-health-of-each-endpoint)
+There are two available settings for **Health Checks**:
-* [Can I monitor HTTPS endpoints?](./traffic-manager-faqs.md#can-i-monitor-https-endpoints)
+1. **Enable** (health checks). Traffic is served to the endpoint based on health. This is the default setting.
+2. **Always serve traffic**. This setting disables health checks.
-* [Do I use an IP address or a DNS name when adding an endpoint?](./traffic-manager-faqs.md#do-i-use-an-ip-address-or-a-dns-name-when-adding-an-endpoint)
+### Always Serve
-* [What types of IP addresses can I use when adding an endpoint?](./traffic-manager-faqs.md#what-types-of-ip-addresses-can-i-use-when-adding-an-endpoint)
+When **Always serve traffic** is selected, monitoring is bypassed and traffic is always sent to the endpoint. The [endpoint monitor status](#endpoint-monitor-status) displayed is **Not monitored**.
-* [Can I use different endpoint addressing types within a single profile?](./traffic-manager-faqs.md#can-i-use-different-endpoint-addressing-types-within-a-single-profile)
+To enable Always Serve:
+1. Use the [API version 2022-04-01-preview](https://ms.portal.azure.com/?feature.canmodifystamps=true&feature.trafficmanageralwaysserve=true) link to access the portal.
+2. Select **Endpoints** in the **Settings** section of your Traffic Manager profile blade.
+3. Select the endpoint that you want to configure.
+4. Under **Health Checks**, choose **Always serve traffic**.
+5. Select **Save**.
-* [What happens when an incoming query's record type is different from the record type associated with the addressing type of the endpoints?](./traffic-manager-faqs.md#what-happens-when-an-incoming-querys-record-type-is-different-from-the-record-type-associated-with-the-addressing-type-of-the-endpoints)
+See the following example:
-* [Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?](./traffic-manager-faqs.md#can-i-use-a-profile-with-ipv4--ipv6-addressed-endpoints-in-a-nested-profile)
+[ ![Screenshot of endpoint health checks. ](./media/traffic-manager-monitoring/health-checks.png) ](./media/traffic-manager-monitoring/health-checks.png#lightbox)
-* [I stopped a web application endpoint in my Traffic Manager profile but I'm not receiving any traffic even after I restarted it. How can I fix this?](./traffic-manager-faqs.md#i-stopped-a-web-application-endpoint-in-my-traffic-manager-profile-but-im-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
+> [!NOTE]
+> - Health checks can't be disabled on nested Traffic Manager profiles.
+> - An endpoint must be enabled to configure health checks.
+> - Enabling and disabling an endpoint doesn't reset the **Health Checks** configuration.
+> - Endpoints that are configured to always serve traffic are billed for [basic health checks](https://azure.microsoft.com/pricing/details/traffic-manager/).
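For automation, the following is a minimal, unverified sketch of setting an endpoint to always serve traffic through the preview REST API with `az rest`. The subscription, resource group, profile, and endpoint names are placeholders, and the `alwaysServe` property name and endpoint resource path are assumptions based on the 2022-04-01-preview API.

```azurecli-interactive
# Hedged sketch: bypass health checks for an external endpoint via the preview API.
# Placeholder names throughout; the alwaysServe property name is an assumption.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/trafficManagerProfiles/<profile-name>/externalEndpoints/<endpoint-name>?api-version=2022-04-01-preview" \
  --body '{"properties": {"alwaysServe": "Enabled"}}'
```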
-* [Can I use Traffic Manager even if my application doesn't have support for HTTP or HTTPS?](./traffic-manager-faqs.md#can-i-use-traffic-manager-even-if-my-application-doesnt-have-support-for-http-or-https)
+## FAQs
+* [Is Traffic Manager resilient to Azure region failures?](./traffic-manager-faqs.md#is-traffic-manager-resilient-to-azure-region-failures)
+* [How does the choice of resource group location affect Traffic Manager?](./traffic-manager-faqs.md#how-does-the-choice-of-resource-group-location-affect-traffic-manager)
+* [How do I determine the current health of each endpoint?](./traffic-manager-faqs.md#how-do-i-determine-the-current-health-of-each-endpoint)
+* [Can I monitor HTTPS endpoints?](./traffic-manager-faqs.md#can-i-monitor-https-endpoints)
+* [Do I use an IP address or a DNS name when adding an endpoint?](./traffic-manager-faqs.md#do-i-use-an-ip-address-or-a-dns-name-when-adding-an-endpoint)
+* [What types of IP addresses can I use when adding an endpoint?](./traffic-manager-faqs.md#what-types-of-ip-addresses-can-i-use-when-adding-an-endpoint)
+* [Can I use different endpoint addressing types within a single profile?](./traffic-manager-faqs.md#can-i-use-different-endpoint-addressing-types-within-a-single-profile)
+* [What happens when an incoming query's record type is different from the record type associated with the addressing type of the endpoints?](./traffic-manager-faqs.md#what-happens-when-an-incoming-querys-record-type-is-different-from-the-record-type-associated-with-the-addressing-type-of-the-endpoints)
+* [Can I use a profile with IPv4 / IPv6 addressed endpoints in a nested profile?](./traffic-manager-faqs.md#can-i-use-a-profile-with-ipv4--ipv6-addressed-endpoints-in-a-nested-profile)
+* [I stopped a web application endpoint in my Traffic Manager profile but I'm not receiving any traffic even after I restarted it. How can I fix this?](./traffic-manager-faqs.md#i-stopped-a-web-application-endpoint-in-my-traffic-manager-profile-but-im-not-receiving-any-traffic-even-after-i-restarted-it-how-can-i-fix-this)
+* [Can I use Traffic Manager even if my application doesn't have support for HTTP or HTTPS?](./traffic-manager-faqs.md#can-i-use-traffic-manager-even-if-my-application-doesnt-have-support-for-http-or-https)
* [What specific responses are required from the endpoint when using TCP monitoring?](./traffic-manager-faqs.md#what-specific-responses-are-required-from-the-endpoint-when-using-tcp-monitoring)
* [How fast does Traffic Manager move my users away from an unhealthy endpoint?](./traffic-manager-faqs.md#how-fast-does-traffic-manager-move-my-users-away-from-an-unhealthy-endpoint)
* [How can I specify different monitoring settings for different endpoints in a profile?](./traffic-manager-faqs.md#how-can-i-specify-different-monitoring-settings-for-different-endpoints-in-a-profile)
* [How can I assign HTTP headers to the Traffic Manager health checks to my endpoints?](./traffic-manager-faqs.md#how-can-i-assign-http-headers-to-the-traffic-manager-health-checks-to-my-endpoints)
* [What host header do endpoint health checks use?](./traffic-manager-faqs.md#what-host-header-do-endpoint-health-checks-use)
* [What are the IP addresses from which the health checks originate?](./traffic-manager-faqs.md#what-are-the-ip-addresses-from-which-the-health-checks-originate)
* [How many health checks to my endpoint can I expect from Traffic Manager?](./traffic-manager-faqs.md#how-many-health-checks-to-my-endpoint-can-i-expect-from-traffic-manager)
* [How can I get notified if one of my endpoints goes down?](./traffic-manager-faqs.md#how-can-i-get-notified-if-one-of-my-endpoints-goes-down)

## Next steps
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
Here's how to create session hosts and register them to a host pool using the Az
| Parameter | Value/Description | |--|--| | Resource group | This automatically defaults to the same resource group as your host pool, but you can select an alternative existing one from the drop-down list. |
- | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host will have a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**. This name prefix can be a maximum of 11 characters and will also be in the computer name in the operating system. Session host names must be unique. |
+ | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
| Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. | | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**. |
virtual-desktop Azure Ad Joined Session Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-ad-joined-session-hosts.md
To access Azure AD-joined VMs using the web, Android, macOS and iOS clients, you
You can use Azure AD Multi-Factor Authentication with Azure AD-joined VMs. Follow the steps to [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) and note the extra steps for [Azure AD-joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
+If you're using Azure AD Multi-Factor Authentication and you don't want to restrict signing in to strong authentication methods like Windows Hello for Business, you'll need to [exclude the Azure Windows VM Sign-In app](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) from your Conditional Access policy.
+ ### Single sign-on You can enable a single sign-on experience using Azure AD authentication when accessing Azure AD-joined VMs. Follow the steps to [Configure single sign-on](configure-single-sign-on.md) to provide a seamless connection experience.
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
Here's how to create a host pool using the Azure portal.
|--|--| | Add Azure virtual machines | Select **Yes**. This shows several new options. | | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab, but you can also select an alternative. |
- | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**. This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. |
+ | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
| Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. | | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**. |
virtual-desktop Create Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-netapp-files.md
Your MSIX image should now be accessible to your session hosts when they add an
## Next steps
-Now that you've created an Azure NetApp Files share, here are some resources about what you can use it for in Azure Virtual Desktop:
--- [Create a profile container with Azure NetApp Files and AD DS](create-fslogix-profile-container.md)-- [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md)-- [Create replication peering for Azure NetApp Files](../azure-netapp-files/cross-region-replication-create-peering.md)
+Now that you've created an Azure NetApp Files share to store MSIX images, learn how to [Create replication peering for Azure NetApp Files](../azure-netapp-files/cross-region-replication-create-peering.md).
virtual-desktop Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery.md
There are three ways to keep the domain controller available:
- Use an on-premises Active Directory Domain Controller - Replicate Active Directory Domain Controller using [Azure Site Recovery](../site-recovery/site-recovery-active-directory.md)
+## User profiles
+
+We recommend that you use FSLogix for managing user profiles. For information, see [Business continuity and disaster recovery options for FSLogix](/fslogix/concepts-container-recovery-business-continuity).
+ ## Back up your data You also have the option to back up your data. You can choose one of the following methods to back up your Azure Virtual Desktop data:
virtual-desktop Fslogix Containers Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-containers-azure-files.md
The following table shows benefits and limitations of previous user profile tech
| **Roaming User Profile (RUP), maintenance mode** | No | Yes | Yes | Yes | Yes| No | Yes | Win 7+ | No | | | **Enterprise State Roaming (ESR)** | Yes | No | Yes | No | See notes | Yes | No | Win 10 | No | Functions on server SKU but no supporting user interface | | **User Experience Virtualization (UE-V)** | Yes | Yes | Yes | No | Yes | No | Yes | Win 7+ | No | |
-| **OneDrive cloud files** | No | No | No | Yes | See notes | See notes | See Notes | Win 10 RS3 | No | Not tested on server SKU. Back-end storage on Azure depends on sync client. Back-end storage on-prem needs a sync client. |
+| **OneDrive cloud files** | No | No | No | Yes | See notes | See notes | See Notes | Win 10 RS3 | No | Not tested on server SKU. Back-end storage on Azure depends on sync client. Back-end storage on-premises needs a sync client. |
#### Performance
On November 19, 2018, [Microsoft acquired FSLogix](https://blogs.microsoft.com/b
Since the acquisition, Microsoft started replacing existing user profile solutions, like UPD, with FSLogix profile containers.
-## Azure Files integration with Azure Active Directory Domain Service
-
-FSLogix profile containers' performance and features take advantage of the cloud. On August 7th, 2019, Microsoft Azure Files announced the general availability of [Azure Files authentication with Azure Active Directory Domain Service (Azure AD DS)](../storage/files/storage-files-active-directory-overview.md). By addressing both cost and administrative overhead, Azure Files with Azure AD DS Authentication is a premium solution for user profiles in the Azure Virtual Desktop service.
- ## Best practices for Azure Virtual Desktop Azure Virtual Desktop offers full control over size, type, and count of VMs that are being used by customers. For more information, see [What is Azure Virtual Desktop?](overview.md).
To ensure your Azure Virtual Desktop environment follows best practices:
## Next steps
-To learn more about storage options for FSLogix profile containers, see [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).
+- Learn more about storage options for FSLogix profile containers in [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).
+- [Set up FSLogix Profile Container with Azure Files and Active Directory](fslogix-profile-container-configure-azure-files-active-directory.md)
+- [Set up FSLogix Profile Container with Azure Files and Azure Active Directory](create-profile-container-azure-ad.md)
+- [Set up FSLogix Profile Container with Azure NetApp Files](create-fslogix-profile-container.md)
virtual-desktop Troubleshoot Vm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-vm-configuration.md
When the Azure Virtual Desktop Agent is first installed on session host VMs (eit
## Troubleshooting issues with the Azure Virtual Desktop side-by-side stack
-The Azure Virtual Desktop side-by-side stack is automatically installed with Windows Server 2019 and newer. Use Microsoft Installer (MSI) to install the side-by-side stack on Microsoft Windows Server 2016 or Windows Server 2012 R2. For Microsoft Windows 10, the Azure Virtual Desktop side-by-side stack is enabled with **enablesxstackrs.ps1**.
- There are three main ways the side-by-side stack gets installed or enabled on session host pool VMs: - With the Azure portal creation template
The output of **qwinsta** will list **rdp-sxs** in the output if the side-by-sid
> [!div class="mx-imgBorder"] > ![Side-by-side stack installed or enabled with qwinsta listed as rdp-sxs in the output.](media/23b8e5f525bb4e24494ab7f159fa6b62.png)
-Examine the registry entries listed below and confirm that their values match. If registry keys are missing or values are mismatched, make sure you're running [a supported operating system](troubleshoot-agent.md#error-operating-a-pro-vm-or-other-unsupported-os). If you are, follow the instructions in [Create a host pool with PowerShell](create-host-pools-powershell.md) on how to reinstall the side-by-side stack.
+Examine the registry entries listed below and confirm that their values match. If registry keys are missing or values are mismatched, make sure you're running [a supported operating system](troubleshoot-agent.md#error-operating-a-pro-vm-or-other-unsupported-os). If you are, follow the instructions in [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool) for how to reinstall the side-by-side stack.
```registry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal
Examine the registry entries listed below and confirm that their values match. I
**Fix:** Follow these instructions to install the side-by-side stack on the session host VM. 1. Use Remote Desktop Protocol (RDP) to get directly into the session host VM as local administrator.
-2. Install the side-by-side stack using [Create a host pool with PowerShell](create-host-pools-powershell.md).
+2. Install the side-by-side stack by following the steps to [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool).
## How to fix an Azure Virtual Desktop side-by-side stack that malfunctions
There are known circumstances that can cause the side-by-side stack to malfuncti
- Not following the correct order of the steps to enable the side-by-side stack - Auto update to Windows 10 Enhanced Versatile Disc (EVD) - Missing the Remote Desktop Session Host (RDSH) role-- Running enablesxsstackrc.ps1 multiple times-- Running enablesxsstackrc.ps1 in an account that doesn't have local admin privileges
-The instructions in this section can help you uninstall the Azure Virtual Desktop side-by-side stack. Once you uninstall the side-by-side stack, go to "Register the VM with the Azure Virtual Desktop host pool" in [Create a host pool with PowerShell](create-host-pools-powershell.md) to reinstall the side-by-side stack.
+The instructions in this section can help you uninstall the Azure Virtual Desktop side-by-side stack. Once you uninstall the side-by-side stack, follow the steps to [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool) to reinstall the side-by-side stack.
The VM used to run remediation must be on the same subnet and domain as the VM with the malfunctioning side-by-side stack. Follow these instructions to run remediation from the same subnet and domain: 1. Connect with standard Remote Desktop Protocol (RDP) to the VM from where fix will be applied.
-2. Download PsExec from [PsExec v2.40](/sysinternals/downloads/psexec).
-3. Unzip the downloaded file.
-4. Start command prompt as local administrator.
-5. Navigate to folder where PsExec was unzipped.
-6. From command prompt, use the following command:
-
- ```cmd
- psexec.exe \\<VMname> cmd
- ```
-
- >[!NOTE]
- >VMname is the machine name of the VM with the malfunctioning side-by-side stack.
-
-7. Accept the PsExec License Agreement by clicking Agree.
-
- > [!div class="mx-imgBorder"]
- > ![Software license agreement screenshot.](media/SoftwareLicenseTerms.png)
-
- >[!NOTE]
- >This dialog will show up only the first time PsExec is run.
-8. After the command prompt session opens on the VM with the malfunctioning side-by-side stack, run qwinsta and confirm that an entry named rdp-sxs is available. If not, a side-by-side stack isn't present on the VM so the issue isn't tied to the side-by-side stack.
+1. [Download and install PsExec](/sysinternals/downloads/psexec).
- > [!div class="mx-imgBorder"]
- > ![Administrator command prompt](media/AdministratorCommandPrompt.png)
-
-9. Run the following command, which will list Microsoft components installed on the VM with the malfunctioning side-by-side stack.
-
- ```cmd
- wmic product get name
- ```
+1. Start a command prompt as local administrator, then navigate to the folder where PsExec was unzipped.
-10. Run the command below with product names from step above.
+1. From the command prompt, use the following command, where `<VMname>` is the hostname of the VM with the malfunctioning side-by-side stack. If this is the first time you've run PsExec, you'll also need to accept the PsExec License Agreement to continue by clicking **Agree**.
```cmd
- wmic product where name="<Remote Desktop Services Infrastructure Agent>" call uninstall
+ psexec.exe \\<VMname> cmd
```
-11. Uninstall all products that start with "Remote Desktop."
+1. After the command prompt session opens on the VM with the malfunctioning side-by-side stack, run the following command and confirm that an entry named rdp-sxs is available. If not, a side-by-side stack isn't present on the VM so the issue isn't tied to the side-by-side stack.
-12. After all Azure Virtual Desktop components have been uninstalled, follow the instructions for your operating system:
+ ```cmd
+ qwinsta
+ ```
-13. If your operating system is Windows Server, restart the VM that had the malfunctioning side-by-side stack (either with Azure portal or from the PsExec tool).
+ > [!div class="mx-imgBorder"]
+ > ![Administrator command prompt](media/AdministratorCommandPrompt.png)
-If your operating system is Microsoft Windows 10, continue with the instructions below:
-
-14. From the VM running PsExec, open File Explorer and copy disablesxsstackrc.ps1 to the system drive of the VM with the malfunctioned side-by-side stack.
+1. Run the following command, which will list Microsoft components installed on the VM with the malfunctioning side-by-side stack.
```cmd
- \\<VMname>\c$\
+ wmic product get name
```
- >[!NOTE]
- >VMname is the machine name of the VM with the malfunctioning side-by-side stack.
-
-15. The recommended process: from the PsExec tool, start PowerShell and navigate to the folder from the previous step and run disablesxsstackrc.ps1. Alternatively, you can run the following cmdlets:
+1. Run the following command with the product names from the previous step, for example:
- ```PowerShell
- Remove-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings" -Name "SessionDirectoryListener" -Force
- Remove-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" -Recurse -Force
- Remove-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" -Name "ReverseConnectionListener" -Force
+ ```cmd
+ wmic product where name="<Remote Desktop Services Infrastructure Agent>" call uninstall
```
-16. When the cmdlets are done running, restart the VM with the malfunctioning side-by-side stack.
+1. Uninstall all products that start with **Remote Desktop**.
+
+1. After all Azure Virtual Desktop components have been uninstalled, restart the VM that had the malfunctioning side-by-side stack (either with Azure portal or from the PsExec tool). You can then reinstall the side-by-side stack by following the steps to [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool).
## Remote Desktop licensing mode isn't configured
virtual-desktop Tutorial Create Connect Personal Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-create-connect-personal-desktop.md
To create a personal host pool, workspace, application group, and session host V
|--|--| | Add Azure virtual machines | Select **Yes**. This shows several new options. | | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab. |
- | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**. This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. |
+ | Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
| Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select **No infrastructure dependency required**. This means that your session host VMs won't be deployed in an availability set or in availability zones. | | Security type | Select **Standard**. | | Image | Select **Windows 11 Enterprise, version 22H2**. | | Virtual machine size | Accept the default SKU. If you want to use a different SKU, select **Change size**, then select from the list. |
- | Number of VMs | Enter **1** as a minimum. You can deploy up to 400 session host VMs at this point if you wish, or you can add more later. <br /><br />With a personal host pool, each session host can only be assigned to one user, so you'll need one session host for each user connecting to this host pool. Once you've completed this tutorial, you can create a pooled host pool, where multiple users can connect to the same session host. |
+ | Number of VMs | Enter **1** as a minimum. You can deploy up to 400 session host VMs at this point if you wish, or you can add more later.<br /><br />With a personal host pool, each session host can only be assigned to one user, so you'll need one session host for each user connecting to this host pool. Once you've completed this tutorial, you can create a pooled host pool, where multiple users can connect to the same session host. |
| OS disk type | Select **Premium SSD** for best performance. | | Boot Diagnostics | Select **Enable with managed storage account (recommended)**. | | **Network and security** | |
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName
Use the [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) and [Update-AzVM](/powershell/module/az.compute/update-azvm) cmdlet to enable automatic VM guest patching on an existing VM. ```azurepowershell-interactive
-Get-AzVM -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential
+$VirtualMachine = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
Set-AzVMOperatingSystem -VM $VirtualMachine -PatchMode "AutomaticByPlatform" Update-AzVM -VM $VirtualMachine ```
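If you prefer the Azure CLI, a rough equivalent is sketched below. The resource group and VM names are placeholders, and the `patchMode` property path is an assumption based on the VM `osProfile` schema rather than a documented command from this article.

```azurecli-interactive
# Hedged sketch: enable automatic VM guest patching on an existing Windows VM.
# "myResourceGroup" and "myVM" are placeholder names; the property path is assumed.
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform
```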
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Testing has confirmed that the following systems work with the Azure Linux VM Ag
| Distribution | x64 | ARM64 | |:--|:--:|:--:|
-| Alma Linux | 9.x+ | .x+ |
+| Alma Linux | 9.x+ | 9.x+ |
| CentOS | 7.x+, 8.x+ | 7.x+ | | Debian | 10+ | 11.x+ | | Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Mariner | 2.x | 2.x |
| openSUSE | 12.3+ | *Not supported* | | Oracle Linux | 6.4+, 7.x+, 8.x+ | *Not supported* | | Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Use Version 2 for new and existing deployments. The new version is a drop-in rep
| CentOS | 7.x+, 8.x+ | 7.x+ | | Debian | 10+ | 11.x+ | | Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Mariner | 2.x | 2.x |
| openSUSE | 12.3+ | Not Supported | | Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported | | Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
virtual-machines Vmaccess https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess.md
The VM Access extension can be run against these Linux distributions:
| CentOS | 7.x+, 8.x+ | 7.x+ | | Debian | 10+ | 11.x+ | | Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Mariner | 2.x | 2.x |
| openSUSE | 12.3+ | Not Supported | | Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported | | Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
The *updated* managed Run Command uses the same VM agent channel to execute scri
| CentOS | 7.x+, 8.x+ | Not Supported | | Debian | 10+ | Not Supported | | Flatcar Linux | 3374.2.x+ | Not Supported |
+| Mariner | 2.x | Not Supported |
| openSUSE | 12.3+ | Not Supported | | Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported | | Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | Not Supported |
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
This capability is useful in all scenarios where you want to run a script within
| CentOS | 7.x+, 8.x+ | 7.x+ | | Debian | 10+ | 11.x+ | | Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Mariner | 2.x | 2.x |
| openSUSE | 12.3+ | Not Supported | | Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported | | Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
- Configure machines to automatically install the Azure Monitor and Azure Security agents on virtual machines - Make sure that the firewall policies are allowing access to *.attest.azure.net
+ > [!NOTE]
+ > If you are using a Linux image and anticipate the VM may have kernel drivers that are either unsigned or not signed by the Linux distro vendor, you may want to consider turning off secure boot. In the Azure portal, on the 'Create a virtual machine' page, with 'Trusted Launch Virtual Machines' selected for the 'Security type' parameter, click 'Configure security features' and uncheck the 'Enable secure boot' checkbox. In CLI, PowerShell, or SDK, set the secure boot parameter to false.
+
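As a hedged illustration of the CLI route mentioned in the note, the sketch below creates a Trusted Launch VM with secure boot turned off and vTPM left enabled. The resource group, VM name, and image alias are placeholders, and flag availability depends on your Azure CLI version.

```azurecli-interactive
# Hedged sketch: Trusted Launch VM with secure boot disabled and vTPM enabled.
# Resource group, VM name, and image alias are placeholder values.
az vm create \
  --resource-group myResourceGroup \
  --name myTrustedLaunchVM \
  --image Ubuntu2204 \
  --security-type TrustedLaunch \
  --enable-secure-boot false \
  --enable-vtpm true \
  --admin-username azureuser \
  --generate-ssh-keys
```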
-
## Deploy a trusted launch VM Create a virtual machine with trusted launch enabled. Choose an option below:
virtual-machines Redhat Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-images.md
Last updated 04/07/2023 - # Overview of Red Hat Enterprise Linux images
For information on Red Hat support policies for all versions of RHEL, see [Red H
> [!IMPORTANT] > RHEL images currently available in Azure Marketplace support either bring your own subscription (BYOS) or pay-as-you-go licensing models. You can dynamically switch between BYOS and pay-as-you-go licensing through [Azure Hybrid Benefit](../../linux/azure-hybrid-benefit-linux.md).
+> Note: BYOS images are based on private plans and are currently not supported in CSP subscriptions. For more information, see [Private plans in the commercial marketplace](/partner-center/marketplace/private-plans#unlock-enterprise-deals-with-private-plans).
>[!NOTE] > For any problem related to RHEL images in Azure Marketplace, file a support ticket with Microsoft.
Current policy is to keep all previously published images. We reserve the right
- To learn more about the Azure Red Hat Update Infrastructure, see [Red Hat Update Infrastructure for on-demand RHEL VMs in Azure](./redhat-rhui.md). - To learn more about the RHEL BYOS offer, see [Red Hat Enterprise Linux bring-your-own-subscription Gold Images in Azure](./byos.md). - For information on Red Hat support policies for all versions of RHEL, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata).++
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
For current region support, refer to [products available by region](https://azur
* Always allow traffic: You want to permit a specific security scanner to always have inbound connectivity to all your resources, even if there are NSG rules configured to deny the traffic.
+### What's the cost of using Azure Virtual Network Manager?
+
+Azure Virtual Network Manager charges $0.10/hour per subscription managed. AVNM charges are based on the number of subscriptions that contain a virtual network with an active network manager configuration deployed onto it. For example, if a network manager's scope consists of ten subscriptions but only three subscriptions' virtual networks are covered by a network manager deployment, then there are three managed subscriptions, so $0.10/hour * three subscriptions = $0.30/hour.
+ ## Technical ### Can a virtual network belong to multiple Azure Virtual Network Managers?
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
This tutorial peers virtual networks in the same region. You can also peer virtu
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
- - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them the Network Contributor role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
- - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the Network Contributor role to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
- For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
This tutorial peers virtual networks in the same region. You can also peer virtu
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
- - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them the Network Contributor role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
- - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the Network Contributor role to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
- For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
If you choose to install and use PowerShell locally, this article requires the A
- An Azure account with permissions in both subscriptions or an account in each subscription with the proper permissions to create a virtual network peering. For a list of permissions, see [Virtual network peering permissions](virtual-network-manage-peering.md#permissions).
- - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them a reader role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
+ - To separate the duty of managing the network belonging to each tenant, add the user from each tenant as a guest in the opposite tenant and assign them the Network Contributor role to the virtual network. This procedure applies if the virtual networks are in different subscriptions and Active Directory tenants.
- - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the correct permissions to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription.
+ - To establish a network peering when you don't intend to separate the duty of managing the network belonging to each tenant, add the user from tenant A as a guest in the opposite tenant. Then, assign them the Network Contributor role to initiate and connect the network peering from each subscription. With these permissions, the user is able to establish the network peering from each subscription. A sketch of the role assignment follows this list.
- For more information about guest users, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md?toc=%2fazure%2fvirtual-network%2ftoc.json#add-guest-users-to-the-directory).
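The following is a minimal sketch of assigning the Network Contributor role to a guest user at the scope of a virtual network with the Azure CLI. The user principal name, resource group, and virtual network name are placeholders, not values from this article.

```azurecli-interactive
# Hedged sketch: grant a guest user Network Contributor on the remote virtual network.
# The user principal name, resource group, and virtual network name are placeholders.
vnetId=$(az network vnet show \
  --resource-group myResourceGroup \
  --name myVNet \
  --query id \
  --output tsv)

az role assignment create \
  --assignee "guestuser@contoso.com" \
  --role "Network Contributor" \
  --scope $vnetId
```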
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
* Public IP addresses and prefixes derived from custom IP prefixes (BYOIP), to learn more, see [Custom IP address prefix (BYOIP)](../ip-services/custom-ip-address-prefix.md).
-* NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address.
+* NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address. To set up a dual stack outbound configuration, see [dual stack outbound connectivity with NAT gateway and Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal).
* NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](../../firewall/integrate-with-nat-gateway.md).
virtual-network Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-bicep.md
Title: 'Create a NAT gateway - Bicep' description: This quickstart shows how to create a NAT gateway using Bicep.- - Previously updated : 04/08/2022 Last updated : 04/24/2023 # Customer intent: I want to create a NAT gateway using Bicep so that I can provide outbound connectivity for my virtual machines.
The Bicep file used in this quickstart is from [Azure Quickstart Templates](http
This Bicep file is configured to create a: * Virtual network+ * NAT gateway resource+ * Ubuntu virtual machine The Ubuntu VM is deployed to a subnet that's associated with the NAT gateway resource.
The Ubuntu VM is deployed to a subnet that's associated with the NAT gateway res
Nine Azure resources are defined in the Bicep file: * **[Microsoft.Network/networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups)**: Creates a network security group.+ * **[Microsoft.Network/networkSecurityGroups/securityRules](/azure/templates/microsoft.network/networksecuritygroups/securityrules)**: Creates a security rule.+ * **[Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses)**: Creates a public IP address.+ * **[Microsoft.Network/publicIPPrefixes](/azure/templates/microsoft.network/publicipprefixes)**: Creates a public IP prefix.+ * **[Microsoft.Compute/virtualMachines](/azure/templates/Microsoft.Compute/virtualMachines)**: Creates a virtual machine.+ * **[Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)**: Creates a virtual network.+ * **[Microsoft.Network/natGateways](/azure/templates/microsoft.network/natgateways)**: Creates a NAT gateway resource.+ * **[Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets)**: Creates a virtual network subnet.+ * **[Microsoft.Network/networkinterfaces](/azure/templates/microsoft.network/networkinterfaces)**: Creates a network interface. ## Deploy the Bicep file 1. Save the Bicep file as **main.bicep** to your local computer.+ 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. # [CLI](#tab/CLI)
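A minimal sketch of the CLI deployment is shown below, assuming a placeholder resource group name (`exampleRG`) and location; adjust both to your environment.

```azurecli-interactive
# Hedged sketch: create a resource group and deploy main.bicep into it.
# The resource group name and location are placeholder values.
az group create --name exampleRG --location eastus

az deployment group create \
  --resource-group exampleRG \
  --template-file main.bicep
```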
Remove-AzResourceGroup -Name exampleRG
In this quickstart, you created a: * NAT gateway resource+ * Virtual network+ * Ubuntu virtual machine The virtual machine is deployed to a virtual network subnet associated with the NAT gateway.
-To learn more about Azure NAT Gateway and Bicep, continue to the articles below.
+To learn more about Azure NAT Gateway and Bicep, continue to the following articles.
* Read an [Overview of Azure NAT Gateway](nat-overview.md)+ * Read about the [NAT Gateway resource](nat-gateway-resource.md)+ * Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md)
virtual-network Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.md
Title: 'Create a NAT gateway - Resource Manager Template' description: This quickstart shows how to create a NAT gateway by using the Azure Resource Manager template (ARM template).- - + - Previously updated : 10/27/2020 Last updated : 04/24/2023 # Customer intent: I want to create a NAT gateway by using an Azure Resource Manager template so that I can provide outbound connectivity for my virtual machines.
Get started with Azure NAT Gateway by using an Azure Resource Manager template (
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fnat-gateway-1-vm%2Fazuredeploy.json) ## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+# [**Portal**](#tab/create-nat-portal)
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+# [**PowerShell**](#tab/create-nat-powershell)
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- Azure PowerShell installed locally or Azure Cloud Shell.
+
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network`.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+# [**Azure CLI**](#tab/create-nat-cli)
+
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
++ ## Review the template The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/nat-gateway-1-vm).
The template used in this quickstart is from [Azure Quickstart Templates](https:
This template is configured to create a: * Virtual network+ * NAT gateway resource+ * Ubuntu virtual machine The Ubuntu VM is deployed to a subnet that's associated with the NAT gateway resource.
The Ubuntu VM is deployed to a subnet that's associated with the NAT gateway res
Nine Azure resources are defined in the template: * **[Microsoft.Network/networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups)**: Creates a network security group.+ * **[Microsoft.Network/networkSecurityGroups/securityRules](/azure/templates/microsoft.network/networksecuritygroups/securityrules)**: Creates a security rule.+ * **[Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses)**: Creates a public IP address.+ * **[Microsoft.Network/publicIPPrefixes](/azure/templates/microsoft.network/publicipprefixes)**: Creates a public IP prefix.+ * **[Microsoft.Compute/virtualMachines](/azure/templates/Microsoft.Compute/virtualMachines)**: Creates a virtual machine.+ * **[Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)**: Creates a virtual network.+ * **[Microsoft.Network/natGateways](/azure/templates/microsoft.network/natgateways)**: Creates a NAT gateway resource.+ * **[Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets)**: Creates a virtual network subnet.+ * **[Microsoft.Network/networkinterfaces](/azure/templates/microsoft.network/networkinterfaces)**: Creates a network interface. ## Deploy the template
-**Azure CLI**
+# [**Portal**](#tab/create-nat-portal)
-```azurecli-interactive
-read -p "Enter the location (i.e. westcentralus): " location
-resourceGroupName="myResourceGroupNAT"
-templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/nat-gateway-1-vm/azuredeploy.json"
+[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fnat-gateway-1-vm%2Fazuredeploy.json)
-az group create \
name $resourceGroupName \location $location
+## Review deployed resources
-az deployment group create \
resource-group $resourceGroupName \template-uri $templateUri
-```
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section. The default resource group name is **myResourceGroupNAT**
+
+1. Verify the following resources were created in the resource group:
+
+ ![Virtual Network NAT resource group](./media/quick-create-template/nat-gateway-template-rg.png)
-**Azure PowerShell**
+# [**PowerShell**](#tab/create-nat-powershell)
```azurepowershell-interactive $location = Read-Host -Prompt "Enter the location (i.e. westcentralus)"
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri ```
-**Azure portal**
-
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fnat-gateway-1-vm%2Fazuredeploy.json)
-
-## Review deployed resources
+# [**Azure CLI**](#tab/create-nat-cli)
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Resource groups** from the left pane.
+```azurecli-interactive
+read -p "Enter the location (i.e. westcentralus): " location
+resourceGroupName="myResourceGroupNAT"
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/nat-gateway-1-vm/azuredeploy.json"
-1. Select the resource group that you created in the previous section. The default resource group name is **myResourceGroupNAT**
+az group create \
+--name $resourceGroupName \
+--location $location
-1. Verify the following resources were created in the resource group:
+az deployment group create \
+--resource-group $resourceGroupName \
+--template-uri $templateUri
+```
- ![Virtual Network NAT resource group](./media/quick-create-template/nat-gateway-template-rg.png)
+ ## Clean up resources
-**Azure CLI**
-
-When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and all resources contained within.
+# [**Portal**](#tab/create-nat-portal)
-```azurecli-interactive
- az group delete \
- --name myResourceGroupNAT
-```
+When no longer needed, delete the resource group, NAT gateway, and all related resources. Select the resource group **myResourceGroupNAT** that contains the NAT gateway, and then select **Delete**.
-**Azure PowerShell**
+# [**PowerShell**](#tab/create-nat-powershell)
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group and all resources contained within.
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/modu
Remove-AzResourceGroup -Name myResourceGroupNAT ```
-**Azure portal**
+# [**Azure CLI**](#tab/create-nat-cli)
-When no longer needed, delete the resource group, NAT gateway, and all related resources. Select the resource group **myResourceGroupNAT** that contains the NAT gateway, and then select **Delete**.
+When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and all resources contained within.
+
+```azurecli-interactive
+ az group delete \
+ --name myResourceGroupNAT
+```
++ ## Next steps In this quickstart, you created a: * NAT gateway resource+ * Virtual network+ * Ubuntu virtual machine The virtual machine is deployed to a virtual network subnet associated with the NAT gateway.
-To learn more about Azure NAT Gateway and Azure Resource Manager, continue to the articles below.
+To learn more about Azure NAT Gateway and Azure Resource Manager, continue to the following articles.
* Read an [Overview of Azure NAT Gateway](nat-overview.md)+ * Read about the [NAT Gateway resource](nat-gateway-resource.md)+ * Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
virtual-network Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md
Title: Troubleshoot Azure NAT Gateway connectivity-
-description: Troubleshoot connectivity issues with NAT gateway.
+description: Troubleshoot connectivity issues with a NAT gateway.
Previously updated : 08/29/2022 Last updated : 04/24/2023
SNAT exhaustion issues with NAT gateway typically have to do with the configurat
Each public IP address provides 64,512 SNAT ports for connecting outbound with NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses on NAT gateway may be required in order to provide more SNAT ports for outbound connectivity.
-The table below describes two common outbound connectivity failure scenarios due to scalability issues and how to validate and mitigate these issues:
+The following table describes two common outbound connectivity failure scenarios due to scalability issues and how to validate and mitigate these issues:
| Scenario | Evidence |Mitigation | ||||
-| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Add more public IP addresses or public IP prefixes as need (assign up to 16 IP addresses in total to your NAT gateway). This addition will provide more SNAT port inventory and allow you to scale your scenario further. |
-| You've already assigned 16 IP addresses to your NAT gateway and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
+| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Add more public IP addresses or public IP prefixes as needed (assign up to 16 IP addresses in total to your NAT gateway). This addition provides more SNAT port inventory and allows you to scale your scenario further. |
+| You've already assigned 16 IP addresses to your NAT gateway and are still experiencing SNAT port exhaustion. | Attempts to add more IP addresses fail. The total number of IP addresses from public IP address or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
>[!NOTE]
->It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. See [best practices for efficient use of outbound connections](#outbound-connectivity-best-practices) for additional guidance.
+>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. For more information, see [best practices for efficient use of outbound connections](#outbound-connectivity-best-practices).
### TCP idle timeout timers set higher than the default value
-The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If the timer is set to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers). The table below describes a scenario where a long TCP idle timeout timer is causing SNAT exhaustion and provides mitigation steps to take:
+The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If the timer is set to a higher value than the default, NAT gateway holds on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers).
+
+The following table describes a scenario where a long TCP idle timeout timer is causing SNAT exhaustion and provides mitigation steps to take:
| Scenario | Evidence | Mitigation | ||||
-| You want to ensure that TCP connections stay active for long periods of time without going idle and timing out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Some possible steps you can take to resolve SNAT port exhaustion include: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> Make connections to Azure PaaS services over the Azure backbone using **[Private Link](../../private-link/private-link-overview.md)**. This frees up SNAT ports for outbound connections to the internet. |
+| You want to ensure that TCP connections stay active for long periods of time without idling and timing out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Some possible steps you can take to resolve SNAT port exhaustion include: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> Make connections to Azure PaaS services over the Azure backbone using **[Private Link](../../private-link/private-link-overview.md)**. The use of Private Link frees up SNAT ports for outbound connections to the internet. </br></br> A CLI sketch for lowering the idle timeout follows this table. |
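As a rough illustration of the first mitigation, the following sketch lowers the NAT gateway TCP idle timeout back toward the default. The resource names are placeholders, and `--idle-timeout` is specified in minutes; verify the parameter against your CLI version.

```bash
# Lower the TCP idle timeout to free SNAT ports sooner (value in minutes, minimum 4).
az network nat gateway update \
  --resource-group myResourceGroup \
  --name myNatGateway \
  --idle-timeout 4
```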
## Connection failures due to idle timeouts ### TCP idle timeout
-As described in the [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section above, TCP keepalives should be used to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](./nat-gateway-resource.md#timer-considerations).
+As described in the preceding [TCP timers](#tcp-idle-timeout-timers-set-higher-than-the-default-value) section, TCP keepalives should be used to refresh idle flows and reset the idle timeout. TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection. To learn more, see [Timer considerations](./nat-gateway-resource.md#timer-considerations).
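A minimal sketch of OS-level keepalive tuning on a Linux sender is shown below. The values are illustrative only, and these kernel settings apply only to sockets that enable `SO_KEEPALIVE`; application-layer keepalives remain an alternative.

```bash
# Illustrative values only: start probing after 120 s idle, probe every 30 s, allow 8 failed probes.
sudo sysctl -w net.ipv4.tcp_keepalive_time=120
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
sudo sysctl -w net.ipv4.tcp_keepalive_probes=8

# Persist across reboots (path and convention assumed; adjust for your distribution).
echo 'net.ipv4.tcp_keepalive_time=120' | sudo tee -a /etc/sysctl.d/90-keepalive.conf
```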
>[!Note] >Increasing the TCP idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low-rate failures when the timeout expires, and can introduce delays and unnecessary failures. ### UDP idle timeout
-UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for NAT gateway, UDP idle timeout timers aren't configurable. The table below describes a common scenario encountered with connections dropping due to UDP traffic idle timing out and steps to take to mitigate the issue.
+UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for NAT gateway, UDP idle timeout timers aren't configurable.
+
+The following table describes a common scenario encountered with connections dropping due to UDP traffic idle timing out and steps to take to mitigate the issue.
| Scenario | Evidence | Mitigation | ||||
UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for
### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a virtual network
-[NAT gateway](nat-overview.md) becomes the default route to the internet when configured to a subnet. Migration from default outbound access or load balancer to NAT gateway results in new connections immediately using the IP address(es) associated with the NAT gateway resource. If a virtual machine has an established connection during the migration, the connection will continue to use the old SNAT IP address that was assigned when the connection was established.
+[NAT gateway](nat-overview.md) becomes the default route to the internet when configured to a subnet. Migration from default outbound access or load balancer to NAT gateway results in new connections immediately using the IP address(es) associated with the NAT gateway resource. If a virtual machine has an established connection during the migration, the connection continues to use the old SNAT IP address that was assigned when the connection was established.
Test and resolve issues with VMs holding on to old SNAT IP addresses by: - Ensure you've established a new connection and that existing connections aren't being reused in the OS or that the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection (a sketch follows this list). If you're using a browser, connections may also be pooled. -- It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
+- It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
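For the first check, a quick sketch from a Linux VM is to open a fresh connection and compare the source IP an external endpoint sees against the NAT gateway's public IP. The echo service and resource names below are assumptions for illustration.

```bash
# Each curl invocation opens a new TCP connection; the response shows the source IP
# observed by the remote endpoint (any "what is my IP" echo service works).
curl -s https://checkip.amazonaws.com

# Compare with the public IP attached to the NAT gateway (names are placeholders).
az network public-ip show \
  --resource-group myResourceGroup \
  --name myNatIp1 \
  --query ipAddress \
  --output tsv
```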
If you're still having trouble, open a support case for further troubleshooting.
Once the custom UDR is removed from the routing table, the NAT gateway public IP
### Private IPs are used to connect to Azure services by Private Link
-[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Azure Storage, Azure SQL, or Azure Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you'll notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
+[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Azure Storage, Azure SQL, or Azure Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
To check which Private Endpoints you have set up with Private Link:
Service endpoints can also be used to connect your virtual network to Azure PaaS
1. From the Azure portal, navigate to your virtual network and select "Service endpoints" from Settings.
-2. All Service endpoints created will be listed along with which subnets they're configured. For more information, see [logging and troubleshooting Service endpoints](../virtual-network-service-endpoints-overview.md#logging-and-troubleshooting).
+2. All Service endpoints created are listed along with the subnets they're configured for. For more information, see [logging and troubleshooting Service endpoints](../virtual-network-service-endpoints-overview.md#logging-and-troubleshooting). A CLI sketch for both checks follows.
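The same checks can be done from the Azure CLI. The following sketch lists private endpoints in a resource group and shows the service endpoints configured on a subnet; resource names are placeholders.

```bash
# List private endpoints (Private Link) in a resource group.
az network private-endpoint list --resource-group myResourceGroup --output table

# Show which service endpoints are configured on a subnet.
az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name mySubnet \
  --query serviceEndpoints
```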
>[!NOTE] >Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
Use NAT gateway [metrics](nat-metrics.md) in Azure monitor to diagnose connectio
* Look at packet count at the source and the destination (if available) to determine how many connection attempts were made.
-* Look at dropped packets to see how many packets were dropped by NAT gateway.
+* Look at dropped packets to see how many packets the NAT gateway dropped. A CLI sketch for pulling these metrics follows this list.
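The following sketch pulls one of these metrics from the CLI. Resource names are placeholders, and the metric identifier shown is my best recollection; confirm the exact names in the [metrics](nat-metrics.md) reference.

```bash
# Resolve the NAT gateway resource ID, then query hourly totals for SNAT connections.
natgw_id=$(az network nat gateway show \
  --resource-group myResourceGroup \
  --name myNatGateway \
  --query id --output tsv)

az monitor metrics list \
  --resource "$natgw_id" \
  --metric "SNATConnectionCount" \
  --aggregation Total \
  --interval PT1H
```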
What else to check for:
Outbound Passive FTP may not work for NAT gateway with multiple public IP addres
Passive FTP establishes different connections for control and data channels. When a NAT gateway with multiple public IP addresses sends traffic outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.
-To prevent possible passive FTP connection failures, make sure to do the following:
+To prevent possible passive FTP connection failures, follow these steps:
+ 1. Check that your NAT gateway is attached to a single public IP address rather than multiple IP addresses or a prefix. + 2. Make sure that the passive port range from your NAT gateway is allowed to pass any firewalls that may be at the destination endpoint. ### Extra network captures
If your investigation is inconclusive, open a support case for further troublesh
## Outbound connectivity best practices
-Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance below for how to ensure that applications are using connections efficiently.
+Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, and there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance later in this article to ensure that applications use connections efficiently.
### Modify the application to use connection pooling
When possible, Private Link should be used to connect directly from your virtual
To create a Private Link, see the following Quickstart guides to get started: * [Create a Private Endpoint](../../private-link/create-private-endpoint-portal.md?tabs=dynamic-ip) + * [Create a Private Link](../../private-link/create-private-link-service-portal.md) ## Next steps
-We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
+We always strive to enhance our customers' experience. If you encounter NAT gateway issues that aren't addressed or resolved by this article, provide feedback through GitHub at the bottom of this page.
To learn more about NAT gateway, see: * [Azure NAT Gateway](./nat-overview.md) + * [NAT gateway resource](./nat-gateway-resource.md) + * [Metrics and alerts for NAT gateway resources](./nat-metrics.md)
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Refer to the table below for which tools to use to validate NAT gateway connecti
### How to analyze outbound connectivity
-To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines. The connection information contains the source IP and port and the destination IP and port and the state of the connection. The traffic flow direction and the size of the traffic in number of packets and bytes sent is also logged.
+To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines. The connection information includes the source IP and port, the destination IP and port, and the state of the connection. The traffic flow direction and the size of the traffic in number of packets and bytes sent are also logged. The source IP and port specified in the NSG flow log are those of the virtual machine, not the NAT gateway.
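If flow logging isn't enabled yet, the following sketch turns it on for an NSG through Network Watcher. Names, region, and parameters are assumptions and may differ by CLI version.

```bash
# Enable NSG flow logs to a storage account (placeholders throughout).
az network watcher flow-log create \
  --resource-group NetworkWatcherRG \
  --name myFlowLog \
  --nsg myNsg \
  --storage-account myStorageAccount \
  --location eastus \
  --enabled true
```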
* To learn more about NSG flow logs, see [NSG flow log overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
To get your virtual machine NIC out of a failed state, you can use one of the tw
### Can't exceed 16 public IP addresses on NAT gateway
-NAT gateway can't be associated with more than 16 public IP addresses. You can use any combination of public IP addresses and prefixes with NAT gateway up to a total of 16 IP addresses. The following IP prefix sizes can be used with NAT gateway:
+NAT gateway can't be associated with more than 16 public IP addresses. You can use any combination of public IP addresses and prefixes with NAT gateway up to a total of 16 IP addresses. To add or remove a public IP, see [add or remove a public IP address](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#add-or-remove-a-public-ip-address).
+
+The following IP prefix sizes can be used with NAT gateway:
* /28 (sixteen addresses)
NAT gateway can't be associated with more than 16 public IP addresses. You can u
### IPv6 coexistence
-[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
+[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet but still uses only IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources. See [Configure dual stack outbound connectivity with NAT gateway and public Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal) to learn how to provide IPv4 and IPv6 outbound connectivity from your dual stack subnet.
### Can't use basic SKU public IPs with NAT gateway
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
Title: DPDK in an Azure Linux VM+ description: Learn the benefits of the Data Plane Development Kit (DPDK) and how to set up the DPDK on a Linux virtual machine. - - Previously updated : 05/12/2020 Last updated : 04/24/2023
Data Plane Development Kit (DPDK) on Azure offers a faster user-space packet processing framework for performance-intensive applications. This framework bypasses the virtual machine's kernel network stack.
-In typical packet processing that uses the kernel network stack, the process is interrupt-driven. When the network interface receives incoming packets, there is a kernel interrupt to process the packet and a context switch from the kernel space to the user space. DPDK eliminates context switching and the interrupt-driven method in favor of a user-space implementation that uses poll mode drivers for fast packet processing.
+In typical packet processing that uses the kernel network stack, the process is interrupt-driven. When the network interface receives incoming packets, there's a kernel interrupt to process the packet and a context switch from the kernel space to the user space. DPDK eliminates context switching and the interrupt-driven method in favor of a user-space implementation that uses poll mode drivers for fast packet processing.
DPDK consists of sets of user-space libraries that provide access to lower-level resources. These resources can include hardware, logical cores, memory management, and poll mode drivers for network interface cards.
DPDK can run on Azure virtual machines that are supporting multiple operating sy
**Higher packets per second (PPS)**: Bypassing the kernel and taking control of packets in the user space reduces the cycle count by eliminating context switches. It also improves the rate of packets that are processed per second in Azure Linux virtual machines. - ## Supported operating systems minimum versions The following distributions from the Azure Marketplace are supported:
All Azure regions support DPDK.
## Prerequisites
-Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling Accelerated networking on management interface is not recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).
+Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling accelerated networking on the management interface isn't recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).
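As a quick sanity check (a sketch, not part of the official setup), a VM with accelerated networking enabled exposes a Mellanox virtual function that you can see from inside the guest:

```bash
# The Mellanox VF backing the accelerated NIC should appear in the PCI device list.
lspci | grep -i mellanox

# The synthetic (NETVSC) interface and its VF share a MAC address; list the links for comparison.
ip -o link show
```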
On virtual machines that are using InfiniBand, ensure the appropriate `mlx4_ib` or `mlx5_ib` drivers are loaded. For more information, see [Enable InfiniBand](../virtual-machines/workloads/hpc/enable-infiniband.md). ## Install DPDK via system package (recommended)
-### Ubuntu 18.04
+# [RHEL, CentOS](#tab/redhat)
```bash
-sudo add-apt-repository ppa:canonical-server/server-backports -y
-sudo apt-get update
-sudo apt-get install -y dpdk
+sudo yum install -y dpdk
```
-### Ubuntu 20.04 and newer
+# [openSUSE, SLES](#tab/suse)
```bash
-sudo apt-get install -y dpdk
+sudo zypper install -y dpdk
```
-### Debian 10 and newer
-
-```bash
-sudo apt-get install -y dpdk
-```
-
-## Install DPDK manually (not recommended)
-
-### Install build dependencies
+# [Ubuntu, Debian](#tab/ubuntu)
-#### Ubuntu 18.04
+### Ubuntu 18.04
```bash sudo add-apt-repository ppa:canonical-server/server-backports -y sudo apt-get update
-sudo apt-get install -y build-essential librdmacm-dev libnuma-dev libmnl-dev meson
+sudo apt-get install -y dpdk
```
-#### Ubuntu 20.04 and newer
+### Ubuntu 20.04/Debian 10 and newer
```bash
-sudo apt-get install -y build-essential librdmacm-dev libnuma-dev libmnl-dev meson
+sudo apt-get install -y dpdk
```
-#### Debian 10 and newer
+
-```bash
-sudo apt-get install -y build-essential librdmacm-dev libnuma-dev libmnl-dev meson
-```
+## Install DPDK manually (not recommended)
+
+### Install build dependencies
+
+# [RHEL, CentOS](#tab/redhat)
#### RHEL7.5/CentOS 7.5
sudo dracut --add-drivers "mlx4_en mlx4_ib mlx5_ib" -f
yum install -y gcc kernel-devel-`uname -r` numactl-devel.x86_64 librdmacm-devel libmnl-devel meson ```
+# [openSUSE, SLES](#tab/suse)
+ #### SLES 15 SP1 **Azure kernel**
zypper \
--gpg-auto-import-keys install kernel-default-devel gcc make libnuma-devel numactl librdmacm1 rdma-core-devel meson ```
+# [Ubuntu, Debian](#tab/ubuntu)
+
+#### Ubuntu 18.04
+
+```bash
+sudo add-apt-repository ppa:canonical-server/server-backports -y
+sudo apt-get update
+sudo apt-get install -y build-essential librdmacm-dev libnuma-dev libmnl-dev meson
+```
+
+#### Ubuntu 20.04/Debian 10 and newer
+
+```bash
+sudo apt-get install -y build-essential librdmacm-dev libnuma-dev libmnl-dev meson
+```
++ ### Compile and install DPDK manually 1. [Download the latest DPDK](https://core.dpdk.org/download). Version 19.11 LTS or newer is required for Azure.+ 2. Build the default config with `meson builddir`.+ 3. Compile with `ninja -C builddir`.+ 4. Install with `DESTDIR=<output folder> ninja -C builddir install`. ## Configure the runtime environment
After restarting, run the following commands once:
1. Hugepages
- * Configure hugepage by running the following command, once for each numa node:
+ * Configure hugepage by running the following command, once for each numa node:
- ```bash
+ ```bash
echo 1024 | sudo tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
- ```
+ ```
- * Create a directory for mounting with `mkdir /mnt/huge`.
- * Mount hugepages with `mount -t hugetlbfs nodev /mnt/huge`.
- * Check that hugepages are reserved with `grep Huge /proc/meminfo`.
+ * Create a directory for mounting with `mkdir /mnt/huge`.
+
+ * Mount hugepages with `mount -t hugetlbfs nodev /mnt/huge`.
+
+ * Check that hugepages are reserved with `grep Huge /proc/meminfo`.
- > [NOTE]
+ > [!NOTE]
> There is a way to modify the grub file so that hugepages are reserved on boot by following the [instructions](https://dpdk.org/doc/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment) for the DPDK. The instructions are at the bottom of the page. When you're using an Azure Linux virtual machine, modify files under **/etc/config/grub.d** instead, to reserve hugepages across reboots. 2. MAC & IP addresses: Use `ifconfig -a` to view the MAC and IP address of the network interfaces. The *VF* network interface and *NETVSC* network interface have the same MAC address, but only the *NETVSC* network interface has an IP address. *VF* interfaces are running as subordinate interfaces of *NETVSC* interfaces.
After restarting, run the following commands once:
3. PCI addresses * Use `ethtool -i <vf interface name>` to find out which PCI address to use for *VF*.
+
* If *eth0* has accelerated networking enabled, make sure that testpmd doesn't accidentally take over the *VF* pci device for *eth0*. If the DPDK application accidentally takes over the management network interface and causes you to lose your SSH connection, use the serial console to stop the DPDK application. You can also use the serial console to stop or start the virtual machine. 4. Load *ib_uverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, also load *mlx4_ib* with `modprobe -a mlx4_ib`.
To run testpmd in root mode, use `sudo` before the *testpmd* command.
If you're running testpmd with more than two NICs, the `--vdev` argument follows this pattern: `net_vdev_netvsc<id>,iface=<VF's pairing eth>`. 3. After it's started, run `show port info all` to check port information. You should see one or two DPDK ports that are net_failsafe (not *net_mlx4*).+ 4. Use `start <port> /stop <port>` to start traffic. The previous commands start *testpmd* in interactive mode, which is recommended for trying out testpmd commands. A minimal launch sketch follows.
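A minimal interactive launch might look like the following sketch. The core list, memory channel count, and interface name are assumptions for illustration; adjust them to your VM size and NIC layout.

```bash
# Start testpmd in interactive mode against the accelerated NIC paired with eth1.
sudo testpmd \
  -l 0-3 -n 4 \
  --vdev="net_vdev_netvsc0,iface=eth1" \
  -- -i \
  --port-topology=chained
```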
The following commands periodically print the packets per second statistics:
When you're running the previous commands on a virtual machine, change *IP_SRC_ADDR* and *IP_DST_ADDR* in `app/test-pmd/txonly.c` to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the receiver. ### Advanced: Single sender/single forwarder+ The following commands periodically print the packets per second statistics: 1. On the TX side, run the following command:
The following commands periodically print the packets per second statistics:
--stats-period <display interval in seconds> ```
-When you're running the previous commands on a virtual machine, change *IP_SRC_ADDR* and *IP_DST_ADDR* in `app/test-pmd/txonly.c` to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the forwarder. You wonΓÇÖt be able to have a third machine receive forwarded traffic, because the *testpmd* forwarder doesnΓÇÖt modify the layer-3 addresses, unless you make some code changes.
+When you're running the previous commands on a virtual machine, change *IP_SRC_ADDR* and *IP_DST_ADDR* in `app/test-pmd/txonly.c` to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the forwarder. You can't have a third machine receive forwarded traffic, because the *testpmd* forwarder doesn't modify the layer-3 addresses, unless you make some code changes.
## References * [EAL options](https://dpdk.org/doc/guides/testpmd_app_ug/run_app.html#eal-command-line-options)+ * [Testpmd commands](https://dpdk.org/doc/guides/testpmd_app_ug/run_app.html#testpmd-command-line-options)+ * [Packet dump commands](https://doc.dpdk.org/guides/tools/pdump.html#pdump-tool)
virtual-network Virtual Network Scenario Udr Gw Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-scenario-udr-gw-nva.md
In this example, there's a subscription that contains the following items:
* A virtual network named **onpremvnet** segmented as follows used to mimic an on-premises datacenter.
- * **onpremsn1**. Subnet containing a virtual machine (VM) running Ubuntu to mimic an on-premises server.
+ * **onpremsn1**. Subnet containing a virtual machine (VM) running a Linux distribution to mimic an on-premises server.
- * **onpremsn2**. Subnet containing a VM running Ubuntu to mimic an on-premises computer used by an administrator.
+ * **onpremsn2**. Subnet containing a VM running a Linux distribution to mimic an on-premises computer used by an administrator.
* There's one firewall virtual appliance named **OPFW** on **onpremvnet** used to maintain a tunnel to **azurevnet**.
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
Create a VM in the *Private* subnet with [az vm create](/cli/azure/vm). If SSH k
az vm create \ --resource-group myResourceGroup \ --name myVmPrivate \
- --image UbuntuLTS \
+ --image <SKU linux image> \
--vnet-name myVirtualNetwork \ --subnet Private \ --generate-ssh-keys
sudo mkdir /mnt/MyAzureFileShare2
``` Attempt to mount the Azure file share from storage account *notallowedstorageacc* to the directory you created.
-This article assumes you deployed the latest version of Ubuntu. If you are using earlier versions of Ubuntu, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for additional instructions about mounting file shares.
+This article assumes you deployed the latest version of your Linux distribution. If you're using an earlier version of your Linux distribution, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for additional instructions about mounting file shares.
Before executing the command below, replace *\<storage-account-key>* with value of *AccountKey* from **$saConnectionString2**.
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
The following diagnostics are available for Virtual WAN point-to-site VPN gatewa
### ExpressRoute gateway diagnostics
-Diagnostic logs for ExpressRoute gateways in Azure Virtual WAN aren't supported.
+Diagnostic logs for ExpressRoute gateways in Azure Virtual WAN are supported.
### <a name="view-diagnostic"></a>View diagnostic logs configuration
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations | | ||||||
+|Feature |ExpressRoute | [ExpressRoute metrics can be exported as diagnostic logs](monitor-virtual-wan-reference.md#expressroute-gateway-diagnostics)|| April 2023||
|Feature |ExpressRoute | [ExpressRoute circuit page now shows vWAN connection](virtual-wan-expressroute-portal.md)|| August 2022|| ### Site-to-site